We mainly focus on the following research areas:

Robot skills, and skill-based programming of industrial robots

One focus area is the use of object-centered robot skills for intuitive programming of mobile industrial robots. For industrial robots to be truly flexible, accommodating a high product variety and short changeover times in the factory, it is crucial that shop-floor workers can easily reprogram the robots to perform new tasks. This includes not only direct programming, by manually specifying a sequence of skills, but also task planning, using the skills as the planning domain.
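As a rough illustration of what "a sequence of skills" means in code, the sketch below models a skill as a named, object-centered action and a task as an ordered list of such skills. All names and parameters here are hypothetical and purely illustrative; they are not the actual skill implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """Illustrative object-centered skill: an action applied to a named object."""
    name: str                       # e.g. "pick", "place" (hypothetical names)
    target: str                     # the object the skill acts on
    params: dict = field(default_factory=dict)

    def execute(self):
        # A real skill would invoke perception and motion primitives here;
        # this sketch just reports what would be done.
        return f"{self.name}({self.target})"

def run_task(skills):
    """Execute a task specified as a simple sequence of skills."""
    return [s.execute() for s in skills]

task = [Skill("pick", "pump_housing"), Skill("place", "fixture_A")]
print(run_task(task))  # ['pick(pump_housing)', 'place(fixture_A)']
```

The same sequence representation could serve as input to a task planner, with each skill acting as an operator in the planning domain.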

We have previously analyzed 566 work descriptions (Standard Operating Procedures) at Grundfos A/S and identified 15 robot skills with which almost all of the work can be performed, and which are immediately intuitive for the factory workers. Our current focus is on implementing these skills in a meaningful manner on industrial robots.

Industrial Human-Robot Interaction

Intuitive HRI has largely been overlooked in industrial robotics for decades. In the factories of the future, robots should be usable by laymen with little to no training. Industrial robot manufacturers have lately begun to address this need, with the introduction of the intuitive, compliant robot arms from Universal Robots and, more recently, the Baxter robot from Rethink Robotics.

Our research focuses on using the robot skills described above as a middle layer, so the factory worker only has to relate to basic skills rather than complex parameters such as contact forces or robot sensing routines. With the skills in place, the worker supplies only intuitive parameters, which can be done with simple gestures or through intuitive user interfaces.
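The idea of the skill layer hiding low-level parameters can be sketched as follows. In this hypothetical example, the worker supplies only an object name; the skill resolves contact forces and sensing routines internally from a database. All names and values are made up for illustration.

```python
# Hypothetical object database: low-level parameters the worker never sees.
OBJECT_DB = {
    "pump_housing": {"grasp_force_N": 20.0, "detector": "template_match"},
    "impeller":     {"grasp_force_N": 5.0,  "detector": "edge_match"},
}

def pick(obj: str) -> dict:
    """'Pick' skill: the worker-facing parameter is just the object name."""
    low_level = OBJECT_DB[obj]   # contact force and sensing routine resolved internally
    return {"skill": "pick", "object": obj, **low_level}

print(pick("impeller"))
# {'skill': 'pick', 'object': 'impeller', 'grasp_force_N': 5.0, 'detector': 'edge_match'}
```

A gesture or GUI front-end would then only need to capture the object name (and perhaps a target location), never the force or vision parameters.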

The video below shows our recent results in gesture-based HRI on our skill-equipped Little Helper robot:

Vision for Robots

Robots need vision to perceive their environment in order to act in it. Vision for robots needs to take into account the specific requirements and limitations of a robotic platform. As a result, robotic vision pursues a delicate balance between accuracy, efficiency, and reliability.

Robots are finding their way out of confined work cells, where they are expected to repeat a precisely defined set of tasks. For robots to operate next to human workers in factory halls, or to navigate and interact in unstructured home and office environments, sensing is absolutely essential. Vision is naturally a very rich source of information, and robots can use vision sensors to accomplish their tasks in a more autonomous manner.

Robotic vision stands at the intersection of robotics and computer vision. Tasks such as autonomous robot navigation, object manipulation, and human-robot interaction can be pursued using vision sensors. Apart from monocular cameras, robots commonly employ stereo cameras (like the one shown below) or RGB-D sensors (e.g. the Microsoft Kinect sensor) to perceive the 3D structure of the environment. Such sensors are capable of capturing color images and providing depth information at the same time.
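To give a concrete sense of how depth images yield 3D structure, the sketch below back-projects a single depth pixel to a 3D point in the camera frame using the standard pinhole model. The intrinsic parameters are illustrative placeholder values (roughly Kinect-class), not calibrated ones.

```python
# Assumed (uncalibrated) pinhole intrinsics for a 640x480 depth sensor.
fx, fy = 525.0, 525.0   # focal lengths in pixels
cx, cy = 319.5, 239.5   # principal point in pixels

def backproject(u, v, z):
    """Map pixel (u, v) with depth z (meters) to a 3D point in the camera frame."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# A pixel at the principal point lies on the optical axis:
print(backproject(319.5, 239.5, 1.0))  # (0.0, 0.0, 1.0)
```

Applying this to every valid depth pixel produces a point cloud, which downstream tasks such as navigation or manipulation can then consume.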