Autonomous Learning Robot Lab

“Learning as the core principle in autonomous systems which operate in complex and changing environments.”

The Robotics and Mechatronics Center (RMC) is DLR’s competence center for research and development in the areas of robotics, mechatronics and optical systems. Mechatronics is the tight integration of mechanics, electronics and information technology for realizing “intelligent mechanisms” that interact with their environment.

RMC’s core competence is the interdisciplinary design, computer-aided optimization and simulation, and implementation of complex mechatronic systems and human-machine interfaces.

In the robotics community, the RMC is considered one of the world’s leading institutions.

Equipment Used

Vicon Vantage

Vantage is Vicon’s flagship range of cameras. The sensors have resolutions of 5, 8 and 16 megapixels, with sample rates of up to 2,000 Hz, which allows you to capture fast movements with very high accuracy. The cameras also have built-in temperature and bump sensors, as well as a clear display, to warn you if a camera has been moved physically or has shifted due to thermal expansion. High-powered LEDs and sunlight filters mean that the Vantage is also the best choice for outdoor use and large volumes.

Vicon Vero

The compact Vero cameras have sensor resolutions of either 1.3 or 2.2 megapixels. The camera has a variable zoom lens, which makes it especially suited for smaller capture volumes, where an optimum field of view matters most. The Vero’s attractive price, combined with its light weight and small size, makes it a great choice for smaller labs and studios.

Vicon Tracker

Tracker has been designed for the requirements and workflow of engineering users who want to track the position and orientation of objects with as little effort and as low latency as possible. Perfect for many applications in robotics, UAV tracking, VR and human-machine interaction, Tracker lets you define what you want to track with a couple of mouse clicks, and then leave it tracking in the background. A simple SDK lets you connect the output data stream to your own software.
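To give an idea of what consuming that data stream looks like, here is a minimal sketch in Python. It assumes a pose sample has already been received from the SDK as a translation in millimeters and a rotation as a unit quaternion (common motion-capture conventions; the exact SDK calls and units depend on your Tracker version, so treat the names here as illustrative) and converts it into a 4x4 homogeneous transform for use in robot software:

```python
import numpy as np

def pose_to_matrix(translation_mm, quaternion_xyzw):
    """Convert a tracked object's pose sample (translation in mm,
    rotation as a unit quaternion in x, y, z, w order) into a
    4x4 homogeneous transform with translation in meters."""
    x, y, z, w = quaternion_xyzw
    # Rotation matrix from a unit quaternion.
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(translation_mm) / 1000.0  # mm -> m
    return T

# Example: an object 1 m along x, rotated 90 degrees about z.
s = np.sqrt(0.5)
T = pose_to_matrix([1000.0, 0.0, 0.0], [0.0, 0.0, s, s])
```

From here, the transform can be chained with the robot's own kinematic frames in the usual way.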

Challenges for Robot Justin

  • Experimental scenario: a robot autonomously building a habitat on Mars before astronauts arrive.
  • How can a robot reach human-level performance in complex manipulation tasks in unknown environments?
  • The first prerequisite is generating a precise 3D model of its environment in order to plan collision-free actions and localize the objects needed for the task.
  • Tactile sensing is then used to allow for dexterous fine manipulation.

Ground Truth System

The method for generating precise 3D models in real time is a variant of simultaneous localization and mapping (SLAM) based on dense depth data from an RGB-D sensor mounted in the robot’s head. The SLAM method first computes the current pose of the head by matching the current depth image from the RGB-D sensor against the already acquired part of the model. Then, with this estimate of the head pose, the depth image is used to update the 3D model. The algorithms run in real time at a 30 Hz frame rate. The resulting models have to reach a precision of <1 mm at typical manipulation distances of 1 m to 2 m.
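The geometric building block of such a pipeline is back-projecting each dense depth image into a 3D point cloud in the camera frame, which can then be matched against the existing model. A minimal NumPy sketch of that step, using the standard pinhole camera model (the intrinsic parameters below are illustrative, not those of Justin’s sensor):

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a dense depth image (in meters) into a 3D point
    cloud in the camera frame using the pinhole camera model.
    fx, fy are focal lengths in pixels; cx, cy the principal point."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# Toy example: a 2x2 depth image with all pixels 1.5 m away.
pts = depth_to_points(np.full((2, 2), 1.5),
                      fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

Each new point cloud is aligned against the model (e.g. by an iterative closest-point style matching) to yield the head pose, and then fused into the model.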

To verify and debug these highly precise modeling algorithms, ground-truth measurements of the head pose are needed with an absolute precision of 0.5 mm in a typical manipulation volume of 6 m x 6 m x 2.5 m.

Only with the support of prophysics and Vicon could this precision finally be reached under all circumstances.

Are you interested in a similar solution?

Ask here!