NASA Robonaut Tool Manipulation
March 3, 2014

Image source: nasa.gov

In 2013, Topcoder had the privilege of working on the first Robonaut Challenge, in which we tasked the community with teaching Robonaut to recognize the state and location of several buttons and switches on a task board using real and simulated imagery. For this challenge, community members were tasked with creating vision algorithms that allow Robonaut to determine 3D representations of its tools.

The Challenge

Robonaut 2 (“R2”), a humanoid robot that operates both on Earth and on the International Space Station, commonly uses tools. For example, it manages inventory using an RFID reader and fastens bolts with a drill. In order to use a tool, R2 relies on an algorithm to determine a 3D representation of the tool. The algorithm works with the robot’s control system and allows R2 to create a plan for grasping objects and completing its tasks.

Several existing algorithms could be used to determine the 3D representation of a tool. However, R2 carries an older, less capable set of vision sensors, owing to its space heritage and years of exposure to high levels of environmental radiation. Most existing algorithms assume vision data of relatively high resolution, detail, and quality, and they are not effective at the grade of vision data available to R2.
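To make the problem concrete, here is a minimal sketch of the kind of preprocessing a solver might apply before any matching step. The filter choices and parameters are illustrative assumptions for this example, not R2's actual pipeline.

```python
import cv2
import numpy as np

def preprocess(gray: np.ndarray) -> np.ndarray:
    """Tame sensor noise before stereo matching (illustrative parameters).

    Low-resolution, degraded sensors tend to produce speckle and uneven
    exposure that break matchers tuned for clean, high-detail imagery.
    """
    # Median blur suppresses salt-and-pepper speckle without smearing
    # edges as badly as a Gaussian filter of the same size would.
    denoised = cv2.medianBlur(gray, 5)
    # CLAHE evens out local contrast so matching costs stay comparable
    # across bright and shadowed regions of the image.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)
```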

The Solution

Through the challenge, the community created algorithms that take a pair of noisy stereo images of common space tools, such as an RFID reader, an EVA handrail, or a softbox, and determine the 3D representation of the object in the image pair.
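As a rough illustration of that task, the sketch below turns a rectified stereo pair into a 3D point cloud using OpenCV's semi-global block matcher. The file names, the calibration matrix Q, and the matcher parameters are all assumptions for the example; this is not the winning algorithms, only the general shape of the problem they solved.

```python
import cv2
import numpy as np

# Rectified left/right frames (hypothetical file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching tolerates noise better than plain block
# matching; these parameters are illustrative starting points.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,        # must be divisible by 16
    blockSize=7,
    P1=8 * 7 * 7,             # smoothness penalties scaled by blockSize^2
    P2=32 * 7 * 7,
    uniquenessRatio=10,
    speckleWindowSize=100,    # drop small, noisy disparity blobs
    speckleRange=2,
)
# compute() returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Q is the 4x4 disparity-to-depth matrix from stereo calibration,
# taken here as given; it maps each pixel plus disparity to XYZ.
Q = np.load("Q.npy")
points_3d = cv2.reprojectImageTo3D(disparity, Q)
mask = disparity > 0          # keep only pixels with valid matches
cloud = points_3d[mask]
print(f"recovered {cloud.shape[0]} 3D points")
```

A point cloud like this is what a grasp planner can then fit a tool model against, which is why recovering it reliably from noisy imagery mattered for the challenge.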

Winners

1st place: wleite
2nd place: walrus71
3rd place: orirmi

