Key Information

The challenge is finished.

Challenge Overview


"Help us find where the Eagle has landed, Again!!"


Have you ever looked at images on your favorite mapping webpage and noticed changes in the world depicted? Maybe you looked at your childhood home and noticed an old car in the driveway or noticed stores on a street that have since closed down. More dramatically, have you noticed changes in the landscape like the differences in these glaciers over time?


NASA has new data from the moon and images from the early 1970s. They are looking to develop a software application, the Lunar Mission Coregistration Tool (LMCT), that will process publicly available imagery files from past lunar missions and enable manual comparison against imagery from the Lunar Reconnaissance Orbiter (LRO) mission. Check out this introductory video to find out more!


The imagery processed by this tool will be used in existing citizen science applications to identify long-lost spacecraft components, such as the Eagle, the lunar module that returned Buzz Aldrin and Neil Armstrong to the command module on the first Lunar Mission and has since been lost to time. It will also be used to identify and examine recent natural impact features on the lunar surface by comparing the old images against the new images. This is known as image (co-)registration in the field of computer vision. The successful result of this project will allow us to better understand what’s been going on on the moon for the past sixty years (maybe now is when we’ll discover if it’s really made of cheese).

Task Detail

In this challenge, you are asked to develop the first version of this image registration tool. Specifically, this tool should be able to process a variety of images: images captured under different lighting conditions, with different spacecraft/camera characteristics, from different observation geometries, and so on. Once the images are processed, LMCT should co-register, color balance, and orthorectify them. The co-registration process needs to be fully automated.


The LMCT image processing tool should allow co-registration of single or multiple images to the reference system of LRO imagery. The co-registration process should generate correlation statistics, such as the level of uncertainty. The software should accept the ISIS Cube, GeoTIFF, and JPEG2000 image file formats. The software shall reject images that do not include valid metadata specifying the lunar projection, extent, resolution, and bits/pixel. The application software should output the co-registered images in GeoTIFF format with the associated metadata and high-quality graphics.


Please check this example for a better understanding. It basically compares a modern LRO image with an Apollo image. More details are available here.


We also provide more sample data for your local testing. For example, you can play with the low-resolution TIF images here. Information on how to process LROC NAC images natively is available here. Your solution shall also be applicable to the high-resolution images available on the page linked above, so make sure it’s efficient enough. An analysis of its time and memory complexity will be helpful.


One thing to note is that older pictures will lack the resolution of current pictures and may have been taken at different times of day. They also may not be fully aligned. Your program should account for this: it’s not always going to be as straightforward as the above example. The data we have from older missions is imperfect and the task will be tricky at times. That’s why we turned to the crowd!


You are encouraged to use available open-source computer vision and image processing libraries (OpenCV or alternatives). If you have any questions, ask them in the forums. 




A pixel-to-pixel mapping of changes is good enough as a preliminary result. Ultimately, registering the pre-LRO-era image to the LRO reference is the goal.


For ease of evaluation, please also plot the offsets between the images to help us better understand the quality of your output.
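One possible visualization of these offsets (the function name and layout are illustrative, assuming matplotlib) is a quiver plot of the displacement vectors between matched keypoints:

```python
# Plot per-keypoint displacement vectors as a quiver diagram.
import matplotlib
matplotlib.use("Agg")  # headless rendering for batch runs
import matplotlib.pyplot as plt
import numpy as np

def plot_offsets(src_pts: np.ndarray, dst_pts: np.ndarray, out_path: str):
    """src_pts, dst_pts: (N, 2) arrays of matched pixel coordinates.

    Saves a quiver plot of dst - src displacements to out_path.
    """
    d = dst_pts - src_pts
    fig, ax = plt.subplots(figsize=(6, 6))
    ax.quiver(src_pts[:, 0], src_pts[:, 1], d[:, 0], d[:, 1],
              angles="xy", scale_units="xy", scale=1, width=0.003)
    ax.invert_yaxis()  # image convention: y grows downward
    ax.set_title(f"mean offset = {d.mean(axis=0).round(2)} px")
    ax.set_xlabel("x (px)")
    ax.set_ylabel("y (px)")
    fig.savefig(out_path, dpi=150)
    plt.close(fig)
```

A residual plot like this makes systematic misalignment (e.g. a uniform drift or a rotation pattern in the arrows) visible at a glance.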


Final Submission Guidelines


Your submission should contain:

  • A working codebase. You are allowed to use C++/Python. It should be wrapped up as a single command-line entry point. The image file names should be part of your command-line call.

  • A detailed document about your algorithm. How did you arrive at your final design? Did you try other algorithms?

  • Detailed deployment instructions. What libraries are required? How are they installed? How is your codebase run?
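The single command-line entry point required above could be sketched as follows (the program name and argument names are assumptions, not part of the spec):

```python
# Sketch of a command-line interface for the tool using argparse.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="lmct",
        description="Co-register pre-LRO lunar imagery to an LRO reference.")
    parser.add_argument(
        "reference",
        help="LRO reference image (GeoTIFF / ISIS Cube / JPEG2000)")
    parser.add_argument(
        "inputs", nargs="+",
        help="one or more historical images to co-register")
    parser.add_argument(
        "-o", "--out-dir", default=".",
        help="output directory for co-registered GeoTIFFs")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(f"registering {len(args.inputs)} image(s) to {args.reference}")
```

Keeping all file names on the command line, as the guidelines require, makes the tool straightforward to script over large batches of imagery.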

Judging Criteria

You will be judged on the quality of your algorithm and implementation, the quality of your documentation, and how promising your work is as the base solution for follow-up challenges. Note that the evaluation in this challenge may involve some subjectivity from the client and Topcoder. However, the judging criteria below will largely be the basis for the judgment.

  1. Effectiveness (40%)

    1. Is your algorithm effective, at least on the provided example images? 

    2. Is your codebase runnable on other, new images?

    3. Is your output on other new images reasonably good?

  2. Feasibility (40%)

    1. Is your algorithm efficient and scalable to large volumes of data?

    2. Is your algorithm easy to implement? Are there related toolkits we can use?

  3. Clarity (20%)

    1. Please make sure your report is easy to read; explain your basic line of thinking.

    2. Figures, charts, and tables are welcome.

Submission Guideline

We will only evaluate your last submission. Please include your complete solution, with as much detail as possible, in a single submission.



2021 Topcoder(R) Open




ID: 30150074