Challenge Overview

"Help us find where the Eagle has landed, Again!!"

 

Have you ever looked at images on your favorite mapping webpage and noticed changes in the world depicted? Maybe you looked at your childhood home and noticed an old car in the driveway or noticed stores on a street that have since closed down. More dramatically, have you noticed changes in the landscape like the differences in these glaciers over time?

 

NASA has new data from the moon as well as images from the early 1970s. They are looking to develop a software application, the Lunar Mission Coregistration Tool (LMCT), that will process publicly available imagery files from past lunar missions and enable manual comparison to imagery from the Lunar Reconnaissance Orbiter (LRO) mission. Check out this introductory video to find out more!

 

The imagery processed by this tool will be used in existing citizen science applications to identify long-lost spacecraft components, such as the Eagle, the lunar module that returned Buzz and Neil to the command module during the first crewed lunar landing and has since been lost to time. It will also be used to identify and examine recent natural impact features on the lunar surface by comparing the old images against the new images. This is known as image co-registration in the field of computer vision. The successful result of this project will allow us to better understand what has been going on on the moon for the past sixty years (maybe now is when we’ll discover if it’s really made of cheese).

Task Detail

As you may know, we have run a code challenge and an ideation challenge before. We are happy with the ideas, and a proof-of-concept effort has shown that they are feasible.

 

Objective: We are now running the code challenge again to implement the best codebase based on the winning idea documentation. The solution is based on the USGS ISIS 3 library.
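Since the winning idea builds on USGS ISIS 3, part of the pipeline will likely consist of calling ISIS applications on the raw imagery. The sketch below is only a hypothetical illustration of driving ISIS from Python: the applications named (lronac2isis, spiceinit, cam2map) are real ISIS programs, but the file names, the map template, and the exact sequence are placeholder assumptions, not the required design.

```python
# Hypothetical sketch of driving USGS ISIS 3 from Python. The applications
# named here (lronac2isis, spiceinit, cam2map) exist in ISIS, but the input
# file names, the map template, and the ordering are placeholder assumptions.
import subprocess

def run_isis(app, **params):
    """Run one ISIS application, e.g. run_isis("spiceinit", **{"from": "x.cub"})."""
    args = [app] + [f"{key}={value}" for key, value in params.items()]
    subprocess.run(args, check=True)

# Ingest a raw LROC NAC EDR into an ISIS cube, attach SPICE geometry,
# and project it to a map so it can be compared against a historical frame.
run_isis("lronac2isis", **{"from": "nac_frame.IMG", "to": "nac_frame.cub"})
run_isis("spiceinit", **{"from": "nac_frame.cub"})
run_isis("cam2map", **{"from": "nac_frame.cub", "to": "nac_frame_map.cub",
                       "map": "equirectangular.map"})
```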

 

Image Data: The client has provided a few examples for better understanding. Each example compares a modern LRO image with an Apollo image. More details are available here.

We also provide more sample data for your local testing. For example, you can experiment with the low-resolution TIF images here. Information on how to process LROC NAC images natively is available here. Another example: a Lunar Orbiter image and an LROC image.
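As a starting point for local experimentation with the sample TIF images, a minimal loading step in Python/OpenCV might look like the sketch below. The file name is a placeholder, and the rescaling to 8-bit is just one reasonable assumption for handling high-bit-depth scans.

```python
# Minimal sketch for loading one of the sample TIF images for local
# experimentation. The file name is a placeholder; scanned Apollo-era frames
# can have high bit depth, so we read unchanged and rescale to 8-bit.
import cv2
import numpy as np

img = cv2.imread("apollo_sample.tif", cv2.IMREAD_UNCHANGED)
if img is None:
    raise FileNotFoundError("apollo_sample.tif not found")
if img.ndim == 3:                       # collapse RGB scans to grayscale
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
print(img8.shape, img8.dtype)
```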

 

We would also like you to find more paired test cases, with justification of how they will be useful for the later code challenge. Please explore the NASA website. Here are a few places worth exploring:

 

One thing to note is that older pictures will lack the resolution of current pictures and may have been taken at different times of day. They also may not be fully aligned. Your program should account for this; it will not always be as straightforward as the example above. The data we have from older missions is imperfect, and the task will be tricky at times. That’s why we turned to the crowd!
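To give a concrete (but non-prescriptive) idea of how such lighting and resolution differences might be reduced before registration, the sketch below applies local contrast equalization (CLAHE) and rescaling with OpenCV; the parameter values are illustrative assumptions, not tuned recommendations.

```python
# One possible preprocessing step to reduce lighting and resolution mismatch
# before registration: local contrast equalization (CLAHE) plus bringing both
# images to a comparable scale. Parameter values are illustrative only.
import cv2

def preprocess(gray, target_width=1024):
    """Equalize local contrast and downscale an 8-bit grayscale image."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(gray)
    scale = target_width / equalized.shape[1]
    return cv2.resize(equalized, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_AREA)
```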

 

You are encouraged to use available open-source computer vision and image processing libraries (OpenCV or alternatives). If you have any questions, ask them in the forums. 
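For illustration only, a minimal feature-based co-registration sketch with OpenCV might look like the following: detect keypoints in both frames, match them, and estimate a homography with RANSAC. This is just one of many possible approaches, not the required algorithm, and the parameter choices are assumptions.

```python
# Minimal OpenCV sketch of feature-based co-registration: detect keypoints in
# both frames, match them, and estimate a homography with RANSAC. This is one
# possible approach, not the required algorithm.
import cv2
import numpy as np

def register(apollo_gray, lro_gray):
    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(apollo_gray, None)
    kp2, des2 = orb.detectAndCompute(lro_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the historical frame into the LRO frame for direct comparison.
    h, w = lro_gray.shape[:2]
    return cv2.warpPerspective(apollo_gray, H, (w, h)), H
```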

 

Evaluation Method: Our plan is to eyeball the results, as we don’t expect to have many annotated test cases.
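Because evaluation will be visual, it may help to include overlays that make residual misalignment easy to spot. The helper below is a hypothetical example of such an output (a simple alpha blend of the warped historical frame over the LRO frame), not a required format.

```python
# Illustrative helper for visual inspection: alpha-blend the warped historical
# frame over the LRO frame so residual misalignment is easy to see.
import cv2

def overlay(warped_apollo, lro_gray, alpha=0.5):
    """Alpha-blend two registered grayscale images of equal size and type."""
    return cv2.addWeighted(warped_apollo, alpha, lro_gray, 1.0 - alpha, 0)
```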

Final Submission Guidelines

Submission

Your submission should contain:

  • A working codebase. You may use C++ and/or Python. It should be wrapped as a single command-line entry point, and the image file names should be part of your command-line call (a minimal CLI sketch follows this list). 

  • A detailed document about your algorithm. How did you arrive at your final design? Did you try other algorithms?

  • Detailed deployment instructions. What are the required libraries? How do you install them? How do you run your codebase? 

  • Example results. Please include enough diverse example results to showcase the effectiveness of your solution. 
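For illustration, a single command-line entry point of the kind described in the first bullet above might be structured as in the sketch below; the argument names and the --output flag are assumptions, not a required interface.

```python
# Hypothetical sketch of a single command-line entry point: both image file
# names are passed as arguments; argument names and flags are assumptions.
import argparse

def main():
    parser = argparse.ArgumentParser(description="Lunar Mission Coregistration Tool")
    parser.add_argument("apollo_image", help="path to the historical (Apollo-era) image")
    parser.add_argument("lro_image", help="path to the modern LRO/LROC image")
    parser.add_argument("--output", default="registered.tif",
                        help="where to write the co-registered result")
    args = parser.parse_args()
    print(f"Registering {args.apollo_image} against {args.lro_image} -> {args.output}")
    # ... call the registration pipeline here ...

if __name__ == "__main__":
    main()
```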

Judging Criteria

You will be judged on the quality of your algorithm and implementation, the quality of your documentation, and how promising your solution is as the base for the follow-up challenges. Note that the evaluation in this challenge may involve some subjectivity from the client and Topcoder; however, the judging criteria below will largely be the basis for the judgment.

 
  1. Effectiveness (40%)

    1. Is your algorithm effective, at least on the provided example images? 

    2. Does your codebase run on other, new images? 

    3. Is your output on other new images reasonably good?

  2. Feasibility (40%)

    1. Is your algorithm efficient and scalable to large volumes of data? 

    2. Is your algorithm easy to deploy? 

  3. Clarity (20%)

    1. Please make sure your documentation of the algorithm, code, and results is easy to read. Figures, charts, and tables are welcome.

Submission Guideline

We will only evaluate your last submission, so please include your complete solution and as much detail as possible in a single submission.


 

ELIGIBLE EVENTS:

2021 Topcoder® Open

REVIEW STYLE:

Final Review:

Community Review Board

Approval:

User Sign-Off

ID: 30166119