Challenge Overview

Problem Statement

    

Prize Distribution

              Prize             USD
  1st                         $20,000
  2nd                         $16,000
  3rd                         $11,000
  4th                          $7,000
  5th                          $5,000
  Week 2 Threshold Incentive   $6,000
  Week 3 Threshold Incentive   $6,000
  3 Open Source Incentives    $15,000
  Total Prizes                $86,000

Early Participation Incentives

End of Week 2 = $6,000 Threshold Incentive Fund (max $300/person)

Any competitor with a score above 600,000 will receive an equal share of the $6,000 fund, split evenly among everyone who hits the threshold, up to the $300 per-person cap. For example, if 30 competitors exceed the threshold, each receives $200; if only 10 do, each receives the $300 cap and the remainder is unawarded. Any prizes not awarded at the end of week 2 will roll over into the week 3 prize budget. To determine these prizes, a snapshot of the leaderboard will be taken on 9/20, exactly 14*24 hours after the launch of the contest.

End of Week 3 = $6,000 Threshold Incentive Fund (no max/person)

Any competitor with a score above 700,000 will receive an equal share of the $6,000 fund, split evenly among everyone who hits the threshold. Note that this prize fund may increase if not all prizes were awarded in the week 2 incentive. To determine these prizes, a snapshot of the leaderboard will be taken on 9/27, exactly 21*24 hours after the launch of the contest.

3 Open Source Incentives - $5,000 per winning submission

Top competitors who open source their solutions will each be given a spot incentive of $5,000. The offer will be made to the top two winners initially. If a competitor opts not to open source their solution, we will offer it to the next top-ranking winner, and so on, until the 2 awards have been given. In addition to the top 2 Open Source Incentives, we will award one more Open Source Incentive to the most unique solution, determined at the client's discretion. The code will need to undergo a code review for documentation and completeness prior to award. Code must be released on GitHub and will be publicized at the Conference Workshop at the end of November.

Overview

The Multi-View Stereo 3D Mapping Challenge is being offered by the Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence (ODNI).

The Master Challenge is the second of two challenges in the Multi-View Stereo 3D Mapping Challenge, which together combine for a total prize purse of over US $100,000. The first challenge in the series was the Explorer Challenge, which completed in early August; after the Master Challenge concludes, the series will close with a workshop hosted by IARPA in Washington, DC. Be sure to review the official challenge minisite for details, timeline, and information. Solvers will have the chance to have their work viewed by stakeholders from industry, government, and academic communities in attendance. Winners will be invited to present their solutions at the workshop, with travel paid.

Numerous commercial satellites, including newly emerging CubeSats, cover large areas with high revisit rates and deliver high-quality imagery in near real-time to customers. Although the entire Earth has been, and continues to be, imaged multiple times, fully automated data analysis remains limited.

The Multi-View Stereo 3D Mapping Challenge is the first concerted effort to invite experts from across government, academia, industry and solver communities to derive accurate 3D point clouds from multi-view satellite imagery that will advance technology in this area and potentially foster enormous humanitarian impact.

IARPA is conducting this challenge to invite a very broad community to participate in a convenient, efficient and non-contractual way. IARPA's use of a crowdsourcing approach to stimulate breakthroughs in science and technology also supports the White House's Strategy for American Innovation, as well as government transparency and efficiency. The goals and objectives of the Challenge are to:

  • Promote and benchmark research in multiple view stereo algorithms applied to satellite imagery;
  • Stimulate various communities to develop and enhance automated methods to derive accurate 3D point clouds from multi-view satellite imagery, including computer vision, remote sensing and photogrammetry;
  • Foster innovation through crowdsourcing and moving beyond current research limitations for 3D point clouds;
  • Cultivate and sustain an ongoing collaborative community dedicated to this technology and research.

Objective

Participants are asked to develop a Multi-View Stereo (MVS) solution to generate dense 3D point clouds using provided high-resolution satellite 2D images.

There are regions of interest defined for this contest, each covered by multiple images. Contestants are asked to use 2D images, taken from different angles, to derive a "point cloud" of 3D coordinates that describe as precisely as possible the position and height of all features in the regions.

Each submitted point cloud will be registered to ground truth using an X, Y, and Z translational search and then compared to determine accuracy and completeness metric scores. That is, the challenge will automatically superimpose the contestant's 3D point cloud on the known correct ("ground truth") 3D point cloud. The matching algorithm will shift the contestant's point cloud in X, Y, and Z until the average distance between input and ground truth points is minimized. This process is called "registration". The scoring algorithm will then compute scores by measuring any remaining Z-axis differences. Accuracy is defined as the root mean square Z error measured in meters compared to ground truth. Completeness is defined as the fraction of points with Z error less than 1 meter compared to ground truth.
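
For illustration only, the following minimal Java sketch shows how these two metrics could be computed from per-point Z errors, assuming registration has already been applied and that zErrors holds the Z difference (in meters) for each matched ground truth point. It is not the official scoring code; see the visualizer source for the exact algorithm.

public class MetricsSketch {
    // Accuracy: root mean square Z error, in meters.
    public static double accuracy(double[] zErrors) {
        double sumSq = 0.0;
        for (double e : zErrors) sumSq += e * e;
        return Math.sqrt(sumSq / zErrors.length);
    }

    // Completeness: fraction of points with |Z error| < 1 meter.
    public static double completeness(double[] zErrors) {
        int within = 0;
        for (double e : zErrors) if (Math.abs(e) < 1.0) within++;
        return (double) within / zErrors.length;
    }
}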

[Figure: multiple high-resolution satellite images of the same area are combined to produce a 3D point cloud]

Input Files

  • A general note on file input / output

    Compared to the Explorer challenge, we have made interfacing with the data easier for contestants who do not have the tools that domain experts use to read and write satellite image data and point clouds. Most data items are available in two versions: one for those already working in the industry, and another for those who do not wish to install new software or learn the intricacies of a new file format. Our hope is that non-experts can solve the problem without any third-party tools or libraries, while it remains convenient for those who already have the necessary tools.

  • 50 Digital Globe WorldView-3 images in NITF file format and cropped TIFF format

    Digital Globe grayscale images covering the challenge sites are provided as full NITF formatted images with RPC00B Tagged Record Extension (TRE) metadata, which can be read in part or in whole using open source software such as the Geospatial Data Abstraction Library (GDAL). Sample code (refer to the "Additional Resources" section) using GDAL is provided for reading a NITF image and exercising the RPC00B sensor model for projection of XYZ coordinates into image coordinates. A minimal GDAL read sketch in Java is also shown after this list.

    In addition to the NITF file format, we are offering a cropped TIFF file format for competitors to use if they choose. The cropped TIFF files are accompanied by text files that contain metadata (also extracted from the original NITF files) about the images. Either version of imagery can be used as input. More information about the content and format of these data sets is available in the readme.txt file contained in the downloaded package.

  • KML bounding box

    A Keyhole Markup Language (KML) file specifies the corner coordinates for each challenge area. Submitted point clouds will be cropped for evaluation to the bounding box specified in the KML file. It is important to note that the area covered by the KML files used in this challenge is much smaller than the area covered by the provided NITF images. The provided TIFF files are also somewhat larger than the KML extents to ensure sufficient coverage and overlap for matching.

  • Ground truth file

    The true elevation map of the area used in the Explorer challenge is provided in GeoTIFF and also in textual format. Together with the corresponding KML file it can be used to train your algorithm locally.

  • Downloads

    The NITF satellite images (a total of 40GB) are available in an S3 bucket named "multiview-stereo", available at http://multiview-stereo.s3.amazonaws.com/. Client software, like the AWS CLI, can be used to download all files with a single command ("aws s3 sync s3://multiview-stereo/ . --no-sign-request").

    All other data (cropped TIFF versions, KML files and ground truth) are available here.

  • Note that the NITF files used in this Master Challenge are the same as were used previously in the Explorer challenge; there is no need to download them again if you participated in the Explorer. All other files are either new (like the cropped versions of the satellite images and the ground truth file) or changed (like the target KML regions).
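
As a quick orientation for those using the NITF files, here is a minimal Java sketch (assuming the GDAL Java bindings are installed) that opens an image and lists its RPC00B metadata from GDAL's "RPC" metadata domain. The file name is a placeholder, and this is only a convenience illustration; the provided sample code covers the full sensor model.

import java.util.Hashtable;
import org.gdal.gdal.Dataset;
import org.gdal.gdal.gdal;

public class ReadRpcSketch {
    public static void main(String[] args) {
        gdal.AllRegister();
        // "example.NTF" is a placeholder, not one of the provided file names.
        Dataset ds = gdal.Open("example.NTF");
        // RPC00B parameters (LINE_OFF, SAMP_OFF, LAT_OFF, ...) are exposed
        // by GDAL in the "RPC" metadata domain.
        Hashtable<?, ?> rpc = ds.GetMetadata_Dict("RPC");
        for (Object key : rpc.keySet()) {
            System.out.println(key + " = " + rpc.get(key));
        }
        ds.delete();
    }
}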

Output File

3D Point Clouds in text format

You are required to predict point clouds for 3 different test areas. The point clouds must be submitted as plain text files with a .txt extension, in the format specified below. Optionally, a file may be zipped, in which case it must have a .zip extension. The submitted files must be no larger than 500MB and must contain no more than 50 million points. These limits apply to each file individually, not combined.

Point coordinates must be in the Universal Transverse Mercator (UTM) projection with horizontal and vertical datum World Geodetic System 1984 (WGS84).

Competitors are encouraged to include an intensity attribute for each point derived from the source imagery. This value will be used by stakeholders to better visualize your solution and understand it. However, intensity will not be evaluated, and it will neither contribute to nor detract from your score or rank.

File specification

Points are listed in the file one per line, in

x y z intensity

format, that is, 4 real numbers separated by a single space, where

  • x is the "easting" value of the point's UTM coordinate,
  • y is the "northing" value of the point's UTM coordinate,
  • z is the height value of the point given in meters,
  • intensity is your custom output attribute in the [0..1] range.

For the sake of brevity, the UTM zone and band need not be present in the output. Every location in this challenge falls within UTM grid zone 21H, so repeating it with each point would be redundant.

As an example, a valid submission file may start like this:

354056.3 6182055.5 20.1 0.5
354056.3 6182055.6 20.2 0.5

To save bandwidth, it is recommended (but not required) to output at most 3 digits to the right of the decimal point in the x, y, z values. This gives millimeter precision, which should be more than enough for this contest.
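
For illustration only, here is a minimal Java sketch that writes a file in the format above. The points array and the output file name are hypothetical placeholders; your solution will of course generate the points itself.

import java.io.PrintWriter;
import java.util.Locale;

public class WriteSubmissionSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical points: {easting, northing, height, intensity}.
        double[][] points = {
            {354056.3, 6182055.5, 20.1, 0.5},
            {354056.3, 6182055.6, 20.2, 0.5}
        };
        // Locale.US guarantees '.' as the decimal separator;
        // the output file name is a placeholder.
        try (PrintWriter out = new PrintWriter("MasterProvisional1.txt")) {
            for (double[] p : points) {
                out.println(String.format(Locale.US,
                        "%.3f %.3f %.3f %.3f", p[0], p[1], p[2], p[3]));
            }
        }
    }
}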

Notes

  • Absolute accuracy of the point cloud must be less than 20 meters horizontal to ensure reliable registration for metric analysis using ground truth. Absolute accuracy of the source image metadata is significantly better than 20 meters. For reference, Google Earth image coordinates are also significantly more accurate than 20 meters for the Challenge areas.

  • The input KML and satellite image meta data files use the latitude / longitude coordinate system which is different from the UTM required in the output. See (and feel free to reuse) the visualizer code for a simple way to convert from latitude / longitude to UTM. Conversion is also demonstrated in the sample C++ code.

  • During evaluation, points that lie outside the bounds defined in the KML are ignored, so to avoid unnecessarily large files we recommend not submitting points that are farther than 20 meters (the maximum registration offset) from the target area's edges.

  • The ground truth that your submission will be compared against has a point spacing of around 0.3 m, so solutions should have a similar point density. Take the "50 million points" as a limit, not as the recommended number of points to output. You will need far fewer points to generate a point cloud with a density similar to the ground truth.

  • The visualizer supports a somewhat optimized version of the format given above. To reduce the file size and improve loading time, the x and y coordinates may be given relative to an origin. In this case the first two lines of the file must contain

    offset_x <x_origin_value>
    offset_y <y_origin_value>
    

    As an example, the following file is treated as identical with the one shown previously:

    offset_x 354056.2
    offset_y 6182055.4
    0.1 0.1 20.1 0.5
    0.1 0.2 20.2 0.5
    

    It is important to note that this optimized format MUST NOT be used in submissions; it is only for your convenience during development. A small sketch that converts a file from this offset format back to the required absolute format is shown below.
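
A minimal Java sketch of such a conversion, assuming the input file follows the offset format exactly as described above; the file names are placeholders.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.PrintWriter;
import java.util.Locale;

public class ExpandOffsetsSketch {
    public static void main(String[] args) throws Exception {
        try (BufferedReader in = new BufferedReader(new FileReader("dev_offsets.txt"));
             PrintWriter out = new PrintWriter("submission.txt")) {
            // First two lines: "offset_x <value>" and "offset_y <value>".
            double ox = Double.parseDouble(in.readLine().trim().split("\\s+")[1]);
            double oy = Double.parseDouble(in.readLine().trim().split("\\s+")[1]);
            String line;
            while ((line = in.readLine()) != null) {
                if (line.trim().isEmpty()) continue;
                String[] t = line.trim().split("\\s+");
                // Add the offsets back to obtain absolute UTM coordinates.
                out.println(String.format(Locale.US, "%.3f %.3f %s %s",
                        Double.parseDouble(t[0]) + ox,
                        Double.parseDouble(t[1]) + oy, t[2], t[3]));
            }
        }
    }
}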

Functions

This match uses the result submission style, i.e. you will run your solution locally using the provided files as input, and produce 3 TXT or ZIP files with your answer.

In order for your solution to be evaluated by Topcoder's marathon system, you must implement a class named Satellite3DMapping, which implements a single method, getAnswerURLs(). Your method will return an array of Strings corresponding to the URLs of your submission files. The URLs should point to the files that your algorithm generated for areas MasterProvisional1, MasterProvisional2, MasterProvisional3, in this order.

You may upload your files to a cloud hosting service such as Dropbox or Google Drive, which can provide a direct link to the file.

To create a direct sharing link in Dropbox, right-click the uploaded file and select Share. You should be able to copy a link to this specific file, which ends with the suffix "?dl=0". This URL will point directly to your file if you change this suffix to "?dl=1". You can then use this link in your getAnswerURLs() function.

If you use Google Drive to share the link, then please use the following format: "https://drive.google.com/uc?export=download&id=" + id

You can use any other way to share your result file, but make sure the link you provide opens the file stream directly and is accessible to anyone with the link (not only the file owner), so that the automated tester can download and evaluate it.

An example of the code you have to submit, using Java:

public class Satellite3DMapping {
    public String[] getAnswerURLs() {
        //Replace the returned Strings with your submission files' URLs
        return new String[] {
            "https://drive.google.com/uc?export=download&id=XYZ1",
            "https://drive.google.com/uc?export=download&id=XYZ2",
            "https://drive.google.com/uc?export=download&id=XYZ3"
        };
    }
}

Keep in mind that the complete code that generates these results will be verified at the end of the contest if you achieve a score in the top 10, as described later in the "Requirements to Win a Prize" section; that is, participants will be required to provide fully automated, executable software to allow for independent verification of software performance and the metric quality of the output data.

Scoring

A full submission will be processed by the Topcoder Marathon test system, which will download, validate and evaluate your submission files.

Then for each of the 3 files the following process is executed.

Validity checks. Any malformed or inaccessible file, or one that exceeds the maximum file size (500 MB) or the maximum number of points (50 million) will receive a zero score.
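
Before submitting, you may want to run similar checks locally. The following Java sketch is an illustration only: it checks an unzipped .txt file, the file name is a placeholder, and the official checks are performed by the test system.

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;

public class LocalChecksSketch {
    public static void main(String[] args) throws Exception {
        File f = new File("submission.txt");
        // Approximate 500 MB size limit.
        if (f.length() > 500L * 1024 * 1024) {
            System.out.println("WARNING: file is larger than 500 MB");
        }
        long points = 0;
        try (BufferedReader in = new BufferedReader(new FileReader(f))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (line.trim().isEmpty()) continue;
                // Each point must be 4 space-separated real numbers: x y z intensity.
                if (line.trim().split("\\s+").length != 4) {
                    System.out.println("WARNING: malformed line: " + line);
                }
                points++;
            }
        }
        if (points > 50_000_000L) {
            System.out.println("WARNING: more than 50 million points");
        }
        System.out.println("Point count: " + points);
    }
}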

Registration. If your submission is valid, the test system will first register your point cloud to ground truth, as described in the "Objective" section.

Quality checks. It will next measure the distance (error) along the Z-axis between your returned point cloud and the ground truth, for each ground truth point. During this process those points that (after translating them with the registration offset) lie outside the bounds defined in the KML are ignored. This process will produce two measures:

  • Accuracy, which is defined as the root mean square Z error in meters, and
  • Completeness, which is defined as the fraction of points with Z error less than 1 meter.

Your submission must have completeness greater than 0.3. If it fails this requirement, it receives a zero score.

Finally, if your submissions achieve the completeness threshold for all 3 test areas, an overall accuracy and an overall completeness measure are calculated by averaging the 3 corresponding values. These two values are then combined into a single score by comparing them to the best values achieved among all submissions that have also met the minimum completeness requirement:

Score = 500,000 * ([Best Accuracy Among All Submissions] / [Your Accuracy] +
        [Your Completeness] / [Best Completeness Among All Submissions])

For the exact algorithm of the registration and scoring see the visualizer source code.
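
As a simple illustration of the formula above (the exact algorithm is in the visualizer source), the combined score could be computed as follows:

public class ScoreSketch {
    // Combines overall accuracy and completeness into the final score,
    // following the formula above. Lower accuracy (RMSE) is better,
    // higher completeness is better.
    public static double score(double yourAccuracy, double yourCompleteness,
                               double bestAccuracy, double bestCompleteness) {
        return 500000.0 * (bestAccuracy / yourAccuracy
                + yourCompleteness / bestCompleteness);
    }
}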

Example submissions can be used to verify that your chosen approach to upload submissions works. In this case you have to return 3 Strings from your getAnswerURLs() method, as in the case of provisional testing, but only one of them will be checked. The tester will verify that the first String of the returned array contains a valid URL and that its content is accessible, i.e. that the tester is able to download the file from the returned URL. If your file is valid, it will be evaluated, and completeness and accuracy values will be available in the test results. The evaluation is based on the target KML and corresponding ground truth data that you can find in the /training directory of the downloaded data package.

Note that during the first week of the match, you will not be able to perform submissions. Meanwhile, you can work locally, with the provided images, sample files and resources. Starting 9/13, submissions are allowed and you will have another 3 weeks until the match closes.

Also note that the registration and scoring algorithm may be slightly modified during the first week of the match. If this happens, the offline tester will be updated so that you will always have access to the exact measurement process. You should expect only minor changes, if any. If your algorithm does well on the initial metrics, it will probably also do well on the modified one.

Final Scoring

The top 10 competitors according to the provisional scores will be given access to an AWS VM instance within 5 days of the end of the submission phase. Submissions must have completeness greater than 0.3 for all point clouds in order to advance to this final scoring phase. If your score qualifies you as a top 10 competitor, you will need to load your code onto your assigned VM, along with two basic scripts for running it:

  • compile.sh should perform any necessary compilation of the code so that it can be run.
  • run.sh [Input KML file] [Path to NITF image folder] [Path to Simplified data folder] [Output file] should run the code and generate an output file in one of the allowed formats. The "Simplified data folder" contains the cropped TIFF images, metadata and RPC parameters in the same structure as in the downloaded training and test data.

Your solution will be subjected to three tests:

First, your solutions will be validated, i.e. we will check if they produce the same output files as your last submission, using the same input files used in this contest.

Second, your solutions will be tested against new KML files, i.e. other regions contained in the same area covered by the provided NITF images. These areas will be similar in size to the provided KMLs used for provisional testing, and the scene content will be similar.

Third, the resulting output from the steps above (check parameters of run.sh above) will be validated and scored. The final rankings will be based on this score alone.

Competitors who fail to provide their solution as expected will receive a zero score in this final scoring phase, and will not be eligible to win prizes.

Additional Resources

Visualizer

A visualizer is available here that you can use to test your solution locally. It displays your generated point cloud, the expected ground truth, and the difference between the two. It also calculates completeness and accuracy scores, so it serves as an offline tester.

The visualizer code also contains utilities to:

  • Convert a longitude / latitude pair to UTM coordinates. See the UtmCoords class.
  • Execute an RPC projection. Based on the RPC parameters extracted from a satellite image it tells you where a given point in space (latitude, longitude, height) will be mapped in the image. See the geoToXY() method of the RPC class.

Sample C++ code

The sample C++ code uses reference libraries and provides utilities that may be useful but are not strictly necessary in this challenge. In particular, the features related to LAZ file processing were used in the Explorer challenge but are not relevant to this Master challenge; they are left here only for information. The sample code can:

  • Read KML and LAZ and test LAZ file compliance with submission requirements used in the Explorer challenge.
  • Read KML and NITF and project KML polygon XYZ corner coordinates into image coordinates using GDAL.
  • Read KML and generate a synthetic LAS file with correct geospatial tags that satisfy the submission requirements used in the Explorer challenge.
  • Generate point cloud files with the expected metadata tags.

Tools and references

This challenge seeks to encourage research in multi-view stereo algorithms applied to satellite imagery. Open source software is available that may serve as useful references for those interested in learning more about multi-view stereo and commercial satellite imagery.

  • NASA has made its Ames Stereo Pipeline (ASP) available as open source (http://ti.arc.nasa.gov/tech/asr/intelligent-robotics/ngt/stereo/). ASP provides a reference for 3D mapping from Digital Globe stereo pairs (though not generally with large numbers of images), using a combination of traditional photogrammetry and computer vision techniques.
  • Multi-View Environment (MVE) is an end-to-end pipeline that is available as open source (http://www.gris.informatik.tu-darmstadt.de/projects/multiview-environment/). MVE provides a reference for structure-from-motion, multi-view stereo, surface reconstruction, and surface texturing algorithms using frame cameras, though not generally with satellite imagery.

Open source and otherwise free software is available to help challenge participants get started.

Other previous and ongoing stereo benchmarks may also be useful as references. The metric analysis procedures used for this challenge are similar to those that have been used for these benchmarks.

General Notes

  • This match is NOT rated.

  • IMPORTANT: During the first week of the match, you will not be able to perform submissions. Meanwhile, you can work locally, with the provided images, sample files and resources. Starting 9/13, submissions are allowed and you will have another 3 weeks until the match closes.

  • Once submissions are enabled, you can make an example submission every 1 hour and a full submission every 3 hours.

  • Teaming - Contestants can team up. Captains will recruit their own teams and then fill out this Topcoder Teaming Form to formally register the team by the end of the 1st week of the competition. In the form the Captain must list the handle, first name, last name and email of each teammate, along with a prize percentage distribution. Topcoder will prepare a "teaming agreement" and send it to each team member to sign individually; all agreements must be signed by the end of week 2 of the competition in order to be a valid team in the competition.

    Teaming Milestones:

    • By end of Week 1
      • All Teams must register via the "teaming form".
      • Once registered, a team cannot change its members/makeup.
      • Cancellations can only be done by the Team Captain prior to the "teaming agreement" being signed.
    • By end of Week 2
      • All Teammates must sign the "teaming agreement".
  • Relinquish - Topcoder is allowing registered competitors or teams to "relinquish". Relinquishing means the member will compete, and we will score their solution, but they will not be eligible for a prize. This is to allow contractors to compete. Once a person or team relinquishes, we post their name to a forum thread labeled "Relinquished Competitors". Their name still appears on the leaderboard.

  • If you are eligible to win a prize, your final code will be executed on an Amazon m4.xlarge or g2.2xlarge machine, running Ubuntu or Windows, and must complete within 2 hours.

  • In this match you may use any programming language and libraries, including commercial solutions, provided Topcoder is able to run them free of any charge. You may also use open source languages and libraries, with the restrictions listed in the next section below. If your solution requires licenses, you must have these licenses and be able to legally install them in a testing VM (see the "Requirements to Win a Prize" section). Submissions will be deleted/destroyed after they are confirmed. Topcoder will not purchase licenses to run your code. Prior to submission, please make absolutely sure your submission can be run by Topcoder free of cost, and with all necessary licenses pre-installed in your solution. Topcoder is not required to contact submitters for additional instructions if the code does not run. If we are unable to run your solution due to license problems, including any requirement to download a license, your submission might be rejected. Be sure to contact us right away if you have concerns about this requirement.

  • You may use open source languages and libraries provided they are equally free for your use, use by another competitor, or use by the client.

  • If your solution includes licensed software (e.g. commercial software, open source software, etc.), you must include the full license agreements with your submission. Include your licenses in a folder labeled "Licenses". Within the same folder, include a text file labeled "README" that explains the purpose of each licensed software package as it is used in your solution.

  • The usage of any external resources, like additional images or point clouds of the focus region, is NOT allowed. Your solution should rely only on the provided satellite images (NITF or TIFF) and KML files to generate the point clouds. To be clear, the restriction applies only to the data, not to the solution, the software, or pre-trained models.

  • Use the match forum to ask general questions or report problems, but please do not post comments and questions that reveal information about the problem itself or possible solution techniques.

  • If you want to check the integrity of the provided NITF images after download, their MD5 checksums are available here.

Requirements to Win a Prize

  1. Place in TOP 5 according to final test results. See the "Final Scoring" section above.

  2. Submissions must have completeness greater than 0.3 to be eligible for prizes.

  3. Within 5 days from the announcement of the challenge winners, submit:

    • Report: a complete report outlining your final algorithm and explaining the logic behind and steps to its approach, which should contain:
      • Your Information: first and last name, Topcoder handle and email address.
      • Approach Used: a detailed description of your solution, which should include: approaches considered and ultimately chosen; advantages, disadvantages and potential improvements of your approach; which images were selected and why; processing run times; detailed comments on libraries used.
    • Local Programs Used: If any parameters were obtained from the provided images, you will also need to provide the program(s) used to generate these parameters.
    • Actual Source Code Used: See "Final Scoring" above for details.

If you place in the top 5 but fail to do any of the above, then you will not receive a prize, and it will be awarded to the contestant with the next best performance who did all of the above.

 

Definition

    
Class:Satellite3DMapping
Method:getAnswerURLs
Parameters:
Returns:String[]
Method signature:String[] getAnswerURLs()
(be sure your method is public)
    
 

Examples

0)
    
"1"
Returns: "Test case 1"

This problem statement is the exclusive and proprietary property of TopCoder, Inc. Any unauthorized use or reproduction of this information without the prior written consent of TopCoder, Inc. is strictly prohibited. (c)2020, TopCoder, Inc. All rights reserved.