Challenge Overview
Problem Statement
Prize Distribution

| Prize | USD |
|---|---|
| 1st | $20,000 |
| 2nd | $16,000 |
| 3rd | $11,000 |
| 4th | $7,000 |
| 5th | $5,000 |
| Week 2 Threshold Incentive | $6,000 |
| Week 3 Threshold Incentive | $6,000 |
| 3 Open Source Incentives | $15,000 |
| Total Prizes | $86,000 |

Early Participation Incentives

End of Week 2 = $6,000 Threshold Incentive Fund (max $300/person). Any competitor with a score above 600,000 will receive a share of the $6,000 fund, dispersed evenly among all competitors who hit the threshold. Any prizes not awarded at the end of week 2 will roll over into the week 3 prize budget. To determine these prizes, a snapshot of the leaderboard will be taken on 9/20, exactly 14*24 hours after the launch of the contest.

End of Week 3 = $6,000 Threshold Incentive Fund (no max/person). Any competitor with a score above 700,000 will receive a share of the $6,000 fund, dispersed evenly among all competitors who hit the threshold. Note that this prize fund may increase if all prizes were not awarded in the week 2 incentive. To determine these prizes, a snapshot of the leaderboard will be taken on 9/27, exactly 21*24 hours after the launch of the contest.

3 Open Source Incentives - $5,000 per winning submission

The top competitors who open source their solutions will be given a spot incentive of $5,000 each. The offer will be made to the top two winners initially. If a competitor opts not to open source their solution, the offer will go to the next top-ranking winner in order, and so on until the two awards have been given. In addition to the top two Open Source Incentives, one more Open Source Incentive will be awarded to the most unique solution, determined at the client's discretion. The code will need to undergo a code review for documentation and completeness prior to award. Code must be released on GitHub and will be publicized at the Conference Workshop at the end of November.

Overview

The Multi-View Stereo 3D Mapping Challenge is being offered by the Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence (ODNI). The Master Challenge is the second of two challenges in the Multi-View Stereo 3D Mapping Challenge, which together combine for a total prize purse of over US $100,000. The first challenge in the series was the Explorer Challenge, which completed in early August; after the completion of the Master Challenge, the series will conclude with a workshop hosted by IARPA in Washington DC. Be sure to review the official challenge minisite for details, timeline, and information. Solvers will have the chance to have their work viewed by stakeholders from industry, government, and academic communities in attendance. Winners will be invited to present their solutions and will receive a paid trip to the workshop.

Numerous commercial satellites, including newly emerging CubeSats, cover large areas with high revisit rates and deliver high-quality imagery in near real-time to customers. Although the entire Earth has been, and continues to be, imaged multiple times, fully automated data analysis remains limited. The Multi-View Stereo 3D Mapping Challenge is the first concerted effort to invite experts from across government, academia, industry and solver communities to derive accurate 3D point clouds from multi-view satellite imagery, advancing technology in this area with potentially enormous humanitarian impact.
IARPA is conducting this challenge to invite a very broad community to participate in a convenient, efficient and non-contractual way. IARPA's use of a crowdsourcing approach to stimulate breakthroughs in science and technology also supports the White House's Strategy for American Innovation, as well as government transparency and efficiency. The goals and objectives of the Challenge are to:
Objective

Participants are asked to develop a Multi-View Stereo (MVS) solution to generate dense 3D point clouds using the provided high-resolution satellite 2D images. There are regions of interest defined for this contest, each covered by multiple images. Contestants are asked to use 2D images, taken from different angles, to derive a "point cloud" of 3D coordinates that describes as precisely as possible the position and height of all features in the regions.

Each submitted point cloud will be registered to ground truth using X, Y, and Z translational search and then compared to determine accuracy and completeness metric scores. That is, the challenge will automatically superimpose the contestant's 3D point cloud on the known correct ("ground truth") 3D point cloud. The matching algorithm will shift the contestant's point cloud left and right, and up and down, until the average distance between input and ground truth points is minimized. This process is called "registration". The scoring algorithm will then compute scores by measuring any remaining Z-axis differences. Accuracy is defined as the root mean square Z error, measured in meters, compared to ground truth. Completeness is defined as the fraction of points with Z error less than 1 meter compared to ground truth.

Input Files
Output File

3D point clouds in text format. You are required to predict point clouds for 3 different test areas. The point clouds must be submitted as plain text files with the .txt extension, in the format specified below. Optionally a file may be zipped, in which case it must have the .zip extension. The submitted files must be no larger than 500 MB and must contain no more than 50 million points; these limits apply to each file individually, not combined. Point coordinates must be in the Universal Transverse Mercator (UTM) projection with horizontal and vertical datum World Geodetic System 1984 (WGS84). Competitors are encouraged to include an intensity attribute for each point, derived from the source imagery. This value will be used by stakeholders to better visualize and understand your solution. However, intensity will not be evaluated, and it will neither contribute to nor detract from your score or rank.

File specification

Points are listed in the file one per line, in x y z intensity format, that is 4 real numbers separated by a single space, where x and y are the point's UTM easting and northing coordinates in meters, z is its height in meters, and intensity is the intensity attribute derived from the source imagery, as described above.
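Submitted coordinates must already be in UTM with the WGS84 datum, while many photogrammetry pipelines produce geographic longitude/latitude. The sketch below is illustrative only and not part of the challenge materials: it assumes the open source proj4j library (org.locationtech.proj4j) is added as a dependency, and it infers EPSG:32721 (WGS84 / UTM zone 21 south) from the statement below that every location falls into the 21H grid zone.

```java
import org.locationtech.proj4j.CRSFactory;
import org.locationtech.proj4j.CoordinateReferenceSystem;
import org.locationtech.proj4j.CoordinateTransform;
import org.locationtech.proj4j.CoordinateTransformFactory;
import org.locationtech.proj4j.ProjCoordinate;

public class UtmConversionSketch {
    public static void main(String[] args) {
        CRSFactory crsFactory = new CRSFactory();
        // Geographic longitude/latitude on the WGS84 datum (x = lon, y = lat).
        CoordinateReferenceSystem wgs84 = crsFactory.createFromName("EPSG:4326");
        // WGS84 / UTM zone 21 south -- an assumption inferred from the 21H grid zone.
        CoordinateReferenceSystem utm21s = crsFactory.createFromName("EPSG:32721");
        CoordinateTransform toUtm =
                new CoordinateTransformFactory().createTransform(wgs84, utm21s);

        // Illustrative point roughly in the challenge region (lon, lat, height in meters).
        ProjCoordinate geo = new ProjCoordinate(-58.59, -34.41, 20.1);
        ProjCoordinate utm = new ProjCoordinate();
        toUtm.transform(geo, utm);

        // utm.x = easting, utm.y = northing (meters); the height is carried through unchanged.
        System.out.printf(java.util.Locale.US, "%.3f %.3f %.3f%n", utm.x, utm.y, geo.z);
    }
}
```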
For the sake of brevity, the zone and band information of the UTM data need not be present in the output. Every location within this challenge falls into the 21H UTM grid zone, so it would be redundant to include with each point. As an example, a valid submission file may start like this:

```
354056.3 6182055.5 20.1 0.5
354056.3 6182055.6 20.2 0.5
```

To save bandwidth it is recommended, but not required, to output a maximum of 3 digits to the right of the decimal point in the x, y, z values. This gives millimeter precision, which should be enough for this contest.
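To make the file specification concrete, here is a minimal, unofficial sketch that writes points already expressed in UTM coordinates in the required x y z intensity format, with 3 decimal digits. The Point class and the output file name are illustrative, not part of the provided materials.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.Locale;

public class PointCloudWriter {

    /** One output point: UTM easting/northing and height in meters, plus an intensity value. */
    public static class Point {
        public final double x, y, z, intensity;
        public Point(double x, double y, double z, double intensity) {
            this.x = x; this.y = y; this.z = z; this.intensity = intensity;
        }
    }

    /** Writes the points one per line as "x y z intensity" with 3 decimal digits. */
    public static void write(String path, List<Point> points) throws IOException {
        try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(Paths.get(path)))) {
            for (Point p : points) {
                // Locale.US guarantees '.' as the decimal separator; '\n' keeps lines uniform.
                out.printf(Locale.US, "%.3f %.3f %.3f %.3f\n", p.x, p.y, p.z, p.intensity);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Example file name only; name your files however your submission workflow requires.
        write("MasterProvisional1.txt", List.of(
                new Point(354056.3, 6182055.5, 20.1, 0.5),
                new Point(354056.3, 6182055.6, 20.2, 0.5)));
    }
}
```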
Functions

This match uses the result submission style, i.e. you will run your solution locally, using the provided files as input, and produce 3 TXT or ZIP files with your answer. In order for your solution to be evaluated by Topcoder's marathon system, you must implement a class named Satellite3DMapping, which implements a single function: getAnswerURLs(). Your function will return an array of Strings corresponding to the URLs of your submission files. The URLs should point to the files that your algorithm generated for areas MasterProvisional1, MasterProvisional2 and MasterProvisional3, in this order.

You may upload your files to a cloud hosting service such as Dropbox or Google Drive, which can provide a direct link to each file. To create a direct sharing link in Dropbox, right click on the uploaded file and select share. You should be able to copy a link to this specific file which ends with the tag "?dl=0". This URL will point directly to your file if you change this tag to "?dl=1". You can then use this link in your getAnswerURLs() function. If you use Google Drive to share the link, then please use the following format:

"https://drive.google.com/uc?export=download&id=" + id

You can use any other way to share your result files, but make sure the link you provide opens the file stream directly, and is available to anyone with the link (not only the file owner), to allow the automated tester to download and evaluate it.

An example of the code you have to submit, using Java:

```java
public class Satellite3DMapping {
    public String[] getAnswerURLs() {
        // Replace the returned Strings with your submission files' URLs
        return new String[] {
            "https://drive.google.com/uc?export=download&id=XYZ1",
            "https://drive.google.com/uc?export=download&id=XYZ2",
            "https://drive.google.com/uc?export=download&id=XYZ3"
        };
    }
}
```

Keep in mind that the complete code that generates these results will be verified at the end of the contest if you achieve a score in the top 10, as described later in the "Requirements to Win a Prize" section, i.e. participants will be required to provide fully automated executable software to allow for independent verification of software performance and the metric quality of the output data.

Scoring

A full submission will be processed by the Topcoder Marathon test system, which will download, validate and evaluate your submission files. For each of the 3 files the following process is then executed.

Validity checks. Any malformed or inaccessible file, or one that exceeds the maximum file size (500 MB) or the maximum number of points (50 million), will receive a zero score.

Registration. If your submission is valid, the test system will first register your point cloud to ground truth, as described in the "Objective" section.

Quality checks. It will next measure the distance (error) along the Z-axis between your returned point cloud and the ground truth, for each ground truth point. During this process, points that (after translating them with the registration offset) lie outside the bounds defined in the KML are ignored. This process produces two measures: accuracy (the root mean square Z error, in meters) and completeness (the fraction of points with Z error less than 1 meter), as defined in the "Objective" section.
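For intuition only, here is an unofficial sketch of the two per-area measures, plus the combined score formula quoted in the next paragraph. The exact algorithm, including registration, is defined by the visualizer source code; in particular, the use of the absolute Z difference for the completeness threshold is an assumption here.

```java
import java.util.List;

public class MetricsSketch {

    /** zErrors: one Z error (submitted Z minus ground truth Z, after registration) per ground truth point. */
    public static double accuracy(List<Double> zErrors) {
        // Accuracy: root mean square Z error in meters (lower is better).
        double sumSq = 0.0;
        for (double e : zErrors) {
            sumSq += e * e;
        }
        return Math.sqrt(sumSq / zErrors.size());
    }

    public static double completeness(List<Double> zErrors) {
        // Completeness: fraction of points whose |Z error| is below 1 meter (higher is better).
        long good = zErrors.stream().filter(e -> Math.abs(e) < 1.0).count();
        return (double) good / zErrors.size();
    }

    /**
     * Combined score from the Scoring section, using overall (3-area average) accuracy and
     * completeness and the best values achieved by any qualifying submission.
     */
    public static double score(double yourAccuracy, double bestAccuracy,
                               double yourCompleteness, double bestCompleteness) {
        return 500000.0 * (bestAccuracy / yourAccuracy + yourCompleteness / bestCompleteness);
    }

    public static void main(String[] args) {
        List<Double> zErrors = List.of(0.2, -0.4, 1.5, 0.1);
        System.out.printf("accuracy (RMS Z error) = %.3f m%n", accuracy(zErrors));
        System.out.printf("completeness = %.3f%n", completeness(zErrors));
    }
}
```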
Finally, if your submissions achieve the completeness threshold for all 3 test areas, an overall accuracy and an overall completeness measure are calculated by taking the average of the 3 corresponding numbers. These two values are combined into a single score by comparing them to the best values achieved by all submissions that have also met the minimal completeness requirement:

Score = 500,000 * ([Best Accuracy Among All Submissions] / [Your Accuracy] + [Your Completeness] / [Best Completeness Among All Submissions])

For the exact algorithm of the registration and scoring, see the visualizer source code.

Example submissions can be used to verify that your chosen approach to uploading submissions works. In this case you have to return 3 Strings from your getAnswerURLs() method, as in the case of provisional testing, but only one of them will be checked. The tester will verify that the first String of the returned array contains a valid URL and that its content is accessible, i.e. that the tester is able to download the file from the returned URL. If your file is valid, it will be evaluated, and completeness and accuracy values will be available in the test results. The evaluation is based on the target KML and corresponding ground truth data that you can find in the /training directory of the downloaded data package.

Note that during the first week of the match you will not be able to make submissions. Meanwhile, you can work locally with the provided images, sample files and resources. Starting 9/13, submissions are allowed and you will have another 3 weeks until the match closes. Also note that the registration and scoring algorithm may be slightly modified during the first week of the match. If this happens, the offline tester will be updated so that you always have access to the exact measurement process. You should expect only minor changes, if any. If your algorithm does well on the initial metric, it will probably also do well on the modified one.

Final Scoring

The top 10 competitors according to the provisional scores will be given access to an AWS VM instance within 5 days from the end of the submission phase. Submissions must have completeness greater than 0.3 for all point clouds in order to advance to this final scoring phase. If your score qualifies you as a top 10 competitor, you will need to load your code onto your assigned VM, along with two basic scripts (one of them the run.sh referenced below) for running it.
Your solution will be subjected to three tests:

First, your solution will be validated, i.e. we will check that it produces the same output files as your last submission, using the same input files used in this contest.

Second, your solution will be tested against new KML files, i.e. other regions contained within the same area covered by the provided NITF images. These areas will have a size similar to the provided KMLs used for provisional testing, and the scene content will be similar.

Third, the resulting output from the steps above (check the parameters of run.sh above) will be validated and scored. The final rankings will be based on this score alone.

Competitors who fail to provide their solution as expected will receive a zero score in this final scoring phase, and will not be eligible to win prizes.

Additional Resources

Visualizer

A visualizer is available here that you can use to test your solution locally. It displays your generated point cloud, the expected ground truth, and the difference between the two. It also calculates completeness and accuracy scores, so it serves as an offline tester. The visualizer code also contains additional utilities.
It uses reference libraries and provides utilities that may be useful but are not strictly necessary in this challenge. In particular, the features related to LAZ file processing were used in the Explorer challenge but are not relevant in this Master challenge; they are left here only for information.
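To build intuition about the registration step described in the Objective and Scoring sections, here is a deliberately simplified, unofficial sketch of a brute-force X/Y/Z translational search. It bins the submitted cloud on an XY grid, keeps a single Z per cell, and minimizes the mean absolute Z difference; the real algorithm in the visualizer may differ in all of these details, and every name here is illustrative.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RegistrationSketch {

    public static class Point {
        public final double x, y, z;
        public Point(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    private static final double CELL = 1.0; // XY grid cell size in meters (illustrative)

    private static long key(double x, double y) {
        long cx = (long) Math.floor(x / CELL);
        long cy = (long) Math.floor(y / CELL);
        return (cx << 32) ^ (cy & 0xffffffffL); // pack the 2D cell index into one long
    }

    /** Index the submitted cloud by XY cell, keeping one Z per cell for simplicity. */
    private static Map<Long, Double> index(List<Point> cloud) {
        Map<Long, Double> grid = new HashMap<>();
        for (Point p : cloud) {
            grid.put(key(p.x, p.y), p.z);
        }
        return grid;
    }

    /** Mean absolute Z difference between ground truth and the submission shifted by (dx, dy, dz). */
    private static double meanZError(Map<Long, Double> submitted, List<Point> truth,
                                     double dx, double dy, double dz) {
        double sum = 0.0;
        int matched = 0;
        for (Point t : truth) {
            Double z = submitted.get(key(t.x - dx, t.y - dy)); // undo the shift to look up
            if (z != null) {
                sum += Math.abs(z + dz - t.z);
                matched++;
            }
        }
        return matched == 0 ? Double.MAX_VALUE : sum / matched;
    }

    /** Exhaustive search over a small range of offsets; returns the best {dx, dy, dz}. */
    public static double[] register(List<Point> submission, List<Point> truth,
                                    double range, double step) {
        Map<Long, Double> grid = index(submission);
        double[] best = {0.0, 0.0, 0.0};
        double bestErr = Double.MAX_VALUE;
        for (double dx = -range; dx <= range; dx += step) {
            for (double dy = -range; dy <= range; dy += step) {
                for (double dz = -range; dz <= range; dz += step) {
                    double err = meanZError(grid, truth, dx, dy, dz);
                    if (err < bestErr) {
                        bestErr = err;
                        best = new double[] {dx, dy, dz};
                    }
                }
            }
        }
        return best;
    }
}
```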
Tools and references

This challenge seeks to encourage research in multi-view stereo algorithms applied to satellite imagery. Open source software is available that may serve as a useful reference for those interested in learning more about multi-view stereo and commercial satellite imagery.
Open source and otherwise free software is available to help challenge participants get started.
Other previous and ongoing stereo benchmarks may also be useful as references. The metric analysis procedures used for this challenge are similar to those that have been used for these benchmarks.
General Notes
Requirements to Win a Prize
If you place in the top 5 but fail to do any of the above, then you will not receive a prize, and it will be awarded to the contestant with the next best performance who did all of the above.
Definition

Class: Satellite3DMapping
Method: getAnswerURLs
Parameters: (none)
Returns: String[]
Method signature: String[] getAnswerURLs()
This problem statement is the exclusive and proprietary property of TopCoder, Inc. Any unauthorized use or reproduction of this information without the prior written consent of TopCoder, Inc. is strictly prohibited. (c)2020, TopCoder, Inc. All rights reserved.