
SpaceNet's Topcoder Challenge Yields Interesting Insights
May 22, 2019

Topcoder often applies our crowdsourcing methodology to complex data science challenges. And within the sphere of data science, some of our most intriguing challenges focus on geospatial computer vision.

One such success story is SpaceNet’s Off-Nadir Building Footprint Extraction Challenge. It used Topcoder’s expertise to crowdsource computer vision algorithms in a push to advance mapping automation. Read on for a short value-prop summary of the Topcoder SpaceNet Challenge.

The Challenge Defined

Crowdsourcing the Algorithm

SpaceNet is on a mission to accelerate geospatial machine learning. To discover whether off-nadir imagery can help automate mapping, SpaceNet partnered with Topcoder to crowdsource an off-nadir building detection challenge.
'Off-nadir' imagery is satellite imagery taken at an angle, rather than from directly above the location. The competing algorithms attempted to extract map-ready building footprints from highly off-nadir imagery.
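The geometry behind the term can be made concrete: under a flat-earth approximation, the off-nadir angle follows from the satellite's altitude and its horizontal offset from the imaged location. A minimal sketch, with purely illustrative numbers that are not taken from the challenge dataset:

```python
import math

# The off-nadir angle is the angle between the satellite's nadir
# (straight-down) line and its line of sight to the imaged location.
# Flat-earth approximation; numbers below are hypothetical.

def off_nadir_angle_deg(altitude_km, ground_offset_km):
    """Angle from straight down to the target, in degrees."""
    return math.degrees(math.atan2(ground_offset_km, altitude_km))

# When the horizontal offset equals the altitude, the look angle is 45 degrees.
print(round(off_nadir_angle_deg(600.0, 600.0)))  # -> 45
```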

Mapping Off-Nadir Images

Images acquired after a disaster are frequently more off-nadir than standard mapping images, because the satellite is rarely directly above the affected area. The ability to work with these off-nadir images and accurately extract building footprints is vital: it would enable better maps in urgent situations.

The Dataset

The dataset covered 665 square kilometers of downtown Atlanta with 27 WorldView images captured from 7 to 54 degrees off-nadir, and approximately 126,000 labeled building footprints. Competitors developed algorithms that generate polygons correctly outlining the boundary of each identifiable building.
The polygons produced by the top five algorithms were compared against the actual building footprints, and the results were scored with the SpaceNet metric.
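The SpaceNet metric counts a proposed footprint as a true positive when its intersection-over-union (IoU) with a ground-truth footprint reaches a threshold (0.5), then reports the F1 score over all proposals. A minimal sketch of that idea, with axis-aligned boxes standing in for real footprint polygons; the helper names are ours, not SpaceNet's code:

```python
# Simplified sketch of the SpaceNet scoring idea: IoU-matched F1.
# Real footprints are arbitrary polygons; axis-aligned boxes
# (x1, y1, x2, y2) stand in to keep the example self-contained.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def spacenet_f1(proposals, truths, thresh=0.5):
    """Greedy one-to-one matching at IoU >= thresh, then F1."""
    unmatched = list(truths)
    tp = 0
    for p in proposals:
        best = max(unmatched, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= thresh:
            tp += 1
            unmatched.remove(best)
    fp = len(proposals) - tp
    fn = len(truths) - tp
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Example: two ground-truth buildings, two proposals, one good match.
truths = [(0, 0, 10, 10), (20, 20, 30, 30)]
props = [(1, 1, 10, 10), (50, 50, 60, 60)]
print(spacenet_f1(props, truths))  # one TP, one FP, one FN -> 0.5
```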

How Did the Algorithms Measure Up?

As a way to measure success, SpaceNet assessed the results against five key components, summarized below. If you want more detailed information directly from SpaceNet, read "The good and the bad in the SpaceNet Off-Nadir Building Footprint Extraction Challenge" by Nick Weir, Data Scientist at CosmiQ Works and SpaceNet 4 Challenge Director at SpaceNet LLC.

Look Angle and Direction

First, in the very off-nadir imagery, the algorithms identified approximately the same number of buildings, albeit with different methodologies. Problems arose with south-facing images, where shadows obscured many building features, while the north-facing images had brighter sunlight reflections on the buildings. This shows that look angle isn't all that matters: look direction is also a big factor.

Nadir vs Off-Nadir

All in all, the algorithms differed only slightly in their ability to identify buildings in the nadir and off-nadir images. More than 80% of the buildings were found by either all five competitors or none of them, leaving roughly 20% of the buildings on which the algorithms disagreed. The differences were greatest in the very off-nadir range, yet only 30% of the buildings found by at least one competitor were not found by all of them.

False Positives

Although there was more variability in the false positives than in the correct predictions, the five algorithms still produced very similar incorrect predictions. Two competitors used gradient boosting machines to filter false positives out of their predictions, which likely gave them the upper hand in precision.
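That filtering step is a post-processing pass over the candidate footprints. A minimal, dependency-free sketch of the idea: the actual competitors trained gradient boosting machines on per-prediction features, whereas this stand-in uses a single hand-tuned rule, and the feature names are hypothetical:

```python
# Sketch of post-hoc false-positive filtering. The competitors trained
# gradient boosting machines on per-prediction features; this stand-in
# scores predictions with a hand-tuned rule so the example stays
# dependency-free. Feature names are illustrative, not from the challenge.

def fp_score(pred):
    """Toy heuristic: tiny, low-confidence footprints are likely spurious.
    A real solution would feed such features to a trained GBM instead."""
    return (pred["area_m2"] < 20) + (pred["confidence"] < 0.3)

def filter_predictions(preds, max_score=0):
    """Keep only predictions whose false-positive score is low enough."""
    return [p for p in preds if fp_score(p) <= max_score]

preds = [
    {"area_m2": 150, "confidence": 0.9},  # plausible building
    {"area_m2": 8,   "confidence": 0.2},  # likely noise: small and uncertain
    {"area_m2": 60,  "confidence": 0.7},  # plausible building
]
print(len(filter_predictions(preds)))  # -> 2
```

Dropping likely false positives this way trades a little recall for precision, which is how the two competitors gained their edge on that component of the score.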

Building Size

All the algorithms struggled with smaller buildings. The best one identified only 20% of buildings smaller than 40 square meters, while at the other end of the range it correctly identified 90% of buildings larger than 105 square meters.

Occlusion

The best algorithm also performed well at detecting buildings partially blocked by trees. There was only a small drop in detection, indicating the algorithm worked around occlusions to correctly identify buildings.

Key Takeaways

If you don't have time to read the detailed post from SpaceNet, here are the key takeaways, which showcase the efficacy of crowdsourcing for building sophisticated computer vision algorithms.
The top five algorithms emerged from the challenge with excellent results and relatively few false-positive predictions. They generated very similar predictions, even though they approached the problem with different neural net architectures and inputs. Object size, look angle, look direction, and occlusion are all significant factors to consider when developing a geospatial deep learning product, and overall the algorithms produced valuable insights for model deployability.
Do you need expertise in computer vision algorithms or help with a complex data science project? Crowdsourcing with Topcoder connects the best talent in the field to your ideas, delivering a timely, high-quality solution with no in-house technical expertise required.


Andy LaMora

Global Director, Data Science, Analytics & AI


