Component Development Finals Summary

by the Development Review Board
TopCoder Members
Friday, May 5, 2006

Congratulations to sindu, the overall Development Competition winner! See below for a summary of how the Development Competition went...

The TCO Component Competition started with 5 weeks of Online Qualification Rounds, with several components to develop each week. Participants earned points based on their placement in each component, with a cumulative score calculated from each competitor's four highest finishes. First place was taken by visualage with a perfect score of 40 points (10 points for each of four first places), though he actually took seven first places and one third place. Congratulations on such a success!

As the qualification rounds closed, the leading eight competitors were: visualage, sindu, traugust, nhzp339, oodinary, colau, cnettel and biotrail. But nhzp339 couldn't attend the finals, so his place went to zjq.

The Championship Round consisted of 3 online portions. During each, the developers had to submit a component -- either in Java or C# -- with similar requirements. One big difference from regular component competitions: after the reviews ended, the developers didn't receive their results, so appeals had to wait until the onsite rounds.

The first online component was the "Document Indexer Persistence," which implements the persistence layer for the Document Indexer component, saving and loading its data to XML files or to a database. The Document Indexer component maintains indexes over multiple documents and can group them into collections. Document Indexer Persistence was designed with scalability in mind: XML files can be split when they exceed the maximum size permitted by the file system, and the database storage can be de-normalized for faster access. Those requirements made the component harder than it seemed at first sight, so the developers had a tough time with it! For this component, 6 developers chose to implement it in Java and 2 in C#.
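To make the "split oversized XML files" idea concrete, here is a minimal sketch of how a serialized index could be broken into chunks small enough for the file system and re-joined on load. This is an illustration only -- the class and method names are assumptions, not the actual Document Indexer Persistence API.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch: split a serialized XML document into chunks no
 * larger than a given size (each chunk would be written to its own
 * numbered file), then re-join the chunks when loading.
 */
public class XmlChunker {

    /** Splits serialized XML into pieces of at most maxSize characters. */
    public static List<String> split(String serializedXml, int maxSize) {
        List<String> chunks = new ArrayList<String>();
        for (int start = 0; start < serializedXml.length(); start += maxSize) {
            int end = Math.min(start + maxSize, serializedXml.length());
            chunks.add(serializedXml.substring(start, end));
        }
        return chunks;
    }

    /** Re-joins the chunks loaded from the numbered files. */
    public static String join(List<String> chunks) {
        StringBuilder sb = new StringBuilder();
        for (String chunk : chunks) {
            sb.append(chunk);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String xml = "<index><doc id='1'/><doc id='2'/></index>";
        List<String> parts = XmlChunker.split(xml, 10);
        // The round trip must reproduce the original serialization exactly.
        System.out.println(parts.size() + " chunks, round-trip ok: "
                + XmlChunker.join(parts).equals(xml));
    }
}
```

The interesting design work in the real component is everything this sketch glosses over: naming and ordering the chunk files, and keeping loads consistent if a write is interrupted partway through.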

The second component was the "Testing Framework," which wraps common tasks involved in testing against servers, like initializing and cleaning a database, or starting up and shutting down a web server. This component will surely be very useful for testing components that use those technologies.
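The shape of such a framework is a facade that a test can call before and after it runs. The sketch below shows that shape with the resource work stubbed out; the class and method names are illustrative assumptions, not the component's real API.

```java
/**
 * Hedged sketch of a testing-framework facade: one object that sets up
 * and tears down the external resources a server-side test needs.
 * Real implementations would run SQL scripts and launch a web server;
 * here those effects are stubbed as boolean flags.
 */
public class TestEnvironment {
    private boolean databaseReady;
    private boolean serverRunning;

    /** Before a test: create the schema and seed data (stubbed here). */
    public void initDatabase() { databaseReady = true; }

    /** After a test: wipe the test data so runs stay independent. */
    public void cleanDatabase() { databaseReady = false; }

    /** Start the web server the tests will hit (stubbed here). */
    public void startServer() { serverRunning = true; }

    /** Shut the server back down. */
    public void stopServer() { serverRunning = false; }

    /** True only while both resources are available to the test. */
    public boolean isReady() { return databaseReady && serverRunning; }

    public static void main(String[] args) {
        TestEnvironment env = new TestEnvironment();
        env.initDatabase();
        env.startServer();
        System.out.println("ready: " + env.isReady());
        env.stopServer();
        env.cleanDatabase();
        System.out.println("ready: " + env.isReady());
    }
}
```

The value of wrapping these steps is symmetry: every setup call has a matching teardown, so a suite of server tests can't leak state from one test into the next.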

Even though this component has many classes and methods, most of them aren't very hard to implement. For this second component, 5 developers chose C# and 3 chose Java. All the developers did a great job, earning very high scores.

The third component was the "Bread Crumb Trail Tag" in Java and the "Bread Crumb Trail Control" in C#. These components are intended to be used in web pages to show the user where he is, for example "Forums > Round Tables > General Discussion." This component involves much more than what you see on the page: the hard part is discovering where the page is located in the site map, and identifying the best path from the root to that page. The user specifies the site map in an XML file, using regular expression patterns to match URLs.

This time, 5 developers chose Java and 3 chose C#. Even though this component has many classes and methods and involves slightly more complex algorithms, like finding shortest paths, plus the additional complexity of testing Web components, most developers did a great job.
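The core of the breadcrumb idea -- match the current URL to a site-map node via regex, then walk up to the root -- can be sketched as below. This is a simplified illustration under stated assumptions: nodes are matched in insertion order and the trail is the unique parent chain, whereas the real component, per the description above, must pick the best path when several exist.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;

/**
 * Simplified breadcrumb sketch (not the actual component API):
 * each site-map node has a URL regex and a parent; the trail for a URL
 * is the chain of ancestors of the first node whose pattern matches.
 */
public class BreadCrumbSketch {
    /** node name -> URL pattern, in insertion order */
    private final Map<String, Pattern> patterns = new LinkedHashMap<String, Pattern>();
    /** node name -> parent node name (the root maps to null) */
    private final Map<String, String> parents = new LinkedHashMap<String, String>();

    public void addNode(String name, String urlRegex, String parent) {
        patterns.put(name, Pattern.compile(urlRegex));
        parents.put(name, parent);
    }

    /** Returns "Root > ... > Page" for the first matching node, or null. */
    public String trailFor(String url) {
        for (Map.Entry<String, Pattern> e : patterns.entrySet()) {
            if (e.getValue().matcher(url).matches()) {
                List<String> path = new ArrayList<String>();
                for (String node = e.getKey(); node != null; node = parents.get(node)) {
                    path.add(node);          // walk up toward the root
                }
                Collections.reverse(path);   // display root-first
                return String.join(" > ", path);
            }
        }
        return null; // URL not covered by the site map
    }

    public static void main(String[] args) {
        BreadCrumbSketch map = new BreadCrumbSketch();
        map.addNode("Forums", "/forums/?", null);
        map.addNode("Round Tables", "/forums/roundtables/?", "Forums");
        map.addNode("General Discussion", "/forums/roundtables/general.*", "Round Tables");
        System.out.println(map.trailFor("/forums/roundtables/general?id=5"));
        // prints: Forums > Round Tables > General Discussion
    }
}
```

When a page is reachable through more than one branch of the site map, first-match-wins is no longer enough, which is where the shortest-path logic mentioned above comes in.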

After those components were developed, the onsite portion started in Las Vegas, where each day the contestants had 2 hours to appeal one of the components. To be in tune with Las Vegas, before the appeals each contestant had to wager points on each of the components.

In the first round, there were 128 (nice number!) appeals, of which 62 were successful. The developers started by looking at their scorecards and submitting a few appeals, but as the round progressed more and more appeals came in, and it became impossible for the reviewers to keep up and respond in real time.

The appeals had a broad range of reasons: some arose because the reviewer was simply wrong in his correction (hey, we're human!), some stemmed from the reviewers not quite understanding the developer's ideas, and some were more subjective issues, like whether a piece of code should be refactored or not. There were also appeals that weren't about whether the reviewer's corrections were valid, but rather about whether the scoring was accurate. And finally, a few appeals were just a desperate effort to squeeze out a few more points.

Before the appeals, colau held first place, closely followed by sindu and visualage, but visualage was very convincing with his appeals and gained 5.66 points in the appeals phase, moving into first place. sindu raised his score by 2.56 points, bringing him to second place, while colau could only raise his by 0.85, leaving him third. traugust was 6th before appeals and, after raising his score by 3.68 points, moved up to 5th.

The second round of appeals was for the Testing Framework component, which had much higher scores, so fewer appeals were expected. Indeed, fewer were filed, but there were still 66, of which only 26 were successful.

Probably the funniest "appeal" of this round came from a developer who was marked down for a method that was not used anywhere; his appeal simply said "Arrrgghhh. Silly me. This is not an appeal. Just complaining myself so I feel better. Thank you :-)."

sindu was first before the appeals and, after 5 failed appeals, remained on top. biotrail was second and won 4 of his 9 appeals, raising his score by 0.68 points, but still not enough to catch sindu.

visualage moved from 4th place to 3rd after gaining 3.05 points in appeals, and cnettel moved from 6th to 4th thanks to the almost 4 points he gained.

The third round of appeals was for the Bread Crumb Trail. There were 99 appeals, and 56 succeeded. This component involved some complex algorithms, which could be why the accuracy reviewer gave some developers lower scores than the other two reviewers did. His tests were probably able to detect subtle issues that were not so easy to find just by reading the code.

The top 4 developers were cnettel, sindu, colau and biotrail, in that order, separated by just 1.5 points from 1st to 4th. cnettel raised his score by 2.5 points, but the other three raised theirs by more, so cnettel dropped to 4th place, leaving sindu in first.