TCO Preview: Component Design and Development Competition
What is it?
Every year, the TopCoder Open (TCO) includes two component tracks allowing members to compete on top of the usual
design and development challenges that appear on the site weekly. While not the most popular or spectator-friendly tournament track,
the component challenge always proves to be a tough, five-week competition, and this year hundreds of members registered, with over
100 submitting. Over the five weeks, a large amount of quality work was generated, with both new members and component regulars
raising their work intensity to compete for the final 8 design and 8 development positions in Las Vegas.
For those new to the format, the TCO component challenge works as follows:
During a five-week period, members can submit to the usual TopCoder Software component challenges to earn points in the TCO.
Unlike the Digital Run, points depend solely on final placement for everyone who scores over 75%: winning your component earns 10 points,
second place earns 7, third is worth 5, fourth earns 4, and fifth or below earns nothing. While
these point values may seem strange, if you play around with them you may notice some complex orderings, such as:
- Three 2nds (21 points) beat two wins (20 points)
- A 1st and a 3rd (15) beats two 2nds (14)
- A 2nd and a 4th (11) beats a single win (10), which ties two 3rds (10), which in turn beat a 3rd and a 4th (9)
Additionally, only your best four component placings are counted (i.e. maximum score 40), and ties are broken by component review score.
This leads to a number of possibilities for the lower placements; however, to gain a spot in the top 5 or 6 of either the design or development track,
there is no doubt that you need mostly firsts - something that gets increasingly difficult each year as more and more people submit.
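The scoring rules above are easy to sketch in code. Below is a hypothetical Python helper (the function name is my own; the point values and best-four rule are as described above):

```python
# Points awarded per final placement, as described above.
# Fifth place or below (or a review score of 75% or less) earns nothing.
PLACEMENT_POINTS = {1: 10, 2: 7, 3: 5, 4: 4}

def tco_score(placements):
    """TCO score: the sum of a member's best four placement point values."""
    points = sorted((PLACEMENT_POINTS.get(p, 0) for p in placements), reverse=True)
    return sum(points[:4])

# The orderings noted above:
print(tco_score([2, 2, 2]))  # 21 - three 2nds beat two wins (20)
print(tco_score([1, 3]))     # 15 - beats two 2nds (14)
print(tco_score([2, 4]))     # 11 - beats a single win (10)
```

Note that a fifth win adds nothing once four are counted, which is why the maximum score is 40 and why ties at the top must be broken by review score.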
The Digital Run effect
This was the first year that the TCO component challenge was run at the same time as a stage of the
Digital Run (DR), and it was interesting to see how
the two concurrent competitions affected each other's results. For one, any member who had placed in the top five of a DR stage
had already won a trip to the finals as a spectator. The Open's best-four scoring rule also meant that
the submit-a-lot strategy required to gain points in the DR would be less effective at securing a TCO spot -- while concentrating
on fewer, higher-quality submissions for the TCO could cost some places in the DR. Overall, the DR didn't seem
to disrupt the usual TCO results, with the possible exception that the more active submitters did not stop submitting even after they
had locked up a guaranteed place in the finals. In past years, early TCO leaders might have been more likely to back off in the later rounds and allow other competitors to notch some critical first-place finishes, but there was no relenting this year.
Design challenge (results):
  # registrants:    109        # who submitted:   52
  # of components:   52        # of submissions: 157
This year's designers were provided with a large number of components from which they could gain points - with the usual mixture of mainly
Java components, but with enough .NET ones to give C# designers a chance of making the finals. The early weeks consisted of a number
of generic catalog components (Logging Wrapper, Collection Wrappers, Object Diff, ...) as well as components to help organise
TC's new assembly challenges (Phase / Project / Team Management). Many early projects proved popular, as designers took the opportunity
to get some early points on the board. Part way through, the relatively simple
Null Streams component proved one of the most popular
design challenges ever with 12 submissions -- all of them passing review. Then in the final week, a number of Survey components gave
members a last chance to climb up the leaderboard towards the top 8.
- Three designers - argolite, kyky and Pops - all managed to reach the 40-point mark, with flying2hk also getting 30 from 3 challenges.
- argolite submitted in 10 components, with an impressive 7 wins and 3 seconds.
- Not to be outdone, AleaActaEst made 17 submissions, with 14 passing review, 3 wins and 6 seconds.
- kyky managed to score the second ever perfect score for design, with 100% for Survey Model.
Development challenge (results):
  # registrants:    243        # who submitted:   99
  # of components:   82        # of submissions: 204
The development side has certainly grown in popularity, and the increase in competitors was matched by an increase in available components
-- during the TCO there was an average of around 16 components per week (including reposts) which gave many chances to get on the scoreboard.
Included among them were a large number of the Time Tracker custom components, to be used on what looks like a large
user- and project-tracking website. In addition to these were some of the regular siren components that seem to draw developers towards them,
such as Object Diff (19 submissions) and
Text Normalization (24 submissions). The final weeks proved to be more of
the same, though developers could now enjoy quizzing the designers, who were nearing the end of their own busy five-week schedule.
- Four developers - biotrail, magicpig, PE and zsudraco - all obtained 40 points, and liuliquan earned 30 points from 3 challenges.
- zsudraco was the most active submitter in the contest, with 8 components, including 5 wins.
- Three of biotrail's wins came from the difficult Time Tracker projects, netting $5600 in prizes and 1825 Digital Run points.
- magicpig joined an exclusive club -- the small handful of developers who have reached red -- after scoring 99.6 for Text Normalization.
Due to travel problems, many of the best-performing competitors -- especially among the developers -- won't be able to make it to
Las Vegas to participate in the finals. Nevertheless, there's enough talent and experience among the finalists that I expect an interesting match.
Congratulations to the finalists - 8 for design and 6 for development.
The finals experience, however, begins long before anyone sets foot in Las Vegas -- over the next five weeks, each finalist will
be working on three traditionally tough components, with a week's break between each round. All finalists
work on the same components (choosing Java or .NET versions), which are then judged by two panels of hand-picked reviewers, 3 for design
and 3 for development. The contestants, however, do not find out their score immediately, and so the suspense builds until the trip to Las Vegas.
Once onsite, the finalists participate in what is called the "Appeals Phase" for each of their three submissions, one per day.
First, before each phase, the contestants get to wager part of a total of 100 'points' on how they think they'll do for the component --
this comes into play once final results are known. Scores are then revealed, along with the comments from reviewers detailing why
certain scores were given. Each finalist has two hours to enter 'appeal' text for sections of the scorecard if they feel they have not
been awarded the appropriate marks. If successful, their scores may be increased, and it's possible for the scores and leaders to change
many times during this phase (score changes on the order of 15% are not unheard of).
Each contestant's score is revealed after reviewers have responded to all the appeals -- which may be a long time after the appeals phase
ends, as eight contestants can generate a lot of appeals in two hours! Then, the contestant's initial wager is divided by their place --
for instance, if you wagered 40 and came 1st, you keep all 40 points; if you came fourth, you get 40 / 4 = 10. After all the wagers have
been scaled down by placement, the totals are accumulated, and whoever has the most will be the TCO winner!
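The wager arithmetic can be sketched as follows (hypothetical Python helpers of my own naming, assuming the rules as described above):

```python
def wager_points(wager, place):
    """Points earned from one component: the wager divided by final placement."""
    return wager / place

def final_total(rounds):
    """Total finals score from (wager, place) pairs, one pair per component.

    All wagers are drawn from a single shared pool of 100 points.
    """
    if sum(wager for wager, _ in rounds) > 100:
        raise ValueError("wagers may not exceed the 100-point pool")
    return sum(wager_points(wager, place) for wager, place in rounds)

# The example above: wager 40 and come 1st -> 40 points; come 4th -> 10.
print(wager_points(40, 1))  # 40.0
print(wager_points(40, 4))  # 10.0
```

One consequence of this design is that a confident finalist who wagers heavily on a component and wins keeps the full stake, while the same wager on a middling placement is sharply diluted, so wagering strategy matters almost as much as the review scores themselves.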