Competing Successfully in a Topcoder Marathon Match
Marathon Matches (MMs) are competitions that run for an extended period of time and ask competitors to tackle tough algorithmic challenges, both real-world and theoretical, in a format where submissions are open-ended rather than simply pass/fail. Examples include optimizing solar energy generation on the ISS, cracking WWII Enigma codes, or simply putting out a forest fire.
MMs are held 2-4 times per month and typically run for about two weeks. The day and time of MMs vary from match to match. Matches are added to the Marathon Match calendar anywhere from 1-2 weeks before they begin, with some matches posted earlier under a "tentative" tag.
Most Marathon Matches are “rated events”, which means your participation affects your Topcoder Rating. Each match will list whether it is a rated event or not.
Topcoder holds one advancement tournament (the Topcoder Open) each year, which includes qualification rounds and a championship Marathon Match, generally with a substantial prize purse. Each round of the tournament affects the ratings of all participants, so these rounds are also considered "rated events".
Types of Marathon Matches
There are three types of Marathon Matches available. Each type has the same process, but may have small differences, such as code library allowances or restrictions.
"Fun" Marathon Matches are theoretical and help hone algorithmic skills while providing an interesting challenge to solve. Client MMs are held for Topcoder's customers, and ask competitors to solve a real problem, possibly in a restated format. The Topcoder Open (TCO) qualifiers and final championship rounds can pose either theoretical or real-world problems.
The Phases of the Marathon Match
Each Marathon Match (MM) consists of the following three phases:
Registration
Submission & Preliminary Testing
Final System Testing
Competitors must register and agree to the rules of the event (and possibly answer a survey question) before competing. Once registered, you may view the problem statement for the Marathon Match event by clicking the Problem Name from within the Active Contests list. Problem statements are made available to all members; however, you may not submit a solution until you are registered for that specific event.
Submission Phase & Preliminary Testing
During the submission phase, competitors may submit code for the current event. Two types of submissions can be made during this time: Example Tests and Full Submissions.

An Example Test runs a solution against a small set of test cases (typically around 10). Information about these test cases is provided with the problem statement, and competitors receive details for each case, including raw score, runtime, any error messages generated, and any standard output.

A Full Submission tests the solution against a larger set of test cases. The only feedback following a full submission is the combined (total) score for the set; these scores appear on the leaderboard throughout the match.

Each competitor may have only a single submission (example or full) in the test queue at any given time, and the frequency of submissions is limited: intervals of 15 minutes between example tests and 2 hours between full submissions are common. A competitor may continue to make as many example or full submissions as they like for as long as the submission phase is open. However, only the last full submission will be used for final scoring.
Final System Testing & Scoring
After a competition completes, each competitor's last full submission is placed in a queue for automated system testing. During system tests, each submission is run against a larger set of test cases and issued a final score. Only this final score is used to determine the final rankings.
Scoring in a Marathon Match
There are several types of scores to consider in a Marathon Match. For each test case, a competitor's submission is assigned a raw score. This raw score is typically a relative measure of how "well" a solution performs; the measure is defined in the problem statement and may include, among other things, how close to optimal the returned result is, or the runtime of the solution. For full submissions and system tests, the set of raw scores is combined into a final score. The manner in which this is calculated is likewise defined in the problem statement. The most common methods are a simple summation of raw scores, or a summation of relative scores (that is, the sum of your_score/best_score over all test cases), but the specific details for each problem should be considered carefully.
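The two common combination methods can be sketched in a few lines. This is an illustration only; the function names and sample numbers are mine, not part of Topcoder's scoring system, and real matches define their own formulas in the problem statement.

```python
def absolute_score(raw_scores):
    """Simple summation of a competitor's raw scores across test cases."""
    return sum(raw_scores)

def relative_score(raw_scores, best_scores):
    """Sum of your_score / best_score over all test cases.

    For a maximization problem each term is at most 1.0, so a
    competitor who matches the best score on all N cases earns N.
    """
    return sum(
        yours / best if best > 0 else 0.0
        for yours, best in zip(raw_scores, best_scores)
    )

# Hypothetical example: three test cases, with the best scores
# achieved by any competitor on each case shown alongside.
mine = [80.0, 100.0, 50.0]
best = [100.0, 100.0, 100.0]
print(absolute_score(mine))                  # 230.0
print(round(relative_score(mine, best), 6))  # 0.8 + 1.0 + 0.5 = 2.3
```

Note how the relative method rewards beating other competitors on each individual case, while the absolute method rewards raw totals; which behavior you should optimize for depends entirely on the match's stated formula.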
The details of how the score is calculated should shape the approach a competitor takes to a problem. It is important to note that the type of solution that obtains the best score may not always be what might otherwise be considered the optimal approach. For example, on a problem that uses runtime in the scoring formula, it may be better to have a solution that runs very fast, even at the cost of some accuracy.