
Approaching Marathon Match Task, Pt. 1



The following guest post is from our 2017 Topcoder Open blogger, gorbunov.



Once you have decided to participate in a Marathon Match, whether you are waiting for a new one to start or have joined an ongoing one, you have a task to solve.
Usually the tasks provided during MMs are NP-hard, meaning that no known algorithm can find an exact solution for inputs of the given size in any reasonable time. All you can do is approximate the solution. Sometimes these approximations can be quite good, even provably within 1-2% of the optimal solution.

Before You Start


Before you start, you may want to subscribe to the MM newsletter from Topcoder so you do not accidentally miss a competition. Finding the setting may be tricky if you are not familiar with the website interface: open the community portal, log in, hover over the username in the top-right corner, choose Settings, then Email.
Doing this step will save you from the frustration of missing a good MM.

The problem statement


Once you have read it, you can start developing your solution. Do not hurry to start coding straight away: the tasks for MMs are chosen in such a way that a good solution has to be prepared on paper first. If you want a trivial solution to start playing with the visualizer, one is often packaged with the contestant’s distribution; you can use, modify, and even submit it (if you really want to). If it is not there, write a simple one, possibly scoring 0 points, and, where feasible for the particular task, modify the visualizer to allow manual play.

Common Techniques


There are several common methods you absolutely have to be aware of if you want to do well in an MM.
The one that pops up most frequently, and is thus the most important, is probably the Simulated Annealing (SA) metaheuristic. There are plenty of such metaheuristics, or meta-algorithms, that can be used to attack NP-hard tasks; others include Genetic Algorithms and other evolutionary algorithms, Ant Colony Optimization, and Hill Climbing.
So one might ask: why is the annealing technique so good?
A short answer is that if implemented properly:

  • it has a very fast evaluation of neighbor states;

  • transition to the neighbor state is relatively fast compared to the other methods;

  • there is no need to store many different states; a single one is enough;

  • following from that, there is no need to process an entire population on each iteration, which gives an enormous speed gain.

A long answer


Let us look at how SA is implemented.
I will use C++ here instead of pseudocode to be more concrete. Note that the code you use during the match must be written on your own!
The basic procedure may look like this:


#include <random>

// start_time, timeout, min_temperature, max_temperature, prob_change_*
// and the state_change_* types are task-specific and defined elsewhere.
void simulated_annealing() {
    double skip_time = get_time() - start_time;
    double used_time = skip_time;
    std::uniform_real_distribution<double> type_dist(0, 1);
    std::mt19937 rnd;
    while (used_time < timeout) {
        // Linear cooling from max_temperature down to min_temperature.
        double temperature = (1.0 - (used_time - skip_time) / (timeout - skip_time))
                           * (max_temperature - min_temperature) + min_temperature;
        // Check the clock only once per batch of 1000 iterations.
        for (int iteration = 0; iteration < 1000; iteration++) {
            double type = type_dist(rnd);
            // prob_change_1 + prob_change_2 + prob_change_3 == 1.0
            if (type < prob_change_1) {
                state_change_1 sd1 = random_state_change_1();
                if (accept(sd1.delta, temperature)) {
                    sd1.apply();
                }
            } else if (type < prob_change_1 + prob_change_2) {
                state_change_2 sd2 = random_state_change_2();
                if (accept(sd2.delta, temperature)) {
                    sd2.apply();
                }
            } else {
                state_change_3 sd3 = random_state_change_3();
                if (accept(sd3.delta, temperature)) {
                    sd3.apply();
                }
            }
        }
        used_time = get_time() - start_time;
    }
}


As can be seen from the code, SA walks through the so-called state space, the set of all possible solutions of the task. At each step it slightly modifies the current solution and either greedily accepts the change if it is an improvement, or sometimes accepts a generally worse one. Accepting worse solutions lets the search escape local optima: solutions that look optimal among their neighbors but are not globally. As time goes on, the probability of accepting a bad change decreases, so better and better solutions become more probable.
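The accept routine used in the loop above is where the temperature actually does its work. Here is a minimal sketch of the standard Metropolis criterion; this is my illustration rather than the only possible form, and it assumes delta is the score change of the proposed move, with higher scores being better:

```cpp
#include <cmath>
#include <random>

std::mt19937 acc_rnd;
std::uniform_real_distribution<double> acc_dist(0.0, 1.0);

// Always accept an improving (or neutral) move; accept a worsening move
// with probability exp(delta / temperature), which shrinks as the
// temperature drops.
bool accept(double delta, double temperature) {
    if (delta >= 0) return true;
    return acc_dist(acc_rnd) < std::exp(delta / temperature);
}
```

At high temperatures almost everything is accepted, while near zero temperature the procedure degenerates into pure hill climbing.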
A few questions may arise after looking at this code.
First of all, why do we use some get_time function here instead of the perfectly standard std::chrono::system_clock::now()? That is a good question, and all I know is that Topcoder’s servers currently return garbage from system_clock::now(). As a side note, we must measure wall-clock time here, because that is how Topcoder’s servers measure your program’s execution time.
Here is a possible implementation of the get_time routine. It works only with the GNU toolchain (or Cygwin on Windows) on x86 processors.


#include <cstdint>

// Must match the clock rate of the machine you run on (here, 2.5 GHz).
const double ticks_per_sec = 2500000000;
inline double get_time() {
    uint32_t lo, hi;
    asm volatile ("rdtsc" : "=a" (lo), "=d" (hi));
    return (((uint64_t)hi << 32) | lo) / ticks_per_sec;
}
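If you cannot use rdtsc (for example, on a non-x86 machine during local testing), a fallback based on std::chrono::steady_clock, which unlike system_clock is guaranteed to be monotonic, is a reasonable thing to try; whether it behaves well on the judging servers is something you would have to verify yourself:

```cpp
#include <chrono>

// Monotonic wall-clock seconds since an arbitrary fixed epoch.
inline double get_time_portable() {
    using namespace std::chrono;
    return duration_cast<duration<double>>(
        steady_clock::now().time_since_epoch()).count();
}
```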


The next interesting point is the temperature. It is the defining feature of the SA heuristic and its main source of power. As can be seen from the formula, the temperature decreases over time. The min_temperature and max_temperature are selected experimentally: usually you just run a couple of tests with different values and keep those that give better results. That is easy to say, but mastering this step means mastering SA in general.
Another question you may have: why do we use a separate object to store just a state change, instead of modifying the state in place, recalculating the total score, and subtracting the previous score from it? That is purely an optimization. However, it is usually so important that skipping it may cost you too much; the author of this article has faced this himself several times.
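To illustrate, here is a hypothetical delta-only state change. All names are mine, and the scoring function (the sum of value[i] * weight[i]) is a toy; the point is that the delta is computed in O(1), while a full rescore would be O(n):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Toy state: values placed into weighted slots.
// Total score = sum of value[i] * weight[i].
struct State {
    std::vector<double> value, weight;
    double score() const {  // full O(n) rescore, used only for verification
        double s = 0;
        for (std::size_t i = 0; i < value.size(); i++)
            s += value[i] * weight[i];
        return s;
    }
};

// State change that swaps value[i] and value[j].
// The score difference is computed up front in O(1);
// apply() is called only if the move is accepted.
struct SwapChange {
    State* st;
    int i, j;
    double delta;
    SwapChange(State* s, int i_, int j_) : st(s), i(i_), j(j_) {
        delta = (st->value[j] - st->value[i]) * st->weight[i]
              + (st->value[i] - st->value[j]) * st->weight[j];
    }
    void apply() { std::swap(st->value[i], st->value[j]); }
};
```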
The question of why we need three different state-change functions is an easy one. They reduce the number of steps needed to get from a given state to a completely different one. With just one function everything would still work, but it would converge to good results much more slowly.
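As a concrete (again hypothetical) example: on a tour-ordering task, a swap of two cities and a reversal of a whole segment make complementary moves, because the reversal connects states that would otherwise take many individual swaps to reach:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Move 1: swap the cities at two positions of the tour.
void move_swap(std::vector<int>& tour, int i, int j) {
    std::swap(tour[i], tour[j]);
}

// Move 2: reverse the segment [i, j] in one step (a 2-opt-style move).
// Reaching the same state through move_swap alone would take
// (j - i + 1) / 2 separate accepted moves.
void move_reverse(std::vector<int>& tour, int i, int j) {
    std::reverse(tour.begin() + i, tour.begin() + j + 1);
}
```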

Last Notes


We have discussed the very basics of Simulated Annealing, a metaheuristic commonly used in MMs. To master it, you need practice, reading other contestants’ source code, and participating in the post-match forum discussions.