April 13, 2018 Reliving Marathon Match 1
Marathon Match 100 begins in 6 days. We have already begun the celebrations. Just in case you missed it, we have some amazing prizes and goodies planned for you plus some good reads on our blog so make sure you are following along!
Looking back, many of you must have wondered what the first Marathon Match was like. So, let’s go back to the past! We will be revisiting the Marathon Match 1 editorials and leaderboard to relive the match. You might also want to take a look at the forums and practice solving the MM 1 problem in the Practice Rooms.
The match was held from May 10, 2006 to May 17, 2006.
Match Editorials by ctrucza
Looking at the “RandomWalking” problem statement, it was clear that one could count the minimum number of distinct nodes visited at any point by fingerprinting nodes (by their number of incoming and outgoing edges), counting distinct successors and predecessors, and tracking adjacent repetitions of a fingerprint. Knowing only a minimum number of nodes was of little help, however, since distinct nodes can share the same fingerprint.
The general idea behind my (ctrucza) approach was to ask: What is the probability of all the nodes being visited?
First I asked: what is the probability of two nodes having the same fingerprint? This reduces to: what is the probability p(ei,eo) of a node having ei incoming and eo outgoing edges? That, in turn, can be answered by first calculating the probability p(ei) of a node having ei incoming edges and the probability p(eo) of a node having eo outgoing edges; then p(ei,eo) = p(ei)*p(eo).
Calculating the probability that a node has ei incoming edges is a simple combinatorial problem: p(ei) = C(n-1, ei) = fact(n-1)/(fact(ei)*fact(n-1-ei)). The same holds for p(eo).
So we can calculate p(ei), p(eo), and p(ei,eo) = p(ei)*p(eo), and from here the probability of two nodes both having the fingerprint (ei,eo) is p(ei,eo)^2.
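The fingerprint probabilities can be sketched in Python. One assumption to flag: the graph-generation procedure (mentioned later in the write-up) creates each edge with some probability q, so p(ei) below is modeled as a binomial probability rather than a bare binomial coefficient; the function names are mine, not the original submission’s.

```python
from math import comb

def p_in(ei, n, q):
    # Probability that a node has exactly ei incoming edges, assuming
    # each of the other n-1 nodes links to it independently with prob q.
    return comb(n - 1, ei) * q**ei * (1 - q)**(n - 1 - ei)

def p_same_fingerprint(n, q):
    # Probability that two independent nodes carry the same fingerprint
    # (ei, eo): sum over all fingerprints of p(ei,eo)^2, where
    # p(ei,eo) = p(ei) * p(eo) as in the write-up.
    pin = [p_in(e, n, q) for e in range(n)]
    return sum((pin[ei] * pin[eo]) ** 2
               for ei in range(n) for eo in range(n))
```

The outgoing-edge distribution is identical by symmetry, which is why `p_in` is reused for both coordinates of the fingerprint.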
With this result in hand, and knowing the minimum number of visited nodes (m), we can calculate the probability of having visited (m+1), (m+2), …, (n) nodes.
Next, we must decide how many steps to make in the next walk.
For this I made a simple simulation of how the probabilities computed above will change if I make 1, 2, … steps.
The reasoning is pretty simple.
Let’s consider a situation where I know that I have visited m nodes, and I calculated the probabilities of having visited m+1, m+2, …, n nodes:
One can consider each row as a possible scenario.
In any scenario in which you make a step, two things can happen:
- you end up on a new node
- you end up on a node which was already visited
The probability of visiting a new node is pNEW(m) = (N-m)/(N-1) and the probability of visiting an already-visited node is pOLD(m) = (m-1)/(N-1), where m denotes the number of visited nodes in the scenario.
Applying this computation to the table we get:
| Original # | Scenario | New # | Probability |
|---|---|---|---|
Note that the resulting table has n-m+1 rows, which makes this computation scalable.
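The one-step update of the scenario table can be sketched as follows; the dict-based representation and the function name are my own, but the branching probabilities are exactly the pNEW/pOLD formulas above.

```python
def step_distribution(probs, N):
    # Advance the scenario table by one step. probs maps a visited-node
    # count m to its probability. Each scenario m branches into:
    #   - a new node,     with pNEW(m) = (N - m) / (N - 1)
    #   - a visited node, with pOLD(m) = (m - 1) / (N - 1)
    out = {}
    for m, p in probs.items():
        p_new = (N - m) / (N - 1)
        p_old = (m - 1) / (N - 1)
        if p_new > 0:
            out[m + 1] = out.get(m + 1, 0.0) + p * p_new
        out[m] = out.get(m, 0.0) + p * p_old
    return out
```

Since each scenario only splits into “m” and “m+1”, the table keeps at most n-m+1 rows after any number of steps, matching the scalability note above.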
One can calculate the most likely score as follows:
For the scenarios where m < n, one calculates the score for the scenario, s(m) = 100/(2^(n-m)), and multiplies it by the scenario’s probability p(m).
For the scenario where m = n, one takes 100 and multiplies it by its probability.
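Those two cases can be folded into one expected-score computation over the scenario table (a sketch; the function name is mine):

```python
def expected_score(probs, n):
    # Most likely score over the scenario table: each scenario m
    # contributes s(m) = 100 / 2**(n - m) weighted by its probability,
    # and m == n contributes the full 100.
    return sum(p * (100.0 if m == n else 100.0 / 2 ** (n - m))
               for m, p in probs.items())
```

Note that the m = n branch is really the same formula with a zero exponent, 100/2^0 = 100, so the two cases agree at the boundary.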
Now we have to account for the penalties for visiting more nodes than necessary.
Let’s denote the current step number by k.
While we are stepping through the scenarios, we take a note of the probability that all nodes were visited in that step. This will be the probability of having visited n nodes p(n).
Then we can always have a table which contains the probabilities of having visited the whole graph at each step:
| Step # | Probability of completion |
|---|---|
To find out the penalty for visiting more nodes, one first calculates the number of extra nodes visited, then uses the score calculation formula to calculate the penalty:
| Step | Probability of completion | Extra steps | Penalty |
|---|---|---|---|
Then we weight the penalties with their probabilities, and we get the most likely penalty for the current scenario.
We combine this with the scores calculated above, and we get the most likely score for the current scenario.
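The penalty weighting might look like the sketch below. Be warned that this is an assumption on my part: the write-up does not restate the exact penalty formula, so the sketch reuses the shape of the score formula (100 minus a value halved per extra step); the function name and that formula choice are hypothetical.

```python
def expected_penalty(completion_probs, k):
    # Weight each completion scenario's penalty by its probability.
    # completion_probs maps a step j to the probability that the walk
    # first covered the whole graph at step j; steps beyond j are extra.
    # ASSUMPTION: penalty(extra) = 100 - 100 / 2**extra, mirroring the
    # score formula's shape (the original formula is not given here).
    total = 0.0
    for j, p in completion_probs.items():
        extra = max(0, k - j)
        total += p * (100.0 - 100.0 / 2 ** extra)
    return total
```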
When to stop
We have set up a simulator which uses the current probability distributions to calculate the score for the next step. We can use this simulator to calculate the most likely score for the second step, then for the third, and so on.
We just let the simulator run until the most likely score is higher than a wisely chosen threshold. Then we query the simulator to see how many steps it took, and return that from the walk function.
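Putting the pieces together, the stopping rule can be sketched as a loop that simulates steps until the projected score clears the threshold. The step and score logic mirror the formulas above and are inlined here so the sketch is self-contained; the names are mine.

```python
def plan_steps(probs, n, threshold, max_steps=10**6):
    # Simulate future steps until the most likely score exceeds the
    # threshold; return the number of steps simulated (what walk()
    # would request). probs maps m (visited-node count, out of n
    # total nodes) to its probability.
    def step(d):
        out = {}
        for m, p in d.items():
            p_new, p_old = (n - m) / (n - 1), (m - 1) / (n - 1)
            if p_new > 0:
                out[m + 1] = out.get(m + 1, 0.0) + p * p_new
            out[m] = out.get(m, 0.0) + p * p_old
        return out
    def score(d):
        return sum(p * 100.0 / 2 ** (n - m) for m, p in d.items())
    steps = 0
    while score(probs) < threshold and steps < max_steps:
        probs = step(probs)
        steps += 1
    return steps
```

A lower threshold returns sooner, which is consistent with the timeout behavior described below: a high threshold can demand far more simulated steps before the projected score clears it.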
For each call to the walk function I repeated the procedure above: calculating the probability distribution (based on the new information received), setting up a simulation, stepping through it, and observing the score.
In my submission I also considered the probability mentioned in the graph-generation procedure description. I calculated this p from the number of edges observed (p=AverageEdgeCount/NumberOfNodes), and used it to adjust the probability of pOLD and pNEW above.
I also suspect several errors can be found in my implementation: the calculation of the most likely score does not correctly account for the cases where one has not visited all the nodes. The calculation of p (from the graph-generation procedure description) is also fishy: I calculate the average edge count over every node I receive in the walk function, not only over the uniquely visited ones.
Choosing the threshold of the score where one should stop was interesting. Common sense dictated that one should stop as soon as it went above a score of 90. When I tried it, it gave timeouts on most of the test cases. I then tried 80, which placed me at the top of the list. I was content, until rado42 displaced me.
I was looking for ways to improve my score, and one trial led me to a threshold of 70, which put me in first place.
After running several test cases, I observed that the highest score was achieved with the threshold of 50. I still have problems understanding why it is so.
| Handle | Provisional Rank | Provisional Score | Final Score | Language |
|---|---|---|---|---|