January 11, 2021 Single Round Match 797 Editorials

Thanks to misof for writing the editorial.


Suppose we order all the items according to their cost.

Let X be the most expensive item of all. Clearly, we cannot get X for free: if we pair it with any other item, the cost of the pair will still be the cost of X.

Now that we know that we have to pay for X, we can do it immediately. Which item should we pair with X? It should be quite obvious that we want the second most expensive item, Y. (For a formal proof, consider any other solution and think what happens if you alter it by swapping Y and the item that was paired with X in that solution.)

Thus, now we know that we want to buy the two most expensive items together. Once we do that, we can repeat the thought process and determine that we want to buy the next two most expensive items together. And so on.

Sample implementation:

public int buy(int[] prices) {
    Arrays.sort(prices);  // order the items by cost, as described above
    int answer = 0;
    for (int i = prices.length - 1; i >= 0; i -= 2) answer += prices[i];
    return answer;
}
The constraints were low enough to allow all kinds of implementations: e.g., a solution that doesn’t sort the array but instead always finds the maximum by iterating over all values that haven’t been processed yet. Another way to solve the task was to iterate over all permutations of the given array — i.e., different orders in which to purchase the items. The optimal one will surely be among them.
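The permutation brute force can be sketched as follows. This is a hypothetical stand-alone checker (class and method names are made up), assuming the pairing rule described above: items at consecutive positions of the purchase order are bought together, and a pair costs the more expensive of its two items.

```java
import java.util.Arrays;

class BruteForceCheck {
    // Try every purchase order; items at positions (0,1), (2,3), ... are
    // bought together, and a pair costs the more expensive of its two items.
    static int bruteForce(int[] prices) {
        int[] a = prices.clone();
        Arrays.sort(a);  // start from the lexicographically smallest order
        int best = Integer.MAX_VALUE;
        do {
            int cost = 0;
            for (int i = 0; i < a.length; i += 2)
                cost += (i + 1 < a.length) ? Math.max(a[i], a[i + 1]) : a[i];
            best = Math.min(best, cost);
        } while (nextPermutation(a));
        return best;
    }

    // Standard in-place next-permutation; returns false after the last one.
    static boolean nextPermutation(int[] a) {
        int i = a.length - 2;
        while (i >= 0 && a[i] >= a[i + 1]) i--;
        if (i < 0) return false;
        int j = a.length - 1;
        while (a[j] <= a[i]) j--;
        int t = a[i]; a[i] = a[j]; a[j] = t;
        for (int l = i + 1, r = a.length - 1; l < r; l++, r--) {
            t = a[l]; a[l] = a[r]; a[r] = t;
        }
        return true;
    }
}
```

Every pairing of items (plus one leftover item if the count is odd) arises from some purchase order, so the minimum over all permutations is indeed the optimum.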


Imagine each O, N, C atom as a tiny sphere with 2, 3, or 4 “connecting interfaces”.

In the beginning, all of these interfaces are free. We can easily compute their total count by iterating over the atom types and adding 2 for each O, 3 for each N, and 4 for each C.

The number of hydrogens we need to add to the molecule is equal to the number of interfaces that will remain free once we add all the bonds that are specified in the input.

We could now track which atom has how many remaining free interfaces, but it turns out that we don’t even have to do that. We just need to observe that each bond connects two of these interfaces, so it decreases the number of free interfaces by 2. This gives us a really simple implementation:

public int countHydrogens(String atoms, int[] X, int[] Y) {
    int answer = 0;
    for (char c : atoms.toCharArray()) {
        if (c == 'O') answer += 2;
        if (c == 'N') answer += 3;
        if (c == 'C') answer += 4;
    }
    // each bond (X[i], Y[i]) consumes two free interfaces
    answer -= 2 * X.length;
    return answer;
}


Over the course of a solution, the helicopter sometimes flies up and sometimes down. Let’s think about that a bit. Does it make sense to fly down and later fly up again? No, it doesn’t: we could skip both steps and still have a valid solution; the helicopter would just traverse a part of its journey at a higher altitude.

Thus, in an optimal solution the helicopter has two phases: during the first it only flies up, never down; during the second it is the reverse.

Now let’s think about flying up. Some solutions might do it “on demand”, but if we knew the final altitude, it clearly does not hurt to fly to it immediately. And the same goes for descending: we clearly don’t lose anything if we only fly down as our very last actions.

And what is the maximum altitude we’ll reach in an optimal solution? It’s either max(start altitude, goal altitude), or it’s the altitude of one of the obstacles we have to pass along the way.

Thus, we only have O(RC) possible maximal altitudes for the trip. And once we fix the flight altitude, we know the cost for flying up and down, and we can compute the optimal cost of horizontal travel using a simple BFS. This gives us a solution in O(R^2 C^2) which is succinct and fast enough.
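The full problem statement isn’t reproduced here, so the sketch below makes several assumptions: the grid stores obstacle heights, the helicopter starts and ends at the height of its start and goal cells, each horizontal step and each vertical unit of movement costs 1, and a cell may only be crossed at an altitude at least its height. The class name and signature are invented for illustration.

```java
import java.util.ArrayDeque;
import java.util.Arrays;

class HelicopterSketch {
    // For each candidate cruising altitude (every cell height is a candidate),
    // pay to climb, BFS horizontally among cells not taller than the altitude,
    // then pay to descend; keep the cheapest total.
    static int minTripCost(int[][] h, int sr, int sc, int gr, int gc) {
        int R = h.length, C = h[0].length;
        int best = Integer.MAX_VALUE;
        for (int[] row : h) for (int alt : row) {
            if (alt < h[sr][sc] || alt < h[gr][gc]) continue;
            int[][] dist = new int[R][C];
            for (int[] d : dist) Arrays.fill(d, -1);
            ArrayDeque<int[]> queue = new ArrayDeque<>();
            dist[sr][sc] = 0;
            queue.add(new int[]{sr, sc});
            int[] dr = {1, -1, 0, 0}, dc = {0, 0, 1, -1};
            while (!queue.isEmpty()) {
                int[] cur = queue.poll();
                for (int k = 0; k < 4; k++) {
                    int nr = cur[0] + dr[k], nc = cur[1] + dc[k];
                    if (nr < 0 || nr >= R || nc < 0 || nc >= C) continue;
                    if (h[nr][nc] > alt || dist[nr][nc] != -1) continue;
                    dist[nr][nc] = dist[cur[0]][cur[1]] + 1;
                    queue.add(new int[]{nr, nc});
                }
            }
            if (dist[gr][gc] != -1)
                best = Math.min(best,
                        (alt - h[sr][sc]) + (alt - h[gr][gc]) + dist[gr][gc]);
        }
        return best;
    }
}
```

One BFS per candidate altitude gives the advertised O(R^2 C^2) total running time.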


If we are looking for a specific sequence S of length N, at any moment of the process the relevant information about the past is the length of the longest prefix of S that currently appears at the end of the sequence of rolls we already made. For example, if we are waiting for “12134” and the last few rolls were “…003121”, we are in state 3, as the longest usable prefix of S is “121”.

For any such process with O(N) states we can find its expected length in O(N^3) by solving a system of linear equations: for each state we have one unknown that represents the expected time from that state to the finish, and for each state we have one linear equation that describes what happens in the next step.

Continuing the above example, if we are in state 3, in the next roll we may roll a 1 and go to state 1, we may roll a 2 and go to state 2, we may roll a 3 and go to state 4, or we may roll any other value and go to state 0. Thus, the expected time to finish from state 3 is one for the current roll, plus the weighted average of expected times for the states we may reach after the roll.

Applying this approach blindly would be too slow, but I still mention it because it gives us insight that the answer is determined by the patterns in the sequence S we are waiting for – more precisely, by its KMP failure function. (Or, if you prefer, by the shape of the smallest DFA that looks for an occurrence of S in a word.) And it turns out that there is a reasonably simple closed form solution.

For an unbiased N-sided die, the expected time until S appears is the sum of all N^k, where 1 <= k <= |S| and the first k characters of S match the last k characters of S. Below we’ll give a proof of this statement, and from the proof it will also be obvious how to generalize it to biased dice.

For a proof, consider that we’ll open a casino. As we roll the die and record the outcomes, we will allow people to come and bet on those die rolls. Before each roll we will get one new player. The player will start by betting $1 and guessing that the next die roll will be S[0]. If they are right, they will get $N back (as is fair for an event with probability 1/N), if they are wrong, they get nothing back and stop playing. Each player that won continues playing, betting everything they have on the next roll and guessing that their next element of S will come up.

The expected profit from each bet is zero, and therefore at any moment the total expected profit of our casino must be zero.

Let’s look at the moment when S actually occurs for the first time. Let X be the random variable giving the number of rolls after which it happens. The expected income of our casino until then is E[X]: the income is each player’s initial dollar, and we have one for each roll we made. On the other hand, the money that is currently paid out is uniquely determined by S: the player that started when S started just got their full win of N^|S| dollars, and we may also have some other active players. For example, if S = “1213121”, we have three active players: one that started 7 rolls ago and saw the entire S, one that started 3 rolls ago and saw “121”, and one that started one roll ago and only saw “1” so far. The total amount of money currently in their hands is N^7 + N^3 + N.

And as the expected profit of our casino must be zero, E[X] must be equal to this amount of money.

The generalization to biased dice is now straightforward. The only thing that changes in the above argument is the payoff from each correct bet. If a player is betting that an event with probability p occurs, we’ll multiply their money by (1/p) if it does. This maintains the property that the bet is fair and that the casino’s expected profit is zero.

For the constraints used, an O(N^2) implementation of this formula was sufficient.
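An implementation of the formula might look as follows (die faces are encoded 0..n-1; the class and method names are made up for illustration):

```java
class DiceFormula {
    // Expected rolls of a fair n-sided die until the pattern s first appears:
    // sum n^k over every k such that the length-k prefix of s equals its suffix.
    static double expectedRolls(int[] s, int n) {
        double total = 0;
        for (int k = 1; k <= s.length; k++)
            if (isBorder(s, k)) total += Math.pow(n, k);
        return total;
    }

    // Biased generalization: probs[x] = probability of rolling face x.
    // A surviving player who has seen the length-k prefix holds
    // 1 / (probs[s[0]] * ... * probs[s[k-1]]) dollars.
    static double expectedRollsBiased(int[] s, double[] probs) {
        double total = 0;
        for (int k = 1; k <= s.length; k++) {
            if (!isBorder(s, k)) continue;
            double payout = 1;
            for (int i = 0; i < k; i++) payout /= probs[s[i]];
            total += payout;
        }
        return total;
    }

    // Do the first k characters of s match its last k characters?
    static boolean isBorder(int[] s, int k) {
        for (int i = 0; i < k; i++)
            if (s[i] != s[s.length - k + i]) return false;
        return true;
    }
}
```

As a sanity check, for a fair coin (n = 2) and the pattern “heads, heads”, this gives 2 + 4 = 6 expected tosses, the classic answer; for “heads, tails” it gives 4.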


This problem has two components. First, we have to understand how to look at a game board and determine the expected outcome. Only once we know that can we think about how to construct a game with the given expected outcome.

First of all, the knight isn’t really adding anything special in the first part of the solution. Once we look at a game board, we can mark all the cells that are actually reachable by the knight. What we’ll get is a graph, and we are asking about the expected length of a closed random walk starting in the given vertex of the graph. It’s worth noting that the graph has two useful properties: it is connected (reachability is transitive) and we can say that it is undirected (knight moves are reversible).

It turns out that in all such graphs (not just the ones for knights) the expected time until we re-enter a vertex can be calculated using a surprisingly simple formula: S/D, where S is the sum of all vertex degrees (S=2*E is twice the number of edges) and D is the degree of this particular vertex. I will show a proof of this statement below.

Once we know this result, we can proceed to the second part of the problem: for which P/Q does a game board exist, and how can we construct it? Clearly, in our graph D must be in [1,8], as 8 is the maximum degree in a knight-jump graph, and E must be at least D. If we don’t want to implement the casework by hand, we can simply iterate over all viable combinations of D and E and check whether (2E)/D = P/Q holds for any of them. If there is no such combination, there is no solution. If there is one, we can take the smallest such pair. All we now need to do is construct a knight reachability graph that has E undirected edges and whose starting vertex has degree D.
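The search for a viable (D, E) pair can be done directly: for each D from 1 to 8, the equation (2E)/D = P/Q forces E = DP/(2Q), so we only need a divisibility check and E >= D. A sketch (names are illustrative; null signals that no pair exists):

```java
class DegreeEdgeSearch {
    // Smallest (D, E) with (2E)/D == P/Q, D in [1,8], E >= D; null if none.
    // For a fixed ratio E grows with D, so the first hit is the smallest pair.
    static int[] findDegreeEdges(long p, long q) {
        for (int d = 1; d <= 8; d++) {
            if ((d * p) % (2 * q) != 0) continue;  // E must be an integer
            long e = (d * p) / (2 * q);
            if (e >= d) return new int[]{d, (int) e};
        }
        return null;
    }
}
```

Note that since E >= D, the ratio (2E)/D is always at least 2, so for example P/Q = 1 correctly yields no solution.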

There are many possible constructions, some much more annoying to implement than others. Thinking about one that isn’t painful pays off.

One fairly convenient way to construct such a graph looks as follows:

  • I reserve a 7×7 area in the top left corner for the start. I place the start at (2,2) and its D neighbors around it, starting at (1,4). With the possible exception of (1,4), the other neighbors of the starting cell will have degree 1 in the final graph.
  • The next few cells that get added if needed are, in order, (0,6), (1,8), (3,9).
  • I now construct almost the entire rest of the graph randomly. As long as I need at least 8 more edges I repeat the following process:
    • Pick a random blocked cell in rows 4-29, columns 7-29.
    • Verify that it’s adjacent to some already empty cells.
    • Make it empty and adjust the number of edges we still need.
    • Each successfully added cell decreases the number of edges needed by something from the range [1,8].
  • As soon as I need fewer than 8 more edges, I add these as a path going from (1,8) to the right: (0,10), (1,12), and so on as needed.

One thing remains: the proof of the result about the expected return time.

Our random walk is a Markov process. As we have an undirected connected graph, its stationary distribution is unique. We can easily verify that in this distribution the weight of each vertex is proportional to its degree. To do that, imagine that in each vertex of the graph we have as many people as its degree. When we blow the whistle, the people in each vertex will leave it, each using a different edge. Clearly, once each person traverses their edge, we will get the exact same distribution of people: for each vertex, exactly one person will enter it from each edge.

Let’s now pick a specific vertex with degree D, and let S be the sum of all degrees. The proportion of time spent by an infinite random walk in this vertex is D/S, and therefore the average time between visits to this vertex must be S/D, q.e.d.
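The S/D formula is also easy to sanity-check numerically. The toy sketch below (not part of the intended solution) computes expected return times for an arbitrary undirected graph given as adjacency lists, by fixed-point iteration on hitting times:

```java
class ReturnTimeCheck {
    // h[u] = expected number of steps of a random walk from u until it first
    // reaches v; iterate h[u] = 1 + average of h over u's neighbors (h[v] = 0).
    static double expectedReturnTime(int[][] adj, int v) {
        int n = adj.length;
        double[] h = new double[n];
        for (int iter = 0; iter < 100_000; iter++) {
            for (int u = 0; u < n; u++) {
                if (u == v) continue;
                double sum = 0;
                for (int w : adj[u]) sum += h[w];
                h[u] = 1 + sum / adj[u].length;
            }
        }
        // one step out of v, then the hitting time back to v
        double sum = 0;
        for (int w : adj[v]) sum += h[w];
        return 1 + sum / adj[v].length;
    }
}
```

On the path graph A–B–C the degrees are 1, 2, 1 and S = 4, so the formula predicts a return time of 4/1 = 4 for an endpoint and 4/2 = 2 for the middle vertex, and the iteration agrees.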

