2020 TCO Algorithm Finals Editorials
December 5, 2020

BFSOrder

In this problem we are given the sequence in which a breadth-first search processed the vertices of a graph, and we need to count all the ways in which the graph can be reconstructed.

One possible way of counting works as follows:

  • We want to divide the given sequence into layers that correspond to different distances from the starting vertex. For each valid way to divide the sequence into layers we will count the corresponding graphs, and as these sets of graphs are clearly disjoint, the answer is simply the sum of all these counts.
  • Once we know the layers, we can only have two types of edges: within the same layer, and from a layer to the immediately next layer.
  • Edges of the first type are easy: we can place them wherever we want. This gives us 2^(x(x-1)/2) options for a layer with x vertices.
  • Ways to select edges of the second type can be counted in polynomial time using dynamic programming. For each vertex v in the earlier layer we need to select a contiguous increasing sequence of vertices in the following layer that will be discovered from v. Each of these vertices can then have additional edges to vertices from the earlier layer that are processed after v.

The above solution can be reasonably easily implemented in O(n^4) time and O(n^3) memory, but with some care it’s possible to get the complexities down to O(n^3) time and O(n^2) memory.

But we can do better than that. We just need to make the observation that we don’t actually need the full information about layers when doing the counting. Instead, the states of our algorithm will simply be pairs of indices: the beginning and the end of the queue.

For each pair (l,r) we can ask the following question: Suppose that

  • the vertices in bfsOrder[:r] were already discovered by the BFS, 
  • we have already chosen all the edges between the discovered vertices, and for this choice of edges
  • precisely the vertices in bfsOrder[l:r] are still in the queue.

In how many ways can we choose the edges in the rest of the graph?

The vertex bfsOrder[l] will be processed next. For some k >= 0, the vertices bfsOrder[r:r+k] will be its newly discovered neighbors. Obviously, we should only consider those k for which this sequence is increasing. Once we fix k, we can count the ways in which we can add extra edges. We can do whatever we want between pairs of newly added vertices, and we can connect each of them to an arbitrary subset of the other vertices that are still in the queue, i.e., bfsOrder[l+1:r]. This contributes a factor of 2^(k(k-1)/2) * 2^(k(r-l-1)), after which we are in the state (l+1, r+k).

The above algorithm runs in O(n^3) if we, for each pair (l,r), iterate over all valid k. However, we can easily speed it up by precomputing the maximum k for each r, and then using prefix sums.
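For concreteness, here is a minimal sketch of the O(n^3) version of this DP (the prefix-sum speedup is omitted). The modulus and the convention that the BFS enqueues undiscovered neighbors in increasing label order are assumptions, not taken from the original problem statement:

```python
from functools import lru_cache
import sys

MOD = 10**9 + 7  # assumption: the real modulus is in the problem statement

def count_graphs(bfs_order):
    """Count the graphs whose BFS (assumed to enqueue undiscovered
    neighbors in increasing label order) processes the vertices
    exactly in the order bfs_order."""
    n = len(bfs_order)
    sys.setrecursionlimit(10 * n + 100)

    # run[r] = largest k such that bfs_order[r:r+k] is strictly increasing
    run = [0] * (n + 1)
    for r in range(n - 1, -1, -1):
        run[r] = 1
        if r + 1 < n and bfs_order[r] < bfs_order[r + 1]:
            run[r] = run[r + 1] + 1

    @lru_cache(maxsize=None)
    def ways(l, r):
        # bfs_order[:r] is discovered, bfs_order[l:r] is still in the queue
        if l == r:                      # queue empty: the BFS has ended
            return 1 if r == n else 0   # valid only if everything was seen
        total = 0
        for k in range(run[r] + 1):     # bfs_order[l] discovers k new vertices
            # 2^(k choose 2) edges among the new vertices, and each new
            # vertex may also go to any subset of the r-l-1 older vertices
            # that are still in the queue
            mult = pow(2, k * (k - 1) // 2 + k * (r - l - 1), MOD)
            total = (total + mult * ways(l + 1, r + k)) % MOD
        return total

    return ways(0, 1)
```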

FivePileSpecial

Most game theory problems share a common feature: the best way to solve them in a competition is to brute-force small cases, analyse the outputs and guess/discover a pattern and then use the pattern to actually solve the task.

This task was somewhat different.

First, if you do your brute force, you might be surprised or suspicious, because you won’t see any deviations from what you would see in regular NIM: the losing positions all seem to have the xor of pile sizes equal to zero. Is that really the case, or are we overlooking something?
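If you want to run that experiment yourself, here is a minimal brute-force sketch. The concrete piles and allowed moves of FivePileSpecial are not reproduced here; instead, the move sets are a parameter, so everything below is an illustration rather than the actual task:

```python
from functools import lru_cache
from itertools import product

def check_xor_conjecture(move_sets, num_piles, max_size):
    """Brute-force the game in which one move removes the same positive
    number of tokens from every pile in one chosen set of move_sets
    (each an odd-sized tuple of pile indices), and verify that the
    losing positions are exactly those with xor of pile sizes zero."""

    @lru_cache(maxsize=None)
    def is_winning(pos):
        for sel in move_sets:
            for x in range(1, min(pos[i] for i in sel) + 1):
                nxt = list(pos)
                for i in sel:
                    nxt[i] -= x
                if not is_winning(tuple(nxt)):
                    return True       # there is a move to a losing position
        return False

    for pos in product(range(max_size + 1), repeat=num_piles):
        xor = 0
        for p in pos:
            xor ^= p
        assert is_winning(pos) == (xor != 0), pos

# example: three piles, single-pile moves plus one three-pile move
check_xor_conjecture([(0,), (1,), (2,), (0, 1, 2)], num_piles=3, max_size=6)
```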

Now comes the time to think. Or, if you are really good, maybe the time to think came before you gave in to your urge to implement a brute force for every game theory problem. Who knows?

It turns out that the problem statement is a bit deceiving in that it pretends that the allowed moves are special in some way. They aren’t. There is a much more general property at play here, and the problem statement just aims to camouflage it a little. The key property is that in each move we remove the same number of tokens from an odd number of piles. We can add more piles and more allowed moves as we like; as long as the number of piles in each allowed move remains odd, the analysis shown below will still work.

We now claim that the losing and winning positions for our game are indeed the exact same positions as for a traditional n-pile NIM. That is, a position is losing if and only if the bitwise xor of all pile sizes is zero.

To prove this statement, imagine that we color all positions with xor=0 red and all other positions green. If we want to prove that red positions are precisely the losing ones, it is enough to show two things: that all moves from a red position lead to a green one, and that from each green position there is a move into a red one.

If we are in a green position (the xor is non-zero), we can make a move on a single pile that would be a winning move in traditional NIM (and that makes the xor zero).

The other statement is where the parity of the number of piles becomes relevant. Suppose we are in a red position (xor is 0), we selected an odd number of piles and we removed x tokens from each of them. Let’s write all the pile sizes and the value x in binary. Let i be the least significant bit in x that is set to 1. When we subtract x from the pile sizes, the bits less significant than i will remain unchanged, and we will flip bit number i in the size of an odd number of piles. Thus, bit i gets flipped in the xor of all pile sizes, and therefore the xor is no longer zero.

That concludes the proof. But it also gives us an idea of how to find all the winning moves.

Suppose we already selected the piles we want to change. How can we find all values x that work? We can take a look at the current bitwise xor of pile sizes. If it’s non-zero, we need to change its least significant 1 into a 0. This implies that any good x must have the corresponding bit set to 1 and all lower bits set to 0.

But now that we know this about x, we can subtract the corresponding power of two from all selected piles and continue with the same thought process: if the xor is still non-zero, we repeat the process with the new least significant 1 bit (which must be a higher bit than before). If any pile size becomes negative during the process, a suitable x does not exist; otherwise, x is uniquely determined once the process terminates.
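A sketch of this process in code (the function name and the representation of the selected piles are illustrative):

```python
def find_winning_x(piles, selected):
    """Given pile sizes and an odd-sized list of selected pile indices,
    find the unique x >= 1, if any, such that removing x tokens from
    each selected pile makes the xor of all pile sizes zero."""
    assert len(selected) % 2 == 1
    sizes = list(piles)
    x = 0
    while True:
        total_xor = 0
        for s in sizes:
            total_xor ^= s
        if total_xor == 0:
            break
        bit = total_xor & -total_xor    # least significant 1 of the xor
        x += bit                        # any good x must contain this bit
        for i in selected:
            sizes[i] -= bit
            if sizes[i] < 0:            # ran out of tokens: no valid x
                return None
        # this subtraction flips `bit` in an odd number of piles, so the
        # least significant 1 of the xor strictly moves up, and we repeat
    return x if x > 0 else None         # x = 0 would not be a move at all
```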

DifferenceSequence

The sequences defined in the problem statement are sometimes known as Ducci sequences.

It is clear that each Ducci sequence must be eventually periodic: each element of each vector is at most equal to the maximum of the initial vector, so there are only finitely many possible vectors and one of them will eventually repeat.

During a contest a stronger property can be discovered: each vector that appears in the period of a Ducci sequence must be a constant multiple of a binary vector. That is, for some k, each element is either k or zero. A full proof of this can be based on the observation that if a vector contains two different non-zero elements, then within at most n-1 steps the maximum of the sequence will decrease. One such proof (of a more general statement about the sequence) can be found in the paper “On numbers on a circle” by Misiurewicz and Schinzel (1988).
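To see the claim in action, you can iterate the map and inspect the cycle you fall into; a minimal sketch, assuming the usual Ducci step of cyclic absolute differences:

```python
def ducci_step(v):
    """One Ducci step: cyclic absolute differences of neighbors."""
    n = len(v)
    return tuple(abs(v[i] - v[(i + 1) % n]) for i in range(n))

def find_cycle(v):
    """Iterate the Ducci map from v; return the vectors on the cycle."""
    seen, history = {}, []
    v = tuple(v)
    while v not in seen:
        seen[v] = len(history)
        history.append(v)
        v = ducci_step(v)
    return history[seen[v]:]

print(find_cycle([0, 1, 5]))
# [(1, 1, 0), (0, 1, 1), (1, 0, 1)] -- every vector on the cycle is a
# constant multiple (here k = 1) of a binary vector, as claimed
```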

Given this observation, we can now precompute which 0/1 vectors are representatives, i.e., which of them actually appear on a cycle. Due to the low memory limit we should do this using only 2^n bits to mark which vectors were already processed. But that’s easy: for each vector we can compute its successor in constant time, and we can use an ordinary DFS to find all cycles in the resulting functional graph.
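A sketch of that precomputation. Each 0/1 vector is an n-bit mask, one step is a xor with a cyclic rotation, and for brevity the marks below use a byte per vector where an actual solution under the memory limit would pack them into bits:

```python
def representatives(n):
    """Return all 0/1 vectors (as n-bit masks) that lie on a cycle
    of the Ducci map."""
    def step(mask):
        # |a_i - a_{i+1}| is a_i xor a_{i+1} for 0/1 values
        rot = (mask >> 1) | ((mask & 1) << (n - 1))
        return mask ^ rot

    state = bytearray(1 << n)     # 0 = new, 1 = on current path, 2 = done
    on_cycle = bytearray(1 << n)
    for start in range(1 << n):
        if state[start]:
            continue
        path = []
        v = start
        while state[v] == 0:
            state[v] = 1
            path.append(v)
            v = step(v)
        if state[v] == 1:         # the walk closed a brand new cycle
            for u in path[path.index(v):]:
                on_cycle[u] = 1
        for u in path:
            state[u] = 2
    return [v for v in range(1 << n) if on_cycle[v]]
```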

Conveniently, for each valid n the actual representatives fit into memory, so we can simply collect and sort them; even if they didn’t, we could use another bitset to store them. The final answer is then computed by dividing the given index by the number of representatives: the quotient tells us which constant multiple to use, and the remainder selects the pattern. Don’t forget the special case (n a power of two), where the only representative is the all-zeros vector.


misof


