September 18, 2019 Single Round Match 766 Editorials
For each person, keep track of the room where they are. (A hashmap is usually the most convenient way to do this: use the name of the person as the key into the hashmap and the name of the room as the value associated with the key.)
Process the events in the given order until you reach the end of the list or the first event that happened after the murder. For each event that happened before the murder, update the room of the person who moved.
Finally, check whether exactly one person is in the room with the victim.
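As a minimal Python sketch of this simulation (the event format — (time, person, room) triples — and the function interface are assumptions for illustration, not the actual problem's I/O):

```python
def find_culprit(events, murder_time, victim):
    # events: chronologically sorted (time, person, room) moves.
    # Returns the lone suspect, or None if there isn't exactly one.
    location = {}  # person -> current room
    for time, person, room in events:
        if time >= murder_time:
            break  # everything from here on happened after the murder
        location[person] = room
    victim_room = location[victim]
    suspects = [p for p, r in location.items()
                if r == victim_room and p != victim]
    return suspects[0] if len(suspects) == 1 else None
```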
There are many different strategies to get the queens to meet. The limit of 50 moves is quite loose; we just need to be careful to make sure that we don't accidentally move a queen somewhere it does not fit.
One strategy that is easy to implement uses at most 2N moves for N queens. We’ll first move each queen as far left as we can. This requires at most N moves and creates a block of queens at the left end of some rows. In the second half of the solution we then move these blocks of queens as far up as we can. (E.g., if we had some queens in rows 4, 7, and 100, we will move all queens from row 4 to row 0, all queens from row 7 to row 1, and all queens from row 100 to row 2.) It is clear that after the second phase all queens have to form a 4-connected component.
Another similar strategy is to move the queens individually in the second phase as well. That is, we first move each queen as far left as we can and then we move each queen as far up as we can. The resulting configuration is almost the same as in the previous solution, and the argument why it works is also similar.
When moving the queens we need to make sure that other queens are not in the way. E.g., we cannot move a queen from (100,100) to (100,0) if there is another queen at (100,20). In order to ensure that all moves are valid, we can simply process the queens ordered from left to right (and in the second step, ordered from top to bottom). In our example, we first move the queen from (100,20) to (100,0) and only later the queen from (100,100) to (100,1).
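In Python, the whole two-phase strategy might look like this (positions are (row, column) pairs; the interface is an illustration, assuming the board extends far enough in every direction):

```python
def gather_queens(queens):
    # queens: list of (row, col) positions. Returns (moves, final_positions),
    # where moves is a list of ((r1, c1), (r2, c2)) pairs -- at most 2 per queen.
    moves = []
    pos = list(queens)

    def slide(sort_key, target_coord):
        # Process queens in increasing order of coordinate sort_key, sliding
        # each to the smallest free value of target_coord (0 = row, 1 = col).
        # Thanks to the processing order, the path is always clear.
        for i in sorted(range(len(pos)), key=lambda i: pos[i][sort_key]):
            fixed = pos[i][1 - target_coord]  # the line the queen moves along
            occupied = {p[target_coord] for j, p in enumerate(pos)
                        if j != i and p[1 - target_coord] == fixed}
            dest = 0
            while dest in occupied:
                dest += 1
            if dest != pos[i][target_coord]:
                new = (dest, fixed) if target_coord == 0 else (fixed, dest)
                moves.append((pos[i], new))
                pos[i] = new

    slide(sort_key=1, target_coord=1)  # phase 1: left to right, slide left
    slide(sort_key=0, target_coord=0)  # phase 2: top to bottom, slide up
    return moves, pos
```

For example, queens at (4,7), (4,2), (100,3), (7,0) end up packed against the corner as (0,0), (0,1), (1,0), (2,0), using at most two moves per queen.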
This is a fairly standard problem that screams “dynamic programming / memoization”. Any moment during the series can be described by four numbers: (the number of matches won by team A, the number of matches won by team B, the team currently having a streak, the length of the streak).
As each match in a streak shifts the win probability by 5 percentage points, the longest streak we have to consider is 19 matches: once a streak reaches length 20, the probability is already clamped at its extreme, so all longer streaks behave identically. This brings the number of states down to 600^2 * 2 * 20 < 15,000,000.
For each state of the series, we ask the same question: what is the probability of A winning the series from this point? In order to answer the question, we compute the probability of A winning the next game and then we recursively solve each of the two new scenarios: one in which A won that game and one in which they lost.
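A memoized sketch of that recursion. The concrete rules here are assumptions (each team starts at a 50% chance, and every game of the current streak shifts the next game's probability by 5 percentage points in the streaking team's favor, clamped to [0, 1]); the real statement's parameters may differ, but the state space and the recursion have this shape:

```python
from functools import lru_cache

def series_win_probability(target, base=0.5, bonus=0.05):
    # Probability that team A wins a first-to-`target` series.
    @lru_cache(maxsize=None)
    def solve(a, b, streak_team, streak_len):
        # (a, b): games won so far; streak_team / streak_len: current streak.
        if a == target:
            return 1.0
        if b == target:
            return 0.0
        # probability that A wins the next game
        delta = bonus * streak_len
        p = base + (delta if streak_team == 'A' else -delta)
        p = min(1.0, max(0.0, p))
        win = solve(a + 1, b, 'A', streak_len + 1 if streak_team == 'A' else 1)
        lose = solve(a, b + 1, 'B', streak_len + 1 if streak_team == 'B' else 1)
        return p * win + (1 - p) * lose
    return solve(0, 0, 'A', 0)
```

With the symmetric default parameters the answer is exactly 0.5, which is a handy sanity check.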
Obviously, the friendships define an undirected graph. If we are moving the book between two connected components of that graph, the move is always high-risk. Thus, if there are C connected components, there will always be at least C-1 high-risk moves. Below we will show that a connected graph can always be solved without high-risk moves, which will give us an optimal solution: solve each component separately and concatenate those solutions.
If a solution works for an arbitrary connected graph, it has to work for trees in particular. On the other hand, once we know how to solve the problem for trees, we can solve it for any other connected graph by simply picking one spanning tree and ignoring the remaining edges. Thus, it's enough to solve the problem on trees.
A valid solution for any tree can be constructed by doing a simple DFS traversal of the tree, with the rule "step on vertices at even depth on the way down and on vertices at odd depth on the way back up".
More formally, we’ll have two recursive functions:
def DFS_last(v):
    for each child w of v:
        DFS_first(w)
    step on v

def DFS_first(v):
    step on v
    for each child w of v:
        DFS_last(w)
This modified DFS produces a valid sequence of vertices. To show this, first notice that DFS_last produces a sequence of vertices that starts with a child of v (or v itself, if it’s a leaf) and ends with v, and DFS_first produces the same thing but in reverse. Once we know this, we can easily verify that we never make a jump too far. E.g., if we call DFS_first(w1) and then DFS_first(w2), the first call will end with a child of w1 (or w1 itself), and the distance from there to w2 is (at most) 3.
The above DFS implementation can directly be used on the graph given as the input.
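Here is a runnable version of that traversal (a sketch assuming the graph is given as 0-indexed adjacency lists; the `visited` check is what makes the DFS implicitly pick a spanning tree):

```python
def visit_order(adj, root=0):
    # Returns the order in which the vertices are stepped on.
    # Alternates the two variants from the pseudocode above:
    # step_first=True corresponds to DFS_first, False to DFS_last.
    visited = [False] * len(adj)
    order = []

    def dfs(v, step_first):
        visited[v] = True
        if step_first:
            order.append(v)
        for w in adj[v]:
            if not visited[w]:
                dfs(w, not step_first)
        if not step_first:
            order.append(v)

    dfs(root, True)
    return order
```

On the path 0-1-2-3-4 this yields [0, 2, 4, 3, 1]: every vertex appears exactly once, and consecutive vertices are at distance at most 3.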
This problem has a neat linear-time solution without any fancy data structures. We'll obtain it by cleverly speeding up a slower solution.
One solution in O(G*N) looks as follows: We will process the given sequence A backwards. Suppose we have already processed the suffix A[n+1:] for some n. For each gene x, let L[x] be the length of the shortest DNA that starts with gene x and is not a subsequence of that suffix, and let C[x] be the number of such DNAs.
Note that once we know all L[x] and C[x], we can compute the final answer for the sequence we already processed in O(G) time by taking all x for which L[x] is minimal, and summing the corresponding C[x].
How do the values L and C change when we consider the sequence A[n:] instead of the sequence A[n+1:]? Clearly, the best counterexamples that start with genes other than A[n] don’t change, all of them still work. We just need to recompute the values L[A[n]] and C[A[n]]. Doing this is fairly easy: a DNA of the form A[n]+w is not a subsequence of A[n:] if and only if w is not a subsequence of A[n+1:]. Thus, the new value L[A[n]] is 1 + the optimal length for A[n+1:], and the new C[A[n]] is the number of optimal sequences for A[n+1:].
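A direct implementation of this O(G*N) recurrence (a sketch: genes are assumed to be numbered 0..G-1, and the count is reported modulo an assumed prime, since the real count can be huge):

```python
def shortest_missing_dna(A, G, MOD=10**9 + 7):
    # Returns (length, count) of the shortest DNAs that are
    # not subsequences of A, processing A from the back.
    L = [1] * G  # any single gene is missing from the empty suffix
    C = [1] * G
    for a in reversed(A):
        # Recompute L[a] and C[a] from the values for the old suffix.
        best = min(L)
        cnt = sum(c for l, c in zip(L, C) if l == best) % MOD
        L[a] = best + 1
        C[a] = cnt
    best = min(L)
    return best, sum(c for l, c in zip(L, C) if l == best) % MOD
```

For example, for A = [0, 1, 0] and G = 2 the only shortest missing DNA is [1, 1], so the result is (2, 1).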
The key observation that will lead to the faster solution we promised is as follows: at any moment, the values in L can differ only by at most 1. For example, if there is a bad DNA of length 7 that starts with gene 47, you can build a bad DNA of length 8 starting with any gene you like: just append that 7-element DNA to your chosen gene.
Given this observation, we can get rid of the O(G) step. Instead, we can simply maintain two integer variables: the number of x such that L[x] = min(L), and the sum of C[x] over all such x. Whenever you update a value in L and C, both of these variables can be updated in amortized constant time. This gives us an O(N) solution.
(In my implementation I first handle cases with G > N as a special case, and then I can use plain arrays to store L and C.)
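The amortized version might be sketched as follows (assumptions as above: genes numbered 0..G-1, count reported modulo a guessed prime). It tracks the size of the minimum layer of L and the corresponding sum of C, plus the total sum of C for the moment when the whole minimum layer empties:

```python
def shortest_missing_dna_fast(A, G, MOD=10**9 + 7):
    # O(N + G): every value in L is cur_min or cur_min + 1, so it is
    # enough to maintain the minimum layer explicitly.
    L = [1] * G
    C = [1] * G
    cur_min, cnt_min = 1, G    # min(L) and how many entries attain it
    sum_min = total = G % MOD  # sum of C over the min layer / over everything
    for a in reversed(A):
        new_L, new_C = cur_min + 1, sum_min
        if L[a] == cur_min:    # gene a leaves the minimum layer
            cnt_min -= 1
            sum_min = (sum_min - C[a]) % MOD
        total = (total - C[a] + new_C) % MOD
        L[a], C[a] = new_L, new_C
        if cnt_min == 0:       # the whole minimum layer moved up by one
            cur_min += 1
            cnt_min = G
            sum_min = total
    return cur_min, sum_min % MOD
```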