December 5, 2020 Single Round Match 794 Editorials
Even though the statement suggests that the data should look random, the easiest way to solve the problem is to come up with a fixed pattern.
What is a nice five-digit number that has all digits distinct? For example, 12345.
What’s another one, but now starting with a different digit? For example, 23456. Oh look, it conveniently ends in a different digit, too.
And now it’s easy to generalize. The N numbers will start with the digits 1, 2, …, N and continue cyclically through them. So, for example, for N=5 and D=7 we would have the answer 1234567, 2345678, 3456789, 4567890, and 5678901.
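The constructive pattern can be sketched in a few lines of Python (the function name is mine, not from the problem statement):

```python
def fixed_pattern(N, D):
    # Number k (0-indexed) starts with digit k+1 and then continues
    # cyclically through the sequence 1, 2, ..., 9, 0.
    cycle = "1234567890"
    return ["".join(cycle[(k + i) % 10] for i in range(D)) for k in range(N)]
```

For N=5 and D=7 this produces exactly the five numbers listed above; since D is at most 9 and the cycle has length 10, each number automatically has all digits distinct.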
Interestingly, the problem is also solvable using random numbers, “as intended”. We can simply generate and check random numbers until we have enough of them, so the following pseudocode is actually fast enough by a wide margin:
while you don’t have enough numbers:
    generate a random D-digit number
    check whether its first and last digits are still unused
    check whether all its digits are distinct
    if it passed, add it to the collection
Even for N=D=9 this almost certainly requires much fewer than 100,000 attempts: on average we need a few hundred attempts to hit a number with all digits distinct, and even in the worst case (when we already have eight 9-digit numbers and we are looking for the last one) we need to check, on average, only 45 such numbers until we find one with the unused first digit (one remaining out of 1-9) and one of the two unused last digits (two remaining out of 0-9).
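The pseudocode above translates directly into Python; this is just a sketch of the randomized approach:

```python
import random

def random_pattern(N, D):
    # Keep generating random D-digit numbers until N of them fit.
    chosen, used_first, used_last = [], set(), set()
    while len(chosen) < N:
        s = str(random.randrange(10 ** (D - 1), 10 ** D))
        if s[0] in used_first or s[-1] in used_last:
            continue                  # first or last digit already taken
        if len(set(s)) != D:
            continue                  # digits are not all distinct
        chosen.append(s)
        used_first.add(s[0])
        used_last.add(s[-1])
    return chosen
```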
Imagine that the pages are placed in a row from left to right. We will use phrases like “to the left” as shorthand for “on a smaller index”.
The key observation is that all text pages are equivalent and all image pages are equivalent, so it never makes sense to swap two adjacent text pages or two adjacent image pages with each other. Each swap in the optimal solution will therefore involve one text and one image.
Once we know this, we can easily compute the answer. Let T be an array with the current indices of the pages of text, in increasing order. As we know that texts don’t change order, we know that the text from position T[0] should end at position 0, the text from position T[1] at position 2, and so on – in general, the text that is currently at position T[k] should end at position 2*k.
Thus, we get a lower bound on the optimal solution: the text from position T[k] will need to be moved at least abs( T[k] – 2*k ) times. And it is easy to see that we always have a valid solution that only uses exactly that many swaps: If we have texts that are too far to the right from their correct position, the leftmost such text must have an image on its left and we can make the swap to bring that text one step closer to its destination. And if we have texts that are too far to the left, we can do the same with the rightmost one of them.
So, the answer is simply the sum over all k of abs( T[k] – 2*k ).
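The whole solution then fits into a few lines. Here is a sketch in Python, assuming the row of pages is given as a string of 'T' (text) and 'I' (image) characters – the actual input format may differ:

```python
def min_swaps(pages):
    # indices of text pages, in increasing order
    T = [i for i, p in enumerate(pages) if p == 'T']
    # the text currently at position T[k] has to travel to position 2*k
    return sum(abs(t - 2 * k) for k, t in enumerate(T))
```

For example, for "IITT" the texts at positions 2 and 3 must reach positions 0 and 2, so the answer is 2 + 1 = 3.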
Another way of looking at the solution is as follows: In the final configuration the text from T[0] wants to have exactly 0 images on its left (i.e., on smaller indices), the text from T[1] exactly one image, and so on. If the text from T[k] currently has x images on its left, we will need to swap it with abs(k – x) images during the sorting – either moving some more images from the right to the left, or moving the extra ones from the left to the right.
Yet another way of looking at the problem is that once we established which text and which image goes where, we can explicitly number them from 0 to n-1 (assigning the desired position to each of them) and the minimum number of swaps we need is then simply the number of inversions of this permutation.
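The inversion-count view can be sketched as follows (an O(n^2) count is used here for brevity; a Fenwick tree or merge sort would bring it down to O(n log n)):

```python
def min_swaps_by_inversions(pages):
    # Assign target positions: the k-th text goes to 2k, the k-th image to 2k+1.
    t = im = 0
    perm = []
    for p in pages:
        if p == 'T':
            perm.append(2 * t); t += 1
        else:
            perm.append(2 * im + 1); im += 1
    # The answer is the number of inversions of this permutation.
    n = len(perm)
    return sum(1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b])
```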
Implement a general breadth-first search or depth-first search that will find connected components (with a parameter whether to use 4 or 8 directions).
Run it once with 4 directions to find the sea. Change the sea to land. Now all the water that remains forms lakes.
Iterate over the whole map once to find all water cells that touch a land cell (at least with a corner). These cannot contain the new island, so once you have found them all, change them to land too.
Now all the water that remains is free for building: on the original map it is lake water that does not touch the shore.
Run the search again with 8 directions to find the new connected components. For each of them that is water, calculate its size and return the maximum of those.
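Putting the four steps together – a sketch in Python, assuming the map is a list of strings with '.' for water and '#' for land (the actual problem's encoding may differ) and that the sea is the water 4-connected to the border of the map:

```python
from collections import deque

def largest_new_island(grid):
    R, C = len(grid), len(grid[0])
    g = [list(row) for row in grid]

    def flood(sr, sc, dirs, target, repl):
        # BFS that repaints one connected component; returns its size.
        q = deque([(sr, sc)])
        g[sr][sc] = repl
        size = 1
        while q:
            r, c = q.popleft()
            for dr, dc in dirs:
                nr, nc = r + dr, c + dc
                if 0 <= nr < R and 0 <= nc < C and g[nr][nc] == target:
                    g[nr][nc] = repl
                    size += 1
                    q.append((nr, nc))
        return size

    D4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    D8 = D4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

    # 1) turn the sea (all water 4-connected to the border) into land
    for r in range(R):
        for c in range(C):
            if (r in (0, R - 1) or c in (0, C - 1)) and g[r][c] == '.':
                flood(r, c, D4, '.', '#')

    # 2) lake cells touching land (even diagonally) cannot be used either
    shore = [(r, c) for r in range(R) for c in range(C)
             if g[r][c] == '.' and any(
                 0 <= r + dr < R and 0 <= c + dc < C and g[r + dr][c + dc] == '#'
                 for dr, dc in D8)]
    for r, c in shore:
        g[r][c] = '#'

    # 3) largest 8-connected component of the remaining water
    best = 0
    for r in range(R):
        for c in range(C):
            if g[r][c] == '.':
                best = max(best, flood(r, c, D8, '.', 'x'))
    return best
```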
The problem would be solvable for a bigger number of water sources, but their number was intentionally kept low to allow more convenient solutions to pass. Below I describe a fairly short and convenient implementation.
If there is only one water source, the cells flooded during the x-th second form the boundary of a square with side 2x+1, centered at the water source.
If there are multiple water sources, the cells actually flooded during the x-th second are the cells that would be flooded by one of the water sources and haven’t been flooded by another one sooner.
Given that the number of water sources is so low, we can use coordinate compression to “draw” the situation during the k-th second. For each water source we will make four horizontal cuts and four vertical cuts to the plane, so that the sides of its current water boundary get separate rows and columns in the drawing. E.g., if we have one water source at x = 10 and 5 seconds, we will divide the plane into five vertical stripes that represent the ranges (-inf,5), [5,6), [6,15), [15,16), and [16,inf).
Once we make all the cuts we need for each of the N water sources, we’ll have the whole plane divided into O(N^2) rectangular regions. Each of those regions is either completely filled, or completely empty.
One easy way of producing the correct drawing in O(N^3) is to do it in two steps. First, for each water source, color all the regions that have already been flooded, then, for each water source, erase all regions that have been flooded sooner.
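As an illustration of the coordinate-compression idea, here is a sketch that counts the cells flooded after t seconds – the cumulative picture, where two cuts per source per axis suffice because the filled squares have no holes. It assumes every source starts at time 0 and spreads one ring per second in the Chebyshev sense:

```python
def flooded_cells(sources, t):
    # After t seconds each source has flooded the full (2t+1) x (2t+1)
    # square around it; the area of the union is computed via
    # coordinate compression.
    xs = sorted({v for x, _ in sources for v in (x - t, x + t + 1)})
    ys = sorted({v for _, y in sources for v in (y - t, y + t + 1)})
    total = 0
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            # every cell of the region [xs[i], xs[i+1]) x [ys[j], ys[j+1])
            # behaves the same, so testing one representative is enough
            px, py = xs[i], ys[j]
            if any(max(abs(px - x), abs(py - y)) <= t for x, y in sources):
                total += (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
    return total
```

The boundary-only drawing used in the full solution needs the four cuts per axis described above, but follows the same scheme.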
Now for the full solution:
If we use the function described above but skip the part when we erase the insides of squares, we will get a drawing of what’s flooded after x seconds. Having this as a tool, we can use binary search to find the correct time when cell A gets flooded. We then compute the total number of cells flooded in the previous second and subtract it from A. Finally, we use the above function again to draw the situation during the correct second and then we simply iterate through that drawing (row block by row block) until we find the desired cell.
If we have x <= y, we can easily determine in constant time whether Nam can partition the stones so that the smallest pile has size exactly x and the largest pile has size exactly y. We will call such pairs (x, y) “query pairs”.
For any L, R we can look at the set S(L, R) of all query pairs that would get the answer “yes”. Nam can win the game for some specific L, R if the set S(L, R) is unique among all sets S(L’, R’).
As a special case, the set S(L, R) can be empty. We can easily check whether we have exactly one such set (which happens for some small N) and count it separately.
In general, a non-empty set S(L, R) is unique if and only if the “adjacent” sets are all different:
- the sets S(L+1, R) and S(L, R-1) must each miss some of the query pairs that are in S(L, R)
- the sets S(L-1, R) and S(L, R+1), if they exist, must each contain some additional query pairs
In other words, in the general situation when both “bigger” pairs are valid:
- there has to be some query pair (L, x) with x <= R
- there has to be some query pair (x, R) with x >= L
- there has to be some query pair (L-1, x) with x <= R
- there has to be some query pair (x, R+1) with x >= L
With some math we can now, in O(N), determine the following values:
- for each L, the smallest value X(L) such that (L, X(L)) is a query pair
- for each R, the largest value Y(R) such that (Y(R), R) is a query pair
Given the values X and Y, we can now rephrase the condition as follows: a non-empty S(L, R) is unique if and only if R >= max( X(L), X(L-1) ) and at the same time L <= min( Y(R), Y(R+1) ).
This now gives us an efficient way of counting all the valid solutions. For example, we can iterate over all L while we use some data structure (e.g., a Fenwick tree) to maintain the set of all Rs that still satisfy the condition L <= min( Y(R), Y(R+1) ), and answer the query how many of those satisfy R >= max( X(L), X(L-1) ). The time complexity of this solution is O(N log N).
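The counting step can be sketched as follows. The arrays X and Y are assumed to be precomputed (1-based, with sentinel entries at indices 0 and N+1); deriving them from the actual game rules is the problem-specific part omitted here:

```python
class Fenwick:
    # Fenwick tree (BIT) over values 1..n: point updates, prefix sums.
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)
    def add(self, i, v):
        while i <= self.n:
            self.t[i] += v
            i += i & -i
    def prefix(self, i):
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

def count_unique_pairs(N, X, Y):
    # Counts pairs (L, R) with L <= R satisfying
    #   R >= max(X[L], X[L-1])  and  L <= min(Y[R], Y[R+1]).
    bit = Fenwick(N)
    # bucket each R by the largest L that still satisfies the second condition
    expire = [[] for _ in range(N + 2)]
    for R in range(1, N + 1):
        lim = min(Y[R], Y[R + 1])        # R stays active while L <= lim
        if lim >= 1:
            bit.add(R, 1)
            expire[min(lim, N)].append(R)
    total = 0
    for L in range(1, N + 1):
        lo = max(X[L], X[L - 1], L)      # also enforce R >= L
        if lo <= N:
            total += bit.prefix(N) - bit.prefix(lo - 1)
        for R in expire[L]:              # R no longer valid for L+1
            bit.add(R, -1)
    return total
```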