March 18, 2021

2021 Humblefool Cup Prelims Editorials

DivideThePlane

Suppose we already have H horizontal lines and we draw a new vertical line.

If we mark the H points where the new line crosses the old ones, the new line will be divided into H+1 segments. Each of those segments splits some region of the plane into two new regions. Thus, the total number of regions increases by H+1.

From this observation (or directly by drawing a picture and counting) we can derive that if we have H horizontal lines and V vertical lines, they divide the plane into exactly (H+1)*(V+1) regions.

If we have H > V, adding a vertical line adds more regions. If we have H < V, adding a horizontal line produces more regions. If we have H = V, both choices are equally good.

This observation can lead us to a simple greedy algorithm: repeatedly add the line that produces the most new regions until you have enough. This algorithm actually works. Below we will first show one possible implementation and then we’ll explain why it works and why it’s fast enough.

public long makeCuts(long H, long V, long N) {
    long answer = 0;
    while (true) {
        long regions = (H + 1) * (V + 1);   // regions formed by the current lines
        if (regions >= N) return answer;
        if (H < V) ++H; else ++V;           // always add a line of the scarcer kind
        ++answer;
    }
}

Why is this algorithm fast enough? The maximum N is 10^14. Already for H = V = 10^7 - 1 we have exactly 10^14 regions, and therefore we will never need to draw more than 2*10^7 lines.

Why is this algorithm correct? As we are adding the lines one at a time, we are always producing the configuration that has the most regions possible for that number of lines (and our starting configuration). This is because we want to keep the numbers H and V as close to each other as possible, and that is precisely what our algorithm does.

For a formal proof, suppose we have a configuration in which H >= V+2 and at least one of the horizontal lines is “new”. What happens if we erase one horizontal line and draw one vertical line instead? Erasing the horizontal line will remove V+1 regions, adding a new vertical line will add H regions, so the change in the number of regions is H-V-1 >= 1: the number of regions has increased.

Thus, among all configurations with the same total number of lines, the configuration in which the numbers H and V are as close to each other as possible is the one with the most regions. If there is any solution with that total number of lines, this particular configuration must be a valid solution.

Faster solutions also exist. One quick and dirty speedup is to add more lines at a time: e.g., if H = V, go to (H+10, V+10), and if H+10 <= V, go to (H+10, V), but only as long as the resulting number of regions is still smaller than N; afterwards, the single-step loop above finishes the job.
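A minimal sketch of that chunked phase, using the same variables as the loop above; it would run right before the original one-line-at-a-time loop, which then handles the remainder:

// Chunked greedy: take STEP lines at a time while the result still has fewer
// than N regions; the single-step loop afterwards finishes the job.
final long STEP = 10;
while (true) {
    long newH = H, newV = V;
    if (H == V) { newH += STEP; newV += STEP; }
    else if (H + STEP <= V) newH += STEP;
    else if (V + STEP <= H) newV += STEP;
    else break;                                  // counts almost balanced: single steps only
    if ((newH + 1) * (newV + 1) >= N) break;     // a chunk could overshoot: single steps finish
    answer += (newH - H) + (newV - V);
    H = newH; V = newV;
}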

For an asymptotically faster solution we can binary search the correct number of added lines. Once we know the total number of lines, we can compute the corresponding H and V and then the maximum number of regions in constant time.
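One way to implement this is sketched below; makeCutsFast and maxRegions are names introduced here, and as with the greedy loop above we assume the region counts fit into a long. Since 2*10^7 added lines always suffice for N up to 10^14, that is a safe upper bound for the binary search.

public long makeCutsFast(long H, long V, long N) {
    long lo = 0, hi = 20_000_000;                // adding 2*10^7 lines is always enough
    while (lo < hi) {
        long mid = (lo + hi) / 2;
        if (maxRegions(H, V, mid) >= N) hi = mid; else lo = mid + 1;
    }
    return lo;
}

// Best possible region count after adding exactly k lines: first raise the smaller
// of the two counts towards the larger one, then split what is left as evenly as possible.
long maxRegions(long H, long V, long k) {
    long small = Math.min(H, V), big = Math.max(H, V);
    long raise = Math.min(k, big - small);
    small += raise;
    k -= raise;
    small += k / 2;
    big += k - k / 2;
    return (small + 1) * (big + 1);
}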

MoveAllMarbles

As marbles cannot enter the destination before all of them have left the source, the solution will consist of two phases: first we need to get all the marbles out of the source compartment and then we need to get all of them into the target compartment.

There are many ways to do it, and the core of solving this problem well is selecting a way that will be reasonably easy to implement.

First, let’s think about the total number of steps. Each individual marble must make at least R+C-2 steps to get from the top left to the bottom right corner. Thus, we need at least N*(R+C-2) steps total. If we can find a solution with this number of steps, we’ll know it’s optimal. In other words, we will be looking for a solution in which we only move marbles to the right or down (i.e., column++ or row++). As long as we do that, we can be sure that the total number of movements will be optimal.

One clean solution looks as follows. Suppose we number all the cells of the grid except for the two special corners as follows:

**|17|16|15|14
--+--+--+--+--
13|12|11|10| 9
--+--+--+--+--
 8| 7| 6| 5| 4
--+--+--+--+--
 3| 2| 1| 0|**

We will number the marbles from 0 to N-1. Our solution will now look as follows:

  • For each i from 0 to N-1: move marble i from the source to location number i (using only steps down and to the right).
  • For each i from 0 to N-1: move marble i from location number i to the goal (again, only using the allowed steps).

Observe the following properties of our clever numbering of locations:

  • If cells 0 to x-1 contain marbles and we are going to move marble x from the source, the entire rectangle between the source and location x is free — and therefore we can use any valid sequence of moves to transport marble x to its destination.
  • By symmetry, if cells 0 to x-1 are already free and we want to transport marble x to the goal, all the possible paths are marble-free and we can use any of them.
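Note that the numbering in the picture is simply reverse row-major order with the two corner cells skipped: location i is the cell with row-major index R*C-2-i. Putting the numbering and the two phases together gives a sketch like the one below; it assumes the answer can be reported as a list of single-cell moves {fromRow, fromCol, toRow, toCol}, which may differ from the exact format the problem asks for.

import java.util.ArrayList;
import java.util.List;

public class MoveAllMarbles {
    // Location i in the numbering above: reverse row-major order, corners excluded.
    static int[] location(int i, int R, int C) {
        int idx = R * C - 2 - i;
        return new int[]{idx / C, idx % C};
    }

    // Move one marble from (r1,c1) to (r2,c2) using only row++ and column++ steps:
    // first all the way down, then all the way to the right.
    static void walk(int r1, int c1, int r2, int c2, List<int[]> moves) {
        while (r1 < r2) { moves.add(new int[]{r1, c1, r1 + 1, c1}); r1++; }
        while (c1 < c2) { moves.add(new int[]{r1, c1, r1, c1 + 1}); c1++; }
    }

    public static List<int[]> solve(int R, int C, int N) {
        List<int[]> moves = new ArrayList<>();
        for (int i = 0; i < N; i++) {            // phase 1: source -> location i
            int[] p = location(i, R, C);
            walk(0, 0, p[0], p[1], moves);
        }
        for (int i = 0; i < N; i++) {            // phase 2: location i -> goal
            int[] p = location(i, R, C);
            walk(p[0], p[1], R - 1, C - 1, moves);
        }
        return moves;
    }
}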

RingCountry

To start, we can precompute the following information: for each town, the length of the shortest path from the capital to that town, and from that town back to the capital. Let's denote these Sout and Sin.

An obvious recurrence is that for each town we either go there directly, or we go optimally to the previous town and from there we use the around-edge. Thus, Sout[n] = min( Lout[n], Sout[n-1] + around[n-1] ). But these recurrences are cyclic, so where do we start? Easy: in the town x with the minimum value of Lout[x]. For this town it's clearly optimal to go there directly. And once we know Sout[x], we can compute Sout[x+1], and so on around the whole cycle. The Sin[] values are computed in the same way. This takes O(N) time.
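A sketch of this pass is below. It assumes Lout[n] is the length of the direct edge from the capital to town n and around[n] is the length of the ring edge from town n to town (n+1) mod N; these names follow the text above. Sin[] is computed the same way with all directions reversed.

// Cyclic relaxation: start in the town where going directly is certainly optimal,
// then walk the ring once and relax each next town from its predecessor.
long[] computeSout(long[] Lout, long[] around) {
    int N = Lout.length;
    long[] Sout = new long[N];
    int start = 0;
    for (int i = 0; i < N; i++) if (Lout[i] < Lout[start]) start = i;
    Sout[start] = Lout[start];
    for (int step = 1; step < N; step++) {
        int prev = (start + step - 1) % N, cur = (start + step) % N;
        Sout[cur] = Math.min(Lout[cur], Sout[prev] + around[prev]);
    }
    return Sout;
}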

Once we have these values, we can check whether the answer is 1 or -1. The answer is 1 if, for some town, we can go there optimally, go around to visit the other N-1 towns, and then return home optimally, all within a day. The answer is -1 if for some town already Sout[x] + Sin[x] is too much. All this can also be checked in O(N) time, and once we do so, we know that the answer exists and is at least 2.
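Both checks might look as follows, written as if they sit inside the main solution method; T stands for the length of one day and totalAround for the sum of all ring edges (names introduced here, not given in the text). Visiting all N towns in one day starting at town x uses every ring edge except the one that leads back into x.

long totalAround = 0;
for (long a : around) totalAround += a;
boolean oneDay = false, impossible = false;
for (int x = 0; x < N; x++) {
    int last = (x + N - 1) % N;                                   // the N-th town visited
    if (Sout[x] + (totalAround - around[last]) + Sin[last] <= T) oneDay = true;
    if (Sout[x] + Sin[x] > T) impossible = true;                  // town x cannot be visited at all
}
if (impossible) return -1;
if (oneDay) return 1;
// otherwise the answer exists and is at least 2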

Now we can compute new information: for each town n, if we start by optimally going to n, what is the maximum number of towns we can visit? (We count n as the first town visited, even if the optimal way of going there leads through some earlier town. The reason why we ignore those is that when constructing a solution they have already been visited on an earlier day.) This can also be precomputed in O(N) time: first we use brute force to find the answer for n=0, and then we use two pointers for the rest. (If we can start in town x and reach town y, when we start in town x+1 we can reach at least town y, too.)
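A sketch of this two-pointer pass (T and around as above): best[n] is the maximum number of consecutive towns, starting with town n, that fit into a single day whose first move is the optimal trip to town n.

int[] computeBest(long[] Sout, long[] Sin, long[] around, long T) {
    int N = Sout.length;
    int[] best = new int[N];
    int end = 0;          // towns n .. end-1 form the current feasible segment
    long ringLen = 0;     // total length of the ring edges inside that segment
    for (int n = 0; n < N; n++) {
        if (end < n) { end = n; ringLen = 0; }       // the segment cannot start before n
        while (end - n < N) {                        // try to also visit town end
            long newRing = (end == n) ? 0 : ringLen + around[(end - 1) % N];
            if (Sout[n] + newRing + Sin[end % N] > T) break;
            ringLen = newRing;
            end++;
        }
        best[n] = end - n;
        if (end - n >= 2) ringLen -= around[n];      // drop the edge leaving town n
    }
    return best;
}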

Note that in the previous step we use Sout and Sin as the distances to/from the individual towns. (Sout by definition, as already explained above; Sin to make sure we actually find the farthest reachable town. If you just used Lin, you could get bitten by a situation where it's not possible to return from x but possible to return from x+1.)

From this point, the easiest way to complete the solution is to aim for the final time and memory complexity O(N log N). For this, we will use binary lifting. Currently we know, for each starting point, what is the longest segment of consecutive towns that can be visited in a day. Now suppose that we already know these lengths for each possible start and 2^k days. In 2^(k+1) days, starting from n, we first find out the maximum length l1 for the first 2^k days, and then the maximum length l2 for the next 2^k days, starting from (n+l1) modulo N. Thus, for each k we can compute the new segment lengths from the old segment lengths in O(N) time.
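A sketch of the precomputation: reach[k][n] is the number of consecutive towns, starting with town n, that can be visited in 2^k days. Level 0 is the best[] array from the previous step, and LOG = 16 mirrors the k = 16, ..., 0 loop mentioned below (an assumption that 2^17 days always suffice under the given limits).

int LOG = 16;
long[][] reach = new long[LOG + 1][N];
for (int n = 0; n < N; n++) reach[0][n] = best[n];
for (int k = 0; k < LOG; k++)
    for (int n = 0; n < N; n++) {
        long l1 = reach[k][n];                               // covered in the first 2^k days
        long l2 = reach[k][(int) ((n + l1) % N)];            // covered in the next 2^k days
        reach[k + 1][n] = Math.min((long) N, l1 + l2);       // N towns is all we ever need
    }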

Having this information, we can now try all possible starts for day 1 of the trip, and for each of them compute the minimum number of days needed in O(log N) time. One clean way of implementing this step is to find the largest number of days that isn't enough. To do this, we simply iterate over k = 16, 15, ..., 0, and whenever adding the next 2^k days does not complete a full circle, we add them. The minimum number of days that is enough is then one greater than what we computed. This concludes the solution.
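The query for one starting town s might then be implemented as follows (a sketch; the final answer is the minimum of minDays(s) over all possible starting towns s).

// Find the largest number of days that does NOT yet cover all N towns, then add one.
long minDays(int s, long[][] reach, int N, int LOG) {
    long days = 0, covered = 0;
    int cur = s;
    for (int k = LOG; k >= 0; k--) {
        long step = reach[k][cur];
        if (covered + step < N) {                // these 2^k extra days are still not enough
            covered += step;
            days += 1L << k;
            cur = (int) ((s + covered) % N);
        }
    }
    return days + 1;                             // one more day completes the circle
}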

It is also possible to solve the task in O(N) time. The main idea is that if the answer is D, then starting from any start and simulating one day at a time will require O(D) time, and if we consider some O(N/D) suitably chosen consecutive starts (which ones?), one of them has to give us an optimal solution. Thus, instead of fast simulation in O(log N) time we can also use slow simulation in O(D) time and the total time complexity will be O(D*(N/D)) = O(N).


misof

