December 14, 2020 Single Round Match 795 Editorials
Thanks to misof and Jy_25 for writing the editorials.
There are multiple different approaches possible. I’ll list a few of them below.
Option 1: sum. We can sum the costs of all items and divide the total by N. If there is any remainder, fair division is impossible. Otherwise we know the amount A each pirate should receive. We now iterate through all the items. If we see one worth more than A, there is no solution. If we see one worth exactly A, it must be given to some pirate on its own. And if we see a cheaper one worth x, we must find another item worth A-x to pair with it. If we fail to find one, there is no solution. If we do, we mark it as used (to avoid using the same item in multiple pairs) and continue. Once we successfully process all items, we can conclude that we have constructed a valid division of the loot.
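A minimal sketch of Option 1 in Python. The function name is mine, and I use a `collections.Counter` as the multiset of unused items; the sketch assumes there are between N and 2N items, as in the problem.

```python
from collections import Counter

def can_divide_sum(costs, n):
    """Option 1: each of the n pirates must receive exactly A = total / n,
    either as a single item worth A or as a pair of items summing to A."""
    total = sum(costs)
    if total % n:
        return False              # remainder -> fair division impossible
    a = total // n
    cnt = Counter(costs)          # multiset of items not yet assigned
    for x in sorted(cnt):
        while cnt[x] > 0:
            cnt[x] -= 1
            if x > a:
                return False      # too expensive for any pirate
            if x == a:
                continue          # given to some pirate on its own
            if cnt[a - x] == 0:
                return False      # no partner worth a - x available
            cnt[a - x] -= 1       # pair x with a - x, mark partner used
    return True
```

For example, `can_divide_sum([1, 2, 3], 2)` succeeds (one pirate gets the item worth 3, the other gets 1+2).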
Option 2: largest element. We can sort all the items. As there are fewer than 2N items, some pirate must only receive one item. Then, clearly, it must be the largest item. Now we either use its value as the value A from the previous solution, or we use the approach given in the following solution.
Option 3: padding with zeros. We can add items worth 0 to the collection until we have exactly 2N items. Now, we are looking for a solution in which each pirate gets exactly two items: the 0-cost items will be assigned to pirates that originally only got one item.
In this new situation, once we sort all the items, we can make a simple observation: the more expensive a pirate’s first item, the cheaper the second item. Hence, we just need to check exactly one solution: pair the cheapest item with the most expensive one, the second cheapest with the second most expensive, and so on.
Formally, a solution exists if and only if all sums of the form cost[i] + cost[2N-1-i] are equal.
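Option 3 condenses to just a few lines; a sketch, with the function name my own:

```python
def can_divide_pairing(costs, n):
    """Option 3: pad with zero-cost items to exactly 2n items, sort, and
    pair the cheapest with the most expensive, the second cheapest with
    the second most expensive, and so on. A fair division exists iff all
    n pair sums are equal."""
    padded = sorted(costs + [0] * (2 * n - len(costs)))
    target = padded[0] + padded[-1]
    return all(padded[i] + padded[2 * n - 1 - i] == target
               for i in range(n))
```

Note that there is no separate divisibility check: if all n pair sums equal the same value, the total is automatically n times that value.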
The most traditional formula for the area of the triangle is base * height / 2.
The two vertices we don’t move determine the base. This side remains unchanged. By moving the third vertex, we can change the height. As we can move that vertex by at most 1, the optimal move is clearly to push it directly away from the base (in the so-called “normal” direction) by 1, thus increasing the height by exactly 1. This increases the area of the triangle from (base * height) / 2 to (base * (height+1)) / 2. In other words, the new area equals the old area plus (base / 2).
The triangle has three sides. Clearly, the best solution is to take the longest of them as the base. Hence, the final answer is the current area of the triangle plus max(a,b,c)/2.
The only thing that remains is calculating the area of the current triangle. There are multiple ways to do this.
The most straightforward way is to compute it directly from a, b, c using Heron’s formula. (In English the formula is sometimes also called “Hero’s formula”, after the alternate form of Heron’s name.) If we use s=(a+b+c)/2 to denote half of the perimeter, the area of the triangle can be calculated as sqrt( s*(s-a)*(s-b)*(s-c) ). This formula isn’t numerically stable, so be careful when using it for long narrow triangles, but in our problem where a,b,c are small integers it’s precise enough.
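The whole answer can then be sketched in a few lines (the function name is my own choice):

```python
import math

def extended_area(a, b, c):
    """Current area via Heron's formula, plus max(a,b,c)/2 gained by
    moving the vertex opposite the longest side away by 1."""
    s = (a + b + c) / 2                               # half-perimeter
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
    return area + max(a, b, c) / 2
```

For the 3-4-5 right triangle this returns 6 + 5/2 = 8.5.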
Another option is to place the triangle into some specific location, calculate the coordinates of its vertices, and use those to determine the area. Suppose we place vertex A at (0,0) and vertex B at (c,0), so that we have AB = c. Now we are looking for the location C = (x,y) with y > 0 such that AC = b and BC = a. This gives us:
x^2 + y^2 = b^2
(c-x)^2 + y^2 = a^2
Subtracting the second equation from the first, we get x = (b^2 – a^2 + c^2) / (2c). The first equation then gives us y, and the area of our triangle is c*y/2.
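A quick numeric sanity check of the coordinate approach (the function name is my own):

```python
import math

def area_by_coordinates(a, b, c):
    """Place A=(0,0) and B=(c,0); solve for C=(x,y) with y > 0 such that
    AC = b and BC = a, then the area is base * height / 2 = c*y/2."""
    x = (b * b - a * a + c * c) / (2 * c)  # from subtracting the equations
    y = math.sqrt(b * b - x * x)           # from x^2 + y^2 = b^2
    return c * y / 2
```

With a=3, b=4, c=5 this gives x=3.2, y=2.4, and the expected area 6.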
We will use two tools to solve this problem.
The first tool is the Floyd-Warshall shortest path algorithm. In O(n^3) this algorithm computes all-pairs shortest paths: for each u and v, the cheapest way of getting from u to v if we only care about travel costs and nothing else.
The second tool is matrix multiplication (using the (min,+) semiring, also known as the tropical semiring).
Suppose D is the distance matrix of the graph we have. Let’s define that for matrices A, B the matrix C = A*B is the matrix such that C[i,j] = min_k( A[i,k] + B[k,j] ). This type of matrix product can naively be computed in O(n^3) for square matrices of size n. We can now easily verify that D^2 = D*D is a matrix that corresponds to best ways to get from i to j in exactly two steps. And this generalizes: D^k is a matrix such that D^k[i,j] is the length of the shortest walk from i to j that uses exactly k edges. Using exponentiation by squaring, we can compute D^k in O(log k) matrix multiplications, which gives us O(n^3 log k).
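A sketch of the (min,+) product and the squaring trick (function names are my own; missing edges are represented by infinity):

```python
INF = float('inf')

def min_plus_mult(A, B):
    """(min,+) product: C[i][j] = min over k of A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def min_plus_power(D, k):
    """D^k via exponentiation by squaring: D^k[i][j] is the length of the
    shortest walk from i to j that uses exactly k edges."""
    n = len(D)
    # identity for (min,+): 0 on the diagonal, infinity elsewhere
    result = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    base = D
    while k:
        if k & 1:
            result = min_plus_mult(result, base)
        base = min_plus_mult(base, base)
        k >>= 1
    return result
```

On the directed 3-cycle 0 -> 1 -> 2 -> 0 with edge costs 1, 2, 4, the shortest walk from 0 to 0 with exactly 3 edges has length 1+2+4 = 7.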
Now there are at least two different ways in which we can use these tools to compute the answer. I will show two, because each has its own merits.
Probably the more straightforward way is to proceed as follows: Suppose we want the shortest walk from u to v with at least k edges. Let w be the vertex in which we are after exactly k edges. If the walk is shortest, we must get from u to w using a shortest walk with exactly k edges (computed in D^k) and then from w to v using the shortest walk without any restrictions (computed by Floyd-Warshall). For each pair u,v we will iterate over all w and pick the best one. This part of the solution runs in O(n^3).
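This combination can be sketched as follows (names are my own; for brevity D^k is computed here by repeated (min,+) multiplication, squaring would bring it down to O(log k) multiplications):

```python
INF = float('inf')

def min_plus_mult(A, B):
    """(min,+) product: C[i][j] = min over k of A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def at_least_k_edges(D, k):
    """Shortest walk i -> j using at least k edges: a shortest exactly-k-edge
    walk to some intermediate vertex w, followed by an unrestricted
    shortest path from w (computed by Floyd-Warshall)."""
    n = len(D)
    # Floyd-Warshall; the empty walk is allowed here, so 0 on the diagonal
    S = [[0 if i == j else D[i][j] for j in range(n)] for i in range(n)]
    for m in range(n):
        for i in range(n):
            for j in range(n):
                S[i][j] = min(S[i][j], S[i][m] + S[m][j])
    Dk = D
    for _ in range(k - 1):
        Dk = min_plus_mult(Dk, D)
    # for each pair (i, j), try every intermediate vertex w
    return [[min(Dk[i][w] + S[w][j] for w in range(n))
             for j in range(n)] for i in range(n)]
```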
Another solution begins by computing the matrix S of all-pairs shortest paths. Note that we won’t allow paths of length 0 (we’ll keep infinity on the main diagonal of D), so the main diagonal of S will correspond to the shortest cycle containing that vertex – S[i,i] is the shortest non-trivial way of going from i to i. It can now be shown that S^k is precisely the answer we want. (For the proof, take the optimal walk from u to v and cut it arbitrarily into k non-empty pieces. As the whole walk is shortest, each of those pieces must be the shortest walk between its endpoints, and thus it corresponds to some element of S.)
The main observation we have to make is that even though the setting of the problem is 2D, it actually decomposes into two independent one-dimensional problems. Each step of a token changes one of its coordinates by 1 and leaves the other untouched. Thus, we can simply look at the two arrays of coordinates separately and ask, for each of them: what is the minimum number of steps needed to change these numbers into distinct consecutive ones? This is the same problem with tokens in 1D: we have a strip of cells, some of which contain tokens, and we want to make a contiguous segment of cells containing one token each. What is the minimum number of steps?
Let’s look at the tokens now and number them 0 to n-1 from left to right. We can now show that there is an optimal solution in which they are still arranged in the order 0 to n-1 from left to right. (If, at any moment, token j would move to the left past some token i < j, at that moment both tokens occupy the same cell, so we can swap their labels and move token i instead. In other words, swapping the relative order of tokens never improves a solution.)
Thus, the only question that remains is the position x where the segment of tokens should start. If we know x, we know the destination for each token and thus we can compute the number of steps needed in O(n).
One way to proceed is to show (or make an educated guess) that the total number of steps, as a function of x, is unimodal: first it strictly decreases, then it possibly remains constant at the optimum, and then it strictly increases. Once we make this assumption, we can find the optimal x using either ternary search or a modified binary search. Either approach requires O(log R) probes, where R is the length of the range of values x can have. Thus, we get a solution that runs in O(N log R).
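A sketch of this search-based variant (the function name and the probed range are my own choices); it binary-searches for the first x where the unimodal cost function stops decreasing:

```python
def min_total_steps_search(positions):
    """Find the start x of the target segment by binary search on the
    unimodal total cost, then return the cost at the optimum."""
    X = sorted(positions)
    n = len(X)

    def cost(x):
        # tokens keep their left-to-right order, so token i goes to x + i
        return sum(abs(X[i] - (x + i)) for i in range(n))

    lo, hi = min(X) - n, max(X)     # the optimal start lies in this range
    while lo < hi:
        mid = (lo + hi) // 2
        if cost(mid) <= cost(mid + 1):
            hi = mid                # at or past the bottom of the valley
        else:
            lo = mid + 1            # still strictly decreasing
    return cost(lo)
```

For tokens at 1, 5, 6 the best segment is 4, 5, 6, costing 3 steps.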
Another, nicer way to proceed is as follows. Suppose X[i] is the current coordinate of token i. (Remember that we renumbered the tokens left to right, so X is non-decreasing.) Now let Y[i] = X[i] – i, for all i, be a new set of coordinates. In words, we moved token i exactly i steps to the left. Clearly, any sequence of moves that solves the original problem for starting positions X is a sequence of moves that brings all tokens to the same cell for starting positions Y, and vice versa.
And this is already a well-known problem: the optimal meeting point for tokens whose coordinates are in Y is the median of Y. (If the length of Y is even, any position between and including the two middle elements is an optimal meeting point. For the proof, consider what happens when you move the meeting point away from the median: you move it some distance away from at least half of the elements, and at most the same distance towards each of the remaining elements.)
Thus, the full solution in O(n log n) is to sort the coordinates to get X, calculate Y, and sort Y to find the median. (We don’t care that the median can be found in linear time, we already had to sort anyway.)
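The full 1D solution is then just a few lines (function name assumed); for the 2D problem, apply it to each of the two coordinate arrays separately and sum the results:

```python
def min_total_steps_median(tokens):
    """Sort the coordinates, shift token i left by i, and meet at the
    median of the shifted coordinates."""
    X = sorted(tokens)
    Y = sorted(x - i for i, x in enumerate(X))  # Y need not be sorted yet
    median = Y[len(Y) // 2]
    return sum(abs(y - median) for y in Y)
```

For tokens at 1, 5, 6 the shifted coordinates are 1, 4, 4, the median is 4, and the answer is 3.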
(Problem and solution by Jatin Yadav)
Consider the problem where you have to find the minimum discounted spanning tree. That is, find the minimum cost of a tree after applying discounts on edges. It is easy to prove using an exchange argument that it is optimal to apply the largest discounts in order to the edges of the actual MST found using Kruskal’s.
Now, f(s,t) can be found as follows: Iterate over subsets S of [n] containing both s and t. For each such subset, find the minimum discounted spanning tree of the graph spanned by S. The value f(s,t) is the minimum discounted MST cost over all these subsets. To prove this, note that any subset’s discounted MST contains a discounted s-t path, and that for any path there is a corresponding subset S: the set of vertices on the path.
So, the solution is to iterate over all possible sets and find the discounted MST for them. Then, update the answer for all pairs of nodes in the current set. The complexity of this solution is O(n^2 * 2^n) if we use Jarník-Prim for the MST, or very slightly worse if we use Kruskal. Either way, the solution only needs O(n^2) memory.
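A sketch of the subset enumeration (names are my own; the discount step is omitted, since its exact form depends on the problem statement, and in the real solution it would be applied to the MST's edge weights before summing them):

```python
INF = float('inf')

def all_pairs_min_subset_mst(W):
    """For every pair (s, t): the minimum MST cost over all vertex subsets
    containing both. W is an adjacency matrix with INF for missing edges."""
    n = len(W)
    ans = [[INF] * n for _ in range(n)]
    for mask in range(1, 1 << n):
        verts = [v for v in range(n) if mask >> v & 1]
        if len(verts) < 2:
            continue
        # Jarnik-Prim on the induced subgraph, O(n^2) per subset
        best = {v: W[verts[0]][v] for v in verts[1:]}
        cost, ok = 0, True
        while best:
            u = min(best, key=best.get)
            if best[u] == INF:
                ok = False              # induced subgraph is disconnected
                break
            cost += best[u]
            del best[u]
            for v in best:
                best[v] = min(best[v], W[u][v])
        if not ok:
            continue
        # update the answer for all pairs inside the current subset
        for s in verts:
            for t in verts:
                if s != t:
                    ans[s][t] = min(ans[s][t], cost)
    return ans
```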
The solution can be further optimized to O(n * 2^n) time and O(2^n) memory, but this was not necessary.