Single Round Match 836 Editorials
ThreeDarts
A good idea is to start by generating an array with all possible single-dart scores. Once we have that, we can easily iterate over every possibility for one, two, and then three darts. As the array only has 62 elements, we can easily check all 62^3 combinations of three darts.
public int[] throwDarts(int N) {
    int[] values = new int[62];
    int k = 0;
    for (int i=1; i<=20; ++i) for (int j=1; j<=3; ++j) values[k++] = i*j;
    values[k++] = 25;
    values[k++] = 50;
    for (int a : values)
        if (N == a) return new int[] {a};
    for (int a : values) for (int b : values)
        if (N == a+b) return new int[] {a,b};
    for (int a : values) for (int b : values) for (int c : values)
        if (N == a+b+c) return new int[] {a,b,c};
    return new int[0];
}
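As a quick sanity check, we can wrap the method above in a small class and try it on a few well-known dart scores (the class name and the `static` wrapper are just for this sketch):

```java
import java.util.Arrays;

public class ThreeDartsDemo {
    public static int[] throwDarts(int N) {
        int[] values = new int[62];
        int k = 0;
        // singles, doubles and trebles of 1..20
        for (int i = 1; i <= 20; ++i) for (int j = 1; j <= 3; ++j) values[k++] = i * j;
        values[k++] = 25;  // single bull
        values[k++] = 50;  // double bull
        for (int a : values)
            if (N == a) return new int[] {a};
        for (int a : values) for (int b : values)
            if (N == a + b) return new int[] {a, b};
        for (int a : values) for (int b : values) for (int c : values)
            if (N == a + b + c) return new int[] {a, b, c};
        return new int[0];  // N cannot be scored with at most three darts
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(throwDarts(180)));  // prints [60, 60, 60]
        System.out.println(throwDarts(179).length);            // prints 0: 179 is impossible
    }
}
```

(179 is one of the classic scores that cannot be reached with three darts, so the empty array is the correct answer there.)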
PartialSumPyramid
The main observation is that the solution always exists and is unique. More precisely, we can always use the given information to reconstruct the whole pyramid. We’ll do this from top to bottom.
We are always given the topmost value, so we have the full topmost row. We will now show that if we know a complete row and a single value in the next row, we can reconstruct that whole row. This is easiest to see with an example. Consider the following pyramid:
+---+
| 0 |
+---+---+
| 8 | 2 |
+---+---+---+
| 3 | 5 | 7 |
+---+---+---+---+
| x | y | 3 | z |
+---+---+---+---+
We only know one value in the bottom row, but that’s enough. We can compute all other values from this information. In this example:
We see that 3+z = 7, hence z = 4.
We see that y+3 = 5, hence y = 2.
Now that we know y, we see that x+2 = 3, hence x = 1.
In the general case we do exactly the same thing: starting from the one known value we can go right to compute all the following values, and we can also go left to compute all the preceding values. When computing modulo M the unknown value can always be uniquely determined: if a+b = c, knowing (a,c) allows us to compute b as (c-a) modulo M.
public int[] reconstruct(int N, int M, int[] index, int[] value) {
    int[][] pyramid = new int[N][];
    for (int r=0; r<N; ++r) {
        pyramid[r] = new int[r+1];
        pyramid[r][ index[r] ] = value[r];
        for (int c=index[r]+1; c<=r; ++c) {
            pyramid[r][c] = pyramid[r-1][c-1] - pyramid[r][c-1];
            if (pyramid[r][c] < 0) pyramid[r][c] += M;
        }
        for (int c=index[r]-1; c>=0; --c) {
            pyramid[r][c] = pyramid[r-1][c] - pyramid[r][c+1];
            if (pyramid[r][c] < 0) pyramid[r][c] += M;
        }
    }
    return pyramid[N-1];
}
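To see the reconstruction in action, here is the method above run on the example pyramid with M = 10. The index/value arrays below are one consistent choice of inputs for that pyramid (the known values being the 0, the 8, the 3, and the 3 at index 2 of the bottom row); the class wrapper is just for this sketch:

```java
import java.util.Arrays;

public class PartialSumPyramidDemo {
    public static int[] reconstruct(int N, int M, int[] index, int[] value) {
        int[][] pyramid = new int[N][];
        for (int r = 0; r < N; ++r) {
            pyramid[r] = new int[r + 1];
            pyramid[r][index[r]] = value[r];
            // go right from the known value, computing each cell from the row above
            for (int c = index[r] + 1; c <= r; ++c) {
                pyramid[r][c] = pyramid[r - 1][c - 1] - pyramid[r][c - 1];
                if (pyramid[r][c] < 0) pyramid[r][c] += M;
            }
            // go left from the known value in the same way
            for (int c = index[r] - 1; c >= 0; --c) {
                pyramid[r][c] = pyramid[r - 1][c] - pyramid[r][c + 1];
                if (pyramid[r][c] < 0) pyramid[r][c] += M;
            }
        }
        return pyramid[N - 1];
    }

    public static void main(String[] args) {
        // the example pyramid from the text, computed modulo M = 10
        int[] bottom = reconstruct(4, 10, new int[] {0, 0, 0, 2}, new int[] {0, 8, 3, 3});
        System.out.println(Arrays.toString(bottom));  // prints [1, 2, 3, 4]
    }
}
```

The output matches the x = 1, y = 2, z = 4 we derived by hand.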
IntersectionArea
The main ideas of this problem are easy:
The intersection S of the given rectangles is a rectangle (or a degenerate rectangle, or nothing at all).
Once you add your new rectangle T, the final result will be the intersection of S and T.
The final result must always be a rectangle.
That rectangle has to fit into S and have area exactly A.
We can use brute force to look for such a rectangle: e.g., iterate over all x from 1 to width(S), for each x that divides A compute y = A/x and check whether y <= height(S).
If no such (x, y) exist, there is no solution.
Otherwise you can simply pick the x*y rectangle in a corner of S as your T.
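The search for the sides can be sketched as follows. The helper name findSides and its interface are inventions of this sketch (W and H stand for the width and height of S):

```java
import java.util.Arrays;

public class FindSidesDemo {
    // looks for x*y == A with 1 <= x <= W and 1 <= y <= H; returns {x, y} or null
    public static long[] findSides(long A, long W, long H) {
        for (long x = 1; x <= W; ++x) {
            if (A % x != 0) continue;       // x must divide A
            long y = A / x;
            if (y <= H) return new long[] {x, y};
        }
        return null;                        // no x*y rectangle with area A fits into S
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(findSides(12, 4, 5)));  // prints [3, 4]
        System.out.println(findSides(7, 3, 3));                    // prints null
    }
}
```

In the second call, 7 only factors as 1*7 and 7*1, and neither side fits into a 3x3 intersection, so there is no solution.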
Still, we need to be aware of corner cases that are not covered by the above general solution. There are two such cases: first, the given collection can be empty, and second, the requested area can be zero.
The first special case is easy to handle: we can pretend that S (the intersection of the existing rectangles) is simply the whole 10^6 x 10^6 square that is our world. Then, regardless of what T we add, that T will be the final intersection - which is precisely what should happen.
The second special case is more tricky. The reason why it is a special case is that we are not allowed to create a rectangle with area equal to zero. (Example 4 was designed to catch solutions that would accidentally do this, and its annotation contained a reminder that this is something we need to keep in mind.)
In cases with A = 0 we need to add the new T in such a way that the final intersection has zero area.
Almost always we can simply take T to be a 1x1 square that is disjoint from S. This has the desired effect.
There is exactly one case when this isn’t possible: the case when S is the entire square. In that case we cannot place T outside S, and regardless of what T we choose and where we place it, the intersection of S and T will be T and thus its area won’t be zero.
(Note that this situation also occurs when the current collection of rectangles is empty. Regardless of what T you add, that T will be the whole collection, and thus the intersection will never have an area of zero. If you handle the “empty collection” special case in the way we described above, it is now automatically handled.)
public int[] addOne(int[] X1, int[] Y1, int[] X2, int[] Y2, long A) {
    int N = X1.length;
    // special-case the situation where the current collection is empty
    if (N == 0) {
        if (A == 0) return new int[0];
        for (long x=1; x<=1_000_000; ++x) {
            long y = A/x;
            if (y > 1_000_000) continue;
            if (x*y != A) continue;
            return new int[] {0,0,(int)x,(int)y};
        }
        return new int[0];
    }
    // find the intersection of the current collection
    int x1 = X1[0], y1 = Y1[0], x2 = X2[0], y2 = Y2[0];
    for (int n=0; n<N; ++n) {
        x1 = Math.max( x1, X1[n] );
        y1 = Math.max( y1, Y1[n] );
        x2 = Math.min( x2, X2[n] );
        y2 = Math.min( y2, Y2[n] );
    }
    // fix it to a point if it’s empty
    if (x2 < x1) x2 = x1;
    if (y2 < y1) y2 = y1;
    // handle the case with desired zero area
    if (A == 0) {
        if (x1 == 0 && y1 == 0 && x2 == 1_000_000 && y2 == 1_000_000)
            return new int[0];
        if (x1 > 0 || y1 > 0) return new int[] {0,0,1,1};
        return new int[] {1_000_000-1, 1_000_000-1, 1_000_000, 1_000_000};
    }
    // handle the case with desired positive area
    for (long x=1; x<=x2-x1; ++x) {
        long y = A/x;
        if (y > y2-y1) continue;
        if (x*y != A) continue;
        return new int[] {x1, y1, (int)(x1+x), (int)(y1+y)};
    }
    return new int[0];
}
DoubleWordLadder
As the title suggests, each index that gets changed must eventually get changed at least one more time: this is because after the first change the letter at that index cannot be the correct letter we want at that index.
Thus:
If A and B are identical, we do nothing.
If A and B differ in k > 1 positions, the optimal solution has 2*k steps. For a lower bound, note that each of the k positions needs at least two changes. For an upper bound, make one pass that changes each differing position to some wrong letter, and then a second pass that changes each of them to its final letter. If both passes process the positions from left to right, no two consecutive steps touch the same position, so this is a valid solution.
If A and B differ on 1 position, the optimal solution has 4 steps. This is because:
We must change the position where they differ at least twice.
We cannot change the same position twice in a row, so we must change at least one other position.
That position will eventually need at least one more change, too, so we need at least 4 steps.
Solutions in 4 steps exist: as above, we change one position, then the other position, then the first one again, and then the second one again.
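The case analysis above translates directly into a few lines. This is just a sketch of the length computation; the name optimalSteps (and the class around it) is not part of the problem's interface:

```java
public class DoubleWordLadderSteps {
    // length of the optimal solution, following the case analysis above
    public static int optimalSteps(String A, String B) {
        int k = 0;  // number of positions where A and B differ
        for (int i = 0; i < A.length(); ++i)
            if (A.charAt(i) != B.charAt(i)) ++k;
        if (k == 0) return 0;   // identical words: do nothing
        if (k == 1) return 4;   // special case: an extra helper position is needed
        return 2 * k;           // each differing position is changed exactly twice
    }

    public static void main(String[] args) {
        System.out.println(optimalSteps("abc", "abc"));  // prints 0
        System.out.println(optimalSteps("abc", "abd"));  // prints 4
        System.out.println(optimalSteps("abc", "xyz"));  // prints 6
    }
}
```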
Now we know the correct length of the solution we are constructing, and we also know that we change some k positions exactly twice (with the special case of also changing an extra position of our choice if k=1).
Thanks to how lexicographic order works, we can construct the lexicographically smallest solution greedily. In each step we have to take the locally optimal step (i.e., the one that produces the lexicographically smallest string) that is still a prefix of some valid solution of an optimal length.
As the input is pretty small, we can brute force this: iterate over all positions and over all new letters for that position. (Considering only the first usable letter at each position is enough, so if you want to optimize, you can break out of the inner loop as soon as you find a letter that can currently be used at the given position. Alternatively, iterate over just a, b, c for the first change at a position: at least one of those three letters always differs from both A[i] and B[i].)
All we need to do is keep track of the positions we are editing and the number of times each of them has already been changed. Almost every partial solution is valid and can be extended into a full solution, with just one exception: we need to avoid the situation where we have already changed k-1 positions twice and only the last position remains, which would force us to change it twice in a row. The edits that would get us into this situation must be avoided.
BoxWorld
Claim: In an optimal configuration no two cubes are placed next to each other (i.e., so that their vertical faces touch).
Proof: Consider any such situation. We’ll show that it’s not optimal by constructing a better one. Find any two non-empty stacks of cubes that stand next to each other and take the topmost cube from the shorter of the two (either one if they are tied). Instead, place that cube on top of the currently globally tallest stack. We can easily see that this change increases the total number of visible faces: only one face stops being visible (the previous top face of the globally tallest stack), while at least two new faces become visible.
If our stacks of cubes cannot touch, we can have at most L = ceiling(RC/2) of them on an RxC board.
Additionally, we want to have as many stacks as possible. This is because splitting a stack into two smaller ones always adds one new visible face.
Thus:
If N <= L, each cube will be its own stack.
If N > L, we will have exactly L stacks.
Perhaps paradoxically, the second case is much easier to count - and thus this problem is one of the rare ones where the largest inputs aren’t the hardest.
Let’s address the second case (N > L) now. The total number of configurations is equal to the number of ways to form exactly L stacks out of N cubes, times the number of ways to place those stacks onto the table.
Imagine the table colored like a chessboard. If the area is odd, the stacks have to go on the majority color: only 1 possible placement. If the area is even, the stacks can go on either color: 2 possible placements. In keeping with the theme of this SRM, this problem also has a non-trivial special case: 1x(2k) boards. On those there are actually k+1 different ways to place the k stacks so that they don’t touch. E.g., for k=3 we can do SeSeSe, SeSeeS, SeeSeS, or eSeSeS (where e is an empty cell and S is a stack of cubes).
Now that we have chosen one of the available sets of positions for the stacks, we need to fill them. We can start by placing one cube on each spot. This leaves us with N-L cubes we can now distribute arbitrarily. The ways to do this can be easily counted using the “stars and bars” technique. Imagine that we start above the first of the L stacks. Each sequence of N-L commands “drop a cube” and L-1 commands “move to the next stack” produces a different valid arrangement, and each valid arrangement can be produced by such a sequence of commands. Thus, the number of arrangements equals the number of such sequences, and as each such sequence consists of (N-L)+(L-1) = N-1 commands out of which some L-1 are “move” commands, there are choose(N-1, L-1) valid arrangements.
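The whole N > L count can be sketched as below. BigInteger is used to sidestep the question of the problem's exact output format (modular or not); the name countLarge and its interface are assumptions of this sketch:

```java
import java.math.BigInteger;

public class BoxWorldLargeCase {
    // binomial coefficient C(n, k); every intermediate prefix product is integral
    static BigInteger choose(long n, long k) {
        BigInteger r = BigInteger.ONE;
        for (long i = 0; i < k; ++i)
            r = r.multiply(BigInteger.valueOf(n - i)).divide(BigInteger.valueOf(i + 1));
        return r;
    }

    // number of configurations in the case N > L, where L = ceil(R*C / 2)
    static BigInteger countLarge(long R, long C, long N) {
        long L = (R * C + 1) / 2;
        BigInteger placements;
        if (R == 1 && C % 2 == 0) placements = BigInteger.valueOf(C / 2 + 1);      // 1 x 2k board
        else if (C == 1 && R % 2 == 0) placements = BigInteger.valueOf(R / 2 + 1); // 2k x 1 board
        else if ((R * C) % 2 == 1) placements = BigInteger.ONE;        // odd area: majority color
        else placements = BigInteger.valueOf(2);                       // even area: either color
        // stars and bars: distribute the remaining N-L cubes over the L stacks
        return placements.multiply(choose(N - 1, L - 1));
    }

    public static void main(String[] args) {
        System.out.println(countLarge(2, 2, 3));  // prints 4
        System.out.println(countLarge(1, 2, 3));  // prints 2
    }
}
```

For the 2x2 board with 3 cubes: two diagonals to choose from, and two ways to split the cubes into stacks of sizes (1,2) or (2,1), giving 2*2 = 4.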
For the small case (N <= L) we can use some variation of bitmask DP. We are essentially counting independent sets of size N in the R*C grid graph. There are many versions of the DP that will differ in their exact time complexity. One option is to do something dumb (on the order of 2^(2*C) operations). Once we have it, we can either try optimizing it, or we can try using it to precompute and hard-wire answers to the cases where the dumb approach exceeds the time limit. But a more careful implementation can solve all the given inputs within the given time limit.
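Here is the "dumb" row-by-row version of that DP in its simplest form, counting independent sets of exactly N cells in the R x C grid (in this case each stack is a single cube, so this count is the whole answer for N <= L). The inner pair of mask loops is where the roughly 2^(2C) work per row comes from; countIndependent is a name chosen for this sketch:

```java
public class BoxWorldSmallCase {
    // counts ways to place exactly N mutually non-adjacent cubes on an R x C board
    static long countIndependent(int R, int C, int N) {
        long[][] dp = new long[1 << C][N + 1];  // dp[mask][n]: last row = mask, n cubes placed
        for (int mask = 0; mask < (1 << C); ++mask) {
            if ((mask & (mask << 1)) != 0) continue;  // two horizontally adjacent cubes
            int bits = Integer.bitCount(mask);
            if (bits <= N) dp[mask][bits] = 1;        // all valid first rows
        }
        for (int r = 1; r < R; ++r) {
            long[][] ndp = new long[1 << C][N + 1];
            for (int prev = 0; prev < (1 << C); ++prev)
                for (int n = 0; n <= N; ++n) {
                    if (dp[prev][n] == 0) continue;
                    for (int mask = 0; mask < (1 << C); ++mask) {
                        if ((mask & (mask << 1)) != 0) continue;  // horizontal adjacency
                        if ((mask & prev) != 0) continue;         // vertical adjacency
                        int nb = n + Integer.bitCount(mask);
                        if (nb <= N) ndp[mask][nb] += dp[prev][n];
                    }
                }
            dp = ndp;
        }
        long total = 0;
        for (int mask = 0; mask < (1 << C); ++mask) total += dp[mask][N];
        return total;
    }

    public static void main(String[] args) {
        System.out.println(countIndependent(2, 2, 2));  // prints 2 (the two diagonals)
        System.out.println(countIndependent(2, 2, 1));  // prints 4
    }
}
```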
The reference solution uses two main ideas: it uses DP along broken profiles, and it uses the observation that the number of valid profiles is actually much smaller than 2^C: as we never place two stacks next to each other, the number of valid profiles is actually bounded by phi^C where phi is the golden ratio (about 1.6 instead of 2).
The state of the DP can be described by:
the coordinates (r, c) of the last processed cell
the number n of stacks (each of which is a single cube in this case) we have already placed onto the processed cells
a bitmask that records where the cubes have been placed on the boundary of the already processed area
Each state can be processed in constant time by trying at most two possibilities for what happens with the next cell. Thus, the full time complexity of the DP is proportional to the number of states, which is O(R*C*N*phi^C).