# January 29, 2020 Single Round Match 776 Editorials

#### CheatingAfterTests

There are multiple good ways to implement a solution to this problem. One possible solution is a pure brute force that can be summarized by the following pseudocode:

```
for each old_score in the report:
    for each new_score in 0..99:
        if new_score can be obtained from old_score:
            compare the new sum to the best one we have so far
```
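The brute force can be sketched as follows (the method name is my own, and I'm assuming, as the casework below confirms, that a new score "can be obtained" from an old one by changing at most one digit of the two-digit score):

```java
public int cheatBruteForce(int[] report) {
    int base = 0;
    for (int s : report) base += s;
    int best = base;   // changing a digit to itself is also allowed
    for (int i = 0; i < report.length; ++i)
        for (int ns = 0; ns <= 99; ++ns)
            // at most one digit differs: same tens digit or same units digit
            if (report[i] / 10 == ns / 10 || report[i] % 10 == ns % 10)
                best = Math.max(best, base - report[i] + ns);
    return best;
}
```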

A slightly more clever solution is based on doing some casework:

- If we have a score that is between 0 and 9, the best we can do is to improve it to a 9.
- If we have a score that is between 10 and 89, the best we can do is to improve its first digit to a 9. (This will always increase the number at least by 10, and thus it is better than changing the second digit.)
- If we have a score that is between 90 and 99, the best we can do is to improve it to a 99.

With this insight we can iterate over all scores in the report and compute the optimal way to change each of them. In the end, we simply pick the option that improves the sum by the largest margin.

```
public int cheat(int[] report) {
    int sumScores = 0;
    int bestImprovement = 0;
    for (int score : report) {
        sumScores += score;
        int newScore = 0;
        if (score < 10) newScore = 9;
        if (10 <= score && score < 90) newScore = 90 + (score % 10);
        if (score >= 90) newScore = 99;
        bestImprovement = Math.max( bestImprovement, newScore - score );
    }
    return sumScores + bestImprovement;
}
```

#### BinaryHeapLeaf

Clearly, maxleaf >= the number of leaves: the values in the leaves are distinct, so if we have X leaves, the largest value among them is at least X. The best we can hope for is to fill the leaves with the values 1, 2, …, X.

The above is actually always a valid solution: if we fill the heap with values N, N-1, … from the top to the bottom, we will get a valid heap with maxleaf = number of leaves. Thus, the first value we should return is simply the number of leaves in the heap.

What is the *biggest* value we can place into a leaf? If the leaf is at depth D below the root (with the root itself at depth 0), it clearly cannot contain a value bigger than N-D: if we start at the leaf and walk towards the root, each step must move to a strictly bigger number, so there are at least D values bigger than the leaf's. Thus, maxleaf <= N-D.

And again, this bound can actually always be attained. We have just determined all values on the path from the root to the chosen leaf. Once we fix those, we simply place the remaining values as before (going from the top of the heap to the bottom, placing the values from largest to smallest). Again, it's clear that this way we'll create a valid heap: whenever we place a value somewhere, the value in its parent is guaranteed to be larger.

Example for N = 9. Here’s an empty heap, with just an X in place of each value:

```
        X
      /   \
     X     X
    / \   / \
   X   X X   X
  / \
 X   X
```

We pick one of the leaves that are closest to the root, and we put the biggest values onto the path to that leaf:

```
        9
      /   \
     8     X
    / \   / \
   X   7 X   X
  / \
 X   X
```

And then we fill the remaining values (6-1) top to bottom:

```
        9
      /   \
     8     6
    / \   / \
   5   7 4   3
  / \
 2   1
```

And voilà, we have a valid heap with the number 7 in a leaf.
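The construction from this example can be sketched in code (a hypothetical helper of my own, using the same 1-indexed vertex numbering as the solution below, where vertex v has parent v/2):

```java
import java.util.*;

int[] buildHeap(int N, int leaf) {
    int[] value = new int[N + 1];   // 1-indexed; value[v] is the number at vertex v
    int next = N;
    // fix the root-to-leaf path: the root gets N, the next vertex N-1, and so on
    Deque<Integer> path = new ArrayDeque<>();
    for (int v = leaf; v >= 1; v /= 2) path.push(v);
    while (!path.isEmpty()) value[path.pop()] = next--;
    // fill the remaining vertices top to bottom, largest remaining value first
    for (int v = 1; v <= N; ++v) if (value[v] == 0) value[v] = next--;
    return value;
}
```

For N = 9 and leaf = 5 this produces exactly the heap drawn above, and the heap property (each vertex smaller than its parent) holds by construction.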

Thus, the answer is {the number of leaves, N – the depth of the shallowest leaf}. There are closed-form formulas for both quantities, but given how small N is, the easiest solution is to actually iterate over the whole heap and check each vertex. This is what the reference solution below does.

```
boolean isLeaf(int N, int v) {
    return 2*v > N;
}

public int[] maxDiff(int N) {
    int numberOfLeaves = 0;
    int minLeafDepth = N + 47;  // sentinel value, larger than any real depth
    // iterate over all possible depths
    for (int depth = 0; ; ++depth) {
        // given the depth, compute the IDs of the vertices in this depth
        int lo = 1 << depth, hi = 1 << (depth+1);
        // if we are already too deep, stop
        if (lo > N) break;
        // look at all vertices and count leaves among them
        for (int v = lo; v < hi; ++v) {
            if (v > N) break;
            if (isLeaf(N,v)) {
                ++numberOfLeaves;
                minLeafDepth = Math.min( minLeafDepth, depth );
            }
        }
    }
    return new int[] { numberOfLeaves, N - minLeafDepth };
}
```
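For completeness, here is a sketch of the closed-form alternative mentioned above (my own derivation: vertex v is a leaf iff 2v > N, so the leaves are exactly vertices N/2+1 through N, and the shallowest one is vertex N/2+1):

```java
public int[] maxDiffFormula(int N) {
    // leaves are the vertices v with 2v > N, i.e. v in {N/2+1, ..., N}
    int numberOfLeaves = N - N/2;
    // the shallowest leaf is vertex N/2+1; a vertex's depth is floor(log2(id))
    int minLeafDepth = 31 - Integer.numberOfLeadingZeros(N/2 + 1);
    return new int[] { numberOfLeaves, N - minLeafDepth };
}
```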

#### EncloseArea

A bit of experimentation quickly shows that the enclosed area must always be an even integer. Here is how to prove it. The first wall we place is one diagonal of some cell. Through this wall we can sketch a new auxiliary grid (call it the red grid): a grid rotated by 45 degrees with respect to the board, with the first wall lying on one of its lines. The whole boundary of the enclosed area, being connected to the first wall, must lie on this red grid, and therefore the enclosed area consists of some red squares. To conclude the proof, note that the area of a single red square is 2.

What is the biggest area we can enclose? The 50×50 board has 196 cells on its boundary. In each of these cells at most half of the cell's area can be enclosed, so we lose at least 196/2 = 98, and the best we can do is 2500 – 98 = 2402. This is indeed achievable (the last example shows how: just draw a wavy line along the entire border of the board). Thus, the valid areas are exactly the even numbers between 2 and 2402, inclusive.

And the above observation with the red grid also gives us a neat idea how to enclose a specific area: to enclose an area A, pick any connected set of A/2 red squares and draw its boundary.

There is a nice systematic numbering of the red squares (the two loops in the solution below enumerate it) with the property that, for any K, the smallest K numbers form a connected area.

For one final trick to make the implementation pleasant, just note that the boundary of a set of squares is the xor of the boundaries of the individual squares, so we can simply draw each of the first A/2 squares, making sure that the walls drawn a second time disappear. The full implementation of this solution is shown below.

```
void toggle(char[][] board, int r, int c, char wanted) {
    if (board[r][c] == wanted) board[r][c] = '.'; else board[r][c] = wanted;
}

public String[] enclose(int A) {
    if (A % 2 == 1) return new String[0];
    if (A > 2500 - 50 - 48) return new String[0];  // maximum enclosable area is 2402
    char[][] board = new char[50][50];
    for (int i=0; i<50; ++i) for (int j=0; j<50; ++j) board[i][j] = '.';

    // enumerate the first A/2 red squares in an order that keeps them connected
    int squaresWanted = A/2;
    int[] X = new int[squaresWanted];
    int[] Y = new int[squaresWanted];
    int id = 0;
    for (int x=0; x<=24; ++x) for (int y=x; y<=48-x; ++y)
        if (id < squaresWanted) { X[id] = x; Y[id] = y; ++id; }
    for (int x=-1; x>=-24; --x) for (int y=-x; y<=48+x; ++y)
        if (id < squaresWanted) { X[id] = x; Y[id] = y; ++id; }

    // draw the boundary of each red square; walls drawn twice cancel out
    for (int i=0; i<squaresWanted; ++i) {
        int dr = Y[i]-X[i], dc = Y[i]+X[i];  // map red-grid coordinates to board cells
        toggle(board,dr,dc,'/');
        toggle(board,dr+1,dc,'\\');
        toggle(board,dr,dc+1,'\\');
        toggle(board,dr+1,dc+1,'/');
    }

    String[] answer = new String[50];
    for (int r=0; r<50; ++r) {
        answer[r] = "";
        for (int c=0; c<50; ++c) answer[r] += board[r][c];
    }
    return answer;
}
```
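As a sanity check, we can verify the enclosed area of an output board independently. The sketch below (my own verification helper, not part of the reference solution) splits every cell into four triangles and flood-fills from outside the board; whatever remains unreached is the enclosed region, and each triangle contributes an area of 1/4.

```java
import java.util.*;

double enclosedArea(String[] board) {
    int n = board.length;
    // triangle t of a cell: 0 = top, 1 = right, 2 = bottom, 3 = left
    boolean[][][] seen = new boolean[n][n][4];
    Deque<int[]> queue = new ArrayDeque<>();
    // seed the search with all triangles whose outer edge lies on the border
    for (int i = 0; i < n; ++i) {
        queue.add(new int[]{0, i, 0});      // top row
        queue.add(new int[]{n-1, i, 2});    // bottom row
        queue.add(new int[]{i, 0, 3});      // left column
        queue.add(new int[]{i, n-1, 1});    // right column
    }
    int[] dr = {-1, 0, 1, 0}, dc = {0, 1, 0, -1};
    while (!queue.isEmpty()) {
        int[] cur = queue.poll();
        int r = cur[0], c = cur[1], t = cur[2];
        if (r < 0 || r >= n || c < 0 || c >= n || seen[r][c][t]) continue;
        seen[r][c][t] = true;
        // cross into the adjacent cell through this triangle's outer edge
        queue.add(new int[]{r + dr[t], c + dc[t], (t + 2) % 4});
        // move to the two neighbouring triangles of the same cell, unless a wall blocks it:
        // '/' separates {top,right} and {bottom,left}; '\' separates {top,left} and {right,bottom}
        char w = board[r].charAt(c);
        for (int d = 1; d <= 3; d += 2) {
            int u = (t + d) % 4, a = Math.min(t, u), b = Math.max(t, u);
            boolean blocked =
                (w == '/'  && ((a == 0 && b == 1) || (a == 2 && b == 3))) ||
                (w == '\\' && ((a == 1 && b == 2) || (a == 0 && b == 3)));
            if (!blocked) queue.add(new int[]{r, c, u});
        }
    }
    int unreached = 0;
    for (int r = 0; r < n; ++r)
        for (int c = 0; c < n; ++c)
            for (int t = 0; t < 4; ++t)
                if (!seen[r][c][t]) ++unreached;
    return unreached / 4.0;   // each triangle has area 1/4
}
```

For example, a small 4×4 board with a closed diamond of eight walls encloses exactly the area 8 (four full cells plus eight half cells).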

#### StringRings

This would be easy to solve for A, B <= 1000 using dynamic programming, but the bounds in the statement look much scarier. Let's not get scared: we'll go through the same thought process and see what happens when we start to unravel this problem by pulling the right string (pun intended).

Suppose we have at least one red-green string (i.e., B > 0). Let's pick an arbitrary red-green string and look at its green end. This end has to be paired with one of the 2A+B available red ends, each of them equally likely. Let's examine what happens:

- With probability 2A / (2A+B) we will pick an end of a red-red string. If we tie the two ends together, we get a new string that is red on both ends. So the net effect is that we have one fewer red-green string and no new rings.
- With probability 1 / (2A+B) we will pick our string's own red end. We will have one fewer red-green string and one new ring.
- With probability (B-1) / (2A+B) we will pick the red end of another red-green string. Tying them together merges the two separate red-green strings into one new red-green string. So, yet again, the net effect is one fewer red-green string and no new rings.

So, perhaps surprisingly, the effect is always exactly the same: one red-green string disappears. The only difference between the three cases is that in one of them we also get a new ring.

If we only have A red-red and A green-green strings left, we can proceed as follows. We pick an arbitrary red-red string, an arbitrary green-green string, and tie one end of each together. Once we do that, we have decreased A by 1 but increased B to 1, so in the next step we can again apply the previous paragraph.

By linearity of expectation, the expected total number of rings is the sum of the expected numbers of rings created in each step of the above process. Thus, whenever we process a red-green string, we increment the answer by 1 / (2A+B), where A and B are the current counts.

```
public double expectedRings(int A, int B) {
    // phase 1: remove the B red-green strings one by one (A stays constant)
    double answerB = 0;
    for (int b=B; b>0; --b) answerB += 1. / (2*A+b);
    // phase 2: with a pairs left, merging one red-red and one green-green string
    // gives counts A' = a-1, B' = 1, so the ring probability is 1/(2a-1)
    double answerA = 0;
    for (int a=A; a>0; --a) answerA += 1. / (2*a-1);
    return answerA + answerB;
}
```
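We can also sanity-check the formula with a Monte Carlo simulation (my own sketch, assuming the process amounts to a uniformly random pairing of the 2A+B red ends with the 2A+B green ends):

```java
import java.util.*;

double simulateRings(int A, int B, int trials, long seed) {
    Random rng = new Random(seed);
    int S = 2*A + B;   // total number of strings
    long totalRings = 0;
    for (int t = 0; t < trials; ++t) {
        // list the owner string of every red end and every green end
        List<Integer> red = new ArrayList<>(), green = new ArrayList<>();
        for (int i = 0; i < A; ++i) { red.add(i); red.add(i); }        // red-red
        for (int i = A; i < 2*A; ++i) { green.add(i); green.add(i); }  // green-green
        for (int i = 2*A; i < S; ++i) { red.add(i); green.add(i); }    // red-green
        Collections.shuffle(green, rng);   // uniformly random pairing of ends
        // union-find over strings: a tie inside one component closes a ring
        int[] parent = new int[S];
        for (int i = 0; i < S; ++i) parent[i] = i;
        for (int i = 0; i < red.size(); ++i) {
            int x = find(parent, red.get(i)), y = find(parent, green.get(i));
            if (x == y) ++totalRings; else parent[x] = y;
        }
    }
    return (double) totalRings / trials;
}

int find(int[] p, int v) {
    while (p[v] != v) v = p[v] = p[p[v]];   // path halving
    return v;
}
```

For A = 2, B = 1 the formula gives 1/5 + 1/3 + 1/1 ≈ 1.5333, and the simulation converges to the same value.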

#### FinishingDice

The distribution of all possible rolls on a die with numbers A[0], …, A[n-1] can be represented using its generating function: sum_i x^A[i]. In particular, for a standard n-sided die this generating function is S_n(x) = (x+x^2+…+x^(n-1)+x^n).

The distribution of possible rolls for two dice corresponds to the product of their generating functions. For example, if we have two standard 4-sided dice, the product of their generating functions is (x+x^2+x^3+x^4)^2 = (1x^2 + 2x^3 + 3x^4 + 4x^5 + 3x^6 + 2x^7 + x^8). The term s*x^t tells us that there are s ways to roll the sum t.
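The product of generating functions is just a convolution of coefficient arrays, which is easy to verify directly (a quick sketch; the helper name is my own):

```java
// multiply two generating functions given as coefficient arrays
// (index = exponent, value = coefficient)
int[] multiply(int[] p, int[] q) {
    int[] r = new int[p.length + q.length - 1];
    for (int i = 0; i < p.length; ++i)
        for (int j = 0; j < q.length; ++j)
            r[i + j] += p[i] * q[j];
    return r;
}
```

Squaring the array {0, 1, 1, 1, 1}, which encodes x+x^2+x^3+x^4, reproduces the coefficients 1, 2, 3, 4, 3, 2, 1 from the example above.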

Sometimes it is possible to find pairs of dice other than the two regular ones with the property that they produce the same probability distribution on results. Instead of directly looking for the values on the sides of those dice, we can look for their generating functions instead. More precisely, we want two polynomials P, Q such that:

- Their product is (S_n(x))^2.
- The constant term of each polynomial is 0 (no empty faces allowed).
- All coefficients are nonnegative (we cannot have -3 faces with 7 pips each).
- The sum of coefficients of each polynomial is n (we want two n-sided dice).

The key observation is that the number of such candidate pairs (P,Q) has to be quite small. This is because each P and each Q must be a factor of (S_n(x))^2 in the ring of polynomials over Z.

Thus, a viable approach looks as follows. We know that both P and Q have a factor x each. Factor S_n(x)/x into irreducible polynomials — these are cyclotomic polynomials (see https://en.wikipedia.org/wiki/Cyclotomic_polynomial for an explanation of this step). This gives you the factorization of (S_n(x)/x)^2. Try all possibilities for how to split the factors between P and Q. For each of those, check whether P and Q satisfy the requirements above, and if they do, check whether the corresponding dice can be produced from the partial dice that were given in the input.

For the given constraints the factorization of S(x)/x always has at most 11 terms, which gives us at most 3^11 = 177,147 possibilities for the pair (P,Q). (Only a few of those actually produce valid pairs of dice, so if it turns out that your program cannot try 3^11 possibilities fast enough, you can precompute and store the indices of those possibilities that actually produce valid pairs of dice.)
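As an illustration of the enumeration on n = 6 (my own sketch, with the factorization S_6(x)/x = Φ2·Φ3·Φ6 hard-coded): trying all 3^3 splits of the squared factors recovers exactly the standard pair and the well-known Sicherman dice {1,2,2,3,3,4} and {1,3,4,5,6,8}.

```java
import java.util.*;

int[] mul(int[] p, int[] q) {
    int[] r = new int[p.length + q.length - 1];
    for (int i = 0; i < p.length; ++i)
        for (int j = 0; j < q.length; ++j) r[i + j] += p[i] * q[j];
    return r;
}

// does p describe a valid 6-sided die? (nonnegative coefficients summing to 6)
boolean ok(int[] p) {
    int sum = 0;
    for (int c : p) { if (c < 0) return false; sum += c; }
    return sum == 6;
}

// turn a generating function back into the sorted list of faces
List<Integer> faces(int[] p) {
    List<Integer> f = new ArrayList<>();
    for (int i = 0; i < p.length; ++i)
        for (int c = 0; c < p[i]; ++c) f.add(i);
    return f;
}

List<String> validSplits() {
    // irreducible factors of S_6(x)/x = 1+x+...+x^5 over Z:
    // Phi_2 = 1+x, Phi_3 = 1+x+x^2, Phi_6 = 1-x+x^2
    int[][] fac = { {1, 1}, {1, 1, 1}, {1, -1, 1} };
    List<String> result = new ArrayList<>();
    // (S_6(x)/x)^2 contains each factor twice; P gets 0..2 copies of each
    for (int mask = 0; mask < 27; ++mask) {
        int[] P = {0, 1}, Q = {0, 1};   // both dice keep the mandatory factor x
        int m = mask;
        for (int f = 0; f < 3; ++f) {
            int copies = m % 3; m /= 3;
            for (int c = 0; c < copies; ++c) P = mul(P, fac[f]);
            for (int c = 0; c < 2 - copies; ++c) Q = mul(Q, fac[f]);
        }
        if (ok(P) && ok(Q)) result.add(faces(P) + " " + faces(Q));
    }
    return result;
}
```

Only three of the 27 splits survive the checks: the standard pair and the Sicherman pair (in both orders), matching the claim that few splits produce valid dice.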

**misof**