# December 14, 2018 Single Round Match 744 Editorials

The match was held on December 14th. The problem set and editorials were prepared by boba5551.

**Div2 Easy – ThreePartSplit**

Define k = (**d** – **a**) div 3. We set b = **a** + k and c = **a** + 2k. By construction, [**a**, b) and [b, c) each have length k. The only part left is to verify that the interval [c, **d**) has length at least k. If that were not the case, the sum of the lengths of [**a**, b), [b, c) and [c, **d**) would be less than 3k. But this contradicts the fact that **d** – **a** >= 3k.

```java
public int[] split(int a, int d) {
    int minLength = (d - a) / 3;
    return new int[] { a + minLength, a + 2 * minLength };
}
```
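As a quick sanity check, here is the same construction on a small hypothetical input (the class name SplitDemo and the sample values are ours, not part of the problem data): for a = 2 and d = 11 we get k = 3, so the split points are b = 5 and c = 8, and all three parts have length at least 3.

```java
// Sanity-check sketch for the split construction above (sample values are ours).
public class SplitDemo {
    static int[] split(int a, int d) {
        int minLength = (d - a) / 3;
        return new int[] { a + minLength, a + 2 * minLength };
    }

    public static void main(String[] args) {
        int a = 2, d = 11;           // interval [2, 11), length 9, k = 3
        int[] bc = split(a, d);      // b = 5, c = 8
        int k = (d - a) / 3;
        // the last part [c, d) must also have length >= k
        System.out.println(bc[0] + " " + bc[1] + " " + (d - bc[1] >= k)); // 5 8 true
    }
}
```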

**Div2 Medium – MagicNumbersAgain**

We split the explanation into two parts: first we explain how to verify the second condition, and then we discuss how to check the first condition efficiently.

**The second condition of being a magic number**:

We assume that v is a square of an integer, and we want to check whether it is magic or not. We can achieve that with the following simple procedure, which directly implements the statement of the condition:

```java
private boolean isMagic(long v) {
    ArrayList<Long> digs = new ArrayList<Long>();
    while (v > 0) {
        digs.add(v % 10);
        v /= 10;
    }
    boolean OK = true;
    boolean odd = true;
    for (int i = digs.size() - 1; i > 0; i--) {
        if (odd && digs.get(i) >= digs.get(i - 1)) OK = false;
        if (!odd && digs.get(i) <= digs.get(i - 1)) OK = false;
        odd ^= true;
    }
    return OK;
}
```

**Implementing the first condition efficiently for all the numbers in [A,B]**:

Our next goal is to run the procedure isMagic on all the squares between **A** and **B**. Since **A** and **B** can be as big as 10^10, it would be too slow to test each number in that interval for being a square. However, if for each Y that “makes sense” we test whether Y^2 is between **A** and **B**, we obtain a significantly more efficient solution. Namely, if v is between **A** and **B** and v = Y*Y, then Y is at most 10^5. So we iterate over all such values of Y, and whenever Y*Y is between **A** and **B** we invoke the procedure above. The code in Java follows:

```java
public int count(long A, long B) {
    int ret = 0;
    for (long v = 1; v * v <= B; v++)
        if (v * v >= A && isMagic(v * v)) ret++;
    return ret;
}
```
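Putting the two pieces together, a self-contained sketch (the class name MagicCount and the sample range are ours) can be checked by hand on a small range: the squares in [1, 100] are 1, 4, 9, 16, 25, 36, 49, 64, 81 and 100, of which 64, 81 and 100 fail the alternating-digits check, leaving 7 magic numbers.

```java
import java.util.ArrayList;

// Self-contained sketch combining the two methods from the editorial.
public class MagicCount {
    static boolean isMagic(long v) {
        ArrayList<Long> digs = new ArrayList<Long>();
        while (v > 0) { digs.add(v % 10); v /= 10; }
        boolean OK = true, odd = true;
        // Walk from the most significant digit; digits must strictly
        // increase, then decrease, alternately.
        for (int i = digs.size() - 1; i > 0; i--) {
            if (odd && digs.get(i) >= digs.get(i - 1)) OK = false;
            if (!odd && digs.get(i) <= digs.get(i - 1)) OK = false;
            odd ^= true;
        }
        return OK;
    }

    static int count(long A, long B) {
        int ret = 0;
        for (long v = 1; v * v <= B; v++)
            if (v * v >= A && isMagic(v * v)) ret++;
        return ret;
    }

    public static void main(String[] args) {
        // 1, 4, 9, 16, 25, 36, 49 pass; 64, 81, 100 fail the check.
        System.out.println(count(1, 100)); // prints 7
    }
}
```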

**Div2 Hard and Div1 Easy**

Let cnt(r, c) be a procedure that calculates the sum of the rectangle [0, r) x [0, c). Then, by inclusion–exclusion, the final output is cnt(**r2**+1, **c2**+1) – cnt(**r1**, **c2**+1) – cnt(**r2**+1, **c1**) + cnt(**r1**, **c1**).

**Calculating the sum of the rectangle [0, r) x [0, c)**:

Without loss of generality, assume that r <= c (the grid is symmetric with respect to r = c). Next, we divide the rectangle [0, r) x [0, c) into two parts: an r x r square in the corner, and the vertical bars to the right of this square.

To handle the square, we observe that the values have a very regular form. This is how the first 6×6 part of the infinite grid looks (row 0 at the bottom):

```
2 2 2 2 2 2
1 1 1 1 1 2
0 0 0 0 1 2
2 2 2 0 1 2
1 1 2 0 1 2
0 1 2 0 1 2
```

On the other hand, each bar to the right of the square consists of a single repeated number, and those numbers follow the infinite periodic sequence 0, 1, 2, 0, 1, 2, …

So, we handle the square and the bars separately. Formulas for each of the two cases can be derived relatively easily; one such implementation is provided below.

Code in C++ (courtesy of majk):

```cpp
ll cnt(int r, int c) {
    if (r > c) swap(r, c);
    ll x = r - r % 3;
    ll y = c - x;
    y -= y % 3;
    return 4 * x / 3 + x * x + (c % 3 == 2) * r + y * r + (x + 1) * (r % 3 == 2);
}

ll sum(int r1, int r2, int c1, int c2) {
    return cnt(r2 + 1, c2 + 1) - cnt(r1, c2 + 1) - cnt(r2 + 1, c1) + cnt(r1, c1);
}
```
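The displayed 6×6 corner is consistent with the value in cell (r, c) being max(r, c) mod 3, which suggests a quick cross-check of the closed form against direct summation (this checker is our sketch, with the closed form ported to Java; it is not part of the official solution):

```java
// Cross-check sketch: the closed form cnt(r, c) vs. direct summation of
// max(i, j) % 3, which matches the displayed 6x6 corner of the grid.
public class CntCheck {
    // Java port of majk's closed form; cnt(r, c) sums rows 0..r-1, cols 0..c-1.
    static long cnt(int r, int c) {
        if (r > c) { int t = r; r = c; c = t; }
        long x = r - r % 3;
        long y = c - x;
        y -= y % 3;
        return 4 * x / 3 + x * x + (c % 3 == 2 ? r : 0) + y * r
             + (x + 1) * (r % 3 == 2 ? 1 : 0);
    }

    // Brute-force sum assuming the cell value is max(i, j) % 3.
    static long brute(int r, int c) {
        long s = 0;
        for (int i = 0; i < r; i++)
            for (int j = 0; j < c; j++)
                s += Math.max(i, j) % 3;
        return s;
    }

    public static void main(String[] args) {
        for (int r = 0; r <= 30; r++)
            for (int c = 0; c <= 30; c++)
                if (cnt(r, c) != brute(r, c))
                    throw new AssertionError(r + " " + c);
        System.out.println("closed form matches brute force");
    }
}
```

For instance, cnt(6, 6) = 44, which agrees with summing the 6×6 corner shown above.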

**Div1 Medium – UniformingMatrix**

The main point here is to observe that flipping rows and columns is almost independent. Namely, consider an operation applied to **M**(i, j). The problem statement says that we flip the i-th row while not changing **M**(i, j), and then flip the j-th column while not changing **M**(i, j). But exactly the same outcome is achieved by flipping the *entire* row i (including **M**(i, j)), and then flipping the entire column j — the cell **M**(i, j) is then flipped twice, so its value stays the same. This observation allows us to perform row and column flips almost separately.

The second observation is that applying the entire-row flip to a row an even number of times does not change that row, while flipping it an odd number of times yields the same state as applying the operation once. The same holds for entire-column flips. Hence, what matters is only the parity of the number of times an operation is applied to each row/column.

Finally, observe that the order in which the entire-row and entire-column operations are performed does not matter.

So, we apply the entire-row operations first. Consider separately the cases in which the first row is flipped zero times (i.e., an even number of times) and once (i.e., an odd number of times). After flipping the entire first row b times (where b equals 0 or 1), and before applying the entire-column operations, perform entire-row operations on each of the other rows so that they become *the same* as the first row. If this is not possible to achieve, then our choice of flipping the first row b times does not lead to the all-ones matrix.

Once all the rows are the same, we apply the column operations. This is now trivial — the j-th column should be entirely flipped iff M(0, j) equals ‘0’.

At the end, we check whether the parity of the number of applied entire-row operations equals the parity of the number of applied entire-column operations (each original operation performs exactly one row flip and one column flip, so the two counts always have equal parity). If the parities match, the output is “possible”; otherwise our choice of b does not lead to the all-ones matrix.

If no choice of b leads to the all-ones matrix, output “impossible”.
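The parity reformulation can be cross-checked by brute force on tiny matrices (this checker is our own sketch, not the reference solution): enumerate all subsets of rows and columns to flip entirely, keep only pairs of subsets with equal parity of size, and test whether some pair yields the all-ones matrix.

```java
// Brute-force sketch of the parity reformulation: a matrix is reachable iff
// some pair of row/column flip subsets with equal parity makes it all ones.
public class UniformCheck {
    static boolean solvable(String[] M) {
        int n = M.length, m = M[0].length();
        for (int rm = 0; rm < (1 << n); rm++)
            for (int cm = 0; cm < (1 << m); cm++) {
                // each operation flips one row and one column, so the
                // numbers of row flips and column flips have equal parity
                if (Integer.bitCount(rm) % 2 != Integer.bitCount(cm) % 2) continue;
                boolean ok = true;
                for (int i = 0; i < n && ok; i++)
                    for (int j = 0; j < m && ok; j++) {
                        int v = M[i].charAt(j) - '0';
                        v ^= ((rm >> i) & 1) ^ ((cm >> j) & 1);
                        if (v != 1) ok = false;
                    }
                if (ok) return true;
            }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(solvable(new String[] { "01", "10" })); // true
        System.out.println(solvable(new String[] { "0" }));        // false
    }
}
```

Note that the single-cell matrix “0” is unsolvable: making it ‘1’ would require the row-flip and column-flip parities to differ.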

Code in Java:

```java
private String[] M;
private int n, m;

private void flipRow(int i) {
    char[] newRow = new char[m];
    for (int j = 0; j < m; j++)
        if (M[i].charAt(j) == '0') newRow[j] = '1';
        else newRow[j] = '0';
    M[i] = new String(newRow);
}

public String isPossible(String[] MM) {
    M = MM;
    n = M.length;
    m = M[0].length();
    String[] origM = new String[n];
    for (int i = 0; i < n; i++) origM[i] = M[i];
    for (int changes = 0; changes < 2; changes++) {
        int cntRow = changes;
        for (int i = 1; i < n; i++) {
            if (M[0].charAt(0) == M[i].charAt(0)) continue;
            cntRow++;
            flipRow(i);
        }
        int cntCol = 0;
        boolean OK = true;
        for (int j = 0; j < m; j++) {
            if (M[0].charAt(j) == '0') cntCol++;
            for (int i = 1; i < n; i++)
                if (M[0].charAt(j) != M[i].charAt(j)) OK = false;
        }
        cntRow %= 2;
        cntCol %= 2;
        if (OK && cntRow == cntCol) return "possible";
        for (int i = 0; i < n; i++) M[i] = origM[i];
        flipRow(0);
    }
    return "impossible";
}
```

**Div1 Hard – CoverTreePaths**

There are two natural approaches for this problem — satisfy demands in top-down fashion (i.e., starting from the root and moving towards the leaves), or satisfy demands in bottom-up fashion (i.e., starting from the leaves and moving towards the root). The official solution takes the bottom-up approach, which we describe next.

Initially, we set v[i] = 0 for each vertex i, and then update v in n steps. After the k-th step we maintain the following invariant: assuming v[i] = 0 for each i < **n** – k, the remaining entries of v are set so as to minimize the cost while satisfying the demands of the vertices i >= **n** – k.

Assume that we have executed k steps, and now we are executing the (k+1)-st step. Let i = **n** – (k + 1).

If i is a leaf, then we set v[i] = **d**[i]. (This value might be updated later.)

If i is a non-leaf, in this step we will eventually have to set v[i] >= **d**[i], as the only way to satisfy the demand of i is to make v[i] at least **d**[i]. Also, the descendants of i have already satisfied their demands even if v[i] equals 0. So, when we increase v[i], some of the values v[j], where j is a descendant of i, can potentially be reduced. But which descendants’ values?

We repeat the following:

- Let S_i be the set of all descendants w of i such that every vertex on the path between w and i (excluding w and i) has value zero.
- If v[i] >= **d**[i] and sum_{w in S_i} **c**[w] < **c**[i], then break.
- Update v[w] = v[w] – 1 for each w in S_i.
- Update v[i] = v[i] + 1.

Clearly, this process maintains the invariant that the demands of all the vertices i+1, …, n–1 are satisfied. Furthermore, the demand of vertex i is satisfied as well, since the updates continue as long as v[i] < **d**[i]. But is the choice of S_i the best one?

Namely, consider the following example. Vertex i has a direct child 2, and 2 has two direct children 0 and 1. Assume that each of v[0], v[1] and v[2] is at least 1. When we increase v[i] by 1, instead of decreasing v[2] by 1 we could just as well decrease v[0] and v[1] by 1 without violating the demands. But would decreasing v[0] and v[1] reduce the cost further than decreasing only v[2]? If it did, we would have c[0] + c[1] > c[2]. But this contradicts our invariant that, after the step in which vertex 2 was processed, the values were set optimally under the assumption that the unprocessed vertices have value 0. Namely, if c[0] + c[1] > c[2], an optimal solution should *not* keep both v[0] and v[1] positive, as it pays off to increase v[2] by 1 and decrease each of v[0] and v[1] by 1.

It is easy to generalize this argument in the following way. If we assume that in the update step it is better to choose a set of descendants {x_1, …, x_r} of w instead of w in S_i (where all the vertices x have positive v[x]), then it would imply that c[x_1] + … + c[x_r] > c[w]. But, as before, this would contradict the invariant that when w was processed the values in v were set optimally. (Notice that after a vertex i is processed and its value v[i] is set, v[i] never increases in later steps.)

To implement these steps efficiently, the official solution maintains a heap for each S_i together with the total cost of all the vertices in S_i. The heap for S_i is ordered by the current values of the vertices, so after we update v[i] and need to reduce v[w] for w in S_i, it is easy to find the vertex in S_i with the smallest value. Once some v[w] is reduced to zero by the updates, we merge S_w into S_i. To do that efficiently, we make sure to merge the smaller heap into the larger one.
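The implementation below also uses a lazy-offset trick (the valueToSubtract array): instead of decrementing every heap entry, each heap keeps a per-heap offset that is implicitly subtracted from all stored keys, so “decrease every element by delta” is O(1). Isolated into a minimal sketch (the class OffsetHeap and its method names are ours):

```java
import java.util.PriorityQueue;

// Sketch (our own, simplified from the solution below): a heap with a lazy
// per-heap offset, so subtracting a delta from every element is O(1).
class OffsetHeap {
    PriorityQueue<Long> pq = new PriorityQueue<Long>();
    long offset = 0; // implicitly subtracted from every stored key

    void add(long v) { pq.add(v + offset); }
    void subtractAll(long delta) { offset += delta; }
    long min() { return pq.peek() - offset; }

    // Small-to-large merge: move the smaller heap's keys, translating them
    // from its own offset into the survivor's offset.
    static OffsetHeap merge(OffsetHeap a, OffsetHeap b) {
        if (a.pq.size() < b.pq.size()) { OffsetHeap t = a; a = b; b = t; }
        while (!b.pq.isEmpty()) a.pq.add(b.pq.poll() - b.offset + a.offset);
        return a;
    }
}
```

Because the smaller heap is always merged into the larger one, each element is moved O(log n) times over the whole run, which keeps the bottom-up procedure efficient.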

The full implementation in Java is provided below.

```java
private class Pair implements Comparable<Pair> {
    public long first;
    public int second;

    Pair(long first, int second) {
        this.first = first;
        this.second = second;
    }

    public int compareTo(Pair pair) {
        if (this.first < pair.first) return -1;
        else if (this.first > pair.first) return 1;
        else {
            if (this.second < pair.second) return -1;
            else if (this.second > pair.second) return 1;
            return 0;
        }
    }
}

private ArrayList<Integer>[] children;
private PriorityQueue<Pair>[] heap;
private long[] sumOfCosts;
private long[] valueToSubtract;
private long[] value;
private long totalCost;
private long[] d, c, p;

private void mergeFirstToSecondHeap(int f, int s) {
    while (heap[f].size() > 0) {
        Pair pair = heap[f].poll();
        pair.first = pair.first - valueToSubtract[f] + valueToSubtract[s];
        heap[s].add(pair);
    }
}

public long minimumCost(int n, int[] op, int[] od, int[] oc, int[] params) {
    d = new long[n];
    c = new long[n];
    p = new long[n];
    for (int i = 0; i < od.length; i++) {
        d[i] = od[i];
        c[i] = oc[i];
        if (i < op.length) p[i] = op[i];
    }
    int t = od.length;
    for (int i = t - 1; i <= n - 2; i++) p[i] = (params[0] * p[i - 1] + params[1]) % (i + 1);
    for (int i = t; i <= n - 1; i++) d[i] = 1 + ((params[2] * d[i - 1] + params[3]) % params[4]);
    for (int i = t; i <= n - 1; i++) c[i] = 1 + ((params[5] * c[i - 1] + params[6]) % params[7]);
    children = new ArrayList[n];
    for (int i = 0; i < n; i++) children[i] = new ArrayList<Integer>();
    for (int i = 0; i < n - 1; i++) children[(int) p[i]].add(i + 1);
    totalCost = 0;
    heap = new PriorityQueue[n];
    sumOfCosts = new long[n];
    valueToSubtract = new long[n];
    value = new long[n];
    for (int v = n - 1; v > -1; v--) {
        sumOfCosts[v] = 0;
        value[v] = 0;
        heap[v] = new PriorityQueue<Pair>();
        for (Integer child : children[v]) {
            sumOfCosts[v] += c[child];
            heap[v].add(new Pair(value[child], child));
        }
        long increaseBy;
        while (true) {
            if (heap[v].size() == 0) {
                if (d[v] <= value[v]) break;
                increaseBy = d[v] - value[v];
            } else if (c[v] < sumOfCosts[v]) {
                increaseBy = heap[v].peek().first - valueToSubtract[v];
            } else if (value[v] < d[v]) {
                increaseBy = Math.min(d[v] - value[v], heap[v].peek().first - valueToSubtract[v]);
            } else break;
            totalCost += increaseBy * c[v];
            totalCost -= increaseBy * sumOfCosts[v];
            value[v] += increaseBy;
            valueToSubtract[v] += increaseBy;
            ArrayList<Pair> toMerge = new ArrayList<Pair>();
            while (heap[v].size() > 0 && heap[v].peek().first == valueToSubtract[v]) {
                Pair pair = heap[v].poll();
                toMerge.add(pair);
                sumOfCosts[v] -= c[pair.second];
            }
            // Merge the heaps of descendants whose value dropped to zero,
            // always moving the smaller heap into the larger one.
            for (Pair pair : toMerge) {
                int u = pair.second;
                sumOfCosts[v] += sumOfCosts[u];
                if (heap[u].size() <= heap[v].size()) {
                    mergeFirstToSecondHeap(u, v);
                } else {
                    mergeFirstToSecondHeap(v, u);
                    heap[v] = heap[u];
                    valueToSubtract[v] = valueToSubtract[u];
                }
            }
        }
    }
    return totalCost;
}
```

**boba5551**

Guest Blogger