# October 28, 2022 Single Round Match 839 Editorials

## GoRight

If the column is ZZZ, the next column is AAAA. We can handle this as a special case: check whether the input consists only of Zs, and if so, return a string of As that is one letter longer. (Note that for the largest possible input, 10 Zs, the output has 11 characters.)

In all other cases the next column will have the same number of letters in its name, we just have to “increment” it by 1. This works almost the same as with numbers. For example, 12347 + 1 = 12348 and in the same spirit “SRM” + 1 = “SRN”: usually it’s enough to replace the last letter by the next one. The only case where more changes are needed is when the last letter(s) are already as big as possible. Just like 12399 + 1 = 12400, the next column after “BUZZZ” is “BVAAA”: going from right to left each Z becomes A and then the next letter is incremented.

In other words, note that “BUZZZ” is the last of all five-character strings of the form “BU???”. After all these strings we have all the strings of the form “BV???” starting with the smallest one of those: “BVAAA”. Which is precisely what our algorithm will construct.

String constant(int length, char content) {
    String answer = "";
    for (int i=0; i<length; ++i) answer += content;
    return answer;
}

public String goRight(String column) {
    int L = column.length();
    if (column.equals(constant(L,'Z'))) return constant(L+1,'A');

    char[] letters = column.toCharArray();
    int where = letters.length - 1;
    while (letters[where] == 'Z') {
        letters[where] = 'A';
        --where;
    }
    ++letters[where];
    return new String(letters);
}
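As a quick sanity check, the routine can be wrapped in a standalone class (the class name GoRightDemo is just for illustration) and exercised on a few inputs:

```java
// Self-contained demo of the column-increment logic described above.
public class GoRightDemo {
    static String constant(int length, char content) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < length; ++i) sb.append(content);
        return sb.toString();
    }

    public static String goRight(String column) {
        int L = column.length();
        // special case: all Zs -> one more letter, all As
        if (column.equals(constant(L, 'Z'))) return constant(L + 1, 'A');
        char[] letters = column.toCharArray();
        int where = letters.length - 1;
        // trailing Zs wrap around to As, then one letter is incremented
        while (letters[where] == 'Z') { letters[where] = 'A'; --where; }
        ++letters[where];
        return new String(letters);
    }

    public static void main(String[] args) {
        System.out.println(goRight("SRM"));   // SRN
        System.out.println(goRight("BUZZZ")); // BVAAA
        System.out.println(goRight("ZZ"));    // AAA
    }
}
```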


## CutAwaySquares

This problem is essentially a disguised version of the Euclidean algorithm to compute the greatest common divisor (GCD) of A and B.

If you have a rectangle AxB with A > B, the only way to make a valid cut is to cut it so that one of the pieces is a BxB square. The other piece then has dimensions (A-B) and B.

We can view the above operation as subtracting the smaller number (B) from the larger number (A). Our statement is telling us to repeat this operation until we get a square – that is, two equal numbers.

This is exactly what the slower version of the Euclidean algorithm does. In this version of the algorithm we observe that the numbers A and B, with A > B > 0, always have the same GCD as the numbers A-B and B. Thus, we can determine GCD(A,B) by starting with the pair (A,B) and repeatedly going from the current pair (X,Y) to the pair (X-Y,Y) until we get to some pair (D,D) that clearly has GCD equal to D.

Of course, we can easily speed up the Euclidean algorithm: e.g., instead of going from (150,20) to (130,20) and then to (110,20) and then to (90,20) and then … we can skip all these steps and realize that after we subtract the last possible 20, whatever’s left must be the remainder of 150 when divided by 20. Thus, we can go directly from (150,20) to (10,20). In general, the numbers A and B have the same GCD as the numbers (A mod B) and B.
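Both variants can be sketched side by side; the modulo version computes the same value, it just compresses each run of identical subtractions into one step (a minimal sketch, not part of the reference solution):

```java
public class GcdDemo {
    // Slow variant: repeatedly subtract the smaller number from the larger one.
    // Requires a > 0 and b > 0.
    public static long gcdSlow(long a, long b) {
        while (a != b) {
            if (a > b) a -= b; else b -= a;
        }
        return a;
    }

    // Fast variant: each modulo operation replaces a whole run of subtractions.
    public static long gcdFast(long a, long b) {
        while (b != 0) { long r = a % b; a = b; b = r; }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(gcdSlow(150, 20)); // 10
        System.out.println(gcdFast(150, 20)); // 10
    }
}
```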

Again, we can look at our cuts of rectangles in exactly the same way. As long as the first dimension (A) remains longer, we are going to cut off BxB squares. For example, if we start with a 150×20 rectangle, we will make seven cuts that each cut off a 20×20 square. After the last of those cuts we will be left with a 10×20 rectangle.

In general, there are two cases:

• If B does not divide A, we will make (A div B) cuts that each cut off a BxB square and after those cuts we will be left with a rectangle of dimensions (A mod B)xB.
• If B divides A, we will make (A div B)-1 cuts that cut off a BxB square and we will be completely done. (The remainder after the last cut is the last BxB square. I.e., these cuts divide the AxB rectangle into (A div B) pieces, each of the size BxB.)

We know that the Euclidean algorithm that uses modulo terminates in O(log(A+B)) steps. For A, B <= 10^18 < 2^64 the algorithm never makes more than 128 steps.

public long countCuts(long A, long B) {
    if (A < B) { long C=A; A=B; B=C; }
    long cuts = 0;
    while (true) {
        if (A % B == 0) {
            cuts += (A/B) - 1;
            return cuts;
        } else {
            cuts += A/B;
            A %= B;
            long C=A; A=B; B=C;
        }
    }
}
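Tracing the algorithm on the 150×20 example confirms the two cases above (standalone wrapper class for illustration):

```java
public class CutAwaySquaresDemo {
    public static long countCuts(long A, long B) {
        if (A < B) { long C = A; A = B; B = C; }
        long cuts = 0;
        while (true) {
            // B divides A: the last cut already produced only BxB squares
            if (A % B == 0) return cuts + (A / B) - 1;
            // otherwise cut off (A div B) squares and continue with the rest
            cuts += A / B;
            A %= B;
            long C = A; A = B; B = C;
        }
    }

    public static void main(String[] args) {
        // 150x20: seven 20x20 cuts leave a 10x20 rectangle,
        // then one more cut splits it into two 10x10 squares
        System.out.println(countCuts(150, 20)); // 8
        System.out.println(countCuts(10, 10));  // 0: already a square
    }
}
```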


Extra 1 (Proof that Euclidean algorithm works): Any D that divides both A and B must also divide their difference, A-B. And on the other hand, any D that divides both A-B and B must also divide their sum, (A-B)+B = A. Thus, the pair (A,B) has exactly the same divisors as the pair (A-B,B), and notably therefore GCD(A,B) = GCD(A-B,B).

Extra 2 (Proof of the time complexity of its faster version): Suppose that we start from some (A,B), take one step to go to (B,X) where X = A mod B, and then another step to (X,Y) where Y = B mod X. Let’s now go back. From Y = B mod X we know that B was equal to k*X + Y for some k > 0. Hence, B >= X+Y. By repeating the same thought process we get A >= B+X >= 2X+Y. Hence, A+B >= 3X+2Y >= 2(X+Y). In other words, each time the Euclidean algorithm takes two steps, the new sum of the two numbers will be at most equal to one half of the original sum. Hence, in 2*log_2(A+B) steps the algorithm must terminate.

## PerfectParty

For any group of friends we can look at the total number T of candies they have and at their count C. The group leads to a perfect party if and only if T is a multiple of (C+1).

Notably, C is at most 50 and T is at most 50*max(candies) <= 50*2500 = 125,000. Thus, there are only a few million possible combinations of T and C. If we could determine, for each T and C, the number count[T][C] of groups that have these two attributes, we could easily count all perfect parties by iterating over all viable combinations (T,C).

The values count[T][C] are easy to compute using dynamic programming.

If we have no friends, count[0][0] = 1 (invite no friends, get no candies) and all other values are 0.

If we already know the values count[T][C] for some group of friends and now a new friend with x candies appears, each new value newCount[T][C] is the sum of oldCount[T][C] and oldCount[T-x][C-1]: either we invite C old friends who will bring T candies, or we invite the new friend and then we need to invite C-1 old friends who bring a total of T-x candies.

public long invite(int[] candies) {
    int N = candies.length;
    int S = 0;
    for (int x : candies) S += x;
    long[][] ways = new long[N+1][S+1];
    ways[0][0] = 1;
    for (int n=0; n<N; ++n) {
        // try adding candies[n] to each set constructed so far
        for (int c=n; c>=0; --c) for (int s=0; s<=S; ++s) if (ways[c][s] > 0)
            ways[c+1][s+candies[n]] += ways[c][s];
    }
    long answer = 0;
    for (int c=0; c<=N; ++c) for (int s=0; s<=S; ++s) if (s % (c+1) == 0)
        answer += ways[c][s];
    return answer;
}
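The DP above can be packaged into a self-contained sketch (the wrapper class is illustrative). Note that with count[0][0] = 1 the empty party is counted as perfect, since T = 0 is divisible by C+1 = 1:

```java
public class PerfectPartyDemo {
    // ways[c][s] = number of groups with c friends and s candies in total
    public static long invite(int[] candies) {
        int N = candies.length;
        int S = 0;
        for (int x : candies) S += x;
        long[][] ways = new long[N + 1][S + 1];
        ways[0][0] = 1;
        for (int n = 0; n < N; ++n)
            for (int c = n; c >= 0; --c)            // counts go down to avoid reuse
                for (int s = 0; s <= S; ++s)
                    if (ways[c][s] > 0) ways[c + 1][s + candies[n]] += ways[c][s];
        long answer = 0;
        for (int c = 0; c <= N; ++c)
            for (int s = 0; s <= S; ++s)
                if (s % (c + 1) == 0) answer += ways[c][s];
        return answer;
    }

    public static void main(String[] args) {
        // friends with 1 and 2 candies: {}, {2} and {1,2} are perfect
        System.out.println(invite(new int[]{1, 2})); // 3
    }
}
```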


## LimitedSwaps

One efficient solution that only uses very simple data structures can be implemented as follows:

Let’s call an element large if its weight is more than L/2 and small otherwise. Then:

• Two large elements can never be swapped.
• Two small elements can always be swapped.
• A large and a small element sometimes can and sometimes cannot be swapped.

Thus, in the final solution:

• The relative order of large elements will remain the same as in the original sequence.
• We can consider each small element independently of the other small elements (as we can always swap those).
• Each small element will go as far left as it can – until it either passes all large elements, or hits the first large element that is too big for the swap. This is what we need to determine: for each small element, what is the first too-big element on its left?
• Each small element has now landed in one of the B+1 gaps determined by the B big elements (before the first one, between two consecutive ones, or after the last one).
• Clearly, in the optimal solution the small elements in each of those groups will be sorted in ascending order.
• Now we know the exact final sequence of items and the only step that remains is to count the remaining inversions.

The only non-trivial step is moving a small element to the left across some big elements. An easy way to do this is to take the small element and try moving it past 2^17, 2^16, …, 4, 2, 1 big elements at the same time. Such a move is possible if and only if the maximum of all big elements in that block is small enough. These maxima can easily all be precomputed in O(N log N) time and then each small element can be correctly placed in O(log N) time.
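The precomputation is a standard sparse table over blocks ending at each index; here is a minimal standalone sketch of just that part (names are illustrative, not taken from the solution):

```java
public class BlockMaxDemo {
    // maxOf[b][s] = maximum of the 2^s elements ending at index b (inclusive)
    public static int[][] build(int[] big) {
        int B = big.length;
        int LOG = 18; // enough for B < 2^18
        int[][] maxOf = new int[B][LOG];
        for (int b = 0; b < B; ++b) maxOf[b][0] = big[b];
        for (int s = 1; s < LOG; ++s)
            for (int b = (1 << s) - 1; b < B; ++b) {
                int half = 1 << (s - 1);
                // a block of 2^s elements is the union of two blocks of 2^(s-1)
                maxOf[b][s] = Math.max(maxOf[b][s - 1], maxOf[b - half][s - 1]);
            }
        return maxOf;
    }

    public static void main(String[] args) {
        int[][] t = build(new int[]{5, 9, 4, 7});
        System.out.println(t[3][1]); // max of {4, 7} = 7
        System.out.println(t[3][2]); // max of {5, 9, 4, 7} = 9
    }
}
```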

public long minInversions(int N, int L, int M, int[] Wprefix, int seed) {
    // removed for brevity: generating the sequence W as described

    // count and extract all big elements, in order
    int B = 0;
    for (int w : W) if (2*w > L) ++B;
    int[] big = new int[B];
    B = 0;
    for (int w : W) if (2*w > L) big[B++] = w;

    // precompute biggest[i][j] = what's the biggest element we'll encounter
    // if we start to the right of big[i] and take 2^j steps left
    int[][] biggest = new int[B][18];
    for (int b=0; b<B; ++b) biggest[b][0] = big[b];
    for (int s=1; s<18; ++s) for (int b=(1<<s)-1; b<B; ++b) {
        int half = 1<<(s-1);
        biggest[b][s] = Math.max( biggest[b][s-1], biggest[b-half][s-1] );
    }

    // for each small element, find the index of the leftmost reachable gap
    // between the large elements
    int bigLeft = 0;
    int[] small = new int[N - B];
    int[] smallGroup = new int[N - B];
    int S = 0;
    for (int n=0; n<N; ++n) {
        if (2*W[n] > L) { ++bigLeft; continue; }
        small[S] = W[n];
        int remain = bigLeft;
        for (int step=17; step>=0; --step) {
            if (remain < (1<<step)) continue;
            if (biggest[remain-1][step] > L - W[n]) continue;
            remain -= 1<<step;
        }
        smallGroup[S] = remain;
        ++S;
    }

    // group the small items according to where they landed, sort each group
    int[] groupSizes = new int[B+1];
    for (int x : smallGroup) ++groupSizes[x];
    int[][] groups = new int[B+1][];
    for (int b=0; b<=B; ++b) groups[b] = new int[groupSizes[b]];
    for (int s=0; s<S; ++s) {
        int g = smallGroup[s];
        groups[g][--groupSizes[g]] = small[s];
    }
    for (int b=0; b<=B; ++b) Arrays.sort(groups[b]);

    // construct the optimal shelf and return the answer
    int[] shelf = new int[N];
    int w = 0;
    for (int b=0; b<=B; ++b) {
        for (int x : groups[b]) shelf[w++] = x;
        if (b < B) shelf[w++] = big[b];
    }
    return inversions(shelf, 0, N); // standard implementation - omitted
}


## Proximity

Suppose we have just computed the answer for some interval [lo,hi] of indices. How can we compute the answer for the interval [lo,hi+1]? We can look at the value C[hi+1]. This value will form new pairs with all already present values that lie in the range from C[hi+1]-D to C[hi+1]+D, inclusive. If we maintained the collection of values on our cards in some efficient data structure (e.g., a Fenwick tree where we store the number of occurrences for each value), we could easily determine the number of these new close pairs in logarithmic time. Once we do that, we can update the data structure by adding the new value to the collection – this can also easily be done in logarithmic time.

In exactly the same way we can grow the query interval one step to the left. And in much the same way we can do the reverse operations as well. In order to shrink the query interval by one on either side, we first remove the “departing” value from the data structure and then we query the data structure to count the pairs we are losing.

As all queries are known in advance (“offline”), we can choose the order in which we answer them. One well-known trick (sometimes called “Mo’s algorithm” after one of its early adopters in the competitive programming community) is to divide the queries into sqrt(N) buckets according to (A[i] div sqrt(N)).

Each bucket can now be processed as follows:

• sort the queries according to where they end
• process the queries in the sorted order, each time growing the query on the right and growing/shrinking on the left as necessary

For each bucket, we will do O(N) steps where we grow the right end. As there are sqrt(N) buckets, there are O(N*sqrt(N)) such steps in the whole solution.

For each query we will do O(sqrt(N)) steps where we grow or shrink the left end. This is because the previous query is in the same bucket and thus their starts are close to each other. As there are Q queries, there are O(Q*sqrt(N)) such steps in the whole solution.

This gives us a grand total of O( (N+Q)*sqrt(N) ) steps. As shown above, each of those can be processed in O(log M) time.

A technical detail: We don’t need any special treatment for the first query in a bucket. Instead, we start from an empty query and an empty Fenwick tree. Then, for each bucket we process its first query simply by growing and then shrinking the last query of the previous bucket, as necessary. For each bucket this is just O(N) steps which doesn’t mess with our asymptotic estimates.
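The core operation, counting how many stored values fall within distance D of a new value, is just two prefix sums on a Fenwick tree. A minimal standalone sketch of that operation (illustrative names, slightly different from the solution code):

```java
public class FenwickDemo {
    private final long[] tree;
    private final int size;

    public FenwickDemo(int size) { this.size = size; this.tree = new long[size + 1]; }

    // add delta to the count of value x (values are 1-based)
    public void update(int x, long delta) {
        for (; x <= size; x += x & -x) tree[x] += delta;
    }

    // sum of counts over values 1..x
    public long prefix(int x) {
        long res = 0;
        for (; x > 0; x -= x & -x) res += tree[x];
        return res;
    }

    // number of stored values in the closed interval [a,b], clamped to [1,size]
    public long range(int a, int b) {
        a = Math.max(a, 1);
        b = Math.min(b, size);
        if (a > b) return 0;
        return prefix(b) - prefix(a - 1);
    }

    public static void main(String[] args) {
        FenwickDemo f = new FenwickDemo(100);
        f.update(10, 1); f.update(12, 1); f.update(40, 1);
        // how many stored values are within D=3 of the value 11?
        System.out.println(f.range(11 - 3, 11 + 3)); // 2
    }
}
```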

// Fenwick tree and the current number of close pairs
int size;
long[] T;
long currentPairs;

// input variables
int D;
int[] A, B, C;

void updateFenwick(int x, int delta) {
    while (x <= size) { T[x] += delta; x += x & -x; }
}

long sumFenwick(int x1, int x2) { // sum in the closed interval [x1,x2]
    long res=0;
    --x1;
    while (x2 != 0) { res += T[x2]; x2 -= x2 & -x2; }
    while (x1 != 0) { res -= T[x1]; x1 -= x1 & -x1; }
    return res;
}

void addElement(int idx) {
    int val = C[idx];
    int lo = Math.max(1, val-D);
    int hi = Math.min(size, val+D);
    currentPairs += sumFenwick(lo,hi);
    updateFenwick(val,1);
}

void removeElement(int idx) {
    int val = C[idx];
    int lo = Math.max(1, val-D);
    int hi = Math.min(size, val+D);
    updateFenwick(val,-1);
    currentPairs -= sumFenwick(lo,hi);
}

public int count(int N, int Q, int _D, int M, int L, int seed) {
    D = _D;
    // omitted for brevity: generating A, B, C as described

    // reorder queries into blocks
    long sqrtmax = 1;
    while (sqrtmax*sqrtmax < N) ++sqrtmax;

    long[] queries = new long[Q];
    for (int q=0; q<Q; ++q)
        queries[q] = ((1L * A[q] / sqrtmax)*(N+7) + B[q])*(N+7) + A[q];
    Arrays.sort(queries);
    for (int q=0; q<Q; ++q) {
        A[q] = (int)(queries[q] % (N+7));
        queries[q] /= N+7;
        B[q] = (int)(queries[q] % (N+7));
    }

    // change all numbers to positive and all intervals to half-open
    for (int n=0; n<N; ++n) ++C[n];
    for (int q=0; q<Q; ++q) ++B[q];

    // initialize an empty Fenwick tree and an empty query
    size = 1;
    while (size < M+7) size *= 2;
    T = new long[size+1];
    int lo=0, hi=0;
    currentPairs = 0;
    long answer = 0;

    for (int q=0; q<Q; ++q) {
        // increase query size if needed
        while (hi < B[q]) { addElement(hi); ++hi; }
        while (lo > A[q]) { --lo; addElement(lo); }
        // decrease query size if needed
        while (hi > B[q]) { --hi; removeElement(hi); }
        while (lo < A[q]) { removeElement(lo); ++lo; }
        // currentPairs is now the answer to query q; combine the per-query
        // answers as the (omitted) statement requires, here by summing them
        answer += currentPairs;
    }
    return (int) answer;
}


misof
