## January 19, 2022 Single Round Match 822 Editorials

#### MakeItDivisible

If the number is not divisible by 7, it is always possible to make it divisible by 7 by changing only one of its digits. In particular, we can always do it by changing its last digit. Why? Imagine that we try all possible options for the last digit: 0, 1, 2, …, 9. That gives us ten consecutive numbers, and among any ten consecutive numbers at least one must be divisible by 7. (In fact, it is always sufficient to try seven consecutive numbers: exactly one of those is divisible by 7.)

With this approach, the only caveat is that we do need to first check whether the number N is already divisible by 7. If we don't, we may, for example, incorrectly change 49 to 42 instead of keeping it intact.

public int change(int N) {
    if (N % 7 == 0) return N;
    int answer = (N / 10) * 10;       // change the last digit to 0
    while (answer % 7 != 0) ++answer; // find the first multiple of 7
    return answer;
}
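As a quick sanity check, the whole approach fits into a self-contained class that can be run directly; the wrapper class and the example inputs below are our own scaffolding, not part of the original solution:

```java
public class MakeItDivisible {
    // change at most one digit of N to make it divisible by 7
    public static int change(int N) {
        if (N % 7 == 0) return N;            // already divisible: keep intact
        int answer = (N / 10) * 10;          // set the last digit to 0
        while (answer % 7 != 0) ++answer;    // first multiple of 7 among X0..X6
        return answer;
    }

    public static void main(String[] args) {
        System.out.println(change(49)); // 49: already divisible, kept intact
        System.out.println(change(45)); // 42: only the last digit changed
    }
}
```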


#### TwoStamps

If there is a ')' along the left edge or a '(' along the right edge, it's clearly impossible to create the page using our stamps. We can check this at the beginning of the solution. In the rest of this writeup we assume that this check was made and that we have a solvable input.

In all other cases it's possible, and the optimal answer can always be computed using a very simple formula: it's the number of "()" patterns in the picture, plus the number of all the remaining parentheses in the picture.

public int minCircles(String[] page) {
    int R = page.length, C = page[0].length();
    char[][] grid = new char[R][C];
    for (int r=0; r<R; ++r) for (int c=0; c<C; ++c) {
        grid[r][c] = page[r].charAt(c);
    }

    // check whether a solution exists
    for (int r=0; r<R; ++r) {
        if (grid[r][0] == ')') return -1;
        if (grid[r][C-1] == '(') return -1;
    }

    int answer = 0;

    // first find and remove all circles: each costs one stamp
    for (int r=0; r<R; ++r) for (int c=0; c+1<C; ++c) {
        if (grid[r][c] == '(' && grid[r][c+1] == ')') {
            grid[r][c] = grid[r][c+1] = ' ';
            ++answer;
        }
    }

    // then count all other parentheses: one stamp each
    for (int r=0; r<R; ++r) for (int c=0; c<C; ++c) {
        if (grid[r][c] == '(' || grid[r][c] == ')') ++answer;
    }

    return answer;
}
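A small self-contained driver for the formula, for readers who want to experiment; the page strings below are our own examples, not the problem's official tests:

```java
public class TwoStamps {
    // optimal number of circle stamps: one per "()" found greedily,
    // plus one per parenthesis that remains afterwards; -1 if impossible
    static int minCircles(String[] page) {
        int R = page.length, C = page[0].length();
        char[][] grid = new char[R][C];
        for (int r = 0; r < R; ++r)
            for (int c = 0; c < C; ++c) grid[r][c] = page[r].charAt(c);
        // a ')' on the left edge or a '(' on the right edge is unfixable
        for (int r = 0; r < R; ++r)
            if (grid[r][0] == ')' || grid[r][C-1] == '(') return -1;
        int answer = 0;
        for (int r = 0; r < R; ++r)
            for (int c = 0; c + 1 < C; ++c)
                if (grid[r][c] == '(' && grid[r][c+1] == ')') {
                    grid[r][c] = grid[r][c+1] = ' ';
                    ++answer; // one stamp leaves this whole "()" visible
                }
        for (int r = 0; r < R; ++r)
            for (int c = 0; c < C; ++c)
                if (grid[r][c] == '(' || grid[r][c] == ')') ++answer;
        return answer;
    }

    public static void main(String[] args) {
        System.out.println(minCircles(new String[]{"()"}));   // 1
        System.out.println(minCircles(new String[]{"(())"})); // 3
        System.out.println(minCircles(new String[]{")("}));   // -1
    }
}
```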


For completeness, below we give a proof that the above algorithm always computes the optimal answer.

We can easily show that the value computed above is a lower bound: there cannot be any way of producing the page that uses the circle stamp fewer times. Why? Because the only time we can produce two characters of the final picture using the same stamping action is precisely when the entire "()" ends up being visible. Once we process and remove those, no two of the remaining parentheses can be produced using the same stamping action, so we need at least one stamp for each of them.

We can also show that the value is an upper bound by showing how to actually construct the given page using that exact number of uses of the circle stamp. First, imagine the page on which each "()" has been replaced by spaces. On this page, to the right of each '(' there is either another '(' or a space, and to the left of each ')' there is either another ')' or a space. We now process all '(' from left to right. For each of them we use the circle stamp once, and for each '(' that should have a space on its right, we then use the eraser stamp. Afterwards, we do the same for all the ')'s, but going from right to left. In this way we produce the desired page using exactly one circle stamp per parenthesis on the page. Finally, we finish the construction by using the circle stamp once more for each circle ("()" pair) that should be fully visible.

#### ProbabilisticStreamMinimum

A large part of the statement consists of red herrings. In particular, the number N does not matter, and neither does the order of elements in the input: the probability of success depends only on K, and it is the same for all N and for all possible orders of elements in the input.

To see why, imagine that we looked at the input before our program starts, and that we have colored the K smallest elements red and all other elements blue.

Clearly, the algorithm will succeed if we assign the K red elements to K distinct buckets (each of them will necessarily remain as the minimum of its bucket) and the algorithm will fail if any two red elements are assigned to the same bucket (because then we will be forced to discard one of them).

The probability of getting all K red elements into distinct buckets can be calculated easily. It's essentially the same calculation that appears, among other places, in the birthday paradox. When the first red element appears, we can be sure that it will get a new bucket. When the second red element arrives, there is already one bucket with a red element, so the probability of not choosing that one is (K*K - 1) / (K*K). And so on. For the k-th red element the probability of placing it successfully is (K*K - k + 1) / (K*K). As all these random events must happen successfully, one after another, the probability of never getting a collision is the product of all those individual probabilities.

public double calculate(int N, int K) {
    double answer = 1;
    for (int i=0; i<K; ++i) answer *= (K*K - i) / (1.*K*K);
    return answer;
}


Interestingly, the algorithm described in the statement only uses N + O(K^2) comparisons, and it can be shown that for any K its probability of success is greater than 0.6. The exact limit of the probability as K goes to infinity is 1/sqrt(e), approximately 0.60653.
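To see the convergence numerically, here is a small sketch (our own illustration, not part of the original solution) that evaluates the product for growing K and checks it never drops to 0.6:

```java
public class SuccessProbability {
    // probability that all K smallest elements land in distinct buckets:
    // the product (K*K - i) / (K*K) for i = 0 .. K-1
    static double successProbability(int K) {
        double answer = 1.0;
        for (int i = 0; i < K; ++i) answer *= (1.0 * K * K - i) / (1.0 * K * K);
        return answer;
    }

    public static void main(String[] args) {
        for (int K = 1; K <= 1000; ++K) {
            if (successProbability(K) <= 0.6)
                throw new AssertionError("probability dropped to 0.6 at K=" + K);
        }
        // for K = 1000 the value is already close to 1/sqrt(e) ~ 0.60653
        System.out.printf("K=1000: %.5f%n", successProbability(1000));
    }
}
```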

#### RedGreenBlueLight

Let's assign the weight 3^(-x) to a player who is x steps away from the goal. The reason behind this choice is that three players at distance x then have the same total weight as one player at distance x-1.

If the sum of weights of all players is less than 1, the doll has a winning strategy. Regardless of how the players split themselves, the color with the smallest total weight must have a total that is less than 1/3. The doll selects that group to survive; the rest is eliminated. Then the chosen group takes a step forward, which multiplies its weight by 3, so the total weight of all survivors is again less than 1. Additionally, some positive number of players surely got eliminated. This way the doll can maintain the invariant that the total weight is less than 1, and therefore prevent the players from ever reaching the goal. (A player who has reached the goal would alone have weight 1.)

If the sum of weights of all players is at least 1, they can always win. To show that, we must show that they can maintain the opposite invariant. To do that, they must always be able to split themselves into three groups so that the weight of each group is at least 1/3. We will show that they can always split themselves so that groups 0 and 1 each have weight exactly 1/3, which leaves at least 1/3 for group 2.

Lemma: if we have people who are still in the game and whose sum of weights is at least 1/3, we can select a subset with sum exactly 1/3.

Construction: We sort the people in descending order of weights. Some prefix must have the desired property.

Proof: As we process the people in the above order, maintain two values: the amount we still need (initially 1/3) and the weight of the current person (we may imagine it starts at 1/3, the largest possible weight). The invariant is that the current weight divides the amount we still need. Whenever the current weight decreases, the invariant is maintained because the new weight divides the previous one. Whenever we add a person to the group, the invariant is maintained because if d divides x, then d also divides x-d; and if x is positive and d divides x, then x-d is nonnegative. Hence we can never skip over the value we are trying to hit, and as the sum of all weights is at least equal to that value, eventually we have to hit it exactly.
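The lemma's construction can be sketched directly in code. In the sketch below (our own illustration), people are given as distances x with weight 3^(-x), all at distance at least 1, and a base-3 counter keeps the arithmetic exact:

```java
import java.util.Arrays;

public class PrefixLemma {
    // Given distances x (weight 3^(-x), all x >= 1) whose total weight is
    // at least 1/3, return the length of the descending-weight prefix whose
    // sum is exactly 1/3, or -1 if the total never reaches 1/3.
    static int prefixHittingOneThird(int[] distances) {
        int[] sorted = distances.clone();
        Arrays.sort(sorted); // smaller distance = larger weight first
        int[] sum = new int[64]; // sum[s] = how many units of 3^(-s)
        for (int i = 0; i < sorted.length; ++i) {
            int s = sorted[i];
            ++sum[s];
            // carry: three units of 3^(-s) become one unit of 3^(-(s-1))
            while (s > 1 && sum[s] == 3) { sum[s] = 0; ++sum[--s]; }
            if (sum[1] == 1) return i + 1; // the sum is now exactly 1/3
        }
        return -1;
    }

    public static void main(String[] args) {
        // three people at distance 2: 3 * 1/9 = 1/3
        System.out.println(prefixHittingOneThird(new int[]{2, 2, 2})); // 3
        // one person at distance 1 already suffices
        System.out.println(prefixHittingOneThird(new int[]{3, 1, 2})); // 1
    }
}
```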

One final technical detail is that the input can contain people who are very far from the goal line. These can safely be ignored: we can prove that with N people, anyone who is substantially more than N/2 steps away from the goal has no influence on the result.

To show this, realize that the only way the far players could matter would be if the total weight were at least 1, but removing them would decrease the total weight below 1. This can never happen: whenever the total weight is at least 1, there has to be a group of players with total weight exactly 1 such that all these players are at distance at most N/2 (plus or minus 1) from the goal, so removing the ones that are farther will never bring us below 1.

To prove the claim we just made, consider the algorithm from the above lemma, just with weight 1 as the goal. If we want the farthest possible person to matter, we must make the last step of that algorithm (the step in which the total exactly reaches 1) as small as possible. In other words, the total collected just before that step must be as close to 1 as possible. The best we can do here is to be greedy: two players of weight 1/3, then two of weight 1/9, then two of weight 1/27, and so on. In this greedy solution the N-th person is roughly N/2 steps away, and in any other way of reaching total weight 1 with at most N people the last person's distance will be smaller or equal to this one.
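A quick exact check of this greedy construction (our own illustration): two players at each distance 1..m plus one extra player at distance m have total weight exactly 1, so N = 2m+1 players suffice with the farthest only m, roughly N/2, steps away. Working in units of 3^(-m) keeps everything integral:

```java
public class GreedyWeightOne {
    static long pow3(int e) { long r = 1; while (e-- > 0) r *= 3; return r; }

    // total weight, measured in units of 3^(-m), of two players at each
    // distance 1..m plus one extra player at distance m
    static long totalInUnits(int m) {
        long total = 1; // the extra player at distance m
        for (int k = 1; k <= m; ++k) total += 2 * pow3(m - k);
        return total;
    }

    public static void main(String[] args) {
        for (int m = 1; m <= 20; ++m) {
            // total weight exactly 1 means totalInUnits(m) == 3^m
            if (totalInUnits(m) != pow3(m)) throw new AssertionError("m=" + m);
        }
        System.out.println("total weight is exactly 1 for m = 1..20");
    }
}
```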

public int[] move(int[] steps) {
    int N = steps.length;

    // move everyone who's too far closer to the goal line
    // this does not affect the answer
    for (int n=0; n<N; ++n) steps[n] = Math.min( steps[n], 999 );

    // calculate whether the sum of 3^(-distance) is at least 1,
    // using sum[] as an array of base-3 digits
    int[] sum = new int[1000];
    for (int s : steps) {
        ++sum[s];
        while (s > 0 && sum[s] == 3) { ++sum[s-1]; sum[s]=0; --s; }
    }

    // if not, there is no solution
    if (sum[0] == 0) return new int[] {-1};

    // if yes, start with everyone on color 2 and then switch
    // exactly enough people to each of the other two colors:
    // in decreasing order of weight, move people to the current color
    // until that color's total weight reaches exactly 1/3
    int[] answer = new int[N];
    for (int n=0; n<N; ++n) answer[n] = 2;
    for (int color=0; color<2; ++color) {
        int[] currentsum = new int[1000];
        boolean full = false;
        for (int v=1; v<1000 && !full; ++v) {
            for (int n=0; n<N && !full; ++n) {
                if (steps[n] != v || answer[n] != 2) continue;
                answer[n] = color;
                int s = steps[n];
                ++currentsum[s];
                while (s > 0 && currentsum[s] == 3) {
                    ++currentsum[s-1]; currentsum[s]=0; --s;
                }
                // the group is complete once its weight hits exactly 1/3
                if (currentsum[0] > 0 || currentsum[1] > 0) full = true;
            }
        }
    }
    return answer;
}


#### Frogs

Algorithmically the frog problem isn’t that complicated. The limit on the number of jumps is about twice the number of frog pairs. And asymptotically we clearly cannot do better than Theta(N^2) jumps because each jump only changes the order of two adjacent frogs, and there can be Theta(N^2) inversions.
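Since each jump changes the relative order of exactly one pair of adjacent frogs, the number of inversions of the starting order is a lower bound on the number of jumps. A small illustration (the reversed permutation is our own example of the worst case):

```java
public class InversionBound {
    // count pairs that are out of order; each jump can fix at most one
    static int inversions(int[] p) {
        int count = 0;
        for (int i = 0; i < p.length; ++i)
            for (int j = i + 1; j < p.length; ++j)
                if (p[i] > p[j]) ++count;
        return count;
    }

    public static void main(String[] args) {
        int N = 30;
        int[] reversed = new int[N];
        for (int i = 0; i < N; ++i) reversed[i] = N - i;
        // in reversed order all N*(N-1)/2 = 435 pairs are inverted
        System.out.println(inversions(reversed)); // 435
    }
}
```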

Still, getting those pesky frogs into their proper places can be surprisingly tricky if we want to keep our constants small and our code reasonably simple. We’ll need to do quite a bit of juggling to get all the frogs to their dream destinations. (The general advice for problems of this type is that time spent thinking about reducing the number of special cases almost always pays off.)

Our general plan of attack will be as follows:

• [Phase 1] In linear time, move all frogs to the left N lilypads: FFFFFF--------. Note that we don't care about preserving their initial order.
• [Phase 2] In linear time, spread the frogs out, starting at the opposite end: --F-F-F-F-F-F. Note that we still don't care about preserving the order of the frogs.
• [Phase 3a] With this setup, we can now reorder the frogs arbitrarily, again so that they all touch the left edge. This can be done using roughly 3N^2/4 jumps if we are smart. (Details below.)
• [Phase 3b] We will special-case the last three frogs: we will write code that will append all three of them to the sorted order in any order we like.
• [Phase 4] Finally, in linear time we move the frogs from the left (where we have them ready in some suitable order) to their proper destinations.

Phase 1 can be implemented in N + O(1) jumps: repeatedly take the second unprocessed frog and jump it to the leftmost available lilypad, and in the end special-case the last frog.

Phase 4 can now be done at no cost by reusing the same function: the actions of frogs are reversible, and thus going from the left to the goal positions is the same as the reversal of the jumps that get you from the goal positions to the left. We will generate these jumps now, because doing so serves an important purpose: once we have executed phase 4 backwards, we know the order in which the frogs are arranged on the left, and this is precisely the information we need for phase 3: what exact order of frogs should that phase produce?

Phase 2 can be done in many different ways. For example, there is a sequence of five jumps that moves the rightmost unprocessed frog to the rightmost available destination while returning the other unprocessed frogs to the same set of locations (not necessarily in the same order). There is also a faster way where we maintain one frog as a pivot; as long as the desired destination is to the right of the pivot, the next frog simply jumps over the pivot to the destination. Only O(log N) times will we need to move the pivot to rejoin the other unprocessed frogs.

In Phase 3 we proceed as follows:

• Take the next frog you want to add to the "sorted" order: --F-F-F-F-*-F
• Using a series of jumps (the last of which is longer), move it to the leftmost free lilypad:
*-F-F-F-F---F
• Shift the remaining frogs to fill in the gap. (This can be done in a dumb way by moving the leftmost not-yet-processed frog into the gap, but there is also a smarter way: making only every second frog jump.)
*-F-F---F-F-F
*---F-F-F-F-F

The worst case for the algorithm described above (with phase 2 using a pivot frog and phase 3 using smarter filling of gaps) is 793 jumps for 30 frogs. As a function of N, the number of jumps is dominated by the 3N^2/4 jumps in phase 3, the rest is linear. The statement tried to leave enough leeway to allow somewhat less efficient implementations to pass. (Doing just phase 2 the less efficient way has a worst case of 899 jumps, doing just phase 3 the less efficient way has a worst case of 961 jumps.)

Another way to save jumps is to realize that in phase 3a we only need to fill in the gap while there are some unprocessed frogs in the left half. As long as all the remaining frogs are in the right half, we can stop caring about gaps. With this optimization added to the smarter way of filling the gaps the worst case for phase 3a becomes even lower: we still make the roughly N^2/2 jumps made to place frogs, but now we will only make at most 3N^2/16 extra jumps while filling in the gaps, saving N^2/16 jumps. This improves the algorithm’s number of jumps to (11/16)*N^2.

Open question to ponder: What is the smallest c for which there is an algorithm that only needs c*N^2 + o(N^2) jumps? (Clearly c = 1/2 is a lower bound due to inversions. Can it be reached?)

misof
