
Single Round Match 802 Editorials

BallotCounting

We can stop counting as soon as one of the candidates has strictly more than half of all votes.


public int count(String votes) {
    int voterCount = votes.length();
    int votesA = 0, votesB = 0;
    for (char vote : votes.toCharArray()) {
        if (vote == 'A') ++votesA; else ++votesB;
        // stop as soon as one candidate has strictly more than half of all votes
        if (2*votesA > voterCount) return votesA + votesB;
        if (2*votesB > voterCount) return votesA + votesB;
    }
    return voterCount;
}

TheQuestForGold

For each pit, mark all of its neighbors as slimy. The process in which the player will explore the cave directly corresponds to a depth-first search (DFS). If the player enters a chamber that’s empty, they can try recursively exploring each of its neighbors. However, if they enter a chamber with slime, they must immediately return to the previous chamber. (Going to another visited neighbor makes no sense, as it was already visited. Going to an unvisited neighbor is not allowed because of the risk of death.)

This DFS will discover exactly all the chambers that can safely be visited by the player. If one of them contains treasure, output “gold”, otherwise output “no gold”.

The solution can also be implemented using BFS. Formally, we will have a directed graph in which each chamber is a node. An empty chamber without slime has outgoing edges to all its neighbors; a slimy chamber has no outgoing edges.

As we process each chamber at most once, both solutions have a time complexity linear in the size of the cave: O(rows * columns).


int[] DR = {-1,1,0,0};
int[] DC = {0,0,-1,1};

public String explore(String[] cave) {
    int R = cave.length, C = cave[0].length();
    boolean[][] pit = new boolean[R][C];
    for (int r=0; r<R; ++r) for (int c=0; c<C; ++c)
        if (cave[r].charAt(c) == 'P') pit[r][c] = true;

    // every chamber adjacent to a pit is slimy
    boolean[][] slime = new boolean[R][C];
    for (int r=0; r<R; ++r) for (int c=0; c<C; ++c)
        if (pit[r][c]) for (int d=0; d<4; ++d) {
            int nr = r+DR[d], nc = c+DC[d];
            if (nr >= 0 && nr < R && nc >= 0 && nc < C)
                slime[nr][nc] = true;
        }

    // BFS over safely reachable chambers, starting from 'S'
    boolean[][] visited = new boolean[R][C];
    int[] Q = new int[2*R*C];
    int qs=0, qf=0;
    for (int r=0; r<R; ++r) for (int c=0; c<C; ++c)
        if (cave[r].charAt(c) == 'S') {
            Q[qf++]=r; Q[qf++]=c; visited[r][c] = true;
        }

    while (qs < qf) {
        int cr = Q[qs++];
        int cc = Q[qs++];
        if (slime[cr][cc]) continue; // a slimy chamber has no outgoing edges
        for (int d=0; d<4; ++d) {
            int nr = cr+DR[d], nc = cc+DC[d];
            if (!(nr >= 0 && nr < R && nc >= 0 && nc < C)) continue;
            if (visited[nr][nc]) continue;
            visited[nr][nc] = true;
            Q[qf++] = nr; Q[qf++] = nc;
        }
    }

    for (int r=0; r<R; ++r) for (int c=0; c<C; ++c)
        if (cave[r].charAt(c) == 'T') {
            if (visited[r][c]) return "gold";
            return "no gold";
        }
    return "no gold"; // because there’s no treasure at all
}

DisjointDiceValues

The probability that Bob wins can be computed in polynomial time. One possibility is to iterate over all pairs (a,b) and for each such pair compute the probability that Alice rolled exactly a distinct values while Bob rolled b distinct values completely different from Alice’s. This probability is choose(6,a) * choose(6-a,b) * (probability that Alice’s dice show exactly the values 1 to a) * (probability that Bob’s dice show exactly the values a+1 to a+b).

To count the probability of having exactly the values in [1,a] in A rolls, we can use the principle of inclusion and exclusion. We can easily count all sequences of A rolls such that each roll is in [1,a]: there are a^A of them. However, this does not guarantee that all those values are actually present. For each i, we must subtract those sequences that never contain i. There are (a-1)^A such sequences for each specific i. However, once we subtract those a*(a-1)^A sequences, we are not done yet: we just subtracted some sequences too many times. E.g., each sequence that is missing two values has been counted once but then subtracted twice. We must correct this by adding such sequences back, once for each pair of missing values. And so on.
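
In code, the whole approach can be sketched as follows (the helper names binomial and exactly are illustrative, not from a reference solution):

double binomial(int n, int k) {
    double result = 1;
    for (int i = 0; i < k; ++i) result = result * (n - i) / (i + 1);
    return result;
}

// probability that X rolls show exactly the values 1..a, via inclusion-exclusion
double exactly(int a, int X) {
    double sequences = 0;
    for (int missing = 0; missing <= a; ++missing) {
        // choose which of the a values never appear; (a-missing)^X sequences remain
        double term = binomial(a, missing) * Math.pow(a - missing, X);
        sequences += (missing % 2 == 0) ? term : -term;
    }
    return sequences / Math.pow(6, X);
}

public double bobWins(int A, int B) {
    double answer = 0;
    for (int a = 1; a <= 6; ++a)
        for (int b = 1; a + b <= 6; ++b)
            answer += binomial(6, a) * binomial(6 - a, b) * exactly(a, A) * exactly(b, B);
    return answer;
}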

However, for the constraints used in this problem there are also much simpler solutions in terms of implementation. Here’s one:

As the game is symmetric and A+B <= 16, we may assume that A <= 8. We can now iterate over all 6^A <= 6^8 possibilities of what Alice rolled. For each of them we know how many distinct values she had, and thus what’s the probability that Bob’s B rolls will not hit any of them.
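
Here’s a sketch of this brute force (the method name is illustrative); Alice’s outcomes are advanced like an odometer:

public double bobWinsBruteForce(int A, int B) {
    if (A > B) { int t = A; A = B; B = t; }  // symmetry: ensure A <= 8
    double total = 0;
    int[] roll = new int[A];                 // Alice's current outcome, digits 0..5
    while (true) {
        int mask = 0;
        for (int r : roll) mask |= 1 << r;   // set of distinct values Alice rolled
        int distinct = Integer.bitCount(mask);
        // each of Bob's B rolls must independently avoid all of Alice's values
        total += Math.pow((6.0 - distinct) / 6.0, B);
        int i = 0;                           // advance to the next of the 6^A outcomes
        while (i < A && roll[i] == 5) roll[i++] = 0;
        if (i == A) break;
        ++roll[i];
    }
    return total / Math.pow(6, A);           // all 6^A outcomes are equally likely
}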

Here’s another:

Use a simple DP to calculate values p[x][y] = the probability that after x rolls we will see exactly the values described by the bitset y on our dice. Iterate over all pairs y1, y2 of disjoint bitsets (there are 3^6 such pairs) and sum p[A][y1]*p[B][y2] to calculate Bob’s winning probability.
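
A sketch of this DP (method name illustrative); the submask trick in the second half enumerates exactly the 3^6 disjoint pairs:

public double bobWinsDP(int A, int B) {
    int rolls = Math.max(A, B);
    double[][] p = new double[rolls + 1][64]; // bit v of a mask: value v+1 has appeared
    p[0][0] = 1.0;
    for (int x = 0; x < rolls; ++x)
        for (int y = 0; y < 64; ++y)
            if (p[x][y] > 0)
                for (int v = 0; v < 6; ++v)
                    p[x + 1][y | (1 << v)] += p[x][y] / 6.0; // next roll shows v+1
    double answer = 0;
    for (int y1 = 0; y1 < 64; ++y1) {
        int comp = 63 ^ y1;                            // values Alice did not show
        for (int y2 = comp; ; y2 = (y2 - 1) & comp) {  // all submasks of comp
            answer += p[A][y1] * p[B][y2];
            if (y2 == 0) break;
        }
    }
    return answer;
}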

BestEvenSplit

The underlying graph problem (minimum bisection) is NP-hard, so there is no hope for a polynomial-time solution.

Given that we can fix one vertex and choose N/2 - 1 others to go into its group, we have choose(N-1, N/2 - 1) options to choose from. For N = 30 this evaluates to 77,558,760 possible splits. This is also the largest possible answer, e.g., for 30 isolated vertices.

We will write a brute-force solution that will examine each of those options in constant time.

There are multiple ways to do this. A common tool in all of them is bitsets: for each vertex we have a bitset of its neighbors, and for each group a bitset of its current members. Bitwise AND can then be used to tell how many neighbors a vertex currently has in the opposite group.

We also need constant-time population count for bitsets (i.e., given a bitset, what’s the number of 1s in it). If your programming language / processor instruction set doesn’t have it, it’s easy to do manually: in this problem we can precompute population counts for up to 2^15 into an array and then do population counts for up to 2^30 using two array lookups.
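
In Java this is simply Integer.bitCount; here’s a sketch of the table-based fallback described above:

// popcount table for all 15-bit values; two lookups then cover 30-bit masks
int[] pc = new int[1 << 15];
{
    for (int i = 1; i < (1 << 15); ++i) pc[i] = pc[i >> 1] + (i & 1);
}

int popcount30(int mask) {
    return pc[mask & 0x7FFF] + pc[(mask >>> 15) & 0x7FFF];
}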

One possible way to iterate over all possible solutions in time proportional to their number is to generate them in such an order that each solution differs from the previous one in only two elements: one pair of vertices is swapped between the groups. This is always possible: for any N and K we can order all K-element subsets of N elements in such a way that each subset differs from its predecessor in exactly two elements - one added, one removed. Once we can generate such an order (more precisely, the pairs of elements that are added and removed in each step), the rest is easy: each time we move those two elements between partitions, we can recompute the number of edges between partitions in constant time.

The generation can look as follows (a code sketch follows the list):

  • We can start from an arbitrary subset.

  • Pick one element x that’s in the subset. Set x aside as “always in”.

  • Recursively generate all other subsets of size K-1 from remaining elements. Together with x, you just generated all K-element subsets that contain x.

  • Pick any element y that’s currently not in the subset. Swap x and y.

  • Set x aside as “always out”.

  • Recursively generate all other subsets of size K from remaining elements. You just generated all K-element subsets that do not contain x, and you are done.
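
Here’s a sketch of that recursion; onSwap(x, y) is a placeholder for the constant-time bookkeeping (updating the group bitsets and the current edge count):

boolean[] in; // in[e]: is element e currently in the subset?

// Visits all k-element subsets of avail exactly once, starting from the current
// subset; every step swaps a single pair of elements, reported via onSwap(x, y).
void gen(List<Integer> avail, int k) {
    if (k == 0 || k == avail.size()) return;         // current subset is the only one
    int x = -1;
    for (int e : avail) if (in[e]) { x = e; break; } // set x aside as "always in"
    List<Integer> rest = new ArrayList<>(avail);
    rest.remove(Integer.valueOf(x));
    gen(rest, k - 1);                                // all subsets containing x
    int y = -1;
    for (int e : rest) if (!in[e]) { y = e; break; } // some element to replace x
    in[x] = false; in[y] = true;
    onSwap(x, y);                                    // x is now "always out"
    gen(rest, k);                                    // all subsets avoiding x
}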

Another possible approach is a backtracking algorithm that processes elements in sorted order and tries assigning each of them to each not-yet-full group. This is quite a bit easier to implement than the previous solution but harder to analyze. The issue with the analysis here is that when one of the groups is full we still need to add all the remaining elements to the other group, one at a time, in order to count their broken friendships. This means that sometimes we’ll spend more than a constant number of steps per solution tested. The slowdown is clearly at most N per solution, but intuitively it should be much lower. How bad is the overhead really? Using some clever counting it can be shown that when using this approach to generate all ways to split 2k-1 elements into k-1 and k, the total number of times an element is added to a group is bounded by choose(2k+1,k) -- and that is at most four times choose(2k-1,k-1), i.e., the number of solutions we are testing. Thus, the approach does indeed spend an amortized constant time per solution (but may still have a somewhat worse constant factor than the previous solution).
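
Here’s a sketch of this backtracking (field names illustrative; adj[v] is assumed to hold the bitmask of v’s friends). It tracks both the fewest broken friendships and how many splits achieve that minimum:

int N;        // number of people (even)
int[] adj;    // adj[v]: bitmask of v's friends (assumed precomputed)
int best;     // fewest broken friendships so far (initialized to a large value)
long ways;    // number of splits achieving best

void place(int v, int groupA, int groupB, int cntA, int cntB, int broken) {
    if (v == N) {
        if (broken < best) { best = broken; ways = 1; }
        else if (broken == best) ++ways;
        return;
    }
    // a friendship breaks once its endpoints land in different groups
    if (cntA < N / 2)
        place(v + 1, groupA | (1 << v), groupB, cntA + 1, cntB,
              broken + Integer.bitCount(adj[v] & groupB));
    if (cntB < N / 2)
        place(v + 1, groupA, groupB | (1 << v), cntA, cntB + 1,
              broken + Integer.bitCount(adj[v] & groupA));
}
// fixing vertex 0 into the first group avoids examining each split twice:
// place(1, 1, 0, 1, 0, 0);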

QuadraticJumping

The easiest way to start exploring is to write a simple dynamic programming algorithm that will solve all small inputs optimally: for 1, 2, 3, … jumps we can construct the set of reachable locations. From observing how the set grows we can quickly form a theory that all locations will be reachable.

And indeed: 0 and 1 are reachable trivially, 2 is reachable as -1-4-9+16, 3 is reachable as -1+4, and at any moment the sequence of jumps “right, left, left, right” adds 4: n^2 - (n+1)^2 - (n+2)^2 + (n+3)^2 = 4. Of course, this gives us a solution with O(goal) jumps, which is almost never optimal: as we’ll soon see, the optimal solution will only have O(goal^(1/3)) jumps.
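
A sketch of that exploratory DP (the bound of 12 jumps is arbitrary):

// positions reachable with exactly k jumps, for k = 1, 2, ...
Set<Long> cur = new HashSet<>();
cur.add(0L);
for (int k = 1; k <= 12; ++k) {
    Set<Long> next = new HashSet<>();
    for (long p : cur) {
        next.add(p + (long) k * k); // k-th jump to the right
        next.add(p - (long) k * k); // k-th jump to the left
    }
    cur = next; // inspecting these sets suggests every location eventually appears
}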

Suppose we make n jumps. Can we reach the goal exactly?

The maximum distance is achieved by making all n jumps to the right. This distance is the sum of the first n squares: S = n(n+1)(2n+1)/6. If S < goal, we clearly cannot reach the goal with n jumps, regardless of how we jump.

Any other reachable value can be produced by taking the maximum one and switching some jumps from right to left. Switching the x-th jump decreases the result by 2x^2. Thus, in general, the final location will be S - 2*(sum of the lengths of the jumps taken to the left).

Hence, n is a valid number of jumps if and only if the following conditions hold:

  • S = n(n+1)(2n+1)/6 must be greater than or equal to goal

  • their difference D = (S - goal) must be even

  • it must be possible to write D/2 as a sum of distinct squares (as all jumps to the left have distinct lengths)

We are looking for the smallest n with the above properties. The first two are trivial to test; how about the third one?

This is where we reuse the DP program we had from the beginning, modified to compute which numbers can be written as a sum of distinct squares. Again, after very few steps we’ll begin to observe a pattern: there are a few small values that cannot be constructed this way, but everything bigger seems to be possible.
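
A sketch of the modified DP (LIMIT is arbitrary); iterating the sum downwards is the usual 0/1-knapsack trick that uses each square at most once:

int LIMIT = 1000;
boolean[] ok = new boolean[LIMIT + 1]; // ok[v]: v is a sum of distinct squares
ok[0] = true;
for (int s = 1; s * s <= LIMIT; ++s)
    for (int v = LIMIT; v >= s * s; --v)
        if (ok[v - s * s]) ok[v] = true;
// the indices where ok[] stays false form the finite list of counterexamples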

Again, for the sake of completeness, here’s a proof of a formal version of this observation:

Lemma: For n >= 13, all numbers from the range [129, 129+n^2) can be written as a sum of distinct squares smaller than n^2.

Proof: By induction.

Base case: Our DP program can be used to show that all numbers in [129,378) can be constructed using squares up to 12^2, which is more than enough for our claim for n=13.

Induction step: Take any number from [129, 129+n^2), write it as a sum of distinct squares below n^2 and then add n^2. This shows that numbers from [129, 129+2n^2) can be written as a sum of distinct squares below (n+1)^2, and for our n we have 2n^2 > (n+1)^2 so this range contains the one from our claim.

This gives us a full solution that’s really easy to implement: we’ll just iterate over all n until we find the smallest one such that n(n+1)(2n+1)/6 is at least goal, the difference is even, and half of that difference isn’t one of the small constant number of counterexamples we have. (The solution can easily be improved to run in constant time by computing the smallest n that satisfies the first condition and realizing that within a constant number of steps from there we have to hit an n that also satisfies the other two conditions.)
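
Putting it all together (method name illustrative; by the lemma, only values below 129 can be counterexamples, so the DP only needs to go up to 128):

public long minJumps(long goal) {
    boolean[] ok = new boolean[129];   // ok[v]: v is a sum of distinct squares
    ok[0] = true;
    for (int s = 1; s * s <= 128; ++s)
        for (int v = 128; v >= s * s; --v)
            if (ok[v - s * s]) ok[v] = true;
    for (long n = 0; ; ++n) {
        long S = n * (n + 1) * (2 * n + 1) / 6;      // farthest point reachable in n jumps
        if (S < goal) continue;
        long D = S - goal;
        if (D % 2 != 0) continue;                    // the difference must be even
        long half = D / 2;
        if (half < 129 && !ok[(int) half]) continue; // one of the counterexamples
        return n;                                    // smallest valid number of jumps
    }
}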