Single Round Match 825 Editorials
March 11, 2022

FindBob

If we don’t visit any house, all positions of Bob’s house are possible. As soon as we visit at least one house, the very first answer will eliminate almost all possibilities: we will always be left with at most two candidates for Bob’s house. This observation can be used to solve this task in linear time.

However, that solution is not the easiest one in terms of implementation. As our input is small, we can afford to implement a solution that’s less efficient but uses shorter, simpler code.

What we can do here is simply test each of the N houses. For each house we will ask the question: “Can Bob live here?” and we can answer the question by iterating over all the information we were given. If all actual distances match the declared ones, we have a candidate.

// Checks whether house h is consistent with all the answers we received.
boolean isBobHere(int h, int N, int[] houses) {
    for (int n=0; n<N; ++n) {
        if (houses[n] == -1) continue;                   // house n was not visited
        if (houses[n] != Math.abs(h-n)) return false;    // reported distance does not match
    }
    return true;
}

public int[] find(int N, int[] houses) {
    // First pass: count the valid locations. Second pass: collect them.
    int validLocations = 0;
    for (int n=0; n<N; ++n) if (isBobHere(n,N,houses)) ++validLocations;
    int[] answer = new int[validLocations];
    for (int n=0, i=0; n<N; ++n) if (isBobHere(n,N,houses)) answer[i++] = n;
    return answer;
}
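
For completeness, here is a hedged sketch of that linear-time idea (it is not the reference solution). It takes the first visited house, forms the at most two candidate locations consistent with its answer, and verifies each candidate with the same isBobHere check defined above.

import java.util.ArrayList;
import java.util.List;

public int[] findLinear(int N, int[] houses) {
    // Find the first visited house, if any.
    int first = -1;
    for (int n = 0; n < N; ++n) if (houses[n] != -1) { first = n; break; }

    if (first == -1) {
        // No house was visited, so every location is still possible.
        int[] all = new int[N];
        for (int n = 0; n < N; ++n) all[n] = n;
        return all;
    }

    // Only the houses at the reported distance d from the first visited house
    // can be Bob's. Verify each candidate against all the other answers.
    int d = houses[first];
    int[] candidates = (d == 0) ? new int[] { first } : new int[] { first - d, first + d };
    List<Integer> valid = new ArrayList<>();
    for (int candidate : candidates) {
        if (candidate >= 0 && candidate < N && isBobHere(candidate, N, houses)) valid.add(candidate);
    }

    int[] answer = new int[valid.size()];
    for (int i = 0; i < valid.size(); ++i) answer[i] = valid.get(i);
    return answer;
}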

OptimalMemoryGame

In this problem we are asked to play an optimal turn of the memory game. This is significantly trickier than it might seem at first glance. Here are two cases to consider before we look at the full solution:

  1. “A-B-C-”
  2. “AA-BB-”

In both of these cases we are actually guaranteed to collect all three pairs if we are smart. 

In the first case the three hidden cards must be A, B, C in some order. We do not know the order, but we don’t have to know it. We can start by choosing any one of the three unknown cards. Regardless of what letter we see, it must be one of A, B, and C, and we know where its pair is. As the second card we can then select the matching one and collect the pair. Repeat this two more times and we are done.

In the second case the two hidden cards must be identical because each card type appears exactly twice. We don’t know what letter the two unknown cards share but we do know that they must share the same letter and that’s enough. We can reveal one of them followed by the other and collect the pair.

Armed with these observations let’s consider the general case. We are given information about 2N cards. We know that those 2N cards form N pairs. In general, we will have some number (T) of pairs in which Two cards are known, some number (O) of pairs in which One card is known, and the rest will be some number (Z) of pairs in which Zero cards are known. We have T+O+Z = N. We can determine T, O, Z simply by going through the given string and counting the number of occurrences for each letter.
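
As a small illustration of this counting step (kept separate from the final solution shown below), here is a sketch that computes T, O and Z; like the reference code, it treats every uppercase letter as a revealed card and everything else as hidden.

// Count T, O, Z: the number of pairs with two, one, and zero cards revealed.
int[] countTOZ(String known) {
    int[] seen = new int[26];
    for (char c : known.toCharArray()) if (c >= 'A' && c <= 'Z') ++seen[c - 'A'];

    int N = known.length() / 2, T = 0, O = 0;
    for (int s : seen) {
        if (s == 2) ++T;     // both cards of this pair are face-up
        if (s == 1) ++O;     // exactly one card of this pair is face-up
    }
    return new int[] { T, O, N - T - O };   // the rest are the fully hidden pairs (Z)
}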

If T > 0, we can start by simply taking all those pairs. Once that’s done, we are left in a situation in which all known cards are distinct.

First let’s consider what happens if Z = 0. This is the first case we discussed above: if all pairs have one card known, we can collect all pairs one after another.

Now we have Z > 0, so at least one pair is completely face-down.

If there is exactly one such pair and nothing else (Z = 1, O = 0), we have the second case handled above: we know the two cards form a pair and we can pick them up.

We now claim that in the remaining cases we cannot guarantee any additional pairs.

Proof. Either Z >=2 or (Z = 1 and O >= 1). In either case we have at least three unknown cards and these include one specific pair of matching cards.

If we start by revealing an unknown card, it is possible that we reveal one of the two matching cards. If that happens, we aren’t guaranteed to make a pair: its partner could be any of the at-least-two remaining unknown cards.

And if we start by revealing a known card, we also aren’t guaranteed to make a pair because we don’t know which of the at-least-three unknown cards is its match.

public int findPairs(String known) {
    // Count how many times each letter is already revealed.
    int[] appearances = new int[26];
    for (char c : known.toCharArray()) {
        if (c >= 'A' && c <= 'Z') {
            int x = c - 'A';
            ++appearances[x];
        }
    }

    // Collect all fully revealed pairs (T of them); N tracks the pairs that remain.
    int N = known.length() / 2;
    int answer = 0;
    for (int i=0; i<26; ++i) if (appearances[i] == 2) {
        ++answer;
        appearances[i] = 0;
        --N;
    }

    // One = pairs with exactly one card revealed, Zero = fully hidden pairs.
    int One = 0;
    for (int i=0; i<26; ++i) if (appearances[i] == 1) ++One;
    int Zero = N - One;
    if (Zero > 1) return answer;                 // Z >= 2: nothing more is guaranteed
    if (Zero == 1 && One > 0) return answer;     // Z = 1 and O >= 1: nothing more is guaranteed
    return answer + N;                           // Z = 0, or Z = 1 with O = 0: we collect everything
}

RollAllOnes

Clearly the optimal strategy is to always pick up and reroll all dice that aren’t ones, and never pick up a die that already shows the value one. (The first action never decreases the number of ones after the roll, the second one never increases it.)

The first question we need to answer: if we roll A dice, what is the probability p[A][B] of getting exactly B ones?

There are exactly D^A different outcomes of the roll. Among those there are some with exactly B ones. We have choose(A,B) ways to choose which dice show the ones, and then (D-1)^(A-B) ways to choose what values other than 1 appear on the other A-B dice. Thus, there are choose(A,B)*(D-1)^(A-B) good roll outcomes, and p[A][B] is the ratio between the number of good and all possible outcomes.
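
As a sketch, this closed formula can be turned into code as follows (the reference solution below uses the dynamic programming from the next paragraph instead); the binomial coefficients are built with Pascal's triangle in doubles.

// p[A][B] = choose(A,B) * (D-1)^(A-B) / D^A for 0 <= B <= A <= N.
double[][] probabilities(int N, int D) {
    // Pascal's triangle for the binomial coefficients.
    double[][] choose = new double[N+1][N+1];
    for (int a = 0; a <= N; ++a) {
        choose[a][0] = 1.;
        for (int b = 1; b <= a; ++b) choose[a][b] = choose[a-1][b-1] + choose[a-1][b];
    }

    double[][] p = new double[N+1][N+1];
    for (int a = 0; a <= N; ++a) for (int b = 0; b <= a; ++b) {
        p[a][b] = choose[a][b] * Math.pow(D-1, a-b) / Math.pow(D, a);
    }
    return p;
}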

Alternately, the values p[*][*] can be computed using dynamic programming. Suppose we have A dice and want B ones. Imagine that we pick up and roll one of the dice before the rest. With probability 1/D the die will roll a one, in all other cases it won’t. If it does, we need to roll B-1 more ones on the remaining dice. If it doesn’t, we need to roll all B ones on the remaining dice. This gives us the following recurrence: p[A][B] = (1/D) * p[A-1][B-1] + ((D-1)/D) * p[A-1][B].

Once we have these values, we can compute the actual answers we need. Again, we will use dynamic programming: we will compute values dp[0] to dp[N], where dp[i] is the expected number of rolls needed if you start with i dice.

We will use the case i=3 to illustrate the general recurrence used to compute these values. In this case we already know the values dp[0] = 0, dp[1] and dp[2].

We now have i = 3 dice. Once we roll them, we will get some number of ones. This number j will be between 0 and 3, inclusive. Remember that the probability of rolling j ones is p[3][j]. This gives us the following equation:

dp[3] = 1 + p[3][3] * dp[0] + p[3][2] * dp[1] + p[3][1] * dp[2] + p[3][0] * dp[3].

Explanation: the “1 +” at the beginning is the first roll we certainly make. Then, with probability p[i][j] we rolled j ones and we are left with i-j dice that still need to be processed. And we know that in expectation processing these dice will take dp[i-j] more rolls.

The only unknown quantity in the above equation is the value dp[3] we currently seek. We can express it from the equation as follows:

dp[3] = ( 1 + p[3][3] * dp[0] + p[3][2] * dp[1] + p[3][1] * dp[2] ) / ( 1 - p[3][0] )

public double solve(int N, int D) {
    // p[n][s] = probability of rolling exactly s ones with n dice.
    double[][] p = new double[N+1][N+1];
    p[0][0] = 1.;
    for (int n=1; n<=N; ++n) for (int s=0; s<=n; ++s) {
        p[n][s] = 0.;
        if (s > 0) p[n][s] += p[n-1][s-1] / D;       // this die rolled a one
        p[n][s] += p[n-1][s] * (D-1) / D;            // this die rolled something else
    }

    // dp[n] = expected number of rolls needed if we start with n dice.
    double[] dp = new double[N+1];
    dp[0] = 0;
    for (int n=1; n<=N; ++n) {
        dp[n] = 0;
        for (int r=1; r<=n; ++r) dp[n] += p[n][r] * dp[n-r];
        dp[n] = (1 + dp[n]) / (1 - p[n][0]);         // solve the equation for dp[n]
    }
    return dp[N];
}

SlowestNim

This solution assumes that the reader already knows the optimal strategy for the game of NIM. In this game, the losing positions are precisely the positions in which the bitwise xor of all pile sizes is zero. (The terminal position in which you lose does have this property. If xor is zero and you make a valid move, xor will stop being zero. And if xor is non-zero, it can be shown that your opponent always has a move that changes xor of pile sizes back to zero.)
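
Restated as code, this losing-position test is just an xor over the pile sizes (a small helper for illustration, not part of the reference solution):

// A Nim position is losing for the player to move exactly when the xor
// of all pile sizes is zero.
boolean isLosingPosition(int[] piles) {
    int x = 0;
    for (int pile : piles) x ^= pile;
    return x == 0;
}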

We want to play the game of NIM as slowly as possible. 

Clearly, if there is a total of X tokens the game will take at most X turns. Thus, we cannot do better than that. And if we want to make the game last X turns, we need to do two things. First, we have to always take just a single token. And second, we must do so in a way that will force our opponent to just take a single token too.

It turns out that the above is always possible. We will prove this below.

If we have just the intuition that this is always possible but no proof why or how, we can still implement an easy correct solution. The main idea is simply to try all possibilities of taking one token, and for each of them go through our opponent’s possible responses. If the opponent has a winning move that takes more than one token, we abandon the current option. And if the opponent is forced to take a single token too, we have found our answer.
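
Here is a hedged sketch of that brute force. It assumes the same return convention as the reference solution at the end of this section, namely {length of the game, index of the pile we take from, number of tokens we take}, and it uses the standard Nim fact that the only winning response on a given pile is to reduce it to the xor of all the other piles.

// Try each one-token move; accept it if every winning response of the opponent
// also takes exactly one token.
public int[] moveBruteForce(int[] piles) {
    int total = 0;
    for (int x : piles) total += x;                   // one token per turn => the game lasts total turns

    for (int i = 0; i < piles.length; ++i) {
        if (piles[i] == 0) continue;
        piles[i] -= 1;                                // tentatively take one token from pile i

        int all = 0;
        for (int x : piles) all ^= x;                 // xor of all piles after our tentative move
        boolean ok = true;
        for (int j = 0; j < piles.length && ok; ++j) {
            int rest = all ^ piles[j];                // xor of all piles except pile j
            // The only winning response on pile j reduces it to rest (valid only if rest < piles[j]).
            if (rest < piles[j] && piles[j] - rest > 1) ok = false;
        }

        piles[i] += 1;                                // undo the tentative move
        if (ok) return new int[] { total, i, 1 };
    }
    return new int[0];                                // not reached: the proof below shows a good move exists
}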

Proof that it’s always possible to force our opponent to play the full X-turn game follows.

As when discussing the strategy for the original game, let’s write all pile sizes in binary. For example:

.....1000
..1001000
.10001000
110101000
.10100000
111000000

Note that we are in a losing position, i.e., each column has an even number of ones.

For each pile size, let’s look at the number of trailing zeros in binary. Let’s pick any pile for which this number is minimal. (In our example, any one of the top four piles.) We claim that we can take one token from any such pile.

Let’s examine what happens when we do so. Below we took one token from the pile marked with the asterisks.

.....1000
..1001000
.10000111 ***
110101000
.10100000 #
111000000 #

What can our opponent do? After our move the xor of all pile sizes is 1111 in binary: exactly the four lowest columns have odd parity. Therefore, the only candidate winning response on any particular pile is to replace its size x by x xor 1111.

  • There are no winning moves that take from any of the remaining piles (marked with hashes above): their four lowest bits are all zero, so x xor 1111 = x + 15 would mean adding tokens.
  • There are no winning moves where the opponent takes from the same pile we did: there x xor 1111 = x + 1 would simply undo our move.
  • For each of the other three “minimal zeros” piles, x ends in 1000, so x xor 1111 = x - 1 and the only winning move there is taking a single token.

In other words, every winning response available to the opponent takes exactly one token, which is exactly what we wanted.

// The number of trailing zeros in the binary representation of x.
int twos(int x) {
    int answer = 0;
    while (x % 2 == 0) { ++answer; x /= 2; }
    return answer;
}

public int[] move(int[] piles) {
    // The game will last as many turns as there are tokens in total.
    int s = 0;
    for (int x : piles) s += x;
    // Take a single token from a pile with the fewest trailing zeros.
    int best = 0;
    for (int n=0; n<piles.length; ++n) {
        if (twos(piles[n]) < twos(piles[best])) best = n;
    }
    return new int[] { s, best, 1 };
}

FalseAbove

We can first solve the problem for even N. Suppose N = 2*k.

We cannot have more than k true statements. If we did, look at the true statement that contains the biggest number. This number is at least k+1, and as the statement is true, it claims that there are at least k+1 false statements above it. Together with the at least k+1 true statements, this would make the total number of statements at least 2k+2, which is more than N.

Suppose we want to have exactly k true statements. Again, none of them can contain a number greater than k for the same reason as above. Thus, this is only possible if the true statements are precisely those numbered 1 to k.

Once we fix that those statements are true, their placement in the permutation is fixed. It must look as follows:

(_, 1, _, 2, _, 3, _, 4), shown here for N = 8, with underscores marking the positions of the false statements.

All the remaining statements are false and any permutation of them works. Thus, A = k and there are B = k! optimal permutations.

When summing these permutations up (a code sketch follows this list):

  • Each of the digits 1 to k appears k! times in the fixed location shown above.
  • Each of the digits k+1 to 2*k appears (k-1)! times in each of the k available spots.
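
Here is a hedged sketch of this even-N summation. It assumes, since the text above does not spell it out, that each permutation is read as a number in base N+1 with its first element as the most significant digit, and it returns the exact sum as a BigInteger instead of a value modulo some prime; adjust both to match the actual problem statement.

import java.math.BigInteger;

// Sum of all optimal permutations for even N = 2*k, following the two bullets above.
BigInteger sumOptimalEven(int N) {
    int k = N / 2;
    BigInteger kFact = BigInteger.ONE, km1Fact = BigInteger.ONE;
    for (int i = 2; i <= k; ++i) kFact = kFact.multiply(BigInteger.valueOf(i));
    for (int i = 2; i <= k - 1; ++i) km1Fact = km1Fact.multiply(BigInteger.valueOf(i));

    long largeSum = 0;
    for (int v = k + 1; v <= 2 * k; ++v) largeSum += v;   // sum of the "large" values k+1..2k

    BigInteger base = BigInteger.valueOf(N + 1), total = BigInteger.ZERO;
    for (int pos = 1; pos <= N; ++pos) {
        BigInteger weight = base.pow(N - pos);            // positional weight of this digit
        BigInteger digitSum = (pos % 2 == 0)
            ? BigInteger.valueOf(pos / 2).multiply(kFact)      // value pos/2 sits here in all k! permutations
            : BigInteger.valueOf(largeSum).multiply(km1Fact);  // each large value appears (k-1)! times here
        total = total.add(weight.multiply(digitSum));
    }
    return total;
}

For example, for N = 4 the two optimal permutations are (3, 1, 4, 2) and (4, 1, 3, 2); under the stated convention the sketch returns 422 + 542 = 964.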

Now the solution for the more interesting case N = 2*k + 1.

Again, having k+1 or more true statements doesn’t work for the same reason as above, so we will again have A = k. However, now there are multiple ways to have k true statements: any k out of the statements numbered 1 to (k+1) can be true. Once we fix which ones are true, their positions in the permutation are again fixed by what they claim about the statements above them. E.g., here’s the only possible distribution of true statements for N = 9 if the true statements are 1, 2, 4, and 5:

(_, 1, _, 2, _, _, 4, _, 5).

At this moment, note that we still aren’t completely free to place the remaining statements into the permutation.

The ones greater than k+1 are safe to place – they are guaranteed to be false regardless of where we place them. The problematic one is the remaining “small” statement. We need to make sure that this statement that is supposed to be false actually is false. This means that it cannot be placed in a location that would make it true. 

In the above example, this is the configuration we cannot produce: (_, 1, _, 2, _, 3, 4, _, 5). Here statement 3 was supposed to be false, but it is placed exactly in the location where it sees three false statements above itself, which makes it true – a contradiction.

Now we can count and sum all valid permutations. 

If the missing small value is between 1 and k, inclusive, there is exactly one forbidden location where it cannot be placed. Thus, we can choose one of the k other locations for this value and then we’ll have k! ways to place the larger values into the permutation. 

If the missing small value is k+1, there is no forbidden location: it will be false regardless of where we place it. Thus, in this case all (k+1)! permutations of the missing values are valid.

This gives us a total of B = k * k * k! + 1 * (k+1) * k! = (k*k + k + 1) * k!
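
As a sanity check of these formulas, here is a hedged sketch that computes A and B for a given N, covering both the even and the odd case and using BigInteger so that no assumption about a modulus is needed:

import java.math.BigInteger;

// Returns { A, B }: the maximum number of true statements and the number of
// permutations achieving it, per the formulas derived above.
BigInteger[] countOptimal(int N) {
    int k = N / 2;
    BigInteger kFact = BigInteger.ONE;
    for (int i = 2; i <= k; ++i) kFact = kFact.multiply(BigInteger.valueOf(i));

    BigInteger B;
    if (N % 2 == 0) {
        B = kFact;                                         // even N: B = k!
    } else {
        long coefficient = (long) k * k + k + 1;           // odd N: B = (k*k + k + 1) * k!
        B = kFact.multiply(BigInteger.valueOf(coefficient));
    }
    return new BigInteger[] { BigInteger.valueOf(k), B };
}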

Now that we know the exact set of optimal permutations, summing them up in O(N) time is mostly a technical step. The reference solution does it as follows:

  • Each of the values 1 to k can only appear in two positions, so we can simply compute the number of times it appears at each of them.
  • For the value k+1 we can consider each position separately.
  • The larger values are completely interchangeable, so we can proceed as follows. For each position count the partial permutations that have it empty after we placed values 1 to k+1. Then, separately for each position: multiply that number by the corresponding power of N+1, multiply that by the sum of all large values, and multiply that by (k-1)! to account for permuting the remaining large values.

To finalize this solution, note that we haven’t been completely honest at one point. The above formulas do give the correct result: we count each valid permutation exactly once. However, we deceived you a little when claiming that our example partial permutation (_, 1, _, 2, _, 3, 4, _, 5) is invalid. This is actually a valid permutation, but it gets counted in a different branch: it is valid if the true statements are 1, 2, 3, 5 instead of 1, 2, 4, 5, and that is when it gets counted. We can easily verify that we aren’t counting anything twice: for any permutation, the set of its k true statements is uniquely determined.


misof

