June 7, 2022 TCO22 Round 2B Editorial


The way to discover a full solution is to realize that a single flower F is a valid input. In order to fix such a flower we need to give it two friends: flowers 1 and 2. Those two friends now only have one friend each: the original flower F. They each need another friend. But we can only add a total of three flowers, so we only have one last flower available. That final new flower 3 must therefore be a friend of both flower 1 and flower 2. There is only one way to do this: the only pattern that works is a 2×2 square of flowers.

Placing this square requires a tiny bit of casework. When the original flower is at (r,c), we can usually add the three new flowers at (r+1,c), (r,c+1) and (r+1,c+1), but in cases when the original flower is on the right and/or bottom edge of the lawn we need to add new flowers on the other side instead.

A full solution now follows easily: we can simply do this for each original flower separately. That is, for each original flower we find its 2×2 square and then we take the union of all those 2×2 squares and we fill those locations with flowers. This is clearly a valid solution: we added at most three new flowers for each original one, and in the final layout each flower is a part of some 2×2 square of flowers, so it has at least two friends.

public String[] add(String[] lawn) {
    int R = lawn.length, C = lawn[0].length();
    char[][] newLawn = new char[R][C];
    for (int r=0; r<R; ++r) for (int c=0; c<C; ++c) newLawn[r][c] = '.';
    for (int r=0; r<R; ++r) for (int c=0; c<C; ++c) 
        if (lawn[r].charAt(c) == 'F') {
            int rlo = r, rhi = r+2, clo = c, chi = c+2;
            if (r+1 == R) { --rlo; --rhi; }
            if (c+1 == C) { --clo; --chi; }
            for (int r2=rlo; r2<rhi; ++r2) {
                for (int c2=clo; c2<chi; ++c2) newLawn[r2][c2] = 'F';
            }
        }

    String[] answer = new String[R];
    for (int r=0; r<R; ++r) {
        answer[r] = "";
        for (int c=0; c<C; ++c) answer[r] += newLawn[r][c];
    }
    return answer;
}

(Instead of using an auxiliary array, we could also add the new flowers directly to the original array. More precisely, we iterate over the original array once, and each time we see a flower that does not have enough friends, we add new flowers to complete its own 2×2 square. Note that the new flowers are guaranteed to have enough friends, so even if we encounter them later during the iteration, their presence in the array will not change anything. We still prefer the implementation above because it never needs to check whether a flower already has enough friends.)
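The in-place variant can be sketched as follows. This is our own sketch, not the reference code; in particular, the helper name countFriends is a hypothetical name we introduce here.

```java
public class InPlaceLawn {
    // hypothetical helper: counts orthogonal neighbors of (r, c) that hold a flower
    static int countFriends(char[][] g, int r, int c) {
        int R = g.length, C = g[0].length, cnt = 0;
        int[] dr = {-1, 1, 0, 0}, dc = {0, 0, -1, 1};
        for (int d = 0; d < 4; ++d) {
            int nr = r + dr[d], nc = c + dc[d];
            if (0 <= nr && nr < R && 0 <= nc && nc < C && g[nr][nc] == 'F') ++cnt;
        }
        return cnt;
    }

    public static String[] add(String[] lawn) {
        int R = lawn.length, C = lawn[0].length();
        char[][] g = new char[R][C];
        for (int r = 0; r < R; ++r) g[r] = lawn[r].toCharArray();
        // modify the grid directly: complete a 2x2 square around each flower
        // that does not yet have at least two friends
        for (int r = 0; r < R; ++r) for (int c = 0; c < C; ++c)
            if (g[r][c] == 'F' && countFriends(g, r, c) < 2) {
                // shift the square when the flower sits on the right/bottom edge
                int rlo = (r + 1 == R) ? r - 1 : r, clo = (c + 1 == C) ? c - 1 : c;
                for (int r2 = rlo; r2 < rlo + 2; ++r2)
                    for (int c2 = clo; c2 < clo + 2; ++c2) g[r2][c2] = 'F';
            }
        String[] answer = new String[R];
        for (int r = 0; r < R; ++r) answer[r] = new String(g[r]);
        return answer;
    }
}
```

As the text notes, any flower added by this loop already sits in a complete 2×2 square, so revisiting it later is a no-op.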


To start this solution, let’s be very clear about what it means for such a process to actually run forever. Before we look at Delina herself, consider a much simpler game: we are flipping a fair coin until it lands heads. This is also a potentially infinite process in the sense that the game can have arbitrarily many turns. However, we shall soon see that there is a fundamental difference between these two processes. 

Let’s look at various ways the coin flipping game can play out. With probability 1/2 the game ends in just one round: we flip the coin and we get a head immediately. With probability (1/2)^2 the game will take exactly two turns: a tail followed by a head. And so on: in general, the game will terminate after exactly n turns with probability (1/2)^n.

What, then, is the probability that the coin flipping game terminates after a finite number of turns? The event “the game terminated after a finite number of turns” is the union of all events of the form “the game terminated after exactly n turns”. These events are mutually exclusive, so we can simply sum their probabilities to get the probability that one of them occurs. The sum is (1/2) + (1/2)^2 + (1/2)^3 + … = 1.
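As a quick numerical sanity check (our own snippet, not part of the editorial’s solutions), the partial sums of this geometric series do approach 1:

```java
public class CoinFlipSum {
    // partial sum (1/2) + (1/2)^2 + ... + (1/2)^n of the termination probabilities
    public static double partialSum(int n) {
        double sum = 0;
        for (int k = 1; k <= n; ++k) sum += Math.pow(0.5, k);
        return sum;
    }
}
```

Already partialSum(10) = 1 – (1/2)^10 = 0.9990234375, and the sum converges to 1 as n grows.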

Thus, we reach the conclusion that with probability 1 the coin flipping game terminates after a finite number of turns. Hence, the probability of the event that the coin flipping game never terminates is 0.

Now let’s get back to Delina. We will use the same technique to analyze this game, but now we’ll get a very different conclusion: the sum of all probabilities for events of the form “Delina’s process will terminate after exactly n steps” will be strictly smaller than 1. Whatever is left must be the probability of the opposite scenario, i.e., the scenario in which the process actually runs forever without terminating.

Remember that in Delina’s game we have a threshold T we need to roll in order to reach the next round.

If the threshold is T, the probability that a single die roll will be less than the threshold is q = (T-1)/20. If we have x pixies, we roll x+1 dice and take the maximum. Thus, we only fail if all x+1 dice fail to hit the threshold. The probability that all dice fail to hit the threshold is q^(x+1).
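As a concrete example (our own numbers, not from the problem statement): with threshold T = 11, a single d20 misses with probability q = 10/20 = 0.5, so with x = 2 pixies all x+1 = 3 dice miss with probability 0.5^3 = 0.125:

```java
public class DieRoll {
    // probability that all x+1 dice fail to hit threshold T on a d20
    public static double allFail(int T, int x) {
        double q = (T - 1) / 20.0;   // one die rolls below the threshold
        return Math.pow(q, x + 1);   // every one of the x+1 dice fails
    }
}
```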

Let’s number the rounds starting from 1. We can now compute the probabilities as follows:

  • If we start with G pixies, we roll G+1 dice in round 1, and we get one more pixie = one more die in each of the following rounds. Thus, in round n we roll G+n dice.
  • The probability that we hit the threshold in round n is therefore (1 – q^(G+n)).
  • Let P(n) be the probability that we reach round n. We have P(1) = 1 and then P(n+1) = P(n) * (1 – q^(G+n)).
  • Let T(n) be the probability that the game ends in exactly round n. That happens if we reach round n and then fail to hit the threshold in this round. Thus, T(n) = P(n) * q^(G+n).

The probability that the process terminates after a finite number of steps is now simply the sum of T(n) over all positive integer values of n. And, conversely, the probability of getting an infinite process is 1 minus the value of said sum.

We can evaluate this sum numerically. The value q^(G+n) decreases exponentially with n, and since T(n) = P(n) * q^(G+n) is at most q^(G+n), the terms of the sum vanish at least that quickly, so the partial sums converge fast.

Another way of computing the answer is to realize that 1 – P(n) is the probability that the game terminates before round n. As we increase n, the value 1 – P(n) is getting closer and closer to the probability of the game eventually terminating. Thus, the probability of the game running forever is simply lim(n to infinity) P(n). In other words, our answer is the value of the following infinite product: (1 – q^(G+1)) * (1 – q^(G+2)) * (1 – q^(G+3)) * …

Evaluating this product numerically was another way of solving the task.

For the constraints used in the problem, both these approaches are actually numerically stable: the rounding errors made by representing the intermediate results in floating-point variables (more precisely, “doubles”) are small enough and don’t blow up, so the results we get are precise enough. The statement intentionally contained the largest possible case as an example. This was to help you check that your solution doesn’t have problems with accuracy.

// one possible implementation
public double getProbabilitySum(int T, int G) {
    double singleFail = (T - 1) / 20.;   // q: one die misses the threshold
    double probFail = 1.;
    for (int g=0; g<G+1; ++g) probFail *= singleFail;   // q^(G+1): all dice miss in round 1
    double probHere = 1;   // P(n): probability that we reach the current round
    double answer = 0;     // running sum of T(n)
    while (true) {
        double oldAnswer = answer;
        answer += probHere * probFail;
        if (oldAnswer == answer) break;
        probHere *= 1 - probFail;
        probFail *= singleFail;
    }
    return 1 - answer;
}

// another possible implementation
public double getProbabilityProduct(int T, int G) {
    double singleFail = (T - 1) / 20.;
    double probFail = 1.;
    for (int g=0; g<G+1; ++g) probFail *= singleFail;
    double answer = 1.;
    while (true) {
        double oldAnswer = answer;
        answer *= 1 - probFail;
        if (oldAnswer == answer) break;
        probFail *= singleFail;
    }
    return answer;
}

In both implementations shown above we could have also used a fixed number of iterations. Instead, our loops rely on the fact that the update eventually becomes so small that the new value of the answer rounds back to exactly the same double it had before, at which point we stop.

When writing the reference solution, we had to go one step further. We cannot just hope that our way of computing the above formulas is good enough; we should be reasonably certain. Of course, one quick way of gaining a lot of confidence is to have a math engine such as Wolfram Alpha evaluate the infinite product with much greater precision. But we can also compute the same answer in a third way, one for which all errors made during the computation are reasonably easy to estimate. Our thanks go to Leonhard Euler, whose pentagonal number theorem turns our infinite product into an equivalent infinite sum. When computing said sum numerically, we can easily bound from above both the rounding errors in the terms we actually sum and the truncation error made by ignoring all but the few largest terms of the sum.
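A sketch of that third way (our own reconstruction, not the exact reference solution): by the pentagonal number theorem, the full product prod_{n>=1} (1 – q^n) equals 1 + sum_{k>=1} (–1)^k (q^(k(3k–1)/2) + q^(k(3k+1)/2)), and our product prod_{n>=G+1} (1 – q^n) is the full product divided by its first G factors.

```java
public class EulerProduct {
    // Evaluates prod_{n >= G+1} (1 - q^n) for 0 <= q < 1 using Euler's
    // pentagonal number theorem for the full product prod_{n >= 1} (1 - q^n).
    public static double survivalProbability(double q, int G) {
        double full = 1.0;
        for (int k = 1; k <= 100; ++k) {
            // exponents are the generalized pentagonal numbers k(3k-1)/2, k(3k+1)/2
            double term = Math.pow(q, k * (3L * k - 1) / 2.0)
                        + Math.pow(q, k * (3L * k + 1) / 2.0);
            full += (k % 2 == 0) ? term : -term;   // alternating signs (-1)^k
        }
        // divide out the finite prefix prod_{n=1}^{G} (1 - q^n)
        for (int n = 1; n <= G; ++n) full /= 1 - Math.pow(q, n);
        return full;
    }
}
```

The pentagonal exponents grow quadratically in k, so the terms of this sum shrink much faster than the factors of the product, which is exactly what makes the truncation error easy to bound.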

Here’s one more fun fact about such infinite random processes: while in one and two dimensions a random walk will eventually return to the origin with probability 1, this is no longer true in more than two dimensions. In particular, the probability that a random walk in 3D eventually returns home is only about 34 percent. (See Pólya’s Random Walk Constants.) One could say that a drunk human will eventually find their way home but a drunk bird is quite likely to get lost forever!


In this problem the key observation is that whenever we want to charge multiple cars in the same spot, there is always a fixed order in which we should do it. And by “fixed” we mean that we can sort all N cars according to some specific criterion and then for any subset of cars the correct order in which to charge them is (a subsequence of) that particular order.

What is this order and how to discover it?

Imagine that we are charging cars a and b immediately after each other. Suppose that CT denotes their charging times and P their penalties. If we start charging car a in time t, the total penalty for these two cars is as follows: car a will be ready at t + CT[a], so its penalty is (t + CT[a]) * P[a], and car b will be ready at t + CT[a] + CT[b], so its penalty is (t + CT[a] + CT[b]) * P[b].

Now suppose we swap the order in which we charge these two cars. First, note that no other cars will be affected by the swap. This is obvious for cars on other spots and cars charged earlier in this spot. But it’s easy to see that it’s also true for cars charged in this spot later: after we swap a and b, the next car will still begin charging at exactly the same time it did before: at t + CT[a] + CT[b].

Thus, only the penalties for a and b will change. The new penalty for b goes down to (t + CT[b]) * P[b] while the penalty for a goes up to (t + CT[a] + CT[b]) * P[a].

In both cases the total penalty contains the terms t*P[a], CT[a]*P[a], t*P[b] and CT[b]*P[b]. In the first case we have an extra term CT[a]*P[b], in the second case we have the term CT[b]*P[a]. If the first term is smaller, it’s better to charge car a first, and if the second term is smaller, it’s better to make the swap and charge car b first.

As all quantities involved are positive, we can rewrite the inequality CT[a]*P[b] < CT[b]*P[a] as follows: CT[a] / P[a] < CT[b] / P[b].

And this gives us the order we promised you above. If we compute the value CT[x] / P[x] for each car, the optimal charging order will always be the order in which these values form a non-decreasing sequence.

(In case of a tie, all such orders are optimal. No other order can be optimal because if we have an order that’s not sorted according to this value, there will necessarily be two consecutive cars that violate the order, and swapping those cars would improve the total penalty.)
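As an illustration (our own sketch with hypothetical parallel arrays ct and p, not the reference code), this order can be obtained with a comparator that cross-multiplies in 64-bit arithmetic, avoiding floating-point division entirely:

```java
import java.util.Arrays;

public class ChargeOrder {
    // returns car indices ordered so that ct[x] / p[x] is non-decreasing
    public static Integer[] order(int[] ct, int[] p) {
        Integer[] id = new Integer[ct.length];
        for (int i = 0; i < id.length; ++i) id[i] = i;
        // ct[a]/p[a] < ct[b]/p[b]  <=>  ct[a]*p[b] < ct[b]*p[a]  (all values positive)
        Arrays.sort(id, (a, b) -> Long.compare(1L * ct[a] * p[b], 1L * ct[b] * p[a]));
        return id;
    }
}
```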

Thus, we can start our implementation by sorting all N cars into this order. (When doing so, make sure to remember the original id of each car so that you can then report the schedule properly.)

We will now spend O(N * 2^N) time to precompute some useful stuff about subsets of cars. Namely, for each subset of cars we will find:

  • The sum of their penalties.
  • The maximum of their sizes.
  • The total penalty we would pay for these cars if we were to charge them in one spot, starting immediately. (In order to compute this, we just process the cars in our subset in the optimal order. This gives us the time when each of them is fully charged and thus also the penalty we would pay for it.)

Now the rest of the solution boils down to exponential dynamic programming. For each x and each subset y of cars we want to find the minimum total penalty dp[x][y] we can get if we charge exactly all cars from the subset y on the first x spots. Of course, subsets of cars are represented via bitmasks: y=0 is the empty set while y=2^N – 1 is the set of all N cars. The value dp[M][2^N – 1] is then the optimal cost of charging all cars.

For x > 0 the value dp[x][y] is computed by iterating over all subsets of y. For each such subset z we consider the solution where exactly the cars in the subset z are charged on the last available spot (i.e., spot x-1). For a specific z we proceed as follows:

  • If the maximum size of a car in z is too large for spot x-1, this option is impossible, so we ignore it.
  • If dp[x-1][y without z] is infinite, this option is also impossible: we cannot charge the remaining cars on the remaining spots.
  • If we have to wait for the parking spot, the total penalty for waiting is (the sum of penalties in the set z) times (the time we have to wait, which is spotAvailable[x-1]).
  • The total cost of this option is then dp[x-1][y without z], plus the penalty for waiting, plus the precomputed penalty paid while charging the cars from the subset z.
  • The final value dp[x][y] is the minimum of these values over all valid choices of z.

In this solution we iterate over all M parking spots. For each of them we iterate over all subsets of cars, and for each of those over all of its subsets. There are exactly 3^N such nested pairs of subsets: for each specific pair (y, z) we process, each car has three options: it’s either not in y, or in y but not in z, or it is in both y and z. Thanks to our precomputations, we can process each pair (y, z) in constant time, which means that we process each spot in O(3^N) time and thus the whole dynamic programming takes O(M * 3^N) time. This is also the total time complexity of this solution, as N*2^N is already asymptotically smaller than 3^N.
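The 3^N bound can be sanity-checked with the standard subset-enumeration idiom used in the implementation below: starting from z = 0, the update z = (z – y) & y visits every subset z of the bitmask y exactly once.

```java
public class SubsetPairs {
    // counts all pairs (y, z) with z a subset of y, over all N-bit masks y
    public static long countPairs(int N) {
        long count = 0;
        for (int y = 0; y < (1 << N); ++y) {
            int z = 0;
            do { ++count; } while ((z = (z - y) & y) != 0);
        }
        return count;   // equals 3^N: each bit is out of y, in y only, or in both
    }
}
```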

public long[] charge(int[] chargingTime, int[] penalty, int[] carSize, 
                     int[] spotAvailable, int[] spotSize) {
    int N = chargingTime.length, M = spotAvailable.length;

    // sort all cars greedily into the order for one spot
    int[] id = new int[N];
    for (int i=0; i<N; ++i) id[i] = i;
    for (int a=0; a<N; ++a) for (int b=a+1; b<N; ++b) {
        if (1L*penalty[b]*chargingTime[a] > 1L*penalty[a]*chargingTime[b]) {
            int tmp;
            tmp = chargingTime[a];
            chargingTime[a] = chargingTime[b];
            chargingTime[b] = tmp;
            tmp = penalty[a]; penalty[a] = penalty[b]; penalty[b] = tmp;
            tmp = carSize[a]; carSize[a] = carSize[b]; carSize[b] = tmp;
            tmp = id[a]; id[a] = id[b]; id[b] = tmp;
        }
    }

    // precompute some useful stuff to avoid doing it multiple times
    long[] totalPenalty = new long[1<<N];
    int[] maxSize = new int[1<<N];
    long[] penaltyFromZero = new long[1<<N];
    for (int cars=0; cars<(1<<N); ++cars) {
        totalPenalty[cars] = 0;
        maxSize[cars] = 0;
        penaltyFromZero[cars] = 0;
        long curTime = 0; 
        for (int n=0; n<N; ++n) if ((cars & 1<<n) != 0) {
            totalPenalty[cars] += penalty[n];
            maxSize[cars] = Math.max( maxSize[cars], carSize[n] );
            curTime += chargingTime[n];
            penaltyFromZero[cars] += curTime * penalty[n];
        }
    }

    // do a subset dp to determine the optimal solution 
    // for each subset of cars and each prefix of spots
    long INF = 1L << 60;
    long[][] dp = new long[M+1][1<<N];
    int[][] useSet = new int[M+1][1<<N];
    dp[0][0] = 0;
    for (int cars=1; cars<(1<<N); ++cars) dp[0][cars] = INF;
    for (int m=0; m<M; ++m) for (int cars=0; cars<(1<<N); ++cars) {
        // compute the best solution for the first m+1 spots 
        // and the given set of cars
        dp[m+1][cars] = INF;
        int sub=0; 
        do {
            int prev = cars ^ sub;
            if (dp[m][prev] == INF) continue;
            if (maxSize[sub] > spotSize[m]) continue;
            long curPenalty = dp[m][prev] + penaltyFromZero[sub];
            curPenalty += totalPenalty[sub] * spotAvailable[m];
            if (curPenalty < dp[m+1][cars]) {
                dp[m+1][cars] = curPenalty;
                useSet[m+1][cars] = sub;
            }
        } while ((sub = (sub-cars)&cars) != 0);
    }

    // construct one valid schedule
    int[] L = new int[N];
    long[] T = new long[N];

    int m = M, cars = (1<<N) - 1;
    while (m > 0) {
        --m;   // useSet[m+1] holds the set of cars charged on spot m
        int sub = useSet[m+1][cars];
        cars ^= sub;
        long curTime = spotAvailable[m];
        for (int n=0; n<N; ++n) if ((sub & 1<<n) != 0) {
            L[ id[n] ] = m;
            T[ id[n] ] = curTime;
            curTime += chargingTime[n];
        }
    }

    // report the constructed solution
    long[] answer = new long[2*N+1];
    answer[0] = dp[M][(1<<N)-1];
    for (int n=0; n<N; ++n) answer[1+n] = L[n];
    for (int n=0; n<N; ++n) answer[N+1+n] = T[n];

    return answer;
}


