September 26, 2022 Single Round Match 838 Editorials

ThreeFractions

Method 1: averages

Given two fractions P/Q and R/S, we can easily compute the fraction that’s exactly halfway between them: it’s simply their average, i.e., their sum divided by 2. Their sum can be computed by bringing them to a common denominator: P/Q + R/S = PS/QS + RQ/QS = (PS + RQ)/QS. Then, their average is simply half of that: (PS + RQ) / (2*Q*S).

If we do this for the given fractions N1/D1 and N2/D2, we’ll get our first fraction A/B that lies between them. For the second and third such fraction, we can now compute the average between N1/D1 and A/B and the average between A/B and N2/D2: i.e., the fractions that lie at the first and third quarter of the interval between N1/D1 and N2/D2. (When computing these, watch out for overflows – don’t just multiply the denominators again. Instead, you can note that the new denominator 2*D1*D2 is already the common denominator for all three fractions you currently have.)
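
To make this concrete, here is a minimal sketch of Method 1, using the same output format (three numerator-denominator pairs) as the reference solutions below; as in the Method 3 code, it assumes the final denominator 4*D1*D2 fits into an int.

public int[] find(int N1, int D1, int N2, int D2) {
    // common denominator 2*D1*D2 for both inputs and their midpoint
    int denom = 2 * D1 * D2;
    int lo  = 2 * N1 * D2;        // N1/D1 rewritten as lo/denom
    int hi  = 2 * N2 * D1;        // N2/D2 rewritten as hi/denom
    int mid = N1 * D2 + N2 * D1;  // the midpoint (N1/D1 + N2/D2)/2 as mid/denom
    // the quarter points are averages again; the denominator doubles once more
    return new int[] { lo + mid, 2*denom, 2*mid, 2*denom, mid + hi, 2*denom };
}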

Method 2: Stern-Brocot tree

Given two fractions P/Q and R/S (with positive denominators), we can easily show that the mediant (P+R)/(Q+S) lies somewhere between them: indeed, (P+R)/(Q+S) − P/Q = (QR − PS) / (Q*(Q+S)), which is positive whenever P/Q < R/S, and symmetrically for the other side. We can use this to get a fraction A/B between N1/D1 and N2/D2, and then two more times to get a fraction between each boundary and A/B.

void newFractionBetween(int N1, int D1, int N2, int D2, int[] output, int x) {
    // writes the mediant (N1+N2)/(D1+D2), which lies strictly between
    // N1/D1 and N2/D2, into output[x] and output[x+1]
    output[x+0] = N1 + N2;
    output[x+1] = D1 + D2;
}

public int[] find(int N1, int D1, int N2, int D2) {
    int[] answer = new int[6];
    newFractionBetween( N1, D1, N2, D2, answer, 0 );
    newFractionBetween( N1, D1, answer[0], answer[1], answer, 2 );
    newFractionBetween( answer[0], answer[1], N2, D2, answer, 4 );
    return answer;
}

Method 3: just a common denominator is enough

As before, an easy first step is to convert both fractions to the same denominator. A simple way to do this is to change the first fraction to (N1*D2) / (D1*D2) and the second fraction to (N2*D1) / (D2*D1).

Now, when does a fraction of the form x / (D1*D2) lie between those two fractions? Clearly, if and only if x lies between N1*D2 and N2*D1. 

If that gives us enough integer values, we could simply take any three of those and be happy. For example, if we want three fractions between 3/21 and 7/21, we can take 4/21, 5/21 and 6/21. But in the worst case N1*D2 and N2*D1 can only differ by 1, so there are no integers between them. E.g., if the original fractions are 1/7 and 1/6, the new fractions are 6/42 and 7/42. What can we do then?

One easy trick is to go one step further with the same approach: we can just multiply all the numerators and denominators we have by 4. In our most recent example, instead of looking for fractions between 6/42 and 7/42 we are now looking for fractions between 24/168 and 28/168. And now we are done: we can take 25/168, 26/168 and 27/168 as our three fractions.

In general, we are looking for fractions that lie between (4*N1*D2) / (4*D1*D2) and (4*N2*D1) / (4*D2*D1). Both fractions have the same denominator and their numerators differ by at least four. Thus, we can easily construct at least three fractions between them: the fractions (4*N1*D2 + x) / (4*D1*D2) for x=1, 2, 3.

public int[] find(int N1, int D1, int N2, int D2) {
    N1 *= D2;   // bring both fractions to the common denominator D1*D2
    N2 *= D1;
    int denom = D1*D2;

    N1 *= 4;    // scale by 4 so that the two numerators differ by at least 4
    N2 *= 4;
    denom *= 4;

    return new int[] { N1+1, denom, N1+2, denom, N1+3, denom };
}

MaxDistinctPairs

This problem is easy if N >= 3. If we can use at least three letters, we can solve the problem greedily: go through the string once from left to right, and each time you encounter a period (‘.’), replace it with a letter that differs from both of its current neighbors. As each period has at most two neighbors, such a letter always exists. This clearly maximizes the number of distinct pairs in the result.

The problem is also really easy if N = 1: if the only letter we may use is ‘A’, we have to change each period to an ‘A’.

The only tricky case to consider is the case N = 2. If we have only ‘A’ and ‘B’, sometimes we cannot be optimal. For example, if we are given the partial string “A..B” we can fill it optimally to get “ABAB”, but if we are given “A..A”, regardless of what we do, we’ll create at least one pair of adjacent equal letters.

How can we handle these situations? 

First, let’s show that in these situations we must really create at least one pair of adjacent equal letters. The proof is simple: with only two letters, the pattern “…ABABAB…” is the only way to avoid adjacent equal letters. In this pattern, the indices of all ‘A’s share one parity and the indices of all ‘B’s have the opposite parity. Thus, if we are given:

  • a segment of the form “X, an even number of periods, X” (where X is ‘A’ or ‘B’), or
  • a segment of the form “X, an odd number of periods, Y” (where X and Y are distinct)

we know that this segment cannot be filled with the ABABAB pattern: in the first case we see two equal letters at positions of opposite parity, in the second case we see two distinct letters at positions of the same parity.

Now we know that we must create at least one pair of equal letters within this segment. But, on the other hand, it’s really easy to fill the segment in a way that has exactly one pair of equal letters: just start at the left end and alternate ‘A’ and ‘B’ until you hit the right end. The one equal pair will be formed by the last two letters. (For example, “A....A” becomes “ABABAA”.) Thus, this way of filling the segment of periods is clearly optimal.

Hence, for N = 2 the following approach works:

  • If the whole string is periods only, fill it with the ABABAB… pattern and you are done.
  • Go through the string left to right. Whenever you encounter a period that follows a letter, replace it by the opposite letter.
  • If there are still some periods left, they must all be at the beginning of the string. Go through the string one more time, but this time right to left. Whenever you see a period before a letter, replace it by the opposite letter. (This clearly solves the prefix optimally, with no equal pairs of letters.)

The code shown below uses the above approach for all values of N: whenever we try to fill a period, we prefer a locally-unused letter, and we use ‘A’ if no such letter exists.

void addLetter(int N, char[] filled, int x) {
    // adds a new letter at position x
    // tries to use a letter that isn't present among neighbors
    char prev = (x==0 ? '@' : filled[x-1]);
    char next = (x+1==filled.length ? '@' : filled[x+1]);

    // pick any letter that differs from both neighbors, if one exists
    for (char c='A'; c<'A'+N; ++c) 
        if (c != prev && c != next)
            filled[x] = c;
    // no suitable letter exists (possible only for N <= 2): fall back to 'A'
    if (filled[x] == '.')
        filled[x] = 'A';
}

public String maximize(int N, String partial) {
    
    char[] filled = partial.toCharArray();
    int M = filled.length;

    for (int m=1; m<M; ++m) 
        if (filled[m] == '.' && filled[m-1] != '.')
            addLetter(N, filled, m);

    if (filled[M-1] == '.') addLetter(N, filled, M-1);
    
    for (int m=M-2; m>=0; --m)
        if (filled[m] == '.')
            addLetter(N, filled, m);

    return new String(filled);
}

FlippingCoinSum

The theme of this problem is a classic dynamic programming topic closely related to subset sum and knapsack.

The main challenge is coming up with a solution that’s as simple to implement as possible.

Approach 1: Do a simpler thing twice

We will simply implement two standard functions:

  • A function that constructs the subset sum DP table for a given sequence of coins. That is, for each K and each S we’ll compute the best way of reaching the sum S if we can only use the first K coins. (Here, “best” means “fewest coins used”.)
  • A function that takes the above DP table and a sum S, and computes and returns one optimal way of constructing the sum S.

Now that we have these, we can easily compute our answer as follows:

  • Let S be the current sum of visible coins and G the goal sum.
  • In the optimal solution we flip some visible coins face-down. The sum of their values, X, must be between 0 and S, inclusive. We can iterate over all X.
  • Once we know X, we know Y = the sum of values of the coins we need to turn face-up to reach the goal. The new sum will be S – X + Y, and as that should equal G, we get Y = G – S + X.
  • For each fixed X and computed Y, we look at the best way to flip exactly the value X face-down and exactly the value Y face-up.
  • The optimal solution is the minimum of those over all X.
  • Once we know the optimal solution and the optimal X, we simply construct one solution that selects coins with total value X from the face-up coins, one solution that selects coins with total value Y from the face-down coins, and we return their appropriate concatenation.

// dp(coins)[k][s] = the fewest of the first k coins needed to reach the sum s,
// with 987654321 serving as infinity for unreachable sums
int[][] dp(int[] coins) {
    int N = coins.length;
    int S = 0;
    for (int x : coins) S += x;
    int[][] answer = new int[N+1][S+1];
    for (int s=1; s<=S; ++s) answer[0][s] = 987654321;
    for (int n=0; n<N; ++n) {
        for (int s=0; s<=S; ++s) answer[n+1][s] = 987654321;
        for (int s=0; s<=S; ++s) {
            if (answer[n][s] == 987654321) continue;
            answer[n+1][s] = Math.min( answer[n+1][s], answer[n][s] );
            int ns = s + coins[n];
            answer[n+1][ns] = Math.min( answer[n+1][ns], answer[n][s]+1 );
        }
    }
    return answer;
}

// walks the DP table backwards and outputs one optimal subset summing to goal
int[] reconstruct(int[] coins, int[][] dp, int goal) {
    int N = coins.length;
    int S = goal;
    int X = dp[N][S];
    int[] answer = new int[X];
    for (int n=N-1; n>=0; --n) {
        int ns = S - coins[n];
        if (ns < 0) continue;
        if (dp[n][ns] == dp[n+1][S]-1) {
            S = ns;
            answer[--X] = coins[n];
        }
    }
    return answer;
}

public int[] flip(int[] faceUp, int[] faceDown, int goal) {
    int[][] upDown = dp(faceUp);
    int[][] downUp = dp(faceDown);
    int U = faceUp.length, D = faceDown.length;
    int SU = upDown[U].length-1, SD = downUp[D].length-1;
    int best = 987654321, bu = -1, bd = -1;
    for (int ud=0; ud<=SU; ++ud) {
        int du = goal - SU + ud;
        if (du < 0 || du > SD) continue;
        if (upDown[U][ud] + downUp[D][du] < best) {
            best = upDown[U][ud] + downUp[D][du];
            bu = ud;
            bd = du;
        }
    }
    if (best == 987654321) return new int[] {-123456789}; // the goal is unreachable
    int[] udsol = reconstruct(faceUp, upDown, bu);
    int[] dusol = reconstruct(faceDown, downUp, bd);
    int[] answer = new int[ udsol.length + dusol.length ];
    for (int i=0; i<udsol.length; ++i) answer[i] = -udsol[i]; // coins flipped face-down are reported as negative values
    for (int i=0; i<dusol.length; ++i) answer[udsol.length + i] = dusol[i];
    return answer;
}

Approach 2: Do a slightly more complicated thing once.

We can do essentially the same DP on all coins at the same time. This time, the extra input we have for each coin is whether it starts used or unused. The DP can then be calculated as follows:

  • suppose we already know the optimal way dp[K][S] of reaching the sum S using the first K coins
  • now we take the next coin, let its value be V
  • if this coin starts used, we can leave it used for free, reaching the new sum S+V with K+1 coins and dp[K][S] flips; or we can flip it to unused, reaching the sum S with K+1 coins and dp[K][S]+1 flips
  • similarly, if this coin starts unused, remaining at sum S is free and going to sum S+V costs us one flip

Once we have the DP table, we can reconstruct one optimal solution in the same way as in the previous solution: we start with all coins and the desired sum G and then we go back through the coins and for each of them we check whether it’s flipped in the optimal solution and then we adjust the sum according to the coin’s final state.
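
A sketch of building this combined table is shown below. The names value[] and startsUp[] are illustrative stand-ins for the problem’s actual inputs; 987654321 again serves as infinity.

int[][] dp(int[] value, boolean[] startsUp) {
    int N = value.length, S = 0;
    for (int v : value) S += v;
    int[][] best = new int[N+1][S+1];
    for (int s=1; s<=S; ++s) best[0][s] = 987654321;
    for (int n=0; n<N; ++n) {
        for (int s=0; s<=S; ++s) best[n+1][s] = 987654321;
        for (int s=0; s<=S; ++s) {
            if (best[n][s] == 987654321) continue;
            // leaving coin n in its starting state is free, flipping it costs 1
            int costUp   = startsUp[n] ? 0 : 1;  // coin n ends face-up
            int costDown = startsUp[n] ? 1 : 0;  // coin n ends face-down
            best[n+1][s] = Math.min( best[n+1][s], best[n][s] + costDown );
            int ns = s + value[n];
            best[n+1][ns] = Math.min( best[n+1][ns], best[n][s] + costUp );
        }
    }
    return best;
}

The answer is then best[N][goal], and the reconstruction walks backwards exactly as before, except that a step now costs 0 or 1 flips depending on the coin’s starting state.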

DistinctFaces

The key piece of knowledge needed to solve this problem is linearity of expectation. Whenever we have a sum of random variables, the expected value of the sum always equals the sum of the expected values of the individual variables. Notably, this statement holds always, even if those variables are not mutually independent.

In our case, for each value i between 1 and M = max(size), inclusive, we can have an “indicator” random variable X[i] that has the value 0 if none of the dice rolled the value i, or the value 1 if at least one die did.

Clearly, after the dice roll the total number of distinct visible values is simply the sum of all X[i].

But then we know that the expected number of distinct values on our dice is the sum of expected values of all X[i]!

And the expected values of the X[i] are really simple: as X[i] is an indicator variable, its expected value is exactly the probability that we roll at least one i.

How can we calculate this probability? By calculating the probability of the complementary event: that we never roll i.

This is simple: if a die has size[j] < i faces, it can never roll i, so for that die the probability of not rolling i is 1. And if a die has size[j] >= i faces, the probability of not rolling i is p = (size[j] – 1) / size[j]. If we have count[j] such dice, the probabilities multiply: the probability that none of them will roll i is p^count[j].
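
For example, as a quick sanity check: with count[j] = 2 dice of size[j] = 6, the probability that neither die rolls a given value i between 1 and 6 is (5/6)^2 = 25/36, so the probability that at least one of them rolls i is 11/36.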

As we have N distinct dice sizes, we can divide all values between 1 and M into N buckets based on how many dice can roll them. Within each bucket, each value i clearly has the same probability of being rolled at least once, so we can compute that probability once and then multiply it by the number of values that landed in the bucket.

long modexp(long a, long b) {
    if (b == 0) return 1;
    long t = modexp(a, b/2);
    t = (t*t) % 1_000_000_007;
    if (b%2 == 1) t = (a*t) % 1_000_000_007;
    return t;
}

long inverse(long x) { return modexp(x, 1_000_000_007 - 2); }

public int expectation(int[] count, int[] size) {
    // sort the sizes
    int N = count.length;
    for (int i=0; i<N; ++i) for (int j=i+1; j<N; ++j) if (size[j] < size[i]) {
        int t;
        t = count[i]; count[i] = count[j]; count[j] = t;
        t = size[i]; size[i] = size[j]; size[j] = t;
    }

    // for each range of values, take one value from that range
    // look at all dice that can roll it
    // calculate the probability that it will be rolled at least once
    long total = 0;
    int prev = 0;
    for (int i=0; i<N; ++i) {
        int values = size[i] - prev;
        prev = size[i];

        long numerator = 1, denominator = 1;
        for (int j=i; j<N; ++j) {
            long badn = modexp( size[j]-1, count[j] );
            long curd = modexp( size[j], count[j] );
            numerator   *= badn; numerator   %= 1_000_000_007;
            denominator *= curd; denominator %= 1_000_000_007;
        }
        numerator  = denominator - numerator; numerator %= 1_000_000_007;
        numerator += 1_000_000_007;           numerator %= 1_000_000_007;
        numerator *= values;                  numerator %= 1_000_000_007;
        numerator *= inverse(denominator);    numerator %= 1_000_000_007;
        total     += numerator;
    }
    total %= 1_000_000_007;
    return (int)total;
}

Forest

We will start by sorting all the edges from the cheapest to the most expensive, breaking ties arbitrarily. Once we know which ones got erased and which ones were kept, this is an order in which Kruskal’s algorithm would process them to compute the minimum spanning forest.

Instead of first flipping all the coins and then using Kruskal to process everything, we can do an equivalent thing: we go through the edges in the sorted order, and for each of them we flip the coin; if the edge was kept, we immediately process it.

More precisely, imagine that each time we flip a coin, the universe splits into two copies: one where we erased the edge and one where we kept it and processed it using Kruskal’s algorithm.

While each of these universes is different, some groups of universes turn out to be fundamentally the same: for the future of Kruskal’s algorithm we don’t really care which edges we already processed, the only thing that matters is the current set of connected components.

This leads us to a DP / memoization approach that merges these similar universes together. We start in the state where each vertex is its own connected component and no edges have been processed yet. As we make some coin flips and process some kept edges, we will reach some new states, each of the form “these are the current connected components, and this is the number of shortest edges we already processed”. For each of these states we’ll ask the same question: “What is the expected cost of the edges we’ll still add to the spanning forest in the future?” Clearly, the answer to this question for the initial state is directly the answer we seek.

These questions are easy to answer recursively using the “next step” method. In order to answer a question, we look at the next edge and we consider the next coin flip. There are two options:

  • With probability p the coin tells us to erase the edge, which brings us to the state “same components, one fewer edge left to process”. We recursively find the expected cost for that state and multiply it by p.
  • With probability 1-p the coin tells us to keep this edge. Now we check what Kruskal does with it. If the two endpoints are already in the same component, the edge is ignored and the new state is again “same components, one fewer edge”. If they are in different components, Kruskal takes the edge, so its length L[j] becomes a part of the answer, and the new state is “the same components, but with those two merged into one, one fewer edge”. In either case we solve the new state recursively to get the expected value V for the rest of the process; if Kruskal accepted the edge, we add its length L[j] to V. Finally, we multiply V by the probability 1-p that this branch actually occurs.

The expected value for the original state is the sum of the values we got for the two mutually exclusive branches.

The final important observation is that this solution can be very fast as long as we are clever about how we represent the connected components in a state. For the reference solution we chose a simple way: each connected component is represented by the bitmask of its vertices, so the set of connected components is simply an array of integers. We maintain this array sorted. Once we do that, each way of dividing the N vertices into connected components has a unique representation.
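
Below is a sketch of the whole recursion with this state representation. It assumes, purely for illustration, that the edges are already sorted by length into arrays ea, eb and len, and that every edge is erased independently with the same probability p; the actual problem’s interface may differ.

import java.util.*;

class ForestSketch {
    int[] ea, eb, len;  // endpoints and length of the j-th cheapest edge
    double p;           // probability that any given edge is erased
    Map<String, Double> memo = new HashMap<>();

    // comps = sorted bitmasks of the current components, j = index of the next edge
    double expected(int[] comps, int j) {
        if (j == len.length) return 0;
        String key = Arrays.toString(comps) + "#" + j;
        Double cached = memo.get(key);
        if (cached != null) return cached;

        // branch 1: the coin erases edge j
        double answer = p * expected(comps, j+1);

        // branch 2: the edge is kept and handed to Kruskal
        int ca = -1, cb = -1;
        for (int i=0; i<comps.length; ++i) {
            if (((comps[i] >> ea[j]) & 1) == 1) ca = i;
            if (((comps[i] >> eb[j]) & 1) == 1) cb = i;
        }
        if (ca == cb) {
            // endpoints already connected: Kruskal ignores the edge
            answer += (1-p) * expected(comps, j+1);
        } else {
            // Kruskal accepts the edge: merge the two components, keep the array sorted
            int[] merged = new int[comps.length - 1];
            int idx = 0;
            for (int i=0; i<comps.length; ++i)
                if (i != ca && i != cb) merged[idx++] = comps[i];
            merged[idx] = comps[ca] | comps[cb];
            Arrays.sort(merged);
            answer += (1-p) * (len[j] + expected(merged, j+1));
        }
        memo.put(key, answer);
        return answer;
    }
}

The initial call uses j = 0 and comps containing the N singleton bitmasks 1<<v, one per vertex.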

The number of these representations grows much more slowly than one might expect: the exact number of ways in which we can divide the set {0,1,…,N-1} into disjoint subsets is counted by the Bell numbers: https://en.wikipedia.org/wiki/Bell_number, https://oeis.org/A000110. For N=9 there are only B(9) = 21,147 ways to divide the N vertices of a graph into connected components.

The time complexity of the above solution is governed by the growth of the Bell numbers: as there are O(N^2) edges, there are O( B(N)*N^2 ) states, and each of them can be solved in polynomial time.


misof

