July 11, 2022 Single Round Match 833 Editorials


We just need to do the calculations described in the statement. One part that may be tricky for beginners is the fee charged for each started minute of a call. If we have a call that takes S seconds, the number of minutes we’ll be paying for can be computed by dividing S by 60 and rounding the result up to the nearest integer. This is mathematically written ceil(S/60), read “ceiling”.

The ceiling of a fraction X/Y, where X and Y are positive integers, can be computed in integer arithmetic using a nice formula: (X+Y-1) div Y.

(Try playing with the formula to see why it always works. Consider two cases: what happens when X is a multiple of Y? And what happens when it isn’t?)
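As a quick sanity check, the formula can be wrapped in a tiny Java helper (the class and method names here are mine, not part of the solutions below):

```java
public class CeilDiv {
    // (x + y - 1) / y rounds the quotient up: when x is a multiple of y,
    // the extra y - 1 is lost to integer division; otherwise it is just
    // enough to push the result up by one.
    static int ceilDiv(int x, int y) {
        return (x + y - 1) / y;
    }
}
```

For example, a 61-second call is billed as 2 started minutes, while a 60-second call is billed as exactly 1.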

public int payLeast(int[] calls, int P, int[] perMonth, 
           int[] perCall, int[] perMinute) {
    int answer = 1 << 30;
    for (int p=0; p<P; ++p) {
        int current = perMonth[p];
        for (int call : calls) {
            current += perCall[p];
            int minutes = (call + 59) / 60;
            current += minutes * perMinute[p];
        }
        answer = Math.min( answer, current );
    }
    return answer;
}


The key observation: Suppose your guess is X > 1 and nobody else guessed X-1. Then your guess isn’t optimal and you should rather guess X-1 instead. The reason why is pretty obvious: if the guess X won you the item for prices from some interval [X,Y], guessing X-1 means that for prices from this interval you are still the one who got closest without going over, and additionally you also win if the price is X-1. So, with the new guess you win for any price in [X-1,Y].

We can repeat the above observation until it no longer applies. Thus, the optimal guess is always either 1, or one more than another player’s guess.

With N players this gives us just O(N) candidates for the optimal guess. We can test each of them and pick the best one among them. (Each test can be done separately in O(N) time, or we can sort the previously made guesses and then find our best guess in O(N log N) time total. The code shown below implements this faster option.)

public long guess(long[] previousGuesses, long MAX) {
    int N = previousGuesses.length;
    long[] sentinels = new long[N+2];
    for (int n=0; n<N; ++n) sentinels[n] = previousGuesses[n];
    sentinels[N] = 0;
    sentinels[N+1] = MAX+1;
    Arrays.sort( sentinels );

    long bestAnswer = -1, bestWinCount = 0;
    for (int n=0; n<=N; ++n) {
        long candidate = sentinels[n] + 1;
        long winCount = sentinels[n+1] - candidate;
        if (winCount > bestWinCount) {
            bestAnswer = candidate;
            bestWinCount = winCount;
        }
    }
    return bestAnswer;
}
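On small inputs, the key observation can be cross-checked against an exhaustive search. The following sketch (class and method names are mine) simply tries every guess from 1 to MAX and counts the winning prices directly, assuming that tying a previous guess loses:

```java
public class GuessCheck {
    // Count prices p in [1, MAX] that the guess g wins: g must not exceed p,
    // and no previous guess may lie in [g, p] (a previous guess equal to g
    // also beats us, which is why only "one more than another guess" or 1
    // are ever worth considering).
    static long bruteWins(long g, long[] prev, long MAX) {
        long wins = 0;
        for (long p = 1; p <= MAX; ++p) {
            if (g > p) continue;
            boolean beaten = false;
            for (long q : prev) if (q >= g && q <= p) beaten = true;
            if (!beaten) ++wins;
        }
        return wins;
    }

    // Try every possible guess; only usable for small MAX, but handy
    // to validate the O(N log N) solution above.
    static long bruteBest(long[] prev, long MAX) {
        long best = -1, bestWins = 0;
        for (long g = 1; g <= MAX; ++g) {
            long w = bruteWins(g, prev, MAX);
            if (w > bestWins) { bestWins = w; best = g; }
        }
        return best;
    }
}
```

With previous guesses {3, 7} and MAX = 10, guessing 4 wins the prices 4, 5, 6 – and indeed 4 is one more than the guess 3, as the observation predicts.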


This problem is an exercise in dynamic programming. For each point between (0, 0) and (X, Y) we want to compute the number of valid ways in which it can be reached, but this time we need to keep a more precise tally and know how many of these ways fall into each of several possible categories.

More precisely, while walking from (0, 0) to (X, Y), our state at any point during the journey consists of the following information:

  • obviously, our coordinates (current x, current y).
  • the direction (north or east) of our last movement
  • the number of consecutive steps we took in that direction (1, 2, 3, or “more than 3”)

The knowledge of this extra information allows us to ensure that we never make exactly 3 consecutive steps in the same direction: whenever we are in a state (x, y, dir, 3), we have to continue in that direction for at least one more step – we may not turn at that point.

We’ll then use dynamic programming to compute, for each of these states, the number of ways in which it can be reached. The final answer is then the sum over all states where we reached (X, Y) and the number of last consecutive steps isn’t 3.

In the implementation below we do a forward-DP in which we always take a state whose values are already computed correctly and we apply all valid ways to make the next step.

public int count(int X, int Y) {
    long MOD = 1_000_000_007;

    long[][][][] dp = new long[X+2][Y+2][2][5];
    dp[0][0][0][0] = 1;

    for (int x=0; x<=X; ++x) for (int y=0; y<=Y; ++y) {
        // For all states where we are at (x, y) we already know
        // the correct values. Now we make the next step from there.

        // Handle the start from (0, 0) as a special case:
        // Reach both (0, 1) and (1, 0) with 1 step.
        if (x == 0 && y == 0) {
            dp[0][1][1][1] = 1;
            dp[1][0][0][1] = 1;
        }

        // Make another step in the current direction.
        for (int s=1; s<=4; ++s) {
            int ns = Math.min(4, s+1);
            dp[x+1][y][0][ns] = (dp[x+1][y][0][ns] + dp[x][y][0][s]) % MOD;
            dp[x][y+1][1][ns] = (dp[x][y+1][1][ns] + dp[x][y][1][s]) % MOD;
        }

        // Make the first step in the opposite direction.
        for (int s=1; s<=4; ++s) if (s != 3) {
            dp[x+1][y][0][1] = (dp[x+1][y][0][1] + dp[x][y][1][s]) % MOD;
            dp[x][y+1][1][1] = (dp[x][y+1][1][1] + dp[x][y][0][s]) % MOD;
        }
    }

    long answer = 0;
    for (int d=0; d<2; ++d) for (int s=0; s<5; ++s)
        if (s != 3) answer += dp[X][Y][d][s];
    answer %= MOD;
    return (int)answer;
}
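On tiny grids, the DP can be cross-checked by enumerating all monotone paths explicitly and rejecting any path that contains a maximal run of exactly 3 equal steps. A brute-force sketch (class and method names are mine):

```java
public class PathCheck {
    // A path is valid if no maximal run of equal characters has length 3.
    static boolean valid(String path) {
        int i = 0;
        while (i < path.length()) {
            int j = i;
            while (j < path.length() && path.charAt(j) == path.charAt(i)) ++j;
            if (j - i == 3) return false;   // a run of exactly 3 is forbidden
            i = j;
        }
        return true;
    }

    static int brute(int X, int Y) {
        return brute(X, Y, "");
    }

    // Enumerate every sequence of X east steps and Y north steps.
    static int brute(int X, int Y, String path) {
        if (X == 0 && Y == 0) return valid(path) ? 1 : 0;
        int total = 0;
        if (X > 0) total += brute(X - 1, Y, path + "E");
        if (Y > 0) total += brute(X, Y - 1, path + "N");
        return total;
    }
}
```

For instance, the only path to (3, 0) is EEE, which is a forbidden run of exactly 3, so the count is 0 – while EEEE, a run of 4, is allowed.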


First, we can generate all the values L and R. For each pair L[i], R[i] we check whether the interval contains some stones and if it does not, we return -1. (A quick way to check each interval in constant time is to precompute, for each position, the nearest stone to the left and the nearest stone to the right.)

One way to solve this problem would be via dynamic programming: “for each catching attempt, for each of the good stones, what is the maximum sum of distances such that we end on that stone?”

The only problem with this solution: the above DP has O(NC) states and each transition takes O(N) time to evaluate, which is way too slow for N<=2000 stones and C<=10^6 intervals.

We need to make one more observation: For each interval it is sufficient to consider only the leftmost and the rightmost good stone. There is always an optimal solution in which Frogger only uses these stones.

Once we prove the above claim, we have effectively decreased N down to 2, which takes our time complexity down from O(N^2 C) to O(N+C). We include the proof below.

public int jump(String stones, int C, int minW, int seed) {
    int N = stones.length();
    int maxW = N-1;

    int[] L = new int[C];
    int[] R = new int[C];

    // generate L and R    
    long state = seed;
    for (int c=0; c<C; ++c) {
        state = (state * 1103515245 + 12345) % (1L << 31);
        int w = minW + (int)(state % (maxW-minW+1));
        state = (state * 1103515245 + 12345) % (1L << 31);
        L[c] = (int)(state % (N-w));
        R[c] = L[c] + w;
    }

    // precompute the next stone left and right
    int lastSeen = N;
    int[] nextStone = new int[N];
    for (int n=N-1; n>=0; --n) if (stones.charAt(n) == 'O') {
        nextStone[n] = n; lastSeen = n;
    } else {
        nextStone[n] = lastSeen;
    }
    lastSeen = -1;
    int[] prevStone = new int[N];
    for (int n=0; n<N; ++n) if (stones.charAt(n) == 'O') {
        prevStone[n] = n; lastSeen = n;
    } else {
        prevStone[n] = lastSeen;
    }
    // for each catch, find the positions of the left and right good stone
    int[] leftmost = new int[C];
    int[] rightmost = new int[C];
    for (int c=0; c<C; ++c) {
        leftmost[c] = nextStone[ L[c] ];
        if (leftmost[c] > R[c]) return -1;
        rightmost[c] = prevStone[ R[c] ];
    }
    int[][] dp = new int[C][2];
    dp[0][0] = dp[0][1] = 0;
    for (int c=1; c<C; ++c) {
        dp[c][0] = Math.max(
            dp[c-1][0] + Math.abs( leftmost[c-1] - leftmost[c] ),
            dp[c-1][1] + Math.abs( rightmost[c-1] - leftmost[c] ) );
        dp[c][1] = Math.max(
            dp[c-1][0] + Math.abs( leftmost[c-1] - rightmost[c] ),
            dp[c-1][1] + Math.abs( rightmost[c-1] - rightmost[c] ) );
    }

    return Math.max(dp[C-1][0], dp[C-1][1]);
}

How to prove the claim that it’s enough to consider the two extremal stones for each interval? 

We can do it iteratively: take any solution, take any step where Frogger uses a middle stone instead of one of the two extremes, and show that he can instead use one of the two extremes without making the solution worse. Starting from any optimal solution and repeating that argument then eventually produces an optimal solution that only uses extremal stones.

A solution where Frogger ends the whole process on some middle stone cannot be optimal – obviously, we can make the last jump longer by continuing in its direction until we get to the last good stone. Symmetrically, it’s never optimal for Frogger to start on a middle stone.

Finally, suppose we have, somewhere in our solution, a sequence of jumps (stone A) -> (stone B) -> (stone C) such that B is some middle stone. Instead, let’s extend the jump from A to B farther in its original direction, all the way to the last available stone B’.

Without loss of generality, suppose the jump A->B was left to right. If C=B’ or C is to the right of B’, the total distance of the jumps A->B->C did not change (we are just going right from A to C). In all other cases, A->B’->C is obviously longer than A->B->C (by twice the distance between B’ and max(B,C)).

This concludes the proof.


The obvious necessary and sufficient condition for a solution to exist is that each letter must have an even number of occurrences.

Next, we can observe:

  • More than one edit is actually never needed. If we know the final position for each letter we want to move, we can simply move each letter directly to its final position, all during the same edit. The direct route for each letter is clearly cheaper or equal to the sum of costs of any sequence of movements of that letter with the same start and end.
  • In an optimal solution, occurrences of the same letter will never change their relative positions. (Any pair of moves where two equal letters cross paths can be replaced by a shorter-or-equal pair of moves in which these two letters swap their destinations and thus maintain their relative order.)

Using the second observation we can now divide the letters into N pairs that must correspond to each other in the final WW arrangement. For each pair, one of them will be at some index x<N while the other will be at x+N.

Now, for each pair of letters and for each x we can compute the total cost of moving the pair of letters to the coordinates x and x+N. All that remains is picking a distinct x for each pair in such a way that the sum of costs is minimized. This is a pure assignment problem, also known as “minimum cost perfect matching”. We can apply one of the standard polynomial-time algorithms such as the Hungarian algorithm or any implementation of MinCost-MaxFlow.
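The MincostMaxflow class used in the solution below is not shown in the editorial. As an illustration of the assignment problem itself, here is a minimal O(N!) brute-force solver (all names are mine) that can cross-check any polynomial-time method on small cost matrices:

```java
public class AssignBrute {
    static int best;

    // Minimum total cost of assigning each row (pair of letters) to a
    // distinct column (destination index x). Exponential, but exact.
    static int solve(int[][] costs) {
        best = Integer.MAX_VALUE;
        go(costs, new boolean[costs.length], 0, 0);
        return best;
    }

    // Try assigning row "row" to every still-free column.
    static void go(int[][] costs, boolean[] used, int row, int sum) {
        if (row == costs.length) { best = Math.min(best, sum); return; }
        for (int col = 0; col < costs.length; ++col) if (!used[col]) {
            used[col] = true;
            go(costs, used, row + 1, sum + costs[row][col]);
            used[col] = false;
        }
    }
}
```

For the 3x3 cost matrix {{4,1,3},{2,0,5},{3,2,2}}, the optimal assignment takes costs 1, 2, and 2, for a total of 5.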

public int rearrange(String S) {

    // verify solution existence
    if (S.length() % 2 == 1) return -1;
    int N = S.length() / 2;
    int[] counts = new int[256];
    for (int n=0; n<2*N; ++n) ++counts[ S.charAt(n) ];
    for (int i=0; i<256; ++i) if (counts[i] % 2 == 1) return -1;
    // divide the letters into N pairs
    int[][] pairs = new int[N][2];
    int L = 0;
    for (int i=0; i<256; ++i) if (counts[i] > 0) {
        int[] where = new int[ counts[i] ];
        for (int j=0, n=0; n<2*N; ++n) if (S.charAt(n) == i) where[j++] = n;
        int C = where.length / 2;
        for (int j=0; j<C; ++j) {
            pairs[L][0] = where[j];
            pairs[L][1] = where[j+C];
            ++L;
        }
    }

    // for each pair and each destination, compute the travel costs
    int[][] costs = new int[N][N];
    for (int a=0; a<N; ++a) for (int b=0; b<N; ++b) {
        int distance = Math.abs( pairs[a][0] - b )
                     + Math.abs( pairs[a][1] - (b+N) );
        costs[a][b] = distance * ((int)S.charAt( pairs[a][0] ));
    }

    // build the MCMF graph for the assignment problem and solve it
    MincostMaxflow solver = new MincostMaxflow(costs);
    long[] answer = solver.getFlow( solver.N-2, solver.N-1 );
    return (int)answer[1];
}

