October 26, 2021
Single Round Match 811 Editorials

TripleEliminationTournament

One possible approach is actually simulating one possible tournament, keeping track of the number of losses of each player and assigning match results in some appropriate way. (E.g., always make the player with the smaller number of losses lose, with the exception of player 0 once they have reached exactly G losses.)

One easy way to simulate a tournament round is to sort all players according to their number of losses and then to divide the sorted order into pairs simply by going left to right.
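For concreteness, here is a sketch of one such simulation. The pairing and the rule for picking losers below are just one of many valid choices (a leftover player gets a bye, player 0 loses until it has exactly G losses and never afterwards); this is meant as an illustration, not as a reference solution.

public int countGamesBySimulation(int N, int G) {
    int[] losses = new int[N];                  // losses[i] = current number of losses of player i
    int games = 0;
    while (true) {
        // collect the players that are still in the tournament (fewer than 3 losses),
        // already sorted by their number of losses (0, then 1, then 2)
        int[] active = new int[N];
        int count = 0;
        for (int l = 0; l < 3; ++l)
            for (int i = 0; i < N; ++i)
                if (losses[i] == l) active[count++] = i;
        if (count <= 1) break;                  // only the winner remains
        // pair the sorted players left to right; a leftover player gets a bye
        for (int k = 0; k + 1 < count; k += 2) {
            int a = active[k], b = active[k + 1];
            ++games;
            int loser;
            if (a == 0 || b == 0) {
                // player 0 keeps losing until it has exactly G losses, and never afterwards
                loser = (losses[0] < G) ? 0 : (a == 0 ? b : a);
            } else {
                loser = (losses[a] <= losses[b]) ? a : b;
            }
            ++losses[loser];
        }
    }
    return games;                               // always equals 3*(N-1) + G
}

Running this for various N and G and comparing the result to 3*(N-1) + G is also a quick way to convince yourself of the counting argument below.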

Of course, there is also a much simpler solution. Each game creates exactly one loss, so the total number of games played in the tournament equals the total number of losses. Each of the N-1 eliminated players had exactly 3 losses, and the tournament winner had G losses. Thus, the total number of losses is 3*(N-1) + G and that must also be the total number of games played.

public int countGames(int N, int G) {
    return 3*(N-1) + G;
}

As an extra comment, note that regardless of match results our tournament will always terminate: as long as there are at least two players, the next round will involve at least one match, and thus the total number of losses will increase each round.

Extra challenge: As a function of N, what is (at least approximately) the number of rounds the tournament will take?

CircularParking

A straightforward simulation will clearly be too slow: e.g., if all cars appear at the same place, we will have N(N-1)/2 collisions, so we cannot afford to simulate each collision separately.

There are many easy ways to simulate each parking car in O(log n) time. The operation we need is always the same: given where the car starts, what is the nearest free parking spot? Once we know both where we appeared and where we’ll park, we can then determine the number of collisions for the current car in constant time.

Some ways to find the next free parking spot quickly include using a segment tree or a Fenwick tree to keep track of either empty or full parking spots.
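For illustration, here is what the segment tree variant might look like: every leaf stores whether the corresponding spot is still free, internal nodes store the number of free spots in their interval, and one descent finds the leftmost free spot with index >= p. The class and method names here are purely illustrative.

class FreeSpots {
    int n;
    int[] cnt;   // cnt[v] = number of free parking spots in the interval of node v

    FreeSpots(int n) { this.n = n; cnt = new int[4 * n]; build(1, 0, n - 1); }

    void build(int v, int lo, int hi) {
        if (lo == hi) { cnt[v] = 1; return; }              // initially every spot is free
        int mid = (lo + hi) / 2;
        build(2 * v, lo, mid);
        build(2 * v + 1, mid + 1, hi);
        cnt[v] = cnt[2 * v] + cnt[2 * v + 1];
    }

    void occupy(int pos) { occupy(1, 0, n - 1, pos); }

    void occupy(int v, int lo, int hi, int pos) {
        if (lo == hi) { cnt[v] = 0; return; }
        int mid = (lo + hi) / 2;
        if (pos <= mid) occupy(2 * v, lo, mid, pos); else occupy(2 * v + 1, mid + 1, hi, pos);
        cnt[v] = cnt[2 * v] + cnt[2 * v + 1];
    }

    // the leftmost free spot with index >= p, or -1 if there is none
    int firstFreeAtOrAfter(int p) { return query(1, 0, n - 1, p); }

    int query(int v, int lo, int hi, int p) {
        if (hi < p || cnt[v] == 0) return -1;
        if (lo == hi) return lo;
        int mid = (lo + hi) / 2;
        int left = query(2 * v, lo, mid, p);
        return (left != -1) ? left : query(2 * v + 1, mid + 1, hi, p);
    }
}

Each arriving car then makes at most two such queries (one starting from p and, if that fails, one starting from 0) and one occupy call, all in O(log N) time.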

In terms of implementation simplicity, probably the best choice is to use a built-in sorted set, such as a TreeSet in Java or std::set in C++. We will use it to store the empty parking spots. Whenever we have a car appear at some parking spot p, we do the following:

  • Query the set for the smallest element x such that x >= p.
  • If such an element exists, our car will create x-p collisions and then park at spot x.
  • If there is no such element, our car will create N-p collisions while driving past spots p through N-1, wrap around, and continue looking for a parking spot from 0, creating one more collision for each occupied spot it passes. To find its parking spot, we just take the smallest element in the set. (Or, equivalently, we find the smallest element x such that x >= 0.)

public long park(int N, int A, int B, int C) {
    TreeSet<Integer> freeParkingSpots = new TreeSet<Integer>();
    for (int n=0; n<N; ++n) freeParkingSpots.add(n);

    long answer = 0;
    for (int i=0; i<N; ++i) {
        long pp = A; pp *= i; pp += B; pp %= N; pp *= i; pp += C; pp %= N;
        int p = (int)pp;

        Integer spot = freeParkingSpots.ceiling(p);
        if (spot == null) {
            answer += N - p;
            spot = freeParkingSpots.ceiling(0);
            answer += spot;
        } else {
            answer += (spot - p);
        }
        freeParkingSpots.remove(spot);
    }
    return answer;
}

ReconstructPermutation

All we need to do is to construct the answer left to right. In each step, we’ll ask the question: what is the smallest value that can appear here? The answer is always the smaller of two options: either it is the next value of the given partial permutation, or the smallest of the values we haven’t used yet that don’t appear in the given partial permutation.

This greedy construction works for two reasons:

  • We never have a smaller option than the one we are using. All smaller elements have either already been used, or appear later in the given partial permutation.
  • We can never get stuck. For any prefix constructed by this algorithm there is always at least one way to continue: first append the rest of the given partial permutation, then append all remaining elements in any order, and you’ll have a valid permutation.

In the implementation shown below we iterate over all elements of the partial permutation and for each of them we first place all available values smaller than that element and then the element itself. In the end we then fill in the rest using the remaining unused values, in order.

public int[] reconstruct(int N, int[] partial) {
    boolean[] used = new boolean[N];
    for (int x : partial) used[x] = true;
    int[] answer = new int[N];
    int where = 0;
    for (int x : partial) {
        for (int i=0; i<x; ++i) if (!used[i]) {
            answer[where++] = i;
            used[i] = true;
        }
        answer[where++] = x;
    }
    for (int i=0; i<N; ++i) if (!used[i]) {
        answer[where++] = i;
        used[i] = true;
    }
    return answer;
}

SmoothDivisors

This was intended to be the problem in the set that requires the most thinking, but then has a fairly simple and straightforward implementation. The point values were set so that good solutions to this and the next problem should be approximately comparable in value.

Theorem 1. Any number with exactly one distinct prime divisor p is smooth. Proof: its set of proper divisors is either empty, or all of its proper divisors are divisible by p, so any two of them share a common factor and any cyclic arrangement works.

Helpful observation: for any N, each proper divisor of N is divisible by some (at least one) of the primes that divide N.

Theorem 2. Any number with three or more distinct prime divisors is smooth. Proof for exactly three: suppose N has exactly three distinct prime divisors p, q, r. The sequence p-pq-q-qr-r-pr is a cyclic sequence with the required property and each of its elements is a proper divisor of N (which is at least pqr). We can now take all other divisors of N that are divisible by p and insert them between p and pq, then all remaining divisors divisible by q can go between q and qr, and finally we insert everything that remained between r and pr. (For four or more distinct primes p1, ..., pk the same idea works with the longer cycle p1-p1p2-p2-p2p3-...-pk-pkp1, into which we again insert each remaining proper divisor next to one of the primes that divide it.)

This leaves just numbers with exactly two prime divisors – i.e., numbers of the form p^a * q^b with p, q prime and a >= b >= 1.

If a >= 3 or a = b = 2, the number is smooth. Proof: we use the cyclic sequence of proper divisors p – pq – q – p^2*q (the assumption on a and b is exactly what guarantees that p^2*q is a proper divisor of N) and insert all other divisors into it as in the previous proof.

The remaining cases are the numbers that aren’t smooth:

  • Numbers of the form p*q have proper divisors {p, q} and p and q are coprime.
  • Numbers of the form p^2 * q have proper divisors {p, p^2, q, pq} and q only shares a common factor with pq, so it cannot have two suitable neighbors in the cyclic sequence.
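If you want to double-check this case analysis, you can brute-force it for small N. The sketch below assumes the following reading of the statement (this reading is our assumption): N is smooth if its divisors other than 1 and N can be arranged into a cyclic sequence in which any two adjacent elements share a common factor, with at most one such divisor making the condition hold trivially. It simply looks for a Hamiltonian cycle in the corresponding graph, so it is only usable for numbers with few divisors.

long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

boolean isSmoothBruteForce(int N) {
    // collect all divisors of N other than 1 and N
    int k = 0;
    for (int d = 2; d < N; ++d) if (N % d == 0) ++k;
    int[] divs = new int[k];
    k = 0;
    for (int d = 2; d < N; ++d) if (N % d == 0) divs[k++] = d;
    if (k <= 1) return true;                               // no divisors, or a single one: trivially smooth
    boolean[][] adj = new boolean[k][k];
    for (int i = 0; i < k; ++i)
        for (int j = 0; j < k; ++j)
            adj[i][j] = (i != j) && gcd(divs[i], divs[j]) > 1;
    // Hamiltonian cycle check via a bitmask DP; we may fix divisor 0 as the starting point
    boolean[][] dp = new boolean[1 << k][k];
    dp[1][0] = true;
    for (int mask = 1; mask < (1 << k); ++mask)
        for (int last = 0; last < k; ++last) {
            if (!dp[mask][last]) continue;
            for (int next = 0; next < k; ++next)
                if ((mask & (1 << next)) == 0 && adj[last][next])
                    dp[mask | (1 << next)][next] = true;
        }
    for (int last = 0; last < k; ++last)
        if (dp[(1 << k) - 1][last] && adj[last][0]) return true;
    return false;
}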

We can now proceed in multiple ways. For example:

  • Do a prime sieve (e.g., the Sieve of Eratosthenes) to find all primes up to the given upper bound B.
  • Mark all numbers up to B as smooth.
  • Generate all numbers of the form p*q and p*p*q and mark them as not smooth.
  • Use a simple loop to count all smooth numbers in the given range.

Note that the step where we generate the non-smooth numbers takes O(B) time if we break out of the inner loop whenever the current pair of primes has a product that's too large: all the values we generate are distinct and at most B, and for each prime we waste at most one iteration of the inner loop before breaking.

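// MAXVAL denotes the largest value B can take according to the constraints;
// it is assumed to be defined as a constant elsewhere in the class.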
boolean[] isPrime;
int[] primes;

void sieve() {
    isPrime = new boolean[MAXVAL+1];
    for (int n=2; n<=MAXVAL; ++n) isPrime[n] = true;
    for (int i=2; ; ++i) {
        if (i*i > MAXVAL) break;
        if (!isPrime[i]) continue;
        for (int j=i*i; j<=MAXVAL; j+=i) isPrime[j] = false;
    }
    int P = 0;
    for (int n=2; n<=MAXVAL; ++n) if (isPrime[n]) ++P;
    primes = new int[P];
    P = 0;
    for (int n=2; n<=MAXVAL; ++n) if (isPrime[n]) primes[P++] = n;
}

public int count(int A, int B) {
    sieve();
    boolean[] isSmooth = new boolean[B+1];
    for (int n=A; n<=B; ++n) isSmooth[n] = true;
    for (int i=0; i<primes.length; ++i) {
        int p = primes[i];
        for (int j=i+1; j<primes.length; ++j) {
            int q = primes[j];
            if (B/p/q == 0) break;
            isSmooth[p*q] = false;
            if (B/p/p/q >= 1) isSmooth[p*p*q] = false;
            if (B/p/q/q >= 1) isSmooth[p*q*q] = false;
        }
    }
    int answer = 0;
    for (boolean b : isSmooth) if (b) ++answer;
    return answer;
}

Another possible solution looks as follows:

  • Do an augmented version of the Sieve of Eratosthenes where you precompute, for each N, the smallest prime divisor of N.
  • For each number in the range [A,B] use the precomputed values to construct its prime factorization (possible optimization: breaking early as soon as we get 4+ primes, regardless of their values) and then check it to see whether it has one of the two forbidden forms.

int[] smallestDivisor;

void sieve() {
    smallestDivisor = new int[MAXVAL+1];
    boolean skip = false;
    for (int i=2; i<=MAXVAL; ++i) {
        if (smallestDivisor[i] != 0) continue;
        smallestDivisor[i] = i;
        if (i*i > MAXVAL) skip = true;
        if (skip) continue;
        for (int j=i*i; j<=MAXVAL; j+=i) {
            if (smallestDivisor[j] == 0) smallestDivisor[j] = i;
        }
    }
}

public int count(int A, int B) {
    sieve();
    int[] primes = new int[40];
    int answer = 0;
    for (int n=A; n<=B; ++n) {
        int P = 0;
        int tmp = n;
        while (tmp > 1) {
            primes[P++] = smallestDivisor[tmp];
            tmp /= smallestDivisor[tmp];
            if (P >= 4) break;
        }
        if (P == 0 || P == 1 || P >= 4) { ++answer; continue; }
        if (P == 2) {
            if (primes[0] == primes[1]) ++answer;
            continue;
        }
        if (P == 3) {
            if (primes[0] == primes[1] && primes[1] == primes[2]) ++answer;
            if (primes[0] != primes[1] && primes[1] != primes[2]) ++answer;
            continue;
        }
    }
    return answer;
}

MonotoneStrings

For the constraints we used, this problem isn't conceptually too hard, but it requires a lot of attention to detail and the implementation will probably take some time.

The definition tells us that each letter that occurs in a monotone string must do so in a single contiguous segment. We can easily check whether this is possible. E.g., we can precompute the first and last occurrence of each letter in the given pattern. If these intervals for any two letters overlap, the answer is zero: we cannot produce any monotone string from the given pattern.

Another thing that will be useful later is to note the number U of unused letters: those that appear in the alphabet but not in the provided pattern.

The next step of our solution will be to fill in the wildcards that are forced: for each letter, everything between its leftmost and rightmost occurrence in the pattern must be its copies, so we can replace each of those wildcards by the appropriate number of copies of the letter. (Or we can just remove them. Or ignore them. Or replace each of them by a single letter – this might be the most convenient way of getting rid of those wildcards.)
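A sketch of these preprocessing steps might look as follows. The wildcard character ('?') and the assumption that the alphabet consists of plain ASCII characters are placeholders here – use whatever the actual statement specifies. The method returns null if the intervals of two letters overlap (i.e., the answer is 0), and otherwise it returns the pattern with all forced wildcards filled in.

char[] fillForcedWildcards(String pattern) {
    char[] s = pattern.toCharArray();
    // first and last occurrence of each letter that appears in the pattern
    int[] first = new int[128], last = new int[128];
    for (int c = 0; c < 128; ++c) first[c] = -1;
    for (int i = 0; i < s.length; ++i) {
        if (s[i] == '?') continue;
        if (first[s[i]] == -1) first[s[i]] = i;
        last[s[i]] = i;
    }
    // everything between the first and the last occurrence of a letter must be that letter
    for (int c = 0; c < 128; ++c) {
        if (first[c] == -1) continue;
        for (int i = first[c]; i <= last[c]; ++i) {
            if (s[i] != '?' && s[i] != c) return null;   // two letters' intervals overlap
            s[i] = (char) c;
        }
    }
    return s;
}

int countUnused(char[] s, int alphabetSize) {
    boolean[] seen = new boolean[128];
    int used = 0;
    for (char c : s) if (c != '?' && !seen[c]) { seen[c] = true; ++used; }
    return alphabetSize - used;
}

With the 62-letter alphabet, U is then just countUnused(filled, 62), and the remaining blocks of wildcards, their lengths and the letters adjacent to them can be read off the returned array directly.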

This leaves us with at most |V|+1 = 63 blocks of wildcards, as now no two blocks of wildcards will have the same character before (or after) them. If we have zero blocks of wildcards left, we know that the answer is 1.

We can now use dynamic programming to count all ways of producing a monotone string from the given pattern. Let dp[r][u] be the number of valid ways in which we can fill the first r wildcard segments if we have u unused letters we may, but do not have to use.

The value dp[r][u] can be computed as follows: Let L be the length of the r-th segment of wildcards. In general, this segment is preceded by some letter and followed by some different letter. (The possible exceptions are a segment at the very beginning of the pattern, which has no letter before it, and a segment at the very end, which has no letter after it.) Let A be the number of letters we have available in this way. (I.e., A is usually 2, sometimes 1 and very rarely 0.)

We need to choose how many new letters we'll use within this segment. The minimum is 0, with one notable exception: if the entire input is wildcards (A = 0), we need to use at least one new letter for that segment. The maximum is min(L,u): we cannot use more new letters than we have available, and we cannot use more of them than the length L of the string that matches this segment. Suppose we fix the number n of new letters. How do we count the ways?

  • We have binomial(u,n) ways to choose the set of new letters and factorial(n) ways to choose the order in which they appear.
  • We have binomial(L+A-1,n+A-1) ways to choose where the boundaries between blocks of equal letters are. (Explained below.)
  • We have dp[r-1][u-n] ways to fill in all previous segments of wildcards using the remaining unused letters.

Thus, the number of ways for a fixed n is the product of the four values mentioned above, and the total value of dp[r][u] is the sum of those over all n. The time complexity of this solution is O(|pattern| + S^2 * |V|).

Counting the ways to choose the boundaries between blocks: In and immediately around the segment of the string that corresponds to the current block of wildcards (including its endpoints) there will be n+A blocks of letters, and thus our segment will contain n+A-1 boundaries between them. These boundaries have to be at distinct locations (each new letter has at least one occurrence), and we can only have a boundary at the end of our segment if that end has another letter (i.e., it’s not the beginning or end of the string). Thus, we have exactly L+A-1 possible boundary locations: L+1 in general, L or L-1 if A is smaller. 
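To make the recurrence concrete, here is a sketch of the counting part alone. It assumes the preprocessing already produced, for each remaining segment of wildcards, its length L[r] and the number A[r] of letters adjacent to it, together with the number U of unused letters. It also assumes the answer is reported modulo a prime; the constant 10^9+7 below is only a placeholder for whatever the actual statement requires.

final long MOD = 1_000_000_007L;        // placeholder modulus, assumed to be prime

long modpow(long b, long e) {
    long r = 1; b %= MOD;
    for (; e > 0; e >>= 1) { if ((e & 1) == 1) r = r * b % MOD; b = b * b % MOD; }
    return r;
}

// binomial(m, k) mod MOD for a possibly large m and a small k
long binom(long m, int k) {
    if (k < 0 || m < k) return 0;
    long num = 1, den = 1;
    for (int i = 0; i < k; ++i) {
        num = num * ((m - i) % MOD) % MOD;
        den = den * (i + 1) % MOD;
    }
    return num * modpow(den, MOD - 2) % MOD;
}

// L[r] = length of the r-th remaining segment of wildcards,
// A[r] = number of distinct letters adjacent to that segment (0, 1 or 2),
// U    = number of letters of the alphabet that do not appear in the pattern
long countFillings(int[] L, int[] A, int U) {
    long[] fact = new long[U + 1];
    fact[0] = 1;
    for (int i = 1; i <= U; ++i) fact[i] = fact[i - 1] * i % MOD;

    // dp[u] is the current dp[r][u]: the number of ways to fill the first r
    // segments of wildcards if u unused letters are available to them
    long[] dp = new long[U + 1];
    for (int u = 0; u <= U; ++u) dp[u] = 1;           // dp[0][u] = 1: nothing to fill yet
    for (int r = 0; r < L.length; ++r) {
        long[] ndp = new long[U + 1];
        int nMin = (A[r] == 0) ? 1 : 0;               // an all-wildcard pattern must introduce a letter
        for (int u = 0; u <= U; ++u)
            for (int n = nMin; n <= Math.min(L[r], u); ++n) {
                long ways = binom(u, n) * fact[n] % MOD;                  // which new letters, and in what order
                ways = ways * binom(L[r] + A[r] - 1, n + A[r] - 1) % MOD; // where the boundaries between blocks go
                ndp[u] = (ndp[u] + ways * dp[u - n]) % MOD;
            }
        dp = ndp;
    }
    return dp[U];                                     // all segments processed, all U letters were available
}

As there are at most 63 segments and at most 62 unused letters, the state space is tiny and this runs instantly.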

Challenge: Without thinking about the specifics of this problem, optimize the computation of the DP described above to improve its asymptotic time complexity.


misof

