October 15, 2021 Single Round Match 815 Editorials

SunShroom

In this problem we just need to do some simple math. If a shroom was planted after the query time, we can ignore it. If it was planted at or before the query time, we subtract its planting time from the query time to get how long it has existed, and divide that by 24 to get the number of times it has produced sun. The first four times it produces 15 sun each; every production after that yields 25 sun.

    public long produceSun(long[] plantingTime, long queryTime) {
        long answer = 0;
        for (long p : plantingTime) {
            // shrooms planted after the query time haven't produced anything yet
            if (p > queryTime) continue;
            // one batch of sun is produced every 24 time units
            long produced = (queryTime - p) / 24;
            if (produced <= 4) {
                // at most four batches so far: 15 sun each
                answer += 15 * produced;
            } else {
                // the first four batches give 15 sun each, all later ones give 25
                answer += 15 * 4;
                produced -= 4;
                answer += 25 * produced;
            }
        }
        return answer;
    }

DevuAndRabbitNumbering

We may be tempted to make as few changes to the IDs as possible, but such an approach actually leads to a more complicated solution. For example, suppose we have IDs 1, 2, 3, 4, 4, 4, 5, 6, 7. We can happily leave the IDs 1, 2, 3 alone, but when we encounter the three rabbits with ID=4 we will have to decrease one of their IDs, and that change will then propagate and force changes to rabbits we have already processed.

A much simpler approach is a greedy strategy that assigns the smallest possible ID to each bunny. The strategy looks as follows: We start by sorting the bunny IDs and then process them from the smallest to the largest. For each rabbit we first try to decrease its ID by 1; if that ID is already taken, we try to keep the ID unchanged; if that is also impossible, we increase it by 1; and if even that ID is already taken, we report that there is no solution.

Proof that the above strategy works:

First, consider the original sorted order of bunnies (with ties broken arbitrarily). We claim that if there is a solution, there is one where the final sorted order of bunnies is the same. This should be obvious: if we have a solution where two bunnies swap places in the sorted order, we can also have a valid solution in which the earlier one gets the smaller of their two final IDs.

Once we know that, the rest of the proof is also obvious. Given the extra constraint that the ordering of the rabbits cannot change, our greedy strategy clearly gives each rabbit the smallest possible final ID (with the constraint that each starting ID can be decreased by at most 1), and thus if it fails, any other strategy must fail as well.

    public String canRenumber(int[] rabbitIds) {
        // process the rabbits in sorted order, greedily assigning each one
        // the smallest new ID that is still available
        Arrays.sort(rabbitIds);
        int lastNewID = -1;
        for (int i = 0; i < rabbitIds.length; i++) {
            boolean ok = false;
            // try decreasing, keeping, and increasing the ID, in that order
            for (int diff = -1; diff <= 1; diff++) {
                if (rabbitIds[i] + diff > lastNewID) {
                    lastNewID = rabbitIds[i] + diff;
                    ok = true;
                    break;
                }
            }
            if (!ok) {
                // all three candidate IDs are already taken
                return "cannot";
            }
        }
        return "can";
    }

EllysKeys

This is an NP-hard problem known under the name Maximum Set Packing. Each key can be represented by its set of teeth, and we are looking for the largest collection of mutually disjoint sets.

The NP-hardness means that we shouldn’t bother looking for polynomial-time solutions. Luckily, for the constraints used in the problem it can still be efficiently solved using exponential dynamic programming. If we have N keys with P positions each, the time complexity of the solution will be O(N*2^P).

The solution can proceed in multiple ways. In some similar problems we have to pay attention to the order in which we process the elements to make sure that we don’t use the same element twice. In this particular problem we don’t have to do that, as each key is incompatible with itself: once we use a key, we’ll reach a state in which the same key cannot be used again. (Except for the completely empty keys. The solution that doesn’t pay attention to the order in which keys are processed must handle these as a special case.)

In detail, the above solution can look as follows: For each subset of positions we want to find the size of the largest collection of compatible keys that has teeth at precisely those positions. We start by initializing the answer for the empty set to the number of empty keys we have. Then we process all other subsets in increasing order (by iterating over all possible bitmasks). For each subset S of positions, we try all possibilities for the first non-empty key we'll use. This key must have all of its teeth on positions from the subset S. The best solution for S that uses this key then has size 1 + the best solution for the remaining positions (those in S that are not covered by the chosen key). The overall best solution for S is the maximum over all valid keys we can start with, and the overall best solution for the whole problem is the maximum over all values computed by the DP.
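
A minimal sketch of this DP in C++, assuming each key has already been converted into a bitmask of its teeth (bit i set means a tooth at position i); the function name and signature are illustrative rather than the problem's actual interface:

#include <bits/stdc++.h>
using namespace std;

// dp[S] = size of the largest collection of compatible keys whose teeth occupy
// exactly the positions in S, or -1 if no such collection exists
int maxDisjointKeys(const vector<int>& keyMasks, int P) {
    vector<int> dp(1 << P, -1);
    // the empty set of positions is covered by taking all completely empty keys
    dp[0] = (int) count(keyMasks.begin(), keyMasks.end(), 0);
    int best = dp[0];
    for (int S = 1; S < (1 << P); ++S) {
        for (int mask : keyMasks) {
            if (mask == 0) continue;              // empty keys were already counted
            if ((mask & S) != mask) continue;     // this key's teeth must all lie inside S
            if (dp[S ^ mask] == -1) continue;     // the rest of S must also be coverable
            dp[S] = max(dp[S], dp[S ^ mask] + 1); // use this key as the "first" one
        }
        best = max(best, dp[S]);
    }
    return best;
}

The two nested loops together give the promised O(N*2^P) running time.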

Other solutions are possible as well. For example, we can maintain an array of size 2^P with the same meaning as above, but this time we will start with an empty collection of keys and we will be adding the keys one by one, each time computing new 2^P values from the old ones by considering two options: either we don’t add the new key to a state, or (if it’s compatible) we do. This approach leads to the same time complexity and does not need to handle the empty keys specially.
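
A sketch of this second approach, under the same bitmask assumption and with illustrative names, might look as follows:

#include <bits/stdc++.h>
using namespace std;

// dp[S] = size of the largest collection among the keys processed so far whose
// combined teeth are exactly the positions in S; -1 = not achievable
int maxDisjointKeysIncremental(const vector<int>& keyMasks, int P) {
    vector<int> dp(1 << P, -1);
    dp[0] = 0;                                    // the empty collection of keys
    for (int mask : keyMasks) {
        vector<int> ndp = dp;                     // option 1: don't use this key anywhere
        for (int S = 0; S < (1 << P); ++S) {
            if (dp[S] == -1 || (S & mask)) continue;          // the key must be compatible with S
            ndp[S | mask] = max(ndp[S | mask], dp[S] + 1);    // option 2: add the key
        }
        dp = ndp;
    }
    // empty keys are handled naturally: for mask == 0 the update is simply dp[S] + 1
    return *max_element(dp.begin(), dp.end());
}

This is again O(N*2^P), and, as noted above, the empty keys need no special treatment here.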

BilliardsMaster

The main trick needed in this problem (and, to be honest, probably in all problems ever that involve billiards) is the math needed to hit the target by making a single bounce off a side of the table. Imagine that the table sides are mirrors. If you send the ball along the correct path, after it bounces off the mirror, the actual ball changes trajectory towards the actual goal, but at the same time in the mirror we can see the image of the ball continuing along a straight line towards the mirror image of the goal. Thus, when deciding where to aim, we should find the mirror image of the goal according to the table side we are using for the bounce, and then point the cue towards that mirror image.

This generalizes to multiple bounces. If you actually had four mirrors around your billiards table and you looked into them from the point where the cue ball starts, you wouldn’t see just four extra tables in the reflections. You would see an infinite two-dimensional grid of reflected tables, each of them containing one mirror image of the desired goal. Each of those goals corresponds to one possible trajectory of the cue ball, and the table it’s on tells us how many bounces off each side of the table there will be.

In other words, imagine the whole 2D plane. Initially, the region between (0,0) and (tx,ty) is covered by the original table. We can now cover the rest of the 2D plane by reflections of the table by always taking one of the tables we already have and flipping it over according to one of its edges.

These table images can be given new coordinates (flipsx, flipsy). If we choose the mirror image of the goal that lies on the table with coordinates (flipsx, flipsy), the real ball will make exactly abs(flipsx) + abs(flipsy) bounces off table sides on its way to the goal. Thus, if we want exactly b bounces, we have O(b) possible goal locations. For each of them we can compute the distance the ball will travel (this is simply the Euclidean distance between the start and the mirror image of the goal), and for each of the shortest ones we can compute the number of bounces off each side it produces. If there are multiple valid shortest solutions, we take the maximum number of bounces (for each side separately).
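
A rough sketch of this enumeration is shown below. The parameter names (table size tx, ty; start sx, sy; goal gx, gy; required number of bounces b) are assumptions based on the description above, integer coordinates are assumed for simplicity, and degenerate cases such as the trajectory passing exactly through a corner are ignored:

#include <bits/stdc++.h>
using namespace std;

// coordinate of the mirror image of g after the table [0, t] has been flipped
// over its sides |flips| times in this dimension
long long mirrorImage(long long flips, long long t, long long g) {
    return flips * t + (flips % 2 == 0 ? g : t - g);
}

// squared length of the shortest trajectory that makes exactly b bounces
long long shortestSquaredDistance(long long tx, long long ty,
                                  long long sx, long long sy,
                                  long long gx, long long gy, int b) {
    long long best = LLONG_MAX;
    for (int flipsx = -b; flipsx <= b; ++flipsx) {
        int rest = b - abs(flipsx);
        for (int flipsy : {-rest, rest}) {               // abs(flipsx) + abs(flipsy) == b
            long long dx = mirrorImage(flipsx, tx, gx) - sx;
            long long dy = mirrorImage(flipsy, ty, gy) - sy;
            best = min(best, dx*dx + dy*dy);
        }
    }
    // for the images that attain this minimum, the bounce counts off each side
    // can then be read off from flipsx and flipsy, as described above
    return best;
}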

MaximizingLCM

This problem concludes our two-problem mini-series of tasks where number theory insight helps. (The first of these was PrettyPrimes a few SRMs ago.)

We can start by making some very simple observations. The constraints imply that once N becomes medium-sized, M has to be reasonably small as well. E.g., while for N=2 we can go up to M=10^9, for N=18 we can only have M<=10. The boundary here is N=15 where the maximum M is also 15. Hence, whenever N>=15, we can take each of the numbers 1 through M at least once, and the answer is lcm(1,2,…,M).
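
For that case, a tiny helper along these lines (a sketch; the name is illustrative) computes the answer directly:

#include <bits/stdc++.h>
using namespace std;

// lcm(1, 2, ..., M); for M <= 15 this is at most lcm(1..15) = 360360,
// so a long long is more than enough
long long lcmUpTo(int M) {
    long long answer = 1;
    for (long long i = 2; i <= M; ++i) answer = answer / __gcd(answer, i) * i;
    return answer;
}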

Cases with small N are also obvious: for N=1 the answer is M, and for N=2 and M>=2 the answer is M(M-1) (M and M-1 are relatively prime, and no two numbers up to M can have a larger LCM). For some implementations of the general solution these don’t need to be handled as special cases, but on the other hand they are so easy to handle that it cannot hurt to do so.

This leaves us with the cases that have N between 3 and 14, inclusive. Now for the number theory. We can start by proving the following claim: each test case has an optimal solution in which the N selected numbers are mutually relatively prime. Proof: take any optimal solution. While some two selected numbers A and B share a prime P, let P^a and P^b be the largest powers of P dividing A and B, respectively, with a>=b>0 (swapping A and B if necessary), and replace B by B/P^b. This does not change the LCM of the collection. For example, from {12, 18} we can first divide 18 by 2 to get {12, 9}, and then divide 12 by 3 to get {9, 4}: the numbers are now coprime and the LCM is still 36. Repeating the above process must eventually yield a collection of mutually relatively prime numbers. (The process cannot be infinite, as each step decreases the sum of the collection.)

Now for the intuition: we claim that whenever N is small (as in our tests) and M is large enough, there is always a collection of N relatively prime numbers that are all reasonably close to M. And, in turn, the existence of such a collection of numbers will very significantly restrict the space of potentially-better solutions: the number of other collections of N numbers with a larger product will be small enough to iterate over all of them.

This intuitive claim can also be proven exactly. We will show how to do it for N=3. Consider the numbers M, M-1 and M-2. Consecutive numbers are always relatively prime, so the only possible issue is that M and M-2 can both be even. In that case we can take M-1, M-2 and M-3 instead: now M-1 and M-3 are both odd and differ by 2, so they are relatively prime as well.

Similar proofs can be done for bigger values of N by only examining finitely many cases: the smallest pattern that works only depends on the value M mod lcm(1,2,…,N).

In general, the intuition that the span needed will always be reasonably small comes from the following fact: whenever we select two numbers that are close to each other, they are either relatively prime, or they only have some very small common divisors. This is because whenever something divides both A and B, it also divides their difference – and therefore only the divisors of the difference between A and B are candidates for common divisors of A and B. As a consequence, in the set {M-something small, …, M} there should be very many relatively prime pairs, and those should then also produce larger cliques of mutually coprime integers. 

For an empirical estimate, suppose we have X consecutive numbers. Consider all X^2 ordered pairs of these numbers. Among them, roughly (X/2)^2 have 2 as a common divisor, roughly (X/3)^2 have 3 as a common divisor, and so on for all other primes up to X. This gives an upper bound of roughly X^2 * (sum of squares of reciprocals of primes) pairs that aren’t coprime. The sum of squares of reciprocals of primes converges to a constant slightly smaller than 0.5. Hence, within our set more than half of all pairs are guaranteed to be coprime. (The actual fraction of coprime pairs is bigger, as some of the groups of pairs listed above overlap. As X grows, the proportion of coprime pairs converges to 6/pi^2, which is slightly more than 0.6.)
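
If you want to check this estimate empirically, a brute-force count along these lines will do (a quick sketch with illustrative names):

#include <bits/stdc++.h>
using namespace std;

// fraction of coprime ordered pairs among the X consecutive integers ending at M
double coprimeFraction(long long M, int X) {
    long long coprime = 0;
    for (long long a = M - X + 1; a <= M; ++a)
        for (long long b = M - X + 1; b <= M; ++b)
            if (__gcd(a, b) == 1) ++coprime;
    return double(coprime) / (double(X) * X);
}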

This intuitive reasoning suggests that we could examine all collections of N mutually coprime values starting from the lexicographically largest ones, and once we find a valid solution, use it to prune the rest of the search. Below we show a very simple implementation that is fast enough to go through all possible inputs with N in [3,15] within a few seconds. In each partial state we compute an upper bound on the final product we can get, and if it is guaranteed to be smaller than the best solution found so far, we prune that branch of the search.

#include <bits/stdc++.h>
using namespace std;

// the values selected so far (selected[0] is a sentinel), their product,
// and the best product found so far
vector<long long> selected;
long long current_product;
long long best_product;

long long power(long long base, int exp) {
    long long answer = 1; while (exp--) answer *= base; return answer;
}

void solve(int remains) {
    if (remains == 0) {
        // all N values chosen; as they are mutually coprime, their product is their LCM
        best_product = max(best_product, current_product);
        return;
    }
    long long last = selected.back();
    // try candidate values in decreasing order (the value 1 may be repeated)
    for (long long nextval = max(1LL, last-1); nextval >= 1; --nextval) {
        // skip values that share a prime with something already selected
        if (__gcd(current_product, nextval) > 1) continue;
        // prune: even the most optimistic completion cannot beat the best found so far
        if (current_product * power(nextval,remains) < best_product) break;
        selected.push_back(nextval); current_product *= nextval;
        solve(remains-1);
        selected.pop_back(); current_product /= nextval;
    }
}

void process(int n, int m) {
    best_product = 0;
    current_product = 1;
    // the sentinel m+1 makes sure the first value tried is m
    selected.clear(); selected.push_back(m+1);
    solve(n);
    // best_product now holds the maximum possible LCM for this (n,m)
}

misof

