January 25, 2019 Single Round Match 747 Editorials

SRM 747 was held on January 19, 2019. Thanks to misof for the problems and the editorials.

Sherlock

In this problem we had to find out whether a name is similar enough to “Benedict Cumberbatch”. The solution is straightforward: we just check all the conditions listed in the problem statement. Note that the implementation can be made more pleasant if you know and use the correct string manipulation functions in your language.

def howManyLetters(name, letterBank):
    # counts how many characters of name also appear in letterBank
    return sum( x in letterBank for x in name )

def isBenedict(firstName, lastName):
    if len(firstName) < 5 or len(lastName) < 5: return False
    if firstName[0] != 'B' or lastName[0] != 'C': return False
    if howManyLetters(firstName, "BENEDICT") < 3: return False
    if howManyLetters(lastName, "CUMBERBATCH") < 5: return False
    return True

CycleLength

Each value is uniquely determined by the previous one. Thus, once a value in the sequence repeats, the sequence has entered a cycle.
As there are only finitely many possible values (namely, exactly m), there has to be a repeated value among the first m+1 elements of the sequence.
Thus, probably the easiest solution looks as follows:
Generate the first m+1 elements. You are now guaranteed to be in the part that repeats.
Let x be the current element. Generate the following elements until x appears again, and count the steps.
That number of steps is the length of the shortest period.
This solution has no special cases and it needs no data structures – it needs just a constant number of variables. In the worst case it will generate about 2*m elements of the sequence, and thus its time complexity is O(m).

public int solve(int seed, int a, int b, int m) {
    long state = seed;
    // generate comfortably more than m elements, so we are guaranteed to be inside the cycle
    for (int i=0; i<m+47; ++i) state = (state*a+b) % m;
    long goal = state;
    // walk around the cycle once and count the steps until the current value reappears
    for (int steps=1; ;++steps) {
        state = (state*a+b) % m;
        if (state == goal) return steps;
    }
}

Another short solution is to use a data structure. The most appropriate one is a map in which we use the value of an element as the key. The value we store with the key is the index at which the value appeared for the first time. We then generate the sequence one element at a time, and we store all elements into that map. As soon as an element repeats, we know that we found a period. Its length can be computed as the current index (i.e., the index of the second occurrence of the repeated value) minus the index of its first occurrence (which can be retrieved from the map).
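Here is a minimal Python sketch of this map-based approach (the name solve_with_map is ours; the parameters match the Java solution above):

def solve_with_map(seed, a, b, m):
    # remember the index at which each value of the sequence first appeared
    first_seen = {}
    state, index = seed, 0
    while state not in first_seen:
        first_seen[state] = index
        state = (state * a + b) % m
        index += 1
    # the gap between the two occurrences of the first repeated value is the period
    return index - first_seen[state]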
For an even smarter solution that can find the preperiod and period in optimal time, in constant space, and even without knowing an upper bound on the period length in advance, look up Floyd’s cycle-finding algorithm.
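For reference, one way Floyd's algorithm could be applied to this particular problem is sketched below (again just a sketch with an assumed function name; it only extracts the period length, not the preperiod):

def solve_with_floyd(seed, a, b, m):
    def f(x):
        return (x * a + b) % m
    # the hare moves twice as fast as the tortoise; they are guaranteed to meet inside the cycle
    tortoise, hare = f(seed), f(f(seed))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # walk around the cycle once, starting from the meeting point, to measure its length
    length, probe = 1, f(tortoise)
    while probe != tortoise:
        probe, length = f(probe), length + 1
    return length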

MostFrequentLastDigit

Secretly, this was the most interesting problem of the round, especially for top Div1 contestants. I’ll get to the reason for this claim later.
Of course, the problem was solvable in ways other than the one I had in mind when writing the previous paragraph. Let’s try to come up with a nice deterministic solution. One approach that might help us is a simple visualization of all possible sums of pairs: imagine an addition table for the sequence 1,3,1,2,2,4, with one row and one column per element and each cell containing the sum of the corresponding pair.
We may notice that if we have multiple copies of the same number, they each produce the same multiset of sums: in the table they appear as identical rows/columns. More precisely, if we have two numbers X and Y with the same last digit, the last digits of the sums that involve X are the same as the last digits of the sums that involve Y. That sounds like a good way to make a lot of copies of the same result.
Let’s see what happens if we only use two distinct values. With six elements, let’s try using three 3s and three 7s. We now have nine sums that end in a zero: each 3 combined with each 7. Out of all fifteen sums, nine end in a zero and only six end in a different digit.
That sounds promising. If we can make sure that our digit is more frequent than all other digits combined, it has to be the most frequent of all. And the observation above gives us an idea how to do it.
Here’s the resulting algorithm:

Find two non-zero digits x and y such that (x+y) mod 10 = d.
Create an array with floor(n/2) copies of x and ceil(n/2) copies of y.
Make sure that the numbers are all distinct.

Why does it work?
Step 1 can always be performed. For example, if d is at least 2, we can choose x=1 and y=d-1; and we can choose (5,6) for d=1 and (4,6) for d=0.
In step 2 we will create an array for which the sums of many pairs will end in the digit d. Are there enough of those? If n = 2k, we have k*(2k-1) pairs and out of those k*k are good and only k*(k-1) are bad. And if n=2k+1, we have k*(2k+1) pairs and out of those k*(k+1) are good and only k*k are bad. Hence, in both cases our digit is clearly the most frequent one.
Step 3 is easy to do as well: for example, we can just add 10*i to the element at position i, which does not change any last digit.
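The three steps above translate directly into code. Here is a minimal Python sketch (the function name construct and its signature are assumptions, not the required interface):

def construct(n, d):
    # step 1: pick non-zero digits x and y with (x + y) % 10 == d
    if d >= 2:
        x, y = 1, d - 1
    elif d == 1:
        x, y = 5, 6
    else:  # d == 0
        x, y = 4, 6
    # step 2: floor(n/2) copies of x and ceil(n/2) copies of y
    digits = [x] * (n // 2) + [y] * ((n + 1) // 2)
    # step 3: make the numbers distinct by adding 10*i to the element at position i,
    # which does not change any last digit
    return [digits[i] + 10 * i for i in range(n)]

For n = 6 and d = 0 this produces three numbers ending in 4 and three ending in 6, giving the same nine-versus-six split as the three 3s and three 7s above.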

TieForMax

This problem has multiple polynomial-time solutions. I’ll show one that’s based on computing the complementary probability — i.e., the probability that the maximum is unique.
We will do three-dimensional dynamic programming. Let ways[a][b][c] be the number of sequences of actions in which we distribute a tokens onto b piles so that each pile gets at most c tokens. These values are easy to compute. We’ll leave the base cases as an exercise and only focus on the general case, which is computed as follows:

ways[a][b][c] = sum( binomial[a][i] * ways[a-i][b-1][c]   for i = 0..min(a,c) )

What’s going on in the formula? We try all possibilities for the number i of tokens on the first pile. Once we have fixed i, we have binomial[a][i] ways to choose the steps in which we placed those tokens, and for each of those ways we have ways[a-i][b-1][c] ways to distribute the remaining a-i tokens onto the remaining b-1 piles in a valid way.
Now, once we have this precomputed, we can find the probability that the maximal pile is unique: we will just iterate over all possibilities for its size. For each x we will compute the probability that the unique maximal pile has size x, and as these events are mutually exclusive, the result is simply the sum of these probabilities.
How many different sequences of actions will produce a result in which the unique maximal pile has size x? We have p options for which pile it is, we have binomial[t][x] ways to choose in which steps we placed a token onto this pile, and we have ways[t-x][p-1][x-1] ways to distribute the remaining tokens onto the remaining piles in such a way that all of them are smaller. Thus, the total number of valid sequences of actions is the sum of p*binomial[t][x]*ways[t-x][p-1][x-1] over all x.
The total number of possible sequences of actions is simply p^t. Hence, the correct return value of our function is 1 minus (the result from the previous paragraph, divided by p^t).
The array “ways” has O(t^2 * p) cells, and we can compute each of those in O(t) time. The final sum is negligible in comparison. Thus, our solution has time complexity O(t^3 * p) and space complexity O(t^2 * p).
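To make the above concrete, here is a straightforward Python sketch of the whole computation (the name tie_for_max and the parameter order, p piles and t tokens, are assumptions rather than the problem's exact interface):

from math import comb

def tie_for_max(p, t):
    # ways[a][b][c] = number of sequences of actions that distribute a tokens
    # onto b piles so that each pile gets at most c tokens
    ways = [[[0] * (t + 1) for _ in range(p + 1)] for _ in range(t + 1)]
    for b in range(p + 1):
        for c in range(t + 1):
            ways[0][b][c] = 1  # no tokens left: exactly one (empty) sequence
    for a in range(1, t + 1):
        for b in range(1, p + 1):
            for c in range(t + 1):
                ways[a][b][c] = sum(comb(a, i) * ways[a - i][b - 1][c]
                                    for i in range(min(a, c) + 1))
    # count the sequences in which the maximal pile is unique
    unique = 0
    for x in range(1, t + 1):
        unique += p * comb(t, x) * ways[t - x][p - 1][x - 1]
    return 1 - unique / p ** t

As a quick check, tie_for_max(2, 2) returns 0.5: out of the four equally likely sequences of two actions, exactly the two that use both piles end in a tie.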

RochesterSequences

This problem has a fairly straightforward dynamic programming solution that is correct but too slow. The solution looks as follows: let best[i][j] be the length of the longest NRS (nondecreasing Rochester sequence) that starts with element i and ends with element j. To compute this, we try all the possibilities for the second element and all possibilities for the second-to-last element, and we take the best among those. Formally, best[i][j] = 1 + max best[p][q] over all p, q such that i < p < q < j and S[i]+S[j] <= S[p]+S[q]. This solution works in O(n^4).
Can this solution be improved? Not directly. Usually, if we want to take a maximum over some possibilities, we can do it faster by using an efficient data structure, but in this case the condition is rather ugly: we need to select the maximum over an awkward three-dimensional region.
In cases like the one above it is sometimes possible to reduce the complexity by taking one dimension out into the order in which we evaluate the subproblems. This is precisely the technique that will help us solve this problem as well.
Before we apply it, we’ll think about a better visualization of the problem we are solving. Instead of using the definition from the statement, we can define NRSs as follows: an NRS is a sequence of intervals with two properties:
They are properly nested, i.e., each one lies strictly inside the previous one.
Their weights are nondecreasing. (Here, the weight of [i,j] is defined as S[i]+S[j].)
The general scheme of our solution will still be the same as above: for each interval we are trying to find the longest NRS that starts with this interval. Now comes the main trick. In standard interval DP problems we tend to sort the subproblems by length. In this case, that doesn’t help, as it still leads to the same problem as above. However, what does help is sorting the intervals by weight (and only using length as a tie-breaker).
(Remark: taking any one of the three conditions out into the evaluation order would work, but as the indices i, j are small and the weights can be large, sorting by weight is the most convenient choice.)
Imagine that we have all possible intervals sorted by weight (min to max), and intervals with the same weight sorted by length (max to min). By doing this sort, we have eliminated one dimension of the original problem. We are now asking for the longest sequence of intervals with the following properties:
It is a subsequence of this sorted sequence. (This automatically takes care of the weights being nondecreasing.)
The intervals are properly nested.
In order to find the optimal solution, we will process our sorted sequence back to front. Whenever processing an interval, the optimal solution that starts with it can be found by taking a maximum over the solutions for all intervals that 1. have already been processed, and 2. lie strictly inside this interval.
Thus, we can use a data structure that can perform point updates (we just computed a new value) and rectangular queries (what is the maximum among the already computed values in this range?). One such data structure is a 2-dimensional range tree. This data structure can perform the necessary operation in O(log^2 n), and thus the total time complexity becomes O(n^2 log^2 n).
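One possible concrete realization of that data structure is a 2-dimensional Fenwick tree for prefix maxima instead of a range tree (the class name MaxFenwick2D below is ours). It works here because each interval's value is written exactly once, so the stored values never need to decrease. To turn the condition (p > i and q < j) into a prefix query, we can store the value for interval [p,q] at position (n+1-p, q) using 1-based indices; the query for interval [i,j] then becomes a prefix-maximum query at (n-i, j-1).

class MaxFenwick2D:
    # update(x, y, val): raise the value stored at (x, y) to at least val
    # query(x, y): maximum over all points (x', y') with x' <= x and y' <= y
    def __init__(self, n):
        self.n = n
        self.tree = [[0] * (n + 1) for _ in range(n + 1)]

    def update(self, x, y, val):
        i = x
        while i <= self.n:
            j = y
            while j <= self.n:
                if val > self.tree[i][j]:
                    self.tree[i][j] = val
                j += j & (-j)
            i += i & (-i)

    def query(self, x, y):
        best, i = 0, x
        while i > 0:
            j = y
            while j > 0:
                if self.tree[i][j] > best:
                    best = self.tree[i][j]
                j -= j & (-j)
            i -= i & (-i)
        return best

Processing the intervals back to front in the sorted order, we first query (so the value for the current interval [i,j] is 1 plus the query result at (n-i, j-1)) and only then insert that value at the position corresponding to [i,j]. Both operations take O(log^2 n), matching the complexity stated above.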

MostFrequentLastDigit revisited

The cool thing about this problem is that it can be solved completely at random, which is quite a rare thing in programming contests 🙂
More precisely, here is a perfectly valid solution:

while True:
    generate a random array of valid numbers
    if it has all other desired properties, return it
Here’s the same thing actually implemented in Python.
from random import randint

def valid_number():
    # keep drawing random numbers until we get one whose last digit is non-zero
    while True:
        x = randint(0, 10**9 - 1)
        if x % 10: return x

def solve(n, d):
    while True:
        A, B = [ valid_number() for _ in range(n) ], [0]*10
        for i in range(n):
            for j in range(i):
                B[ (A[i]+A[j]) % 10 ] += 1
        # accept only if the elements are distinct and d is the unique most frequent last digit
        if len(set(A)) == n and max(B) == B[d] and B.count(max(B)) == 1:
            return A

Yep, that’s it. Why does it work? And, more importantly, how could one read the problem statement, come up with this approach and trust that it will work? I’ll answer these questions below. Note that I’ll intentionally skip most of the details. Everything below can be turned into exact computations, but the main point of the following text is precisely to show how to avoid those tedious computations and just make an educated guess.
First of all, the random array will be very unlikely to contain collisions. From the birthday paradox we know that a collision becomes likely only when the number of items approaches roughly the square root of the size of the value range. Here, we are selecting 200 random numbers from among 900 million roughly evenly distributed ones, so collisions should be quite rare. (Of course, this step can be skipped with the alternative easy construction where we set A[i] = randint(1,9)+10*i for all valid i. This guarantees no collisions and the same distribution of last digits, at the cost of having to think a little bit more.)
Second, if we just take a random array and look at the last digits of its pairwise sums, some digit will usually win. Sometimes there will be a tie for the maximum (do you see how this relates to the div1 medium problem?) but quite often the maximum should be unique. If we a) didn’t have the asymmetry introduced by banning the last digit 0, and b) ignored the possibility of ties for the maximum, each digit would be the most frequent digit in pairwise sums with probability 1/10. And that would mean that on average generating 10 valid arrays should be enough to hit one correct answer.
We now need to take into account the two factors we ignored above. How big is the bias, and how big is the possibility of a tie? When I first considered this problem, I did the following back-of-the-envelope estimates:
If we have x,y from {1,2,…,9}, there are clearly 9 ways to get the sum 0 and 8 ways to get any other sum.
Thus, for the code shown above, the expected value of B[0] is (9/81)*(the number of pairs), and for each other B[i] it is (8/81)*(the number of pairs). The number of pairs is roughly 20,000, so the expected number of 0s is about 250 higher than the expected number of any other digit.
For a binomial distribution with n trials the standard deviation is on the order of sqrt(n), so the actual numbers in a trial will easily go a hundred or two up or down. That is enough to a) expect that max(B[1:10]) will be unique most of the time, because the range of possible values is large enough; and b) expect that max(B[1:10]) has a reasonable chance to beat B[0].
And if we need a small number of random trials to get an input for which max(B[1:10]) > B[0], we only need 9 times that many trials to get an input for which a specific B[d] is the unique maximum.
In order to train your intuition, I suggest two things. First, work out the math I did at least a bit more precisely and think about what assumptions you can and cannot make. Second, play around with the implementation to see whether and how actual data you’ll measure matches the estimates you made. If there are discrepancies, try to find out where they came from.
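For example, a small experiment along these lines could look as follows (it reuses valid_number from the code above; the function measure and its default parameters are just an illustration):

def measure(n=200, trials=100):
    # how often is the digit 0 the strict winner, and by how much does it beat the runner-up?
    zero_wins, gaps = 0, []
    for _ in range(trials):
        A = [valid_number() for _ in range(n)]
        B = [0] * 10
        for i in range(n):
            for j in range(i):
                B[(A[i] + A[j]) % 10] += 1
        gaps.append(B[0] - max(B[1:]))
        if B[0] > max(B[1:]):
            zero_wins += 1
    print("digit 0 was strictly most frequent in", zero_wins, "of", trials, "trials")
    print("average advantage of B[0] over the best other digit:", sum(gaps) / trials)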


misof


