February 17, 2021 Single Round Match 799 Editorials

CoolPairsEasy

Just iterate over all pairs (a, b) such that a != b and check whether lastname[a] == firstname[b].

class CoolPairsEasy:
    def count(self, firstname, lastname):
        N = len(firstname)
        answer = 0
        for x in range(N):
            for y in range(N):
                if x != y:
                    if lastname[x] == firstname[y]:
                        answer += 1
        return answer
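
For example (with made-up names), firstname = ["anna", "boris"] and lastname = ["boris", "chen"] contain exactly one cool pair, (0, 1), since lastname[0] == firstname[1] == "boris":

print(CoolPairsEasy().count(["anna", "boris"], ["boris", "chen"]))  # prints 1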

PlanningTrips

The solution will consist of two steps. First, we will compute the sum of all input numbers. Second, we will find the closest greater-or-equal power of A.

The numbers are huge, but there is a neat trick that will make our life simple. Instead of counting in base 10 (where the numbers are huge and also ugly) we will count everything directly in base A. When written in base A, each of the input numbers is simply a 1 followed by some zeros, and computing the sum of such numbers is much simpler.

We still cannot afford to process or store all digits of these numbers, but we don’t have to. Instead, we will only remember the non-zero digits of the sum. We will use a map to store them. The exponent will be the key into the map, the digit will be the value assigned to the key. For example, the Python dictionary {10: 3, 11: 2} represents the number 3*A^10 + 2*A^11.

We will process the input one number at a time. Each time we add a new number, we increment the corresponding digit in the sum and then we perform carries if needed. (If a digit of the result becomes A, we set it to 0 and increment the next digit instead.)

In the end there are two possible cases. If the sum is exactly a power of A (the only non-zero digit is a 1) the answer is its exponent, otherwise the answer is the next higher exponent.

from collections import defaultdict

def find(a, num):
    total = defaultdict(int)
    for x in num:
        # add the current number to the sum
        total[x] += 1
        # do the carries if needed
        while total[x] == a:
            total[x] -= a
            total[x+1] += 1
            x += 1
    biggest = max( x for x in total )
    if sum( total[x] for x in total ) == 1:
        return biggest
    else:
        return biggest+1
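
For a quick sanity check (the inputs here are made up): each element of num is the exponent of one input number, so with a = 2 the list [3, 3] represents 8 + 8 = 16 = 2^4, while [3, 3, 0] represents 17, whose next power of two is 2^5:

print(find(2, [3, 3]))     # prints 4
print(find(2, [3, 3, 0]))  # prints 5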

MapleTreeEasy

Thin subtrees are the same thing as simple paths. Thus, we want to count all subsets of nodes such that their minimal subtree is a simple path. We can do this using dynamic programming.

In general, a path in a rooted tree consists of two parts: a path from X to Y goes upwards until it reaches some node Z and then it goes downwards. (The node Z is the least common ancestor of X and Y.) It is usually a good idea to group paths by Z when we need to count them or process them somehow.

To be more precise, the paths that have their topmost point at some specific node Z can be classified as follows:

  1. Paths that begin at Z and only go downwards. (This includes the trivial path that only contains Z.)
  2. Paths that consist of two non-empty parts, each going downwards from Z.

In tasks like this one it is quite common to count paths of type 1 first (which is easier) and then to reuse these counts when counting paths of type 2. This is because in each path of type 2 we can look at the children of Z that are on the path. From each of them we have one independent path going only downwards, i.e., a path of type 1.

In our solution we will do something similar. First, we will compute the values nonempty_downwards[Z] for each Z. This value is the number of subsets of nodes such that the subtree they determine is a nonempty path that lies entirely in the subtree rooted at Z and only goes downwards. In other words, we are counting all subsets of nodes such that they all lie on the path from Z to one of the leaves below Z.

These values can be computed using very simple dynamic programming: If we know the count for each child of Z, we add up all those counts (we can only select nodes in one of the children’s subtrees), multiply the sum by two (to each subset of nodes in some child’s subtree we may but do not have to add Z itself) and then add 1 (which counts the one-element subset {Z}).

Now we are ready to compute everything we need:

  1. The number of type-1 paths with their top exactly at Z is one (for the path {Z}) plus the sum of nonempty_downwards[C] over all children C of Z. This is because for these paths we must select Z and we may add anything that produces a downward-going path in any one child’s subtree.
  2. The number of type-2 paths is the sum over all children C1 != C2 of Z of (2 * nonempty_downwards[C1] * nonempty_downwards[C2]). This is because for two distinct children of Z we select some nodes that induce a downwards-going path, and the “2 *” part is there because to any such set of nodes we may but do not have to add Z itself.

The above solution can be further optimized (to process each node in time linear in its degree instead of quadratic), but the input is so small that these additional optimizations were unnecessary.

class MapleTreeEasy:
    def count(self, parent):
        N = len(parent) + 1
        children = [ [] for n in range(N) ]
        for n in range(N-1): children[ parent[n] ].append(n+1)

        # nonempty_downwards[n]: nonempty subsets of nodes that all lie
        # on a single path from n to some leaf below n
        nonempty_downwards = [ 0 for n in range(N) ]
        for n in reversed(range(N)):
            nonempty_downwards[n] = 1
            for c in children[n]:
                nonempty_downwards[n] += 2*nonempty_downwards[c]

        # answers_with_top_here[n]: subsets whose induced path has its topmost node at n
        answers_with_top_here = [ 0 for n in range(N) ]
        for n in reversed(range(N)):
            answers_with_top_here[n] += 1
            for c in children[n]:
                answers_with_top_here[n] += nonempty_downwards[c]
            for c1 in children[n]:
                for c2 in children[n]:
                    if c1 < c2:
                        x = nonempty_downwards[c1]
                        y = nonempty_downwards[c2]
                        answers_with_top_here[n] += 2*x*y

        return sum(answers_with_top_here)
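
For a quick check (a made-up instance): parent = [0, 0] describes the 3-node star in which nodes 1 and 2 are children of node 0. Every nonempty subset of its vertices induces a simple path, so the method returns 7:

print(MapleTreeEasy().count([0, 0]))  # prints 7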

CapriciousSorting

Going from the highest of the 30 bits to the lowest, we can determine, for each bit, whether none, one, or both values of this bit work. If only one works, we apply it immediately; if both work, we can pretend it is 0 and remember to double the answer at the end. (Thus, the answer is always 0 or a power of two.)

To do the check for bit b, we can simply shift all numbers by b bits to the right (which means that we are only considering bit b and higher bits) and check which choices for flipping / not flipping this bit leave the sequence sorted in non-decreasing order. After checking all bits we need to make one final check: whether the resulting sequence is actually strictly increasing.

#include <vector>
#include <set>
#include <algorithm>
using namespace std;

bool is_sorted(const vector<int> &X) {
	vector<int> Y = X;
	sort( Y.begin(), Y.end() );
	return X == Y;
}

struct CapriciousSorting {
	int count(vector<int> num, int bits=30) {
		int answer = 1;
		for (int b=bits-1; b>=0; --b) {
			vector<int> zero = num;
			for (int &x : zero) x >>= b;
			vector<int> one = zero;
			for (int &x : one) x ^= 1;
			if (!is_sorted(zero) && !is_sorted(one)) return 0;
			if (is_sorted(zero) && is_sorted(one)) answer *= 2;
			if (!is_sorted(zero)) for (int &x : num) x ^= 1<<b;
		}
		// final check: are there any duplicates?
		set<int> numset(num.begin(), num.end());
		if (numset.size() != num.size()) return 0;
		return answer;
	}
};
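
As an illustration with a made-up input: for num = {1, 2, 3}, bits 2 through 29 are zero in all three numbers, so flipping any of them keeps the order and doubles the answer, while bit 1 and bit 0 must stay unflipped; the code above therefore returns 2^28.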

MapleTree

Two different types of solutions were intended to pass: you could either implement something asymptotically better than cubic in N, or you could use bitsets to get a solution running in O(N^3 / W), where W is the machine word size.

Suppose we already chose two vertices u and v that should be the endpoints of a diameter of our subtree. Clearly the whole u-v path will be in the subtree. Which other vertices can we include in the subtree without violating the fact that u-v is a diameter? It should be reasonably easy to show that these inclusions don’t influence each other. If a vertex w has dist[u,w] > dist[u,v] and/or dist[v,w] > dist[u,v] we clearly cannot include it. And if we include all other vertices, we will still get a valid subtree with diameter dist[u,v]. (Draw a picture in which the diameter u-v is horizontal and all other edges go downwards. Where in the picture are all the selected vertices? Why doesn’t any pair of selected vertices have a distance bigger than dist[u,v]?)

In this way we could count all good subsets of vertices in O(N^3) time. We would iterate over all pairs (u,v) and for each of them we would count all w that can be added and increment the number of good subsets by pow(2, number of such w).

The above solution has one problem: some subsets can have multiple diameters, and we would count them multiple times. To avoid double-counting we break the ties: for each pair (u,v) with u < v we only count those subsets for which (u,v) is the lexicographically smallest of all diameters. To achieve this, whenever dist[u,w] or dist[v,w] equals the diameter we only include w if the corresponding pair is lexicographically bigger than (u,v): if dist[u,w] equals the diameter we require w > v, and if dist[v,w] equals the diameter we require w > u. (One can verify that any other pair of included vertices at the diameter distance is then automatically lexicographically bigger than (u,v), so checking only the pairs involving u and v suffices.)
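
A minimal sketch of this O(N^3) counting loop, in Python for brevity (not the exact Topcoder signature): it assumes the all-pairs distance table dist is already available, e.g. from one BFS per vertex, and it ignores details such as one-element subsets and any modulus required by the actual problem statement.

def count_by_diameter(N, dist):
    answer = 0
    for u in range(N):
        for v in range(u + 1, N):
            D = dist[u][v]
            addable = 0
            for w in range(N):
                if w == u or w == v:
                    continue
                # w must not be farther than D from either endpoint ...
                if dist[u][w] > D or dist[v][w] > D:
                    continue
                # ... and must not create a lexicographically smaller diameter
                if dist[u][w] == D and (min(u, w), max(u, w)) < (u, v):
                    continue
                if dist[v][w] == D and (min(v, w), max(v, w)) < (u, v):
                    continue
                addable += 1
            # u, v, and any subset of the addable vertices is counted here
            answer += 2 ** addable
    return answer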

This type of solution can be sped up using bitsets. We will process the pairs (u,v) in decreasing order of distance, and within a fixed distance in lexicographic order. For each vertex u we will maintain a bitset of vertices w that are at most as far from u as the current diameter. The number of selected w for a pair (u,v) can then be counted by intersecting these bitsets for u and v. Once we are done processing a specific diameter (u,v), we remove u from v’s bitset and vice versa.
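
The same idea with bitsets, sketched in Python where big integers stand in for real bitsets (to actually get the O(N^3 / W) running time one would use something like C++ std::bitset). The lazy shrinking via the far and pos arrays is an implementation detail assumed here, not part of the original description:

def count_by_diameter_fast(N, dist):
    # allowed[u]: bitset of vertices w that may still accompany u
    allowed = [((1 << N) - 1) ^ (1 << u) for u in range(N)]
    # other vertices sorted by distance from u, farthest first, so that vertices
    # that become too far can be dropped cheaply as the diameter shrinks
    far = [sorted((w for w in range(N) if w != u), key=lambda w: -dist[u][w])
           for u in range(N)]
    pos = [0] * N
    # all pairs, by decreasing distance, lexicographically within equal distance
    pairs = sorted(((u, v) for u in range(N) for v in range(u + 1, N)),
                   key=lambda p: (-dist[p[0]][p[1]], p))
    answer = 0
    for u, v in pairs:
        D = dist[u][v]
        # shrink the bitsets of u and v down to the current diameter D
        for x in (u, v):
            while pos[x] < N - 1 and dist[x][far[x][pos[x]]] > D:
                allowed[x] &= ~(1 << far[x][pos[x]])
                pos[x] += 1
        count = bin(allowed[u] & allowed[v]).count("1")
        answer += 1 << count
        # tie-breaking: later pairs at the same distance may no longer use u with v
        allowed[u] &= ~(1 << v)
        allowed[v] &= ~(1 << u)
    return answer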

Asymptotically faster solutions exist – the size of the set of viable w can be maintained faster, e.g. by keeping track of the middle of the diameter. Details are, as the saying goes, left as an exercise for the reader 🙂

CoolPairs

Each string in the output is either “a” or one of the strings in the input. We can number these in lex order and use their numbers in the rest of the implementation.

Let K = the number of wildcards. In O(2^K * poly) we can precalculate the best solution value for each subset of wildcards, under the extra constraint that all wildcards in the subset have to be replaced by the same string. Once we have this information, we can do another pass of dynamic programming in which we spend O(3^K) to calculate the best actual solution value for each subset of wildcards. For each bitmask we will also store the smallest string we can assign to its first wildcard in some optimal solution.
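
The O(3^K) bound comes from the standard submask enumeration: summed over all masks, the number of (mask, submask) pairs is 3^K, since each wildcard is independently outside the mask, in the mask but not the submask, or in the submask. A minimal illustration of the enumeration (the actual DP transition, combining the precomputed "same string for the whole submask" value with the best value for the remaining wildcards, is only hinted at in the comment):

def submasks(mask):
    # yields every submask of mask, including mask itself and 0
    sub = mask
    while True:
        yield sub
        if sub == 0:
            break
        sub = (sub - 1) & mask

# in the DP pass one would compute, for each mask, roughly:
#   best[mask] = best over nonempty submasks sub of
#                combine(best_same_string[sub], best[mask ^ sub])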

Now, knowing the value of the optimal solution, we can construct the lexicographically minimal one by doing another DP over subsets in which we go left to right and construct partial solutions that are guaranteed to cover the first 1, 2, … wildcards. More precisely, we will compute the values dp[M][mask], where M is the index of a wildcard and mask is a bitmask in which the first M-1 bits are set and the rest is arbitrary. The value dp[M][mask] will be the smallest string that should be assigned to wildcard number M in the lexicographically smallest optimal solution obtainable from the partial optimal solution for the given mask. There are two options: if mask has bit M set, we have already assigned the string to this wildcard earlier and we don’t have to do anything now. In the other case we know that wildcard M is the smallest one where this unknown new string occurs, and thus the optimal choice is the smallest string we precomputed and stored at the end of the previous paragraph. It can be shown that the total time complexity of this phase is also O(3^K).


misof

