Saturday, August 16, 2008

Match summary

In both divisions competitors faced relatively straightforward easy and medium problems, but tough hard problems which elicited only a single passing submission between them. That only passing hard submission, from zhuojie, handed him the win in division 1. An excellent performance from redocpod in only his fifth rated event put him in second place, with Rizvanov_de_xXx completing the top three. In division 2, fast submissions on the easy and medium problems, together with a couple of successful challenges, won ahmed_aly the division by a comfortable margin, with xgy coming second and nike third.

The Problems

RestaurantManager
Used as: Division Two - Level One:
This restaurant operates a first-come, first-served policy on its tables and you are tasked with working out how much business it will lose with this policy. As with many division II easy problems, the solution simply involves carefully simulating the process to arrive at the answer. Step through the arrivals in sequence and, as each group arrives, determine which table it should be allocated to. A useful trick is to maintain a list of the times at which each table will next become available. When a table is allocated to a group, its entry is set to that group's departure time. It is then very simple to determine whether a table is available at a given time.

Embassy
Used as: Division Two - Level Two:
Used as: Division One - Level One:
First notice that if we know the time at which we start filling in the forms, then solving this problem is very simple: just fill in each form as soon as you have received it and get it approved at the first point in time at which the embassy is open after you have finished filling it in. But how do we know when to start? With a day being potentially 10^{6} time units long, a brute force over starting times will work if it is efficiently implemented. However, there is a much more efficient solution, which is nearly as easy to implement. The key here is noticing that there is always an optimal solution in which some form is approved just as the embassy is opening. This is straightforward to prove: take any optimal solution where this isn't the case. If at any point you had completed a form before the embassy had opened, you would be waiting and would get it approved right as the embassy opens. Otherwise, you never have to wait, and the whole solution can be shifted earlier in time, without changing its total length, until some form is completed right at opening time. With at most 50 forms there are therefore at most 50 starting times to consider, so we can try them all. Assume for simplicity that the embassy opens each day at time 0 and closes at time openTime. Initially set the starting time to zero, then for each form, first shift the starting time back by the length of time it takes to fill that form in, so that we arrive at the embassy just as it opens, then simulate the whole process. The optimal time is just the minimum over all the starting times you try. A helpful trick in this problem is noticing that you can work modulo dayLength, which makes it easy to work out whether the embassy is open or not.

Shape3D
Used as: Division Two - Level Three:
This question asks you to canonicalize the description of a 3D volume. Such a canonicalization might be useful if you wanted to check whether two shapes were the same, for example. Unfortunately, this was rather tricky for a division 2 problem and nobody managed to get a working solution during the contest. The first step is to realise that there are only a finite number of rotations that we need to consider, 24 to be exact. We can therefore simply try every possible rotation and take the one that gives the lexicographically minimum description. However, generating all 24 rotations is a tricky little problem in itself. There are many different ways to do this, such as by successive rotation around each coordinate axis, but you have to be very careful to ensure you generate all the possible rotations. A simpler and less error-prone way of generating the rotations is to use vectors. Notice that rotating the shape is the same as describing the shape in a rotated set of axes. However, we are only interested in coordinate systems whose axes are parallel with the current axes. We can therefore pick any two non-parallel directions out of the 6 axis directions to be our new i and j axes and then calculate k to form a right-handed set. We then resolve the shape into this new coordinate system. This sounds an awful lot more complicated than it actually is, particularly if you have a pre-written 3D vector class. C++ pseudocode follows, starting with the x-components of the six axis directions:

int Axis1[] = {1,0,0,-1,0,0};

That's all there is to generating the rotations. So we have our rotated shape, what now? We need to generate the lexicographically minimum description of that shape under some translation. Firstly, notice that we can always get some x-coordinate to become 0, since if every coordinate is greater than 0 (we can't have any negative coordinates in our answer), we can always translate the entire shape in the negative x-direction until some x-coordinate becomes 0.
So let's work out which x-coordinate will become 0 first and translate the entire shape to make that happen. It now helps to do the same for the y- and z-coordinates. This means that any further translation in the negative x-, y- or z-direction would lead to some coordinate becoming negative, so we only have to consider positive translations. Put differently, we can now increase the coordinates in our shape without limit, but from this point on we can't reduce them any further.

int xmin = INF, ymin = INF, zmin = INF;
for (int k = 0; k < rShape.size(); k++) {
    xmin = min(xmin, rShape[k].x);
    ymin = min(ymin, rShape[k].y);
    zmin = min(zmin, rShape[k].z);
}
for (int k = 0; k < rShape.size(); k++) {
    rShape[k].x -= xmin;
    rShape[k].y -= ymin;
    rShape[k].z -= zmin;
}

So now for the lexicographical minimization. For this we need to minimize the first element of the return, then the second element and so on. However, in this case it is sufficient to consider only the first element. Why? Because the only weapon in our armory is to translate the shape around, and no two translations will cause some cube to have the same string representation. So, whichever cube is in the first position of the array will have some translation that minimizes it, and that translation will then determine the positions of the rest of the cubes as well. Consider, therefore, the cube in the first position of the array. Remembering that we can only increase the value of its coordinates, what translation causes it to become lexicographically minimal? It is easy to see that for each coordinate, we need to find the lexicographically minimum number that is greater than or equal to it. For a number X, the number with this property is either X itself, if X is 0 or 1, or the smallest power of 10 that is greater than or equal to X. To see that we always end up with a power of 10, just consider the digits in order: unless X is 0, the first digit must be greater than 0, so we look for solutions where this digit is 1.
We can always find some number greater than X whose first digit is 1. Similarly, considering the remaining digits, we can always find some number greater than X where each successive digit is set to 0. Therefore, we are always looking for a power of 10. So now our translation is fully specified and we can apply it to the whole shape. We can then convert each of our cubes to its string representation and sort these in ascending order to give the overall lexicographically minimum array. However, we need to backpedal a bit: how do we know which element is going to end up in the first position? Since there are at most 50 cubes in the shape, we can just try every cube in the first position and take the lexicographically minimum description that results. If the cube that we have tried doesn't actually end up in the first position after sorting, it doesn't matter, as there is no way it could be the first element of the return anyway, because the translation that causes it to become lexicographically minimal causes another cube to be lexicographically less than it.

for (int first = 0; first < rShape.size(); first++) {

StringInterspersal
Used as: Division One - Level Two:
This problem seems to be a simple ordering problem at first glance, but it has a boundary case that requires a little thinking. Most competitors realised very quickly that the solution is to work from the start of the string and add a single letter at a time. Since we need to minimize the first letter, we choose the smallest available first letter among the words in W, remove it from that word and add it to the return string. We then have a slightly smaller set of words and can simply repeat the process for the remaining letters. Simple, isn't it? Well, not quite, since we also have to handle the case where two or more words start with the same letter. Which word should we choose to remove the first letter from? It seems intuitive that if the first letters are the same, then we should compare the second letters, and if they're the same, compare the third letters, and so on. This looks very similar to a lexicographic ordering, but there is one very important difference, as we will see later. First of all, let us take a look at why this intuition is correct. Consider two words S_{1} and S_{2}, with some maximal shared prefix P, where the letter following P in S_{1} is c_{1} and the corresponding letter in S_{2} is c_{2}. Now consider some final interspersal in which c_{1} occurs before c_{2}: what we can show is that we can always generate this exact string up to and including c_{1} by choosing the first letter of S_{1} first. This is because the prefix of both strings is the same, so that corresponding letters are interchangeable between the strings without changing the result. So if any block of characters from S_{2} is chosen before the corresponding characters from S_{1}, we can simply swap them over and choose that block of characters from S_{1} first. This is most easily demonstrated by example.
If the strings are S_{1} = "COMMONPREFIXA" and S_{2} = "COMMONPREFIXB", the lexicographically minimum interspersal is "CCOMMOMMONONPPREFIREFIXAXB". Suppose it is generated as follows, with the letters taken from S_{2} shown in parentheses:

C (C) (OMM) OMM ON (ON) (P) P REFI (REFI) XA (XB)

There are two blocks here, "(OMM)" and "(P)", where letters of S_{2} are chosen before the corresponding letters of S_{1}. Note that a "block" here is a sequence of consecutive letters in one of the generating strings, which might not be consecutive in the interspersal. Since each block is identical to its counterpart in the other string, we can simply swap them over:

C (C) OMM (OMM) ON (ON) P (P) REFI (REFI) XA (XB)

Now every letter in S_{1} (including the first) is chosen before the corresponding letter in S_{2}. The other thing to notice is that if we swapped the strings entirely and chose every letter from the other string, then the resulting interspersal wouldn't change before the original position of c_{1}, but rather than c_{1} appearing there, c_{2} would appear in its place. This means that c_{1} must be alphabetically earlier than c_{2}, since otherwise we could generate a lexicographically smaller interspersal just by swapping the strings in the input. So what we have shown is that for any two strings, the alphabetically earlier character at the first position at which they differ will come earlier in the resulting interspersal, and it is always possible to achieve this by choosing letters from the string that contains the earlier letter before the corresponding letters of the other string. However, there is a catch: what if there are no differing letters and one string is a proper prefix of the other? Actually, it's quite easy to extend the logic above to show that any interspersal can be changed into one where letters from the longer string are chosen first, without changing the resulting string. This leads us to a "tie-breaking" rule that is the opposite of the standard lexicographical comparison.
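The greedy with this reversed tie-break can be sketched as follows. This is one straightforward way to implement it (the function names are invented), not necessarily the fastest:

```cpp
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

// Returns true if we should take the next letter from a rather than b:
// a wins if it has the earlier letter at the first differing position,
// or if b is a proper prefix of a (the reversed tie-break).
bool takeFirst(const string& a, const string& b) {
    size_t n = min(a.size(), b.size());
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i]) return a[i] < b[i];
    return a.size() > b.size();   // on a proper prefix, the longer string wins
}

// Repeatedly remove the first letter of the "earliest" word under the
// ordering above and append it to the result.
string interspersal(vector<string> W) {
    string res;
    while (true) {
        int best = -1;
        for (int i = 0; i < (int)W.size(); i++) {
            if (W[i].empty()) continue;
            if (best == -1 || takeFirst(W[i], W[best])) best = i;
        }
        if (best == -1) break;    // every word is exhausted
        res += W[best][0];
        W[best].erase(W[best].begin());
    }
    return res;
}
```

The prefix case is where the tie-break matters: for {"BA", "B"} this produces "BAB", whereas taking the shorter string first (the standard lexicographic order) would give "BBA".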
So the correct solution to this problem is as follows: define an ordering of the strings, where a string S_{1} comes before a string S_{2} if S_{1} either has the earlier character at the first position at which they differ, or S_{2} is a proper prefix of S_{1}. Then simply repeatedly extract the first letter of the earliest string under this ordering. There are a few other tricks that work as well, such as comparing S_{1} + S_{2} to S_{2} + S_{1}, or padding the strings with very late characters (e.g. 'Z' + 1). Showing that the method you employed is correct is left as homework.

KPlanetaryAlignment
Used as: Division One - Level Three:
This was an innocuous-looking, but actually quite tricky counting problem with an evil boundary case that knocked out all but one of the submissions. Many competitors counted the number of distinct lines that were formed, when you were asked for the number of distinct times. Certain tricky test cases led to two disjoint alignments happening simultaneously, and most of the solutions double-counted these events. The first step in solving this problem is to answer the question: given some subset of the planets, how often are they all aligned? We can find the pairwise alignments between the planets in the subset, then work out when they all happen simultaneously. Alternatively, another way to visualize this is to pick some planet in the subset and imagine that we are rotating around the star, such that to us, this planet appears to remain stationary. The advantage of this is that now, rather than having to consider that the planets have to line up along some arbitrary line, this planet is sitting on the same line constantly, so we only have to consider the points in time when the other planets pass this stationary line. If we calculate the new periods of our planets in this rotating frame of reference, each planet will lie on this line once every half a period. The new period of each planet is given by:

1/T_{i rotating} = 1/T_{i} - 1/T_{0}

=> T_{i rotating} = (T_{i} * T_{0}) / (T_{i} - T_{0})

where T_{0} is the period of the planet we have chosen to remain stationary and T_{i} is the period of some other planet in the subset. Remember to take the absolute value of this if one of the planets has a negative period. Alignments of planet 0 and planet i happen with a period given by half this value. We now have to work out how often these crossings all happen simultaneously. This is simply the lowest common multiple (LCM) of all the periods, divided by 2. However, the periods are all fractions, so we need to work out how we can calculate the LCM of a set of fractions.
First of all, write all the fractions in their lowest terms. Next, consider that we are looking for a set of smallest integers such that when we multiply each fraction by its respective integer, we arrive at the same value for each one. For this to be true, both the numerators and the denominators must end up the same. Let's start with the denominator: when we multiply a fraction by an integer, the denominator only changes due to cancellation, when it shares a factor with the number by which we are multiplying. We therefore need to cancel out factors of the denominators until they are all the same. The only factors that can be left are those that are shared between all the fractions. The denominator of the answer is therefore the greatest common divisor (GCD) of the denominators of the original fractions. Now, once all the denominators are the same, we need to work out how to make the numerators the same. With the denominators equal, this is just a standard LCM calculation. The LCM of a set of fully reduced fractions is therefore:

LCM(numerators) / GCD(denominators)

So now that we can work out the period of alignment of some subset of the planets, we simply take every k-sized subset, work out its period of alignment, use a simple division to work out how many of its alignments occur in the specified time range and add them all together to get the total, right? Nearly, but not quite: the problem is that the sets of times at which these alignments happen overlap, and we are asked for the number of distinct times. We need some way of calculating the size of the union of a number of overlapping sets when the potential size of the answer is far too big to consider each time separately. Luckily, the inclusion/exclusion principle provides exactly such a method. It describes the size of a union of sets in terms of the sizes of their intersections, which are often much easier to calculate.
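The fraction-LCM rule above can be sketched directly in code. The Frac type and function names here are invented for illustration, and positive, fully reduced values are assumed:

```cpp
typedef long long ll;

ll gcd_ll(ll a, ll b) { return b == 0 ? a : gcd_ll(b, a % b); }
ll lcm_ll(ll a, ll b) { return a / gcd_ll(a, b) * b; }

// A fraction num/den, stored fully reduced (positive values assumed).
struct Frac {
    ll num, den;
    Frac(ll n, ll d) {
        ll g = gcd_ll(n, d);
        num = n / g;
        den = d / g;
    }
};

// LCM of two fully reduced fractions: LCM(a/b, c/d) = LCM(a,c) / GCD(b,d).
Frac lcmFrac(const Frac& p, const Frac& q) {
    return Frac(lcm_ll(p.num, q.num), gcd_ll(p.den, q.den));
}
```

For example, the LCM of 3/4 and 5/6 is LCM(3,5)/GCD(4,6) = 15/2, and indeed 15/2 is an integer multiple of both (10 times 3/4 and 9 times 5/6).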
An intersection here is the number of times within the specified time period at which two or more different k-subsets are aligned simultaneously. Notice that each k-subset alignment is periodic with a fractional period, and we want to work out how often several of them happen simultaneously: this is exactly the same calculation that we have just performed with the pairwise planetary alignments. The inclusion/exclusion principle tells us to consider every possible subset of the sets we are trying to union and calculate the size of the intersection of these sets. If we have just chosen an odd number of sets, then we add this size to the total, otherwise we subtract it from the total. That's all there is to it! A note on complexity: inclusion/exclusion takes time exponential in the number of sets. The worst case here is when we have the largest number of k-subsets, which happens when N is 5 and k is 2 or 3, where there are 10 subsets to consider. This is small enough that inclusion/exclusion easily runs in time. With N = 6 it would be very hard to write a solution that doesn't time out. Notice also that the constraints were carefully set so that none of the values in the calculation would overflow a 64-bit integer, so it is unnecessary to use arbitrary-precision integer classes.
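To make the inclusion/exclusion step concrete, here is a simplified sketch that counts distinct event times for integer periods. The actual problem uses fractional periods and phases, but the alternating add/subtract pattern over subsets is exactly the same; the function name is invented:

```cpp
#include <vector>
using namespace std;
typedef long long ll;

ll gcd2(ll a, ll b) { return b == 0 ? a : gcd2(b, a % b); }
ll lcm2(ll a, ll b) { return a / gcd2(a, b) * b; }

// Count the integers t in [1, T] that are a multiple of at least one of
// the given periods, by inclusion/exclusion: for every non-empty subset
// of the periods, count multiples of the subset's LCM in [1, T], adding
// the count for odd-sized subsets and subtracting it for even-sized ones.
ll countDistinctTimes(const vector<ll>& periods, ll T) {
    int m = periods.size();
    ll total = 0;
    for (int mask = 1; mask < (1 << m); mask++) {
        ll l = 1;
        int bits = 0;
        for (int i = 0; i < m; i++)
            if (mask & (1 << i)) { l = lcm2(l, periods[i]); bits++; }
        ll count = T / l;                 // multiples of l in [1, T]
        total += (bits % 2 == 1) ? count : -count;
    }
    return total;
}
```

With periods {2, 3} and T = 10 this gives 5 + 3 - 1 = 7, correctly counting the time t = 6 (where both events fire) only once.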
