Wednesday, June 1, 2005

Match summary

In 1957, the Soviet Union launched Sputnik I, marking the beginning of the space race. SRM 245 may be remembered as TopCoder's version of Sputnik. With a convincing win in only his eighth match, Petr rocketed to 4th place in the Top 10 list, only one point out of 3rd. His countrymen also put on a show in Division 2, with newcomers from the Russian Federation—led by andrewzta and halyavin—taking positions 1, 2, 4, 12, and 16.

The Problems

Straights
Used as: Division Two - Level One:
If you have, say, X 5s, Y 6s, and Z 7s, then you have X*Y*Z 5-6-7 straights. So the problem boils down to taking products of adjacent counts, as in

```java
int sum = 0;
for (int i = 0; i <= hand.length - k; i++) {
    int prod = 1;
    for (int j = 0; j < k; j++)
        prod *= hand[i + j];
    sum += prod;
}
return sum;
```

Every solution I looked at took this approach. A more efficient algorithm takes advantage of the fact that neighboring products overlap. For example, the product of positions 1-5 and the product of positions 2-6 share the product of positions 2-5. To get the new product, you multiply the old product by hand[6] and divide by hand[1], being careful to avoid dividing by 0. In this fashion, you can get an O(N) algorithm instead of an O(N^2) algorithm. Of course, efficiency was not an issue for this problem, so it was probably wiser to stick with the slower but simpler code.

Flush
Used as: Division Two - Level Two:
Used as: Division One - Level One:
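The O(N) refinement described above can be sketched as follows. This is a minimal sketch, not a reference solution; the method name and the trick of counting zeros separately (so the running product never divides by zero) are my own.

```java
public class Straights {
    // Sliding-window product: counts length-k straights in O(N).
    // Zeros in the window are tracked separately, so the running
    // product only ever contains non-zero factors.
    static long countStraights(int[] hand, int k) {
        long sum = 0, prod = 1;
        int zeros = 0;
        for (int i = 0; i < hand.length; i++) {
            if (hand[i] == 0) zeros++; else prod *= hand[i];
            if (i >= k) {                       // drop hand[i-k] from the window
                if (hand[i - k] == 0) zeros--; else prod /= hand[i - k];
            }
            if (i >= k - 1)                     // window [i-k+1 .. i] is full
                sum += (zeros > 0) ? 0 : prod;
        }
        return sum;
    }
}
```

Each window's product is reported as 0 whenever the window contains a zero count, which is exactly what multiplying through would have produced.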
If you have A cards from suit 0, B cards from suit 1, and so on, then the size of your largest flush is max(A,B,C,D). There are Choose(suits[0],A) * ... * Choose(suits[3],D) hands with this distribution, out of a total of Choose(suits[0]+...+suits[3], A+B+C+D) possible hands. Therefore, the contribution of this type of hand to the overall expected size is

max(A,B,C,D) * Choose(suits[0],A) * ... * Choose(suits[3],D) / Choose(suits[0]+...+suits[3], A+B+C+D)

We sum up these values for all possible values of A, B, C, and D that add up to the desired number of cards. The easiest way to generate these possible values is four nested loops:

```java
double expected = 0;
for (int A = 0; A <= suits[0]; A++)
    for (int B = 0; B <= suits[1]; B++)
        for (int C = 0; C <= suits[2]; C++)
            for (int D = 0; D <= suits[3]; D++) {
                if (A + B + C + D != number) continue;
                expected += ...
            }
return expected;
```

Actually, because D = number - A - B - C, we could get by with three nested loops instead of four, but efficiency doesn't matter for this problem, and the four-loop version is slightly easier to code. It is possible to use the straightforward definitions of factorial and choose, provided you use doubles instead of integers. Also, since all the fractions share the same denominator, it is a good idea to do that division once at the end, rather than dividing every term along the way. The most common variation on this theme was to calculate the denominator by keeping a running sum of the numerators, instead of using choose directly, thereby avoiding some very large factorials.

CardCosts
Used as: Division Two - Level Three:
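A complete version of the computation might look like the sketch below. The names `expectedFlush` and `choose` are illustrative, not from any competitor's code; `choose` works in doubles to avoid integer overflow, and the common denominator is divided out once at the end, as suggested above.

```java
public class Flush {
    // Binomial coefficient computed in doubles to avoid overflow.
    static double choose(double n, int k) {
        double r = 1;
        for (int i = 0; i < k; i++)
            r = r * (n - i) / (i + 1);
        return r;
    }

    // Expected size of the largest flush when drawing `number` cards
    // from a deck with suits[0..3] cards in each suit.
    static double expectedFlush(int[] suits, int number) {
        double total = 0;   // running sum of the numerators
        double hands = choose(suits[0] + suits[1] + suits[2] + suits[3], number);
        for (int A = 0; A <= suits[0]; A++)
            for (int B = 0; B <= suits[1]; B++)
                for (int C = 0; C <= suits[2]; C++)
                    for (int D = 0; D <= suits[3]; D++) {
                        if (A + B + C + D != number) continue;
                        double ways = choose(suits[0], A) * choose(suits[1], B)
                                    * choose(suits[2], C) * choose(suits[3], D);
                        total += ways * Math.max(Math.max(A, B), Math.max(C, D));
                    }
        return total / hands;   // divide by the common denominator once
    }
}
```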
Many people probably tried to use dynamic programming on this problem, but a greedy solution was all that was necessary. The basic idea is to buy the cards one at a time, from whichever round is currently cheapest. If you have already purchased c cards from round r, then the next card from round r will cost k^r * (2*c + 1), where the 2*c + 1 comes from the difference between (c+1)^2 and c^2. You can use this formula to find the cheapest round. Then you add the cost to a running total and increment the count for that round, as in

```
totalCost = 0
count = new int[1000]   // number of cards bought in each round
loop n times
    loop over the rounds to find the cheapest one
    totalCost += cost of cheapestRound
    count[cheapestRound]++
return totalCost
```

The key is to limit the number of rounds you need to consider. One possibility is to find the largest round for which k^r < n^2. (k^r < 2*n is an even better bound, but takes a little more thinking.) Another possibility is to ignore round r+1 until you have already purchased a card from round r. Either approach works fine, except when k = 1. In that case, the best strategy is to buy each card in a different round, for a total cost of n. In all other cases, the number of rounds will be small because of the exponential growth of k^r. See andrewzta's solution for a nice implementation of this approach.

BikeLock
Used as: Division One - Level Two:
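A runnable version of the greedy might look like this. It is a sketch under the assumptions stated above: the k = 1 case is handled separately, and only rounds with k^r < 2*n are generated (the tighter bound mentioned in the text). The method name `minCost` is my own.

```java
import java.util.*;

public class CardCosts {
    // Greedy: repeatedly buy the next card from the currently cheapest
    // round; the (c+1)-th card from round r costs k^r * (2*c + 1).
    static long minCost(long n, long k) {
        if (k == 1) return n;   // one card per round, each costing 1
        // Rounds with k^r >= 2n can never be cheapest, so stop there.
        List<Long> pow = new ArrayList<>();
        for (long p = 1; p < 2 * n; p *= k) pow.add(p);
        long[] count = new long[pow.size()];
        long total = 0;
        for (long bought = 0; bought < n; bought++) {
            int best = 0;
            for (int r = 1; r < pow.size(); r++)
                if (pow.get(r) * (2 * count[r] + 1)
                        < pow.get(best) * (2 * count[best] + 1))
                    best = r;
            total += pow.get(best) * (2 * count[best] + 1);
            count[best]++;
        }
        return total;
    }
}
```

The bound works because the most expensive card the greedy ever buys from round 0 costs 2*n - 1, so a round whose first card already costs 2*n or more is never selected.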
I wonder about this problem every day when I attach one of these locks to my laptop. This problem is easiest to solve using dynamic programming/memoization. First, note that the order in which rotations are performed does not matter, so we might as well do them from left to right. Of course, because a single rotation can involve up to three adjacent disks, the notion of left to right is a little ambiguous. Here I mean left to right according to the leftmost disk involved in each rotation.

In the process of turning the leftmost disk to its desired position, we can also turn the next two disks into whatever positions we want. So we consider all 100 possibilities for the positions in which we leave the next two disks while turning the leftmost disk to its desired position. For each possibility, there are two costs to be considered: the cost of turning the leftmost three disks into the appropriate positions, plus the cost to recursively solve the rest of the problem for all but the leftmost disk. We add these two costs together for each of the 100 possibilities, and keep the cheapest result.

The key calculation is the cost of turning the leftmost three disks into their appropriate positions. All turns that do not involve the first disk will be accounted for in the recursive call, so for this part of the calculation, we can assume that all turns involve the first disk. In other words, we can turn the first disk by itself, the first two disks together, or the first three disks together.
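One way to flesh out this recursion is sketched below. Everything here is illustrative naming: the memo state is (index, current position of this disk, current position of the next disk), two fictitious disks at position 0 are padded on the right so every state has three disks to work with (turning a fictitious disk costs nothing real, and we never care where it ends up), and `cost3` implements the three-stage turning cost described next in the text.

```java
import java.util.*;

public class BikeLock {
    static final int[] TURNS = {0, 1, 1, 1, 2, 2, 2, 1, 1, 1};
    static int n;
    static int[] cur, want;
    static int[][][] memo;

    // Cost of turning disks at positions (a,b,c) to (x,y,z) using only
    // turns that involve the first of the three disks.
    static int cost3(int a, int b, int c, int x, int y, int z) {
        int cost = TURNS[Math.abs(z - c)];      // all three disks together
        a = (a + z - c + 10) % 10;
        b = (b + z - c + 10) % 10;
        cost += TURNS[Math.abs(y - b)];         // first two disks together
        a = (a + y - b + 10) % 10;
        cost += TURNS[Math.abs(x - a)];         // first disk alone
        return cost;
    }

    // Minimum turns to finish disks i..n-1, given that disk i is at a
    // and disk i+1 is at b (later disks are still at their start positions).
    static int solve(int i, int a, int b) {
        if (i == n) return 0;                   // only fictitious pads remain
        if (memo[i][a][b] >= 0) return memo[i][a][b];
        int c = cur[i + 2], best = Integer.MAX_VALUE;
        for (int y = 0; y < 10; y++)            // all 100 choices for the
            for (int z = 0; z < 10; z++)        // next two disks' positions
                best = Math.min(best,
                    cost3(a, b, c, want[i], y, z) + solve(i + 1, y, z));
        return memo[i][a][b] = best;
    }

    static int minTurns(int[] dials, int[] target) {
        n = dials.length;
        cur = Arrays.copyOf(dials, n + 2);      // pad with two disks at 0
        want = Arrays.copyOf(target, n + 2);
        memo = new int[n][10][10];
        for (int[][] m : memo) for (int[] r : m) Arrays.fill(r, -1);
        return solve(0, cur[0], cur[1]);
    }
}
```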
Now, if the first three disks are currently on positions A, B, and C, and we want to turn them to positions X, Y, and Z, then the cost can be calculated as follows:

```java
// first get the third disk into position by turning all three disks
cost = turns[abs(Z - C)];
A = (A + Z - C + 10) % 10;
B = (B + Z - C + 10) % 10;
// then get the second disk into position by turning the first two disks
cost += turns[abs(Y - B)];
A = (A + Y - B + 10) % 10;
// finally get the first disk into position by turning it alone
cost += turns[abs(X - A)];
```

The turns array says how many turns are necessary to rotate a disk by a given distance, where each turn moves a disk up to three positions in either direction:

```java
turns = {0, 1, 1, 1, 2, 2, 2, 1, 1, 1}
```

SandTimers
Used as: Division One - Level Three:
Normally I'm a big fan of dynamic programming, but this one seems easier to me to approach as a graph problem. The nodes are the different amounts of sand that can be in the upper chambers of the sand timers. For example, if you have a 5-minute timer and a 7-minute timer, then one node in the corresponding graph might be 0 minutes left in the 5-minute timer and 2 minutes left in the 7-minute timer. There is an edge from state S to state T if you can reach state T by turning over some number of timers and letting the sand fall until one non-empty timer becomes empty. The cost of this edge is the amount of time needed for the timer to run out. Building the reachable portion of the graph can be accomplished by a depth-first search.

Once the graph is built, we process it in two ways. First, we find the earliest time that each node can be reached, using either Dijkstra's algorithm or a simpler-to-code flood fill to find the shortest path from the start node (all timers empty) to every reachable node. Second, we determine which intervals can start at each node. This can be accomplished by a kind of backwards depth-first search using a boolean array at each node to keep track of which intervals are possible. We start by marking each node as being the start of an interval of length 0. Then, whenever we mark a node with an interval of length I, we recursively mark all its predecessors with intervals of length I+W, where W is the weight of the edge from the predecessor to the current node. We stop whenever the corresponding boolean has already been set. Performing this marking in the backwards direction means that we have to store reverse edges when we build the graph. We could certainly try to do marking in the forwards direction, but that would yield information about the intervals that end at each node, and it seems more useful to find which intervals start at each node.

Finally, we can determine all the possible intervals by combining the results of the previous two phases.
For each reachable node and each interval beginning at that node, if the shortest path to that node plus the length of the interval does not exceed maxTime, then this interval is marked as possible in a global array. At the end, we make an array of all the intervals that are not marked. 
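The first two ingredients, generating each state's outgoing edges and flood-filling the earliest arrival times, might be sketched as follows. This is an assumption-laden sketch, not a full solution: a state is the list of upper-chamber amounts, `cap[i]` is the capacity of timer i, flipping a timer with u minutes in its upper chamber leaves cap[i] - u there, and the interval-marking phase is omitted. All names are my own.

```java
import java.util.*;

public class SandTimers {
    // Edges out of `state`: flip any subset of timers, then let sand run
    // until one non-empty upper chamber empties; edge weight = elapsed time.
    static Map<List<Integer>, Integer> successors(List<Integer> state, int[] cap) {
        int n = cap.length;
        Map<List<Integer>, Integer> out = new HashMap<>();
        for (int mask = 0; mask < (1 << n); mask++) {
            int[] up = new int[n];
            for (int i = 0; i < n; i++) {
                up[i] = state.get(i);
                if ((mask & (1 << i)) != 0) up[i] = cap[i] - up[i]; // flip i
            }
            int t = Integer.MAX_VALUE;
            for (int u : up) if (u > 0) t = Math.min(t, u);
            if (t == Integer.MAX_VALUE) continue;   // no timer is running
            List<Integer> next = new ArrayList<>();
            for (int u : up) next.add(Math.max(0, u - t));
            Integer old = out.get(next);
            if (old == null || t < old) out.put(next, t);
        }
        return out;
    }

    // Flood-fill relaxation of earliest arrival times from the
    // all-empty start state (all sand in the lower chambers).
    static Map<List<Integer>, Integer> earliest(int[] cap) {
        List<Integer> start = new ArrayList<>(Collections.nCopies(cap.length, 0));
        Map<List<Integer>, Integer> dist = new HashMap<>();
        dist.put(start, 0);
        Deque<List<Integer>> queue = new ArrayDeque<>();
        queue.add(start);
        while (!queue.isEmpty()) {
            List<Integer> s = queue.poll();
            int d = dist.get(s);
            for (Map.Entry<List<Integer>, Integer> e : successors(s, cap).entrySet()) {
                Integer old = dist.get(e.getKey());
                if (old == null || d + e.getValue() < old) {
                    dist.put(e.getKey(), d + e.getValue());
                    queue.add(e.getKey());      // re-relax from improved node
                }
            }
        }
        return dist;
    }
}
```

The keys of the returned map are exactly the reachable nodes of the graph, so the same successor generation can be reused (with reverse edges stored) for the backwards interval-marking phase.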
