Saturday, November 20, 2004

Match summary

In Division 1, SnapDragon won with a commanding lead; more than 400 points behind were Ulan and Yarin in second and third. The easy problem posed little difficulty for seasoned coders, while the medium and hard drew a variety of different solutions, and saw a variety of reasons for many of those solutions not surviving. Only the top 5 finishers correctly solved all three problems. In room 7, natori earned 5 correct challenges on the medium problem, a feat that far surpassed anyone else in the challenge round.

Division 2 was championed by a newbie, unChabonSerio, who in the process earned the 14th spot on the all-time most impressive debut chart... congratulations! A new, soon-to-be target? We shall see. Antoni and titid_gede took second and third, while newbies jmbeverl and kindloaf rounded out the top 5, again a very impressive first-time performance.
The Problems

WaiterTipping
Used as: Division Two - Level One
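As the discussion below notes, a direct brute force search over candidate values suffices here. A minimal sketch of that pattern, using a hypothetical inequality (a bill plus a 10% tip must fit within the money on hand) in place of the actual statement's condition:

```java
public class LargestSatisfying {
    // Hypothetical stand-in for the statement's inequality: does this
    // candidate bill, plus a 10% tip, fit within the money available?
    static boolean satisfies(int candidate, int money) {
        return candidate + candidate / 10 <= money;
    }

    // Brute force: walk down from the largest conceivable value and
    // return the first candidate satisfying the inequality (-1 if none).
    static int largest(int money) {
        for (int candidate = money; candidate >= 0; candidate--) {
            if (satisfies(candidate, money)) return candidate;
        }
        return -1;
    }
}
```

The point is the shape of the loop, not the particular condition: with constraints this small, scanning every candidate is simpler and safer than trying to solve the inequality algebraically.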
This problem wasn't so much about finding a clever solution as it was about carefully reading the problem statement, which very clearly gives an inequality and asks for the largest value for which the inequality is satisfied. A quick look at the problem constraints confirms that a brute force search is indeed feasible. Perhaps the biggest gotcha here was the temptation to try to "solve" the equation and compute the result directly; the last example was intended to show a case where this reasoning could get a coder in trouble. Indeed, that was probably the most common flaw in solutions that did not make it.

HealthFood
Used as: Division Two - Level Two
Used as: Division One - Level One
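The recursive comparison described in the discussion below can be sketched as follows. The data layout here is hypothetical (three numeric criteria per entree, keyed by letters in the diet string, with lower values assumed better); the real problem's criteria differ, but the tie-breaking recursion is the same:

```java
import java.util.HashMap;
import java.util.Map;

public class DietChooser {
    // Hypothetical layout: entrees[i] holds three scores, one per criterion,
    // and the diet string orders criteria by priority (e.g. "CFS").
    static int[][] entrees;
    static Map<Character, Integer> column = new HashMap<>();
    static { column.put('C', 0); column.put('F', 1); column.put('S', 2); }

    // True if entree b fits the diet better than entree a. Compare on the
    // diet's first criterion; on a tie, recurse with that criterion removed.
    static boolean isBetter(int a, int b, String diet) {
        if (diet.isEmpty()) return false;             // equal on all criteria
        int col = column.get(diet.charAt(0));
        if (entrees[a][col] != entrees[b][col])
            return entrees[b][col] < entrees[a][col]; // assume lower is better
        return isBetter(a, b, diet.substring(1));
    }

    // Start with entree 0 selected, then keep any entree that beats it.
    static int pick(String diet) {
        int best = 0;
        for (int i = 1; i < entrees.length; i++)
            if (isBetter(best, i, diet)) best = i;
        return best;
    }
}
```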
The problem of searching/sorting a list of items based upon multiple criteria is interesting not only because of how frequently it comes up in many programming tasks, but also because of the variety of ways to approach a solution. Some of the more seasoned coders were able to take advantage of standard library code and implement a clean and elegant solution using a custom comparison method. (Depending on your language of choice, the Java API, C++ STL, and .NET framework all offer some type of comparer interface to work from.) Nonetheless, it was still very possible to come up with a good solution without knowing how to use a custom comparer. A fairly clean solution, which does not rely upon a comparer interface, has as its cornerstone a function, isBetter, which takes two indexes and a diet string, and determines whether the second entree is a better fit for the diet plan than the first. Using a switch (or Select Case) statement, it compares the two entrees based upon the first letter of the diet plan. If they differ, it returns the appropriate result; otherwise it calls itself recursively with the first character removed from the diet plan description, so that the next most important criterion can be considered. Once such a function is in place, it is a simple matter of setting the selected entree to 0 for each person, then looping over each possible entree to determine whether it is a better fit for the diet plan than the one already selected.

PayBill
Used as: Division Two - Level Three
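The dynamic programming plus backtracking idea outlined in the discussion below can be written out in full. This sketch uses an array of "last meal added" indexes for the backtracking, and leans on the problem's guarantee that the solution is unique:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SubsetSum {
    // Returns the indexes of the subset of meals summing exactly to total,
    // or null if no subset does. Relies on the problem's guarantee that
    // the solution, when it exists, is unique.
    static int[] findSubset(int[] meals, int total) {
        boolean[] canTotal = new boolean[total + 1];
        int[] last = new int[total + 1];   // last[j] = meal index that was
        Arrays.fill(last, -1);             // added last to first reach sum j
        canTotal[0] = true;
        for (int i = 0; i < meals.length; i++)
            for (int j = total; j >= meals[i]; j--)
                if (!canTotal[j] && canTotal[j - meals[i]]) {
                    canTotal[j] = true;
                    last[j] = i;
                }
        if (!canTotal[total]) return null;
        // Walk back through last[] to recover the contributing indexes.
        List<Integer> picked = new ArrayList<>();
        for (int j = total; j > 0; j -= meals[last[j]]) picked.add(last[j]);
        Collections.reverse(picked);
        int[] result = new int[picked.size()];
        for (int k = 0; k < result.length; k++) result[k] = picked.get(k);
        return result;
    }
}
```

Because the inner loop runs j downward and each canTotal[j] is set at most once, each recovered index is strictly smaller than the previous one, so the backtracking chain never reuses a meal.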
This is a classic subset sum problem. Coders who quickly saw this, and were familiar with the typical algorithms for approaching such a problem, had a considerable advantage here. One small wrench was that some type of backtracking had to be implemented in order to generate the result required by the problem, i.e. the actual elements that contribute to the given sum. There are two basic approaches to this problem. The first, which inevitably fails due to timeout on larger test cases, is to try to obtain the sum by either including or excluding the first element, and then calling itself recursively on the remainder of the set. Unfortunately, with the maximum of 50 people, this is over 1 quadrillion operations. Instead, a more clever, dynamic programming approach needs to be used. Since each item can only be up to 10,000, we know that the sum cannot be more than 500,000. So, we simply need a boolean array of 500,000 elements, where the ith element represents whether or not we can reach the total i with some subset of the original values. In code it looks something like this:

    boolean[] canTotal = new boolean[500001];
    canTotal[0] = true;
    for (int i = 0; i < meals.length; i++)
        for (int j = totalMoney; j >= meals[i]; j--)
            if (canTotal[j - meals[i]])
                canTotal[j] = true;

But we also need to keep some kind of backtracking information. We can do this with bitmasks, or even simply an array of 500,000 integers, each indicating the last element that had to be added to get the given total. Then, at the end, we can just look at the integer array to grab the indexes of those elements that contributed to the total, and construct our result. Note here that the constraints guarantee exactly one unique solution. This is the saving constraint that allows us to get away with this method. If multiple solutions were possible, we would need to be more clever about how we kept track of the desired result.

TurntableService
Used as: Division One - Level Two
Both this problem and the hard were inspired by an actual occurrence: a group of people going out for Chinese food, sitting at a round table (with a turntable in the middle), and ordering only after a long discussion of what to order to make everyone happy. So, what's a person to do? Package the whole experience into SRM problems, of course! This problem is really a subset covering problem. However, the timings given for moving the turntable add an interesting nuance, because there may be multiple ways to stop the turntable at, say, 6 different positions and give everyone a chance to serve themselves a favorite entree. Of those several permutations, some involve less total rotation, and hence are faster. A related question is: given a set of stop positions, what is the optimal rotation path that stops at all of them? In fact, the optimal path either goes in one direction only, or starts in one direction, then goes back and continues in the opposite direction for the remainder of the stops. This observation is a bit subtle at first, but once realized, it isn't overly hard to determine the minimum time it takes to stop at a given set of positions. There are several reasonable approaches to this problem, one of which is a breadth first search. The only quirk is that, depending on how the search is coded, you might not be able to stop immediately upon finding a set of stopping points that works, because (as explained above) another set of the same number of stopping points might also work, in less time.
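The path observation above can be made concrete. Assuming unit time per seat position (the actual problem supplies its own timings), the minimum rotation needed to stop at every offset in a set comes from splitting the sorted offsets into a clockwise group and a counterclockwise group, and paying for the shorter leg twice (once out, once back):

```java
import java.util.Arrays;

public class TurntableRotation {
    // Minimum rotation (in seat positions) to stop at every offset in
    // stops, starting from offset 0 on an n-seat turntable. Offsets are
    // clockwise distances in [0, n). The optimal path goes one way only,
    // or one way and then reverses once, so we try every split of the
    // sorted offsets into a clockwise prefix and a counterclockwise suffix.
    static int minRotation(int n, int[] stops) {
        int[] s = stops.clone();
        Arrays.sort(s);
        int best = Integer.MAX_VALUE;
        for (int k = -1; k < s.length; k++) {
            int cw = (k >= 0) ? s[k] : 0;                    // farthest clockwise stop
            int ccw = (k + 1 < s.length) ? n - s[k + 1] : 0; // farthest counterclockwise stop
            // Either sweep clockwise first and then reverse, or the opposite.
            best = Math.min(best, Math.min(2 * cw + ccw, cw + 2 * ccw));
        }
        return best;
    }
}
```

For example, on a 10-seat table with stops at offsets 2 and 9, going clockwise to 2, reversing, and continuing counterclockwise to 9 costs 2 + 2 + 1 = 5, but going counterclockwise to 9 first and then clockwise to 2 costs only 1 + 1 + 2 = 4.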
A previously unconsidered solution was first seen during the actual competition: dynamic programming on the set of people who have been served, together with the elapsed time. The logic that makes it work is that each rotation can only add people to the set of those already served (or leave it unchanged), so processing states in order by that bitmask is legitimate. A third possibility is simple brute force. With a maximum of 15 people, there are only 32,768 subsets of stopping points. With reasonably efficient code, it's possible to check all of the possibilities. One simple optimization is to avoid stopping at any location that doesn't give anyone a favorite; each stopping point that can be eliminated cuts the number of cases to evaluate in half. For each subset that satisfies everyone, you simply calculate the minimum time to visit all of those points (as outlined above), and return the minimum of those values. In any case, while it was not especially difficult to think of how to code a solution to this problem, there were several things to consider, so it was easy to get caught on a subtle flaw, especially in system testing.

OrderFood
Used as: Division One - Level Three
Here is a problem that really is best done by a computer. For all but the most trivial examples, it can be difficult to determine, simply by looking at the inputs, whether or not a given solution is correct. At first it seems the only reasonable approach is to try the combinations of different entrees and see which combinations satisfy the requirement of exactly two favorites for every person. But since the constraints allow a maximum of 30 entrees, it's easy to see that this will be far too slow on larger test cases; any such solution is O(2^n), which just won't cut it. Next, we try to optimize in a number of ways: don't consider any entrees that aren't anyone's favorite, and stop considering any subset once someone has more than two favorites. This helps, certainly, but clearly not every possible case will see enough optimization in this manner. We still need something more efficient. In this case, the answer is the technique of divide and conquer. First divide the entrees into two groups, with no more than 15 in each. Now each group has at most 32,768 subsets of entrees to consider. We build a bitmask for each subset, indicating which people have their favorites satisfied by the given combination of entrees... but the trick here is that we need to use 2 bits per person (and with 15 people, that's 30 bits, so a 32-bit int still does the trick!), so that we can indicate whether 0, 1, or 2 of their favorites are available. As we perform this calculation for each half of the entrees, we store the results in a structure like a Hashtable (or even just an array that we sort at the end), maintaining the minimum cost of entrees needed to obtain each possible bitmask of people's favorites. Then, for each resulting value in the first set, we search for a corresponding value in the second set (here is where a Hashtable, or a sorted array that we can binary search, comes into play) to complement our bitmask.
For each complementary pair of bitmasks we find, we select the lowest total cost. Implementing the optimizations described earlier is an important step towards keeping the size of the hashtable or array from getting too large, and definitely improves the overall efficiency of the algorithm.
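A compact sketch of the meet-in-the-middle approach described above. The input layout here is hypothetical (fav[p] is a bitmask of person p's favorite entrees; cost[e] is entree e's price), but the core trick, two bits per person and an exact-complement lookup, is the one discussed:

```java
import java.util.HashMap;
import java.util.Map;

public class OrderFoodSketch {
    // Hypothetical layout: fav[p] is a bitmask over entrees of person p's
    // favorites; cost[e] is entree e's price. Find the cheapest subset of
    // entrees giving every person EXACTLY two favorites, or -1 if none.
    static int cheapest(int people, int[] fav, int[] cost) {
        int n = cost.length, half = n / 2;
        // For each subset of the first half, record the cheapest cost per
        // "count mask" (2 bits per person: how many favorites it provides).
        Map<Integer, Integer> bestLeft = new HashMap<>();
        for (int sub = 0; sub < (1 << half); sub++) {
            int mask = countMask(people, fav, sub, 0);
            if (mask < 0) continue;                 // someone already over 2
            bestLeft.merge(mask, subsetCost(cost, sub, 0), Math::min);
        }
        int full = 0;                               // mask: every person at 2
        for (int p = 0; p < people; p++) full |= 2 << (2 * p);
        int best = -1;
        for (int sub = 0; sub < (1 << (n - half)); sub++) {
            int mask = countMask(people, fav, sub, half);
            if (mask < 0) continue;
            // No 2-bit field exceeds 2, so the complement never borrows
            // between fields and plain subtraction finds the exact match.
            Integer left = bestLeft.get(full - mask);
            if (left == null) continue;
            int total = left + subsetCost(cost, sub, half);
            if (best < 0 || total < best) best = total;
        }
        return best;
    }

    // 2-bit-per-person counts of favorites provided by a half-subset;
    // -1 prunes any subset giving someone more than two favorites.
    static int countMask(int people, int[] fav, int sub, int offset) {
        int mask = 0;
        for (int p = 0; p < people; p++) {
            int c = Integer.bitCount((fav[p] >>> offset) & sub);
            if (c > 2) return -1;
            mask |= c << (2 * p);
        }
        return mask;
    }

    static int subsetCost(int[] cost, int sub, int offset) {
        int total = 0;
        for (int e = offset; e < cost.length; e++)
            if ((sub >> (e - offset) & 1) == 1) total += cost[e];
        return total;
    }
}
```

Pruning any half-subset in which someone already has three or more favorites is exactly the optimization described earlier, and it is what keeps the map small.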
