




reid takes the hard problem on his way to the finals
by legakis
The last event of the day was semifinal round 3. haha was the first to submit, solving the 200 point problem in about 7 minutes. snewman was right on his tail though, submitting about 1 minute later. Three other coders submitted within 10 minutes, and they all moved on to the medium. reid was quickest, but snewman made it a race, getting 0.08 fewer points on the first two problems.
This problem would be considerably more difficult if it were not for this key simplifying restriction: "All pixels in the top and bottom rows and left and right columns of the original image will be black." With this restriction, pixel (1,1) in the source image is the only pixel that contributes to pixel (0,0) in the blurred image. Therefore, the only possible values for pixel (0,0) in the blurred image are 0 and 1. If it is 0, then pixel (1,1) in the source is black, and if it is 1, then pixel (1,1) in the source is white. Once you have determined the value of a pixel in the source, you can subtract its contribution from the blurred image and move on to the next pixel. After the contribution from source pixel (1,1) has been subtracted from the blurred image, pixel (2,1) in the source image is the only remaining pixel that contributes to pixel (1,0) in the blurred image. You can calculate the value of the source pixels one at a time, working from left to right and top to bottom, simply by looking at the blurred pixel to the upper-left and, if it is white, subtracting 1 from the 3x3 region of the blurred image centered at the source pixel. In pseudocode, given a blurred image B of size W x H:
    create source image S of size W x H
    initialize S to all black
    for y = 1 to H-2            // skipping top and bottom rows
        for x = 1 to W-2        // skipping left and right columns
            if B[x-1,y-1] == 1 {   // only possible values are 0 and 1
                set S[x,y] to white
                subtract 1 from 3x3 region of B centered at (x,y)
            }
    return S

Spanning

The relatively small constraints on the input size make this problem amenable to a brute-force solution, with a couple of simple optimizations to keep the running time down. The basic strategy is: for each subset of edges, compute the shortest path between all pairs of nodes. Of those subsets whose largest pairwise distance is less than or equal to the given threshold, keep track of the minimum cost. The all-pairs shortest path problem can be solved by running Dijkstra's algorithm from each node, or by using the Floyd-Warshall algorithm. With a maximum of 10 nodes, these algorithms run very quickly. One important optimization is to prune branches of the search that cannot possibly connect all nodes. For example, if there are N nodes, you know that you must use at least N-1 edges. Another optimization, if you build the graph recursively by considering each edge in order and either adding it or not, is to incrementally update your all-pairs shortest path solution after you add each edge. You can also prune branches where the cost of the edges you have already added exceeds the smallest cost you have found so far.

SureThing

The key to this problem is given in the example at the end of the problem statement: "Notice that you don't care if your first guess is right or wrong." This is not a probability problem; it is a math and logic problem. If you make the optimal wager at every step, you never care if any of your guesses are right or wrong. Consider the case with 1 blue and 2 red marbles. You guess that the next marble will be red, but need to compute how much to wager. For now, let's call your wager x (where x is a fraction of your bankroll).
You know that if you guess correctly, you will have (1+x) times what you started with, and be able to double that one more time, ending up with 2*(1+x). If you guess incorrectly, you will have (1-x) times what you started with, but be able to double that twice, ending up with 4*(1-x). The optimal value of x is the value that maximizes the minimum of these two quantities, and this optimal value is where the two expressions intersect. (Any other value of x would decrease one of them, lowering the minimum.) So, to find the optimal value, you set the two expressions equal to each other and solve for x. In this case, you get x = 1/3. Notice that I assumed above that we should guess that the next marble will be red, without explanation. If we instead guess that the next marble will be blue, we get x = -1/3; the negative wager indicates that we assumed the wrong color. It is always the case that you want to guess that the next marble will be whatever color is most plentiful among the remaining marbles. If there is a tie it does not matter, because your wager will end up being zero. Computing your wager is just as easy when there are more than two colors of marbles, because you still only have to equate two expressions: the one for the case where your guess is correct, and the one for the worst case (which is when the second most plentiful color of marble is removed from the bag). Now, the problem can be solved recursively with memoization. Given the multiset M of per-color marble counts, sort it. Then create a set W for the winning case (with 1 subtracted from the highest element of M) and a set L for the worst losing case (with 1 subtracted from the second-highest element of M). Call your function recursively on W and L to compute how much you can win in each of those cases.
Calling these results w and l respectively, you then compute your wager for M by solving the following equation for x:

    w * (1 + x) = l * (1 - x)

Once you have x, you can plug it into either side of the equation to compute how much money you can guarantee yourself to end up with, starting from state M. The base case for the recursion is when M contains only one non-zero count. In this case, the value is simply 2 raised to the power of that count, because you can double your money that many times.
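The recursion above can be sketched in Python. This is a minimal sketch, assuming the bag is represented as a tuple of per-color marble counts; the function name and representation are my own, not from the problem statement:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_multiplier(counts):
    """Guaranteed final bankroll multiplier for a bag of marble counts."""
    # Normalize: drop exhausted colors, sort so the most plentiful is last.
    m = tuple(sorted(c for c in counts if c > 0))
    if not m:
        return 1.0  # empty bag: bankroll unchanged
    if len(m) == 1:
        # Base case: only one color left, so every remaining draw is a
        # sure thing and we double the bankroll once per marble.
        return 2.0 ** m[0]
    # w: guaranteed result after the winning case (most plentiful drawn).
    w = best_multiplier(m[:-1] + (m[-1] - 1,))
    # l: guaranteed result after the worst losing case
    # (second most plentiful color drawn).
    l = best_multiplier(m[:-2] + (m[-2] - 1, m[-1]))
    # The optimal wager solves w*(1+x) = l*(1-x).
    x = (l - w) / (l + w)
    return w * (1 + x)

# 1 blue and 2 red marbles: wager 1/3 of the bankroll on red,
# guaranteeing a final multiplier of 8/3.
print(best_multiplier((1, 2)))
```

For the (1, 2) example this reproduces the analysis above: w = 2, l = 4, so x = 1/3 and the guaranteed multiplier is 2 * (4/3) = 8/3.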

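Returning to Spanning, the brute-force search described earlier can also be sketched in Python. This is a sketch under assumptions: the function name, the (u, v, length, cost) edge representation, and the -1 "no valid subset" sentinel are mine, since the actual problem signature is not shown here. It includes the N-1 edge-count prune and the running-cost prune, and uses Floyd-Warshall for the all-pairs distances:

```python
from itertools import combinations

INF = float("inf")

def min_cost_spanning(n, edges, threshold):
    """Cheapest edge subset keeping every node pair within `threshold`.

    edges: list of (u, v, length, cost) tuples; nodes are 0..n-1.
    Returns the minimum total cost, or -1 if no subset works.
    """
    best = INF
    # Brute force over subsets, skipping those too small to connect
    # n nodes (at least n-1 edges are needed).
    for k in range(n - 1, len(edges) + 1):
        for subset in combinations(edges, k):
            cost = sum(e[3] for e in subset)
            if cost >= best:
                continue  # cannot beat the best subset found so far
            # Floyd-Warshall all-pairs shortest paths on this subset.
            d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
            for u, v, length, _ in subset:
                d[u][v] = d[v][u] = min(d[u][v], length)
            for mid in range(n):
                for i in range(n):
                    for j in range(n):
                        d[i][j] = min(d[i][j], d[i][mid] + d[mid][j])
            # Keep the subset only if every pair is within the threshold
            # (disconnected subsets fail automatically, since INF > threshold).
            if max(d[i][j] for i in range(n) for j in range(n)) <= threshold:
                best = cost
    return best if best < INF else -1
```

With at most 10 nodes the Floyd-Warshall step is trivially cheap, so the subset enumeration dominates; the two prunes keep it fast enough in practice.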