Saturday, August 19, 2006

Match summary
Taking place the same day as this year's International Olympiad in Informatics awards ceremony, SRM 316 proved to be an exciting event, with an active challenge phase that some coders will find hard to forget. Notably, this was tomek's first rated event after the TCO06 Finals. He came into the match with a chance of hitting the 3600 rating mark, though unfortunately he didn't quite get there this time.

After completing the easy and evaluating the medium and hard problems, some of the coders in Division 1 opted for the tricky 550 while others went directly for the reasonably easy 900. By the end of the coding phase a respectable number of mediums and hards were in, with Petr in first with the highest scores on all three problems. A truly amazing challenge phase followed, with several coders raking in points and significantly reducing the time required for system testing. Top performers were Vintik, with nine successful challenges on the easy, and Jan_Kuipers, who brought down eight mediums. With 225 points from challenges, Petr looked to have secured another convincing win, but to everyone's surprise his hard failed, leaving Michael_Levin, another student of Moscow State University, to enjoy his first SRM win thanks to four successful challenges. Jan_Kuipers came in second, making up in the challenge phase for a failed medium submission, and Andrew_Lazarev took third with a fine challenge-phase performance as well. tomek finished fourth due to a resubmission on the medium. konqueror rounded out the top five and will enjoy being the only red coder from India at the moment.

In Division 2 things were a little calmer, with another Russian coder, FedorTsarev, winning the division and moving to Division 1 in the process. Newcomer wd.h placed second, followed by MemoriesOff in third and another newcomer, westsang, in fourth.

The Problems

HiddenMessage
Used as: Division Two, Level One
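As the HiddenMessage write-up below notes, the whole task amounts to collecting the first letter of each word, and splitting the string does it in one line. A minimal Python sketch (the function name is invented for illustration; the actual method signature is defined by the problem statement):

```python
def hidden_message(text: str) -> str:
    # Split on whitespace and keep the first character of every word.
    return "".join(word[0] for word in text.split())
```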
Problems don't get much easier than this. The task asked coders to parse the given string and concatenate the initial letters of the words contained in it. One easy way is to scan the input string looking at pairs of adjacent characters: if the first character is a space and the next one is a letter, that second character must be the first letter of a word, so we append it to the result. See FedorTsarev's solution for an implementation of this. Alternatively, coders used tools from their preferred languages to split the input string into words.

InboxCleanup
Used as: Division Two, Level Two
Used as: Division One, Level One
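Jumping ahead of the InboxCleanup discussion below, the whole computation fits in a few lines. This Python sketch uses invented modeling choices, not a restatement of the exact problem rules: selecting or deselecting one message, ticking the select-all box, pressing delete, and moving to the next page are each assumed to cost one click.

```python
def clicks_for_page_size(deletions, k):
    """Clicks needed to clean the inbox with k messages per page.
    `deletions` holds one boolean per message, True if it must go."""
    n, total = len(deletions), 0
    for start in range(0, n, k):
        page = deletions[start:start + k]
        delete = sum(page)                 # messages to delete on this page
        keep = len(page) - delete          # the last page may hold fewer than k
        total += min(delete, keep + 1)     # pick one by one, or select-all + deselect
        if delete > 0:
            total += 1                     # press the delete button
        if start + k < n:
            total += 1                     # advance to the next page
    return total

def min_total_clicks(deletions):
    # Try every page size and keep the cheapest.
    return min(clicks_for_page_size(deletions, k)
               for k in range(1, len(deletions) + 1))
```

Computing keep from the actual page slice, rather than as k minus delete, sidesteps the short-last-page trap discussed below.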
The strategy described in the problem statement, selecting all messages on a page and then individually deselecting some of them, might actually help in some situations. It surely did for the purposes of this task. A hidden trick turned what seemed to be a really easy problem into a ticking bomb during the challenge and system test phases, and caught some of the leading coders off guard. The first step of the solution is iterating through all possible values K for the number of emails per page, computing the minimal number of clicks required for each one and choosing the best. Once you determine how to calculate the number of clicks required for a fixed number of emails per page, you're pretty much done. We have to iterate over each page of emails, make a selection with the minimal number of clicks, press delete only if there is at least one message to delete, and advance to the next page (if the current page is not the last one). In making the selection there are only two options, both of them mentioned in the problem statement: either select messages individually, or select all messages and then deselect the ones we must keep. So all we need to know for each page is the number delete of emails to delete and the number keep of messages to keep. The optimal number of clicks required for a selection is then min(delete, keep + 1). This is what went wrong for some coders: they evaluated keep as K - delete, overlooking the fact that the last page might contain fewer than K messages. For a nice implementation of the above algorithm, see Petr's solution.

SpreadingNews
Used as: Division Two, Level Three
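A compact iterative sketch of the SpreadingNews solution developed below. It assumes (the actual method signature may differ) that the hierarchy arrives as a list supervisor where supervisor[i] is the direct boss of employee i, every supervisor's index is smaller than their subordinates', and entry 0, the manager, holds -1:

```python
def min_spreading_time(supervisor):
    """time[i] = minimum time for employee i to inform all of their
    direct and indirect subordinates (derivation in the text below)."""
    n = len(supervisor)
    subordinates = [[] for _ in range(n)]
    for i in range(1, n):
        subordinates[supervisor[i]].append(i)
    time = [0] * n                          # base case: no subordinates
    for i in range(n - 1, -1, -1):          # subordinates have larger indices
        # call the slowest subtrees first (the greedy order argued below)
        subs = sorted((time[j] for j in subordinates[i]), reverse=True)
        time[i] = max((k + 1 + t for k, t in enumerate(subs)), default=0)
    return time[0]
```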
The first thing to notice is the recursive structure of the problem: to find the minimum time required for the manager to spread the news, we must know the minimum times for each direct subordinate to spread the news among their own subordinates (direct or indirect). Note that it wouldn't be of any help to the manager if some subordinate didn't spread the news in minimal time; this might lead to a worse solution, but never to an improvement. In other words, the optimal solution to the problem is built from optimal solutions to the subproblems. More formally, we define a subproblem as the minimum time required for one of the employees to spread the news among their direct or indirect subordinates. Let's also define time[i] as the answer to the ith subproblem, that is, the time required for the ith employee. Our goal is to calculate time[0] and return it. Let s_{i} be the number of direct subordinates of the ith employee. The base case is when the ith employee doesn't have any direct subordinates (s_{i} = 0), yielding time[i] = 0. For the other subproblems, let's assume we have already solved their subsubproblems, so we know how long it takes each of the direct subordinates of the ith employee to spread the news; we have to use this information to solve the subproblem at hand. It is easy to see that an employee must make the first phone call as soon as they hear the news from their supervisor, and call all their subordinates in some order without any pause between calls. How can we find the order that minimizes the spreading time? Let's take a closer look at what happens if the s_{i} subordinates are called in the order p_{0}, p_{1}, ..., p_{s_{i}-1}. Specifically, employee i will call employee p_{0} at time 0, then employee p_{1} at time 1, and so on, where the times are relative to the moment employee i heard the news from their supervisor.
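To make the effect of the calling order concrete, here is a tiny numeric illustration (the two subordinate times, 3 and 1, are invented for the example): the subordinate called kth, whose own spreading time is t, finishes at k + 1 + t.

```python
def spreading_time(sub_times):
    # sub_times lists the subordinates' own spreading times
    # in the order they are called
    return max(k + 1 + t for k, t in enumerate(sub_times))

print(spreading_time([3, 1]))  # slower subtree called first -> 4
print(spreading_time([1, 3]))  # slower subtree called last  -> 5
```

Calling the slower subtree first wins here, which is exactly the greedy order the argument below arrives at.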
The moment of time by which p_{0} and all of its direct and indirect subordinates will have heard the news is clearly 1 + time[p_{0}], and the moment by which p_{1} and all of its direct or indirect subordinates will have heard the news is 2 + time[p_{1}]. More generally, the moment by which employee p_{j} and all of its subordinates will have heard the news is j + 1 + time[p_{j}]. Putting all these head to head, we can deduce the following relation:

time[i] = max(1 + time[p_{0}], 2 + time[p_{1}], ..., s_{i} + time[p_{s_{i}-1}])

From this it is rather easy to conclude that the ith employee must call his direct subordinates in decreasing order of their spreading times. The proof that this greedy strategy works is easy and is left as an exercise for the reader. If you get stuck somewhere, you can always use the forums to ask whatever question you may have. The implementation of the above algorithm is not too difficult. One may choose to write a recursive function close to what was described above, just as yext did in his solution. Note that you don't need to memoize the function, as some coders did in their solutions, since it will be called exactly once for each parameter. Alternatively, one may iterate through the employees in decreasing index order, since all of an employee's direct subordinates have greater indexes. This leads to a very concise implementation, like westsang's solution. Note that his solution has time complexity O(N^{2} lg N), but it can easily be modified to run in O(N lg N) by replacing the second loop with one that iterates over direct subordinates only.

PlacingPieces
Used as: Division One, Level Two
The first thing that occurred to most coders after reading the problem statement was a greedy algorithm. Ideally, such solutions shouldn't have lived for too long, as some of the examples were counterexamples to these approaches and a hint that no greedy strategy would do. Sadly, though, some coders didn't take a close look at the examples and, even worse, overlooked the condition that the total length of the placed pieces couldn't be greater than the length of the board, L. As a result, the most common greedy approach passed the example that had been specifically designed against it. Unfortunately, this same miss caused some other solutions to fail that were otherwise on the right track. (As it turns out, this problem is very similar to the knapsack problem.) The first thing one should realize here is that when placing some pieces on the board, we are interested in the total length, len, and the number of pieces used, count. To check whether any other piece fits on the board, we only have to check whether the smallest unused piece, of length s, fits. When placing the pieces on the board, we will have L - len total space left. It is easy to see and prove that the best strategy is to divide this space evenly between each pair of consecutive pieces and between the edges of the board and the first and last pieces, leaving count + 1 gaps on the board. To conclude, at least one more piece fits on the board if the following expression evaluates to true:

(count + 1) * s <= L - len

If this expression is false, we have a valid solution, and a candidate for the optimal solution. With this key observation at hand we are ready to build a dynamic programming algorithm. After sorting the pieces in increasing order of length, we will iterate over the index ind of the smallest unused piece and place all pieces with a smaller index (since their lengths are smaller), with a total length of smallerLen.
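The fit check just derived is a one-liner; a sketch with invented parameter names (the demo numbers are made up as well):

```python
def one_more_piece_fits(count, used_len, smallest_unused, board_len):
    """The feasibility test derived above: with `count` pieces of total
    length `used_len` on a board of length `board_len`, can the smallest
    unused piece (length `smallest_unused`) still be placed?"""
    return (count + 1) * smallest_unused <= board_len - used_len

# Invented numbers: 2 pieces of total length 7 on a board of length 20.
print(one_more_piece_fits(2, 7, 3, 20))   # 9 <= 13  -> True
print(one_more_piece_fits(2, 7, 5, 20))   # 15 <= 13 -> False
```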
With the remaining pieces we will fill in the table canPlace, where canPlace[higherCount][higherLen] is true if and only if there is a way to place higherCount of them with a total length of higherLen. Now we take each pair (higherCount, higherLen) for which canPlace[higherCount][higherLen] is true and check whether, by adding these higherCount pieces to the partially filled board (do not forget to check whether they can be added, i.e. smallerLen + higherLen <= L), we get a valid solution, using an expression similar to the one in the last paragraph (where count is replaced by smallerCount + higherCount and len is replaced by smallerLen + higherLen). If we do, we compare it with the best solution so far. If we are not able to find any solution at all, then all pieces must be placed on the board. A straightforward implementation of the above algorithm runs in O(N^{3} * L) time; see ploh's solution for a nice implementation. You can turn this into an O(N^{2} * L) algorithm if, instead of building and then filling the table canPlace independently for each smallest unused piece, you reuse the information obtained in the previous iteration. For more details, check out jakubr's solution, which turns out to be quite similar to the referenced solution. There is an alternative solution, thanks to brett1479. At every iteration over the smallest unused piece, instead of filling in the table canPlace, we can use another one, best, so that best[b] contains the minimum number of pieces, from the remaining ones, whose total length is b. The valid solutions are then found similarly to the algorithm above. To keep this article easy to read, the proof of why this works is omitted and left as a useful exercise for the reader. If it doesn't work out, I will be happy to answer in the forums. A straightforward implementation of this runs in O(N^{2} * L) but can be improved to run in O(N * L).

RoboRace
Used as: Division One, Level Three
This problem required not only a correct algorithm but an efficient one as well. However, this didn't trouble coders too much, as reflected by the high submission and success rates. Let's start with the first solution that comes to mind: check every possible starting time, and return the first that guarantees a win for our robot. To accomplish this, we can use a slightly modified breadth-first search. A state in the search is represented by three integer values: row, col and time, for the current row, column and time, respectively. In state (row, col, time) a robot has two options: ignore the command for this time unit and go to state (row, col, time + 1), or follow the command and reach state (newRow, newCol, time + 1). Now, for each possible starting time T we perform a BFS starting from the state (startRow_{i}, startCol_{i}, T), where (startRow_{i}, startCol_{i}) are the starting positions of the two robots and the index i distinguishes between them. The states we search for, let's call them final states, are (destRow, destCol, T') for some T' greater than T, where (destRow, destCol) is the position of the destination cell. Obviously the robots would like to reach the destination as soon as possible, so final states with smaller T' are preferred. If our robot reaches the destination first, in other words reaches a final state with a lower T', we can return T. This may pass test cases where the returned T is low enough, or even a few tougher ones if optimized a bit, but it wouldn't have much chance of passing test cases where the answer is -1. Let's see how the BFS works in more detail. For a better understanding of what is stated below, it helps to think of the searches in reversed order: if the length of the string of commands is N, first perform a search for the starting time N - 1, then for N - 2, and all the way down to 0.
Notice that a state can be visited many times, for multiple starting times, and each time the states reached from it are the same, so redundant work is performed. Now assume we have already done the searches for the starting times greater than T. If we perform a BFS for the starting time T, we will reach states that either were visited in a previous search or were not. If a state was not visited we have no choice but to continue the search, but if it has already been visited there is no reason to do so: after all, we will reach the same conclusions as the last time we did it. This way we make sure each state is visited at most once, and since the number of states is W * H * N, where W and H are the dimensions of the board, this runs quickly enough. For a nice implementation, check out liympanda's solution. Most coders preferred dynamic programming or memoized solutions instead, based on the same key idea. Let best[row][col][time] be the earliest time to reach the destination cell from the state (row, col, time); that is, best[row][col][time] equals T if and only if the final state (destRow, destCol, T) is reachable from (row, col, time) and there is no reachable final state with a lower T. For any T, we have:

best[destRow][destCol][T] = T

and

best[row][col][T] = min(best[row][col][T + 1], best[newRow][newCol][T + 1])

where (newRow, newCol) is the new position after following the command given at time T. We return the lowest T for which best[startRow_{1}][startCol_{1}][T] is less than best[startRow_{2}][startCol_{2}][T]. For solutions based on this approach, see Jan_Kuipers' DP solution or tomek's memoized version.
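The DP above can be sketched as follows. To stay self-contained, this Python sketch makes simplifying assumptions that the real problem statement may contradict: the board is an open w x h grid with no obstacles, a command that would move a robot off the board leaves it in place, and commands are the characters U, D, L, R. All function and variable names are invented.

```python
MOVES = {'U': (-1, 0), 'D': (1, 0), 'L': (0, -1), 'R': (0, 1)}

def earliest_arrival(w, h, commands, dest):
    """best[t][r][c]: earliest time to reach `dest` when standing at
    (r, c) at time t, choosing at each step to ignore or follow the command."""
    n = len(commands)
    INF = float("inf")
    best = [[[INF] * w for _ in range(h)] for _ in range(n + 1)]
    for t in range(n, -1, -1):              # fill backwards in time
        for r in range(h):
            for c in range(w):
                if (r, c) == dest:
                    best[t][r][c] = t       # already there
                elif t < n:
                    dr, dc = MOVES[commands[t]]
                    nr, nc = r + dr, c + dc
                    if not (0 <= nr < h and 0 <= nc < w):
                        nr, nc = r, c       # off the board: stay put
                    best[t][r][c] = min(best[t + 1][r][c],    # ignore
                                        best[t + 1][nr][nc])  # follow
    return best

def first_winning_start(w, h, commands, dest, ours, theirs):
    """Lowest starting time T at which our robot is guaranteed to reach
    `dest` strictly before the other one, or -1 if there is none."""
    best = earliest_arrival(w, h, commands, dest)
    for t in range(len(commands) + 1):
        if best[t][ours[0]][ours[1]] < best[t][theirs[0]][theirs[1]]:
            return t
    return -1
```

If W * H * N tables are too large, best can be kept as just two time slices, since each layer only reads the one after it.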
