Thursday, November 4, 2004

Match summary

SRM 218 turned out to be full of interesting surprises in both divisions. The medium and hard problems in Division 1 proved to be major stumbling blocks for many coders, and many of the successful solutions took considerable time. The Division 1 easy, which was also the Division 2 medium, held at least one quirk that led to a high challenge and failure rate.

In Division 1, gepa turned in a superior performance to earn his first SRM victory. An extra measure of congratulations is deserved for offering up the lone successful solution to the hard problem. Hats off to a truly awesome round. nicka81 and Vedensky finished 2nd and 3rd, though by a considerable margin.

In Division 2, pako took first place honors by more than 100 points, followed by Javaholic and quake, with several others not far behind. Several competitors served up solutions to all three problems, but many of them had one or more fail. A special pat on the back goes to glguy, finishing 6th, as the only newcomer to successfully solve all three problems (and one of nine overall)! Likewise to katiej76, finishing 11th, also a newcomer, who dished out a commanding six successful challenges on the 500-point problem. Wow! Though not newcomers, modenl and Vishwesh86 also deserve commendation for eight and six successful challenges, respectively.
The Problems

AccessLevel
Used as: Division Two - Level One:
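The usual approach described in the write-up below can be sketched as follows. This is a minimal illustration in Python; the function name, parameter names, and the 'A'/'D' output characters are assumptions for illustration, not the actual problem signature:

```python
def access_string(rights, min_permission):
    # Build the result one character at a time; starting from an
    # empty string handles an empty input array correctly.
    result = ""
    for r in rights:
        # Use >= (not >) for the permission comparison.
        result += "A" if r >= min_permission else "D"
    return result
```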
This problem was about as straightforward as it gets, with really no surprises. The most common approach was to start with an empty string, loop through the input array, and append characters to the string one at a time. The only problems with solutions seemed to stem from mishandling an empty array by not properly initializing the empty string, or from using > for the permission comparison instead of >=.

FolderSize
Used as: Division Two - Level Two:
Used as: Division One - Level One:
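A sketch of the waste computation discussed below, with the troublesome special case folded in via an outer modulo (Python; the names are illustrative):

```python
def wasted_space(cluster_size, file_size):
    # A size-0 file occupies no clusters and wastes nothing, and a file
    # that exactly fills its clusters also wastes nothing.  The outer
    # modulo folds both of those cases into a single expression.
    return (cluster_size - file_size % cluster_size) % cluster_size
```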
This problem was a plentiful source of either joy or frustration, depending on which side of the challenge you sat. The basic algorithm was to initialize your array based upon folderCount, then loop through the files, adding the wasted space of each to the appropriate folder. The trick is to make sure the wasted space of each file is properly calculated, and herein lay the subtle mistake of so many coders. In nearly every case that failed, there was something like:

waste = clustersize - filesize % clustersize

BUT... this makes a critical error in handling the case when a file exactly fills all of its clusters. Likewise, a size 0 file will not use any clusters at all, and will have no waste. Once you handle that special case properly, the rest is trivial. Nonetheless, the lesson to learn is twofold. No matter how easy a problem seems, it pays to think it through before hitting the [Submit] button. And the fact that no example case illustrated this situation only reinforces the importance of thoughtful testing.

PermissionTree
Used as: Division Two - Level Three:
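The common-ancestor approach described below can be sketched as follows. This assumes the tree is given as a parent array in which the root is its own parent (as in the problem input), and the helper names are hypothetical:

```python
def depth(parent, node):
    # Count steps from node up to the root (the root is its own parent).
    d = 0
    while parent[node] != node:
        node = parent[node]
        d += 1
    return d

def common_ancestor(parent, a, b):
    # Walk the deeper node up until both are at the same depth,
    # then climb in lockstep until they meet.
    while depth(parent, a) > depth(parent, b):
        a = parent[a]
    while depth(parent, b) > depth(parent, a):
        b = parent[b]
    while a != b:
        a, b = parent[a], parent[b]
    return a

def home_folder(parent, accessible):
    # 'accessible' lists the folders the user can reach; the home folder
    # is their first common ancestor, or -1 if none are accessible.
    home = -1
    for f in accessible:
        home = f if home == -1 else common_ancestor(parent, home, f)
    return home
```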
This problem, more than the others, demanded some measure of creativity in finding a good solution, but still worked out nicely under a number of different approaches. One method is to search the tree and test each folder as a possible home folder, and from there continue to go deeper until no child folder would be valid. While this is certainly a legitimate approach, it is probably not the most efficient. Another method is to find all folders that a given user can access. Then, travelling the tree back up to the root, we simply look for the first common ancestor of all such accessible folders. A simple implementation is to first assume the home folder will be -1. Then, search for a folder the user can access, and set that as the temporary home folder. Continuing to search the remaining folders, for each one the user can access, set the temporary home folder to the common ancestor of itself and the newly found folder. Continue for all remaining folders, and your result is the home folder for that user. Some common mistakes that caught coders by surprise were returning a home folder that was not as deep as possible, and not properly determining whether a user was in the user list for a folder. Also, some coders had trouble properly handling the parent-child relationship of the root node, which was defined as its own parent (so that the input would look consistent). As of this writeup, the practice room already contained over a dozen good, varied solutions to this problem.

Reconstruct
Used as: Division One - Level Two:
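One building block of the brute-force search described below is enumerating candidate locations at a given squared distance from an already-placed point. A sketch, assuming integer coordinates (Python; the function name is illustrative). An empty result for some pairwise squared distance corresponds to the failure condition mentioned below, since it means that value is not a sum of three squares:

```python
import math

def candidates(d2):
    # All integer points (x, y, z) with x*x + y*y + z*z == d2, relative
    # to the origin.  The recursion tries each candidate in turn and
    # prunes any whose distances to previously placed points disagree
    # with the given data.
    r = math.isqrt(d2)
    pts = []
    for x in range(-r, r + 1):
        for y in range(-r, r + 1):
            z2 = d2 - x * x - y * y
            if z2 < 0:
                continue
            z = math.isqrt(z2)
            if z * z == z2:  # z2 must be a perfect square
                pts.append((x, y, z))
                if z != 0:
                    pts.append((x, y, -z))
    return pts
```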
In this problem, we sought to reconstruct an original set of Euclidean points in 3-space, given a complete set of information about their Euclidean distances from each other. We are given that the first point is (0, 0, 0) and the second is well enough defined that we can return a unique set of points. The trick to this problem is to use brute force to find possible locations of the second point, then recurse for the next point, and so on. The reason brute force runs in time here is that with each successive point found, there are an increasing number of constraints (because there are more other points to measure from) on which points are valid. As such, the recursion avoids ballooning, even with the brute force searching. The failure condition, in which no set can be reconstructed from the given data, is detected when all possible points have been tried to no avail, or at any point along the way when the squared distance between two points is not a sum of three squares.

WinningProbability
Used as: Division One - Level Three:
When only a single solution passes, that is usually an indication that a low-point-value problem must be more difficult than meets the eye. Ultimately, the biggest difficulty here was simply understanding what was required. However, the first example did wonders to help clear this up. In particular, a weighted average over all possible values of p is what we are after. So, how do we do that? If you caught the scent and think this problem has an odor of calculus about it, you are absolutely correct. But what do we want to integrate? Well, actually we want two integrals, and our final answer is the quotient of the two. Each of the two probabilities being integrated can [in theory, at least] be represented by a polynomial in p (in particular, we'll need to employ binomial probability polynomials here), so it seems like computing the definite integrals exactly should be possible, and should guarantee an accurate result. But when we examine the constraints and see that prevWins and prevLosses may each be up to 100, we should immediately realize the magnitude of the binomial polynomial coefficients we would need to calculate. Instead, a numerical approach may make more sense, and is arguably simpler to code. To do this, we simply divide the interval from 0 to 1 into many very small intervals, and approximate the integral over each. This is where things get troublesome once more, though, as we are constrained by accuracy. In particular, integration by the trapezoid rule often fails due to precision errors, so Simpson's rule should be used instead. Once we calculate the two integrals, we simply divide to obtain our weighted average, and we thus have our final result.
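The numerical approach can be sketched as follows (Python; names are illustrative). To stay self-contained, the numerator's extra factor here is simply p, which yields the expected value of p given the prior record, rather than the actual probability of winning the future series; the two-integral structure and the Simpson's rule machinery are the same:

```python
def simpson(f, a, b, n):
    # Composite Simpson's rule over [a, b]; n must be even.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

def expected_p(prev_wins, prev_losses):
    # Weight each value of p by the likelihood of the observed record,
    # p^wins * (1-p)^losses, integrate numerator and denominator
    # separately, and divide to get the weighted average.
    weight = lambda p: p ** prev_wins * (1 - p) ** prev_losses
    num = simpson(lambda p: p * weight(p), 0.0, 1.0, 1000)
    den = simpson(weight, 0.0, 1.0, 1000)
    return num / den
```

With a symmetric record the weighted average lands at 0.5, as one would expect.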
