Match Editorial
SRM 235
Tuesday, March 22, 2005

Match summary

One hundred years ago, Albert Einstein published groundbreaking papers on Brownian motion, the photoelectric effect, and special relativity. Any one of these works alone would have been worthy of a Nobel prize, but Einstein managed to polish off all three in the span of about five months. He probably would have made a pretty good TopCoder. In commemoration of the centennial of Einstein's Annus Mirabilis, the United Nations has declared 2005 to be the International Year of Physics, and I decided to celebrate by creating a physics-themed problem set for SRM 235.

As prolific as Einstein was, he would have needed to solve all three problems in under thirty-six minutes to beat tomek in this match. ploh, Dazu, and rgrig had the next three highest scores after the coding phase, but their solutions to the hard problem turned out to be flawed. When the dust settled from the challenge phase and system tests, tomek, venco, and dgarthur were the only coders to successfully solve all three problems. However, despite getting the easy problem wrong, m00tz narrowly edged dgarthur for third place by racking up four successful challenges in four attempts.

In Division 2, newcomer Petr dominated the division by being the only one to solve all three problems and picking up two successful challenges as well. symbad, another newcomer, only solved one problem but racked up an impressive seven successful challenges to finish fifth. In all, 31 coders advanced to Division 1.

For all problems, the writer's solution is available in the practice rooms.

The Problems

NoisySensor
Used as: Division Two - Level One:
Value 250
Submission Rate 256 / 289 (88.58%)
Success Rate 173 / 256 (67.58%)
High Score gdiitk for 248.87 points (1 mins 55 secs)
Average Score 200.57 (for 173 correct submissions)

Median filtering is a staple data analysis and noise reduction tool, and can probably be found in your favorite image editing software. In most applications, the filtering takes place on data with two or more dimensions using a variable window size. This problem asked you to implement a basic one-dimensional median filter with a window size of three. A straightforward solution was to make a copy of the input array and loop from the second element to the second-to-last element, finding the median of the previous, current, and next elements from the input array and storing it in the corresponding element of the output array. A quick and lazy way to find the median of three numbers is to put them in an array, sort them, and take the middle element. If you were implementing a general median filter with a potentially large window size on multidimensional data, you would probably want to use a linear-time selection algorithm instead of sorting.
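The copy-and-sort approach described above can be sketched in a few lines (Python used here for illustration; the function name is my own):

```python
def median_filter3(data):
    """Window-3 median filter: the two endpoints are copied unchanged."""
    out = list(data)  # copy the input; out[0] and out[-1] stay as-is
    for i in range(1, len(data) - 1):
        # median of three = middle element after sorting the window
        out[i] = sorted(data[i - 1:i + 2])[1]
    return out
```

Sorting a three-element window is trivially cheap; the linear-time selection algorithm mentioned above only pays off for large windows.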

StepperMotor
Used as: Division Two - Level Two:
Value 500
Submission Rate 120 / 289 (41.52%)
Success Rate 7 / 120 (5.83%)
High Score Petr for 430.94 points (11 mins 46 secs)
Average Score 285.17 (for 7 correct submissions)
Used as: Division One - Level One:
Value 300
Submission Rate 202 / 220 (91.82%)
Success Rate 75 / 202 (37.13%)
High Score tomek for 290.41 points (5 mins 11 secs)
Average Score 213.51 (for 75 correct submissions)

This problem was so tricky, even I got it wrong. The large bounds on the input rule out an approach that iteratively adds or subtracts n from the current position, so modular arithmetic is clearly in order. The first step in determining the number of steps needed to reach a given target from the current position is to find both values modulo n. However, a % n doesn't quite compute a modulo n when a is negative. The correct modulus function, which is worth remembering, is (a % n + n) % n. In this problem, it was necessary to use long integers to avoid overflow during the addition. Once the position and target modulo n are known, finding the shortest distance is a matter of trying both directions, carefully keeping track of sign and avoiding overflow, and handling the special case where one or more targets are equidistant in either direction from the current position.
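The core distance computation might look like the sketch below (Python for illustration; function names and the positive-direction tiebreak are my own assumptions, not the problem's actual rules). Note that the (a % n + n) % n trick is needed in languages like C and Java, where % of a negative operand can be negative; Python's % already returns a value in [0, n) for positive n, so the normalization is shown only to mirror the prose:

```python
def safe_mod(a, n):
    # In C or Java, a % n can be negative when a is; this normalizes it.
    # (Python's % already yields a nonnegative result for positive n.)
    return (a % n + n) % n

def shortest_steps(current, target, n):
    """Signed step count of smallest magnitude from current to target on an
    n-position wheel; an exact tie is broken toward the positive direction
    here, purely as an illustrative choice."""
    forward = safe_mod(target - current, n)   # steps in the + direction
    backward = forward - n                    # same target, - direction
    return forward if forward <= -backward else backward
```

With multiple targets, the equidistant special case in the problem statement still needs the careful handling described above.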

AcademicJournal
Used as: Division Two - Level Three:
Value 1000
Submission Rate 23 / 289 (7.96%)
Success Rate 4 / 23 (17.39%)
High Score Petr for 552.13 points (31 mins 37 secs)
Average Score 489.24 (for 4 correct submissions)

This problem rewarded knowledge of standard libraries and utility functions. In particular, it was necessary to create a mapping from strings (the journal names) to journal objects (which keep track of the number of papers and citations for a single journal), and to sort a list of journal objects. One initial pass through the input should be used to build tables that map names to journal objects and paper indices to journal objects, and a second pass to count the citations to each journal. Finally, the journal objects could be placed in order by writing a comparison function that properly implements the tiebreaking rules, and then sorting using that comparison function. Note that it is possible to compare the impact factors using only exact integer arithmetic: citeCount1 / paperCount1 < citeCount2 / paperCount2 if and only if citeCount1 * paperCount2 < citeCount2 * paperCount1 since the paperCounts must be positive.
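The integer-only comparison can be wrapped in a comparator like the following sketch (Python for illustration; the Journal fields and the name-based tiebreak are my own stand-ins for the problem's actual tiebreaking rules):

```python
from functools import cmp_to_key

class Journal:
    def __init__(self, name, papers, cites):
        self.name, self.papers, self.cites = name, papers, cites

def by_impact(a, b):
    # Compare cites/papers without floating point:
    # a.cites/a.papers > b.cites/b.papers  iff  a.cites*b.papers > b.cites*a.papers,
    # valid because paper counts are positive.
    lhs, rhs = a.cites * b.papers, b.cites * a.papers
    if lhs != rhs:
        return -1 if lhs > rhs else 1              # higher impact factor first
    return (a.name > b.name) - (a.name < b.name)   # hypothetical tiebreak

journals = [Journal("B", 3, 7), Journal("A", 2, 5), Journal("C", 1, 2)]
journals.sort(key=cmp_to_key(by_impact))
```

Cross-multiplying sidesteps both floating-point rounding and division by keeping everything in exact integer arithmetic.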

PerforatedSheet
Used as: Division One - Level Two:
Value 400
Submission Rate 122 / 220 (55.45%)
Success Rate 69 / 122 (56.56%)
High Score tomek for 383.46 points (5 mins 57 secs)
Average Score 244.55 (for 69 correct submissions)

The bounds on the sheet dimensions are too large to process the sheet one square unit at a time. The key realization is that the center of mass definition implies that holes can be treated as uniform rectangular objects with "negative mass". In other words, summing the weighted positions of just the points in the perforated sheet is equivalent to summing the weighted positions of all points in the original sheet and subtracting the weighted positions of the points removed by the hole punch. The main complication here is floating point error: subtracting two large floating point numbers that are almost (relative to their magnitude) equal can yield a very small number with a large error. Dividing by this number produces disastrously incorrect results. This is exactly the case when a large sheet has large holes punched out of it such that only a tiny rectangle remains. For this problem, both the total mass and the weighted sum of the rectangle center positions (multiplied by two) could be computed exactly using 64-bit integers, allowing the floating point error to be minimized by postponing the conversion and division until the very last step.
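The negative-mass bookkeeping described above can be sketched as follows (Python for illustration; the function and its hole format are my own, and holes are assumed not to overlap, as in the final version of the problem). Python's integers are unbounded, but the same accumulation fits in 64-bit integers as the writeup notes:

```python
def center_of_mass(sheet_w, sheet_h, holes):
    """Center of mass of a sheet with lower-left corner at the origin and
    rectangular holes given as (x1, y1, x2, y2) integer corner pairs.
    Accumulates doubled weighted centers exactly in integers, treating each
    hole as negative mass, and divides only at the very last step."""
    mass = sheet_w * sheet_h
    # doubled weighted center of the full sheet: its center is (w/2, h/2)
    wx2 = mass * sheet_w
    wy2 = mass * sheet_h
    for x1, y1, x2, y2 in holes:
        m = (x2 - x1) * (y2 - y1)    # hole area acts as negative mass
        mass -= m
        wx2 -= m * (x1 + x2)         # doubled center x of the hole
        wy2 -= m * (y1 + y2)         # doubled center y of the hole
    return wx2 / (2 * mass), wy2 / (2 * mass)
```

Because the division happens exactly once, at the end, the near-cancellation error described above never gets a chance to be amplified.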

This problem originally allowed holes to overlap, which would have made the problem more challenging. The writer's solution actually solves this version of the problem using a plane-sweep (also known as coordinate compression and scanning) approach.

RemoteRover
Used as: Division One - Level Three:
Value 900
Submission Rate 24 / 220 (10.91%)
Success Rate 7 / 24 (29.17%)
High Score m00tz for 694.12 points (16 mins 31 secs)
Average Score 598.64 (for 7 correct submissions)

Numerical optimization is an eminently useful procedure, and highlights one way in which computational power has revolutionized physics. Generic search techniques such as gradient descent are well suited to this problem because there are no multiple local minima to get stuck in. m00tz was the only coder to successfully implement this approach.

Unlike many difficult optimization problems encountered in physics, this one can be rigorously solved using calculus. Fix a coordinate system to the terrain such that the n strips are parallel to the x-axis, the rover starts at (0, 0), and the target lies at (offset, totalwidth), where totalwidth is the sum of all strip widths. Let x[i] be the x-coordinate of the rover's position just before it enters strip i. The total time necessary to reach the target is the sum, over i = 0 to n-1, of sqrt((x[i+1] - x[i])^2 + width[i]^2)/speed[i], and the problem can be formulated as minimizing this total time as a function of the variables x[i] for i = 1 to n-1, with x[0] fixed at 0 and x[n] fixed at offset.

The critical points of the time function occur where the partial derivatives of the function with respect to each of the variables are all zero. The partial derivative of the time function with respect to x[j] is (x[j] - x[j-1])/sqrt((x[j] - x[j-1])^2 + width[j-1]^2)/speed[j-1] - (x[j+1] - x[j])/sqrt((x[j+1] - x[j])^2 + width[j]^2)/speed[j]. Setting these derivatives to zero for all of the x[j] results in a chain of equations that relates x[1] to offset. Now, the terms in these equations look suspiciously trigonometric; in fact, they can be written as 1/speed[j-1] * sin(theta[j-1]) = 1/speed[j] * sin(theta[j]), where theta[j] is the angle between the y-direction and the line connecting the points with x-coordinates x[j] and x[j+1] on opposite edges of strip j. This chain of equations takes any value of theta[0] and produces an optimal path to a point on the edge of the final strip.
Since the optimal path will never go in the negative x-direction, the x-coordinate of this final point is a monotonically increasing function of theta[0] (for theta[0] = 0 to Pi/2). This calls for a binary search on theta[0] to find the exact value that makes the final x-coordinate value equal to offset. The only thing that remains to be done is to calculate the minimal time from the final values of theta.
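The angle-propagation and binary search described above might be sketched as follows (Python for illustration; the function names are my own, and the monotonicity of the final x-coordinate in theta[0] is assumed as argued above):

```python
import math

def rover_time(speed, width, offset):
    """Binary search on the launch angle theta[0], propagating the angle
    through each strip with Snell's law: sin(theta[i])/speed[i] is constant
    along the optimal path."""
    def final_x(theta0):
        x, s0 = 0.0, math.sin(theta0)
        for i in range(len(width)):
            s = speed[i] * s0 / speed[0]      # sin(theta[i]) via Snell's law
            if s >= 1:
                return float('inf')           # angle would exceed 90 degrees
            x += width[i] * s / math.sqrt(1 - s * s)  # drift = width * tan(theta)
        return x

    lo, hi = 0.0, math.pi / 2
    for _ in range(200):                      # bisect until converged
        mid = (lo + hi) / 2
        if final_x(mid) < offset:
            lo = mid
        else:
            hi = mid

    # Total up the travel time from the final values of theta
    s0, t = math.sin(lo), 0.0
    for i in range(len(width)):
        s = speed[i] * s0 / speed[0]
        t += width[i] / (math.sqrt(1 - s * s) * speed[i])  # segment length / speed
    return t
```

As a sanity check, with uniform speed the optimal path is a straight line, so one strip of width 3 with offset 4 should take time 5 at unit speed.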

Coders who have studied physics will recognize the above equation as Snell's law, which describes the refraction of a light ray across an interface between materials with different light propagation speeds. It is possible to derive Snell's law directly from Maxwell's equations without assuming that light always takes the path of minimum time. How is it that light rays just happen to be so clever? If this kind of question is interesting to you, you should study physics!

Author
By the_one_smiley
TopCoder Member