Match Editorial
SRM 275
Wednesday, November 30, 2005

Match summary

SRM 275 went off tonight without a hitch, and offered little in the way of surprises. tomek continued his quest to regain top position on the coder rankings list from Petr by easily winning Division 1 by almost 100 points. Rounding out the top 5 in Division 1 with very impressive performances themselves were marek.cygan, krijgertje (who continues his march toward becoming a target!), pparys and haha. Division 2 was dominated by a newcomer, stone, whose new rating was very close to landing a spot on the Impressive Debuts list. The next 4 finishers were dgott, cypressx, aussieviet and marcoloco. Only dgott narrowly missed advancing to Division 1, whereas the others will be moving up for the next SRM.

The Problems

HingedDoor
Used as: Division Two - Level One:
Value 250
Submission Rate 313 / 327 (95.72%)
Success Rate 216 / 313 (69.01%)
High Score jiamianchaoren for 249.75 points (0 mins 53 secs)
Average Score 228.26 (for 216 correct submissions)

A simple simulation was all that was required to solve this problem. You could run the simulation in two ways. One way was to start from initialAngle and keep dividing the current angle by reduction, counting how many divisions it took for the angle to fall to 5 degrees or lower. The other way was to start with a current angle of exactly 5 degrees and keep multiplying by reduction until the angle reached initialAngle or higher, again counting the multiplications. The second way is better because it uses exact integer multiplication instead of imprecise floating-point division. C++ code follows:

int numSwings(int initialAngle, int reduction) {
    int angle = 5, swings = 0;
    while (initialAngle > angle) {
        angle *= reduction;
        ++swings;
    }
    return swings;
}

Haar1D
Used as: Division Two - Level Two:
Value 500
Submission Rate 265 / 327 (81.04%)
Success Rate 225 / 265 (84.91%)
High Score raalveco for 493.34 points (3 mins 18 secs)
Average Score 343.47 (for 225 correct submissions)

This problem required coders to implement a simple 1-dimensional Haar wavelet transform. There isn't too much to do here: just implement the algorithm as described in the problem statement. The algorithm can be implemented just as easily iteratively or recursively. In high-level pseudocode, a recursive solution can be represented as follows:

transform(data, N, L)

   form N/2 pairs from data
   (note that N is not necessarily the same as data.length)
   for i:1 to # pairs
      p = pair i
      sums[i] = p.first + p.second
      diffs[i] = p.first - p.second
   assign the first N/2 elements of data = elements of sums
   assign the next N/2 elements of data = elements of diffs
   if L - 1 > 0
      transform(data, N/2, L - 1)
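The pseudocode above maps directly to C++. Here is a minimal in-place sketch (the function name haar and the use of scratch vectors are my own choices, not the reference solution):

```cpp
#include <vector>
using namespace std;

// One level forms N/2 (sum, diff) pairs, writes sums into the first half
// of the block and diffs into the second half, then recurses on the sums.
void haar(vector<int>& data, int N, int L) {
    vector<int> sums(N / 2), diffs(N / 2);
    for (int i = 0; i < N / 2; ++i) {
        sums[i]  = data[2 * i] + data[2 * i + 1];
        diffs[i] = data[2 * i] - data[2 * i + 1];
    }
    for (int i = 0; i < N / 2; ++i) {
        data[i]         = sums[i];
        data[i + N / 2] = diffs[i];
    }
    if (L - 1 > 0)
        haar(data, N / 2, L - 1);
}
```

On 32 copies of 100 with L = 5 this produces 3200 followed by 31 zeros, the energy-compaction behavior discussed below.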

The Haar wavelet as described here is used in a number of image compression algorithms, in particular the SPIHT algorithm (actually, the wavelet here is missing a normalization constant). If you look at the output of the wavelet on constant data, say 32 100s, you will see that the output is 3200 followed by 31 0s. This is known as energy compaction: the wavelet transforms the data so that the energy of the input is concentrated into a few coefficients (which occur near the beginning) of the output. It should be easy to imagine why this wavelet is useful in image compression.

GreedyGovernment
Used as: Division Two - Level Three:
Value 1000
Submission Rate 56 / 327 (17.13%)
Success Rate 34 / 56 (60.71%)
High Score jcerovec for 734.97 points (18 mins 31 secs)
Average Score 486.13 (for 34 correct submissions)

First, it should be noted that the sectors and roads between sectors in this problem are a thinly veiled way of saying nodes and edges. With this in mind, it should be fairly obvious how to construct a graph from the input data. In fact, the input data directly defines the adjacency matrix for the graph.

This task can then be divided into two subtasks. The first subtask is to count the number of distinct paths from sector 0 to sector N-1, subject to the rules in the problem statement. The second subtask is to determine which road's toll should be increased by tollHike to obtain the maximum average cost.

Given that the number of sectors in our city was at most 10, the first subtask could be solved by doing a simple depth-first search (DFS) to generate all the paths in the graph which start from node 0 and end at node N-1. In pseudocode, the DFS can be structured as follows:

N = # of nodes
initialize seen[i] = false for all i // size = N

DFS(currentNode)   // currentNode is 0-based
   if currentNode == N-1
      // found new path, add to list of paths
      return
   if seen[currentNode] == true return      // no cycles
   seen[currentNode] = true
   for i:0 to N-1
      if !seen[i] && toll[currentNode][i] > 0
         DFS(i)
   seen[currentNode] = false      // backtrack

To solve the second subtask, we can proceed in two ways. The brute-force approach is to add tollHike to each of the roads in turn (there can be at most 90 of them) and evaluate the cost along each of the paths discovered by the DFS. Note that this implies you have cached these paths. The alternative is to not cache the paths and re-run the DFS for each candidate road, calculating the cost along each path within the DFS routine; if you choose this alternative, timeout becomes a concern, although it is still possible to write a solution that is fast enough. Either way, the average cost is just the sum of the costs along all paths divided by the total number of paths from subtask 1.

There is a nicer solution, though. Rather than checking each road, we observe that we can greedily add tollHike to the road that appears most often over all the paths from sector 0 to sector N-1. This is guaranteed to produce the maximum average cost. Note that a road cannot appear in one path more than once, because this would violate the condition that there are no cycles in the path from sector 0 to N-1. Thus, by choosing the road which appears most often over all paths, we have effectively added tollHike to the maximum number of paths that we can. Obviously, when we now divide by the total number of paths to get an average cost, this will be maximized, since we cannot increase the cost along any path any further.
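The DFS and the greedy observation combine into one short sketch: enumerate the paths once, count how often each road is used, and hike the most frequently used road. All names here are mine, and I assume toll[i][j] > 0 denotes a one-way road from sector i to sector j and 0 its absence:

```cpp
#include <vector>
#include <algorithm>
using namespace std;

struct CitySolver {
    int N;
    vector<vector<int>> toll;
    vector<vector<long long>> freq;   // freq[i][j] = # of paths using road i->j
    vector<bool> seen;
    vector<int> path;                 // current path, starts at node 0
    long long pathCount = 0, baseCost = 0;

    void dfs(int cur, long long cost) {
        if (cur == N - 1) {           // found a path from 0 to N-1
            ++pathCount;
            baseCost += cost;
            for (size_t k = 1; k < path.size(); ++k)
                ++freq[path[k - 1]][path[k]];
            return;
        }
        seen[cur] = true;
        for (int i = 0; i < N; ++i)
            if (!seen[i] && toll[cur][i] > 0) {
                path.push_back(i);
                dfs(i, cost + toll[cur][i]);
                path.pop_back();
            }
        seen[cur] = false;            // backtrack
    }

    double maxAverage(const vector<vector<int>>& tolls, int tollHike) {
        toll = tolls;
        N = tolls.size();
        freq.assign(N, vector<long long>(N, 0));
        seen.assign(N, false);
        path.assign(1, 0);
        dfs(0, 0);
        long long best = 0;           // greedily hike the most-used road
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                best = max(best, freq[i][j]);
        return double(baseCost + best * tollHike) / pathCount;
    }
};
```

For instance, with roads 0->1, 0->2, and 1->2 all of toll 1 and a hike of 10, the two paths cost 1 and 2, every road lies on exactly one path, and the best average is (3 + 10) / 2 = 6.5.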

InverseHaar1D
Used as: Division One - Level One:
Value 250
Submission Rate 253 / 265 (95.47%)
Success Rate 248 / 253 (98.02%)
High Score lishengren for 245.68 points (3 mins 46 secs)
Average Score 186.30 (for 248 correct submissions)

This problem is obviously related to the Division 2 Medium. In fact, I used the output of the test data from that problem to generate the test data for this problem! Here, it is not a simple matter of implementing an algorithm that has already been given to you, since what is described in the problem statement is the forward transform, not the reverse. The first step, then, is to figure out how to reverse the transform.

The algorithm is best explained by considering a simple case of a 2 element sequence (and obviously, a level-1 transform). So, consider the transformed sequence {y1, y2}, obtained from the original sequence {x1, x2}. From the description of the algorithm, we know that the first half of the transformed sequence (i.e. y1) is a result of the summing operation on the original sequence, and the second half of the sequence (i.e. y2) is a result of the difference operation on the original sequence. In other words,

y1 = x1 + x2
y2 = x1 - x2

Now, you can easily solve those 2 equations for the 2 unknowns to obtain the original sequence. In general, you want to be able to find 2 equations in 2 unknowns. This is always possible, because you can take the i'th sum and the i'th difference, which are both formed from the same pair in the original sequence. So, one begins with the section of the data produced by the last level of the transform and reverses that portion. Then, take twice the number of elements, consider this the new transformed sequence, and repeat. Pseudocode for a recursive solution follows:

transform2(data, N)      // undoes one level: data[0..N-1] is sums, then diffs

   for i:0 to N/2 - 1   // over all pairs
      ret[2*i] = (data[i] + data[i + N/2])/2
      ret[2*i + 1] = (data[i] - data[i + N/2])/2

   copy ret into the first N elements of data
   if 2*N ≤ data.length
      transform2(data, 2*N)

transform(data, L)
   len = data.length
   for i:1 to L-1 len/=2   // the LAST level touched a block of len/2^(L-1)
   transform2(data, len)   // note we no longer need L!
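As a concrete check of the reversal, here is a C++ sketch (function names are my own). Note that an L-level forward transform last touches a block of data.length/2^(L-1) elements, so the driver halves the length L-1 times before starting:

```cpp
#include <vector>
using namespace std;

// Undo one level: data[0..N-1] holds N/2 sums followed by N/2 diffs.
// Each (sum, diff) pair recovers the original pair via (s+d)/2 and (s-d)/2.
void invertLevel(vector<int>& data, int N) {
    vector<int> ret(N);
    for (int i = 0; i < N / 2; ++i) {
        ret[2 * i]     = (data[i] + data[i + N / 2]) / 2;
        ret[2 * i + 1] = (data[i] - data[i + N / 2]) / 2;
    }
    for (int i = 0; i < N; ++i) data[i] = ret[i];
    if (2 * N <= (int)data.size())
        invertLevel(data, 2 * N);    // undo the next (earlier) level
}

void inverseHaar(vector<int>& data, int L) {
    int len = data.size();
    for (int i = 1; i <= L - 1; ++i) len /= 2;   // block of the LAST level
    invertLevel(data, len);
}
```

For example, the level-2 transform of {1, 2, 3, 4} is {10, -4, -1, -1}, and inverseHaar recovers {1, 2, 3, 4} from it.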

Horoscope
Used as: Division One - Level Two:
Value 500
Submission Rate 219 / 265 (82.64%)
Success Rate 115 / 219 (52.51%)
High Score sjelkjd for 479.21 points (5 mins 58 secs)
Average Score 355.53 (for 115 correct submissions)

Intuitively, it would seem that a greedy solution (i.e. always saying a 'G' prediction is Right (if you can) and a 'B' prediction is wrong (if you can)) would not solve this problem correctly. It turns out that intuition, at least in this case, is right. A simple example on which a greedy solution returns an incorrect answer is: {GGB}, with R=1 and W=1. Here, a greedy solution would start off by saying the first day was R, then it is forced to assign the remaining days as W and R. So, we are left with RWR, which gives us a total of 1 Good day (the first one). Clearly, we can do better if we assign the Rights and Wrongs as WRW. This assignment gives us a total of 2 Good days.

This problem can be solved in a pretty straightforward manner using memoization. In fact, let's go straight to the pseudocode:

R, W as defined in the problem statement
p = concatenation of elements in predictions

initialize cache[MAX_DAYS][MAX_R][MAX_W] = -1

solve(int curDay, int numRightLeft, int numWrongLeft)

   if (curDay ≥ total # of days) return 0   // curDay starts at 0
   if cache[curDay][numRightLeft][numWrongLeft] > -1
      return cached value      // this result has already been computed

   int best = -1
   if numRightLeft > 0
      // We haven't assigned R Right days in a row
      // Assign the current day as being Right
      if p[curDay] == 'G' add = 1; else add = 0
      best = MAX(best, add + solve(curDay + 1, numRightLeft - 1, W))
   if numWrongLeft > 0
      // We haven't assigned W Wrong days in a row
      // Assign the current day as being Wrong
      if p[curDay] == 'B' add = 1; else add = 0
      best = MAX(best, add + solve(curDay + 1, R, numWrongLeft - 1))

   cache[curDay][numRightLeft][numWrongLeft] = best
   return best

The function is called initially as follows: solve(0, R, W)

The above function basically tries all the ways of assigning Right and Wrong days for all days but uses caching of previously computed results to run in time. The terminating condition for the recursion is clear: when our current day is larger than the total number of days for which we have predictions, we know that we can have no Good days, so we return 0. Otherwise, as long as we can assign the current day as being Right, we try that possibility. Note that we add 1 to our expected number of Good days if we're assigning a day as being Right and it was predicted as 'G'. Similar logic is applied for assigning the current day as being Wrong.
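A C++ sketch of this memoization follows. The array bounds MAX_DAYS, MAX_R, and MAX_W are illustrative assumptions on my part, not the actual constraints, and the wrapper goodDays is my own name:

```cpp
#include <cstring>
#include <string>
#include <algorithm>
using namespace std;

const int MAX_DAYS = 51, MAX_R = 26, MAX_W = 26;   // assumed bounds
int cache_[MAX_DAYS][MAX_R][MAX_W];
string p;      // concatenated predictions
int R, W;

// rLeft / wLeft = how many more consecutive Right / Wrong days we may assign.
int solve(int curDay, int rLeft, int wLeft) {
    if (curDay >= (int)p.size()) return 0;
    int& memo = cache_[curDay][rLeft][wLeft];
    if (memo > -1) return memo;

    int best = 0;
    if (rLeft > 0)   // assign the current day as Right; a 'G' becomes a Good day
        best = max(best, (p[curDay] == 'G') + solve(curDay + 1, rLeft - 1, W));
    if (wLeft > 0)   // assign the current day as Wrong; a 'B' becomes a Good day
        best = max(best, (p[curDay] == 'B') + solve(curDay + 1, R, wLeft - 1));
    return memo = best;
}

int goodDays(const string& predictions, int r, int w) {
    p = predictions; R = r; W = w;
    memset(cache_, -1, sizeof cache_);
    return solve(0, R, W);
}
```

On the {GGB}, R=1, W=1 example above, goodDays returns 2, matching the WRW assignment.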

SantaClaus
Used as: Division One - Level Three:
Value 1000
Submission Rate 57 / 265 (21.51%)
Success Rate 43 / 57 (75.44%)
High Score benetin for 910.43 points (9 mins 5 secs)
Average Score 643.40 (for 43 correct submissions)

By: OlexiyO
This problem is a slightly modified maximal weight matching problem. First we build a bipartite graph, with n nodes representing people and another n nodes representing presents. The cost of giving the ith present to the jth person can be calculated as (value[j][i] - value[j][j]) (because the jth person gets the ith present, but "loses" the jth one). Also, for each of the presents we introduce an edge of cost 0 leading to the person who initially receives this present (so, from present 0 to person 0, from present 1 to person 1 and so on).

Now let's look closely at how we improve a group. We take present a_0 and give it to person a_1, give present a_1 to person a_2, ..., present a_{k-1} to person a_k, and present a_k to person a_0, closing the cycle. We are interested in the smallest improvable group, so if a group contains several such cycles, we can split it into several smaller groups. Therefore, having chosen indices a_0, a_1, ..., a_k, we can construct a cycle in the graph we described above. Analogously, given a cycle in that graph, we can find the group corresponding to it. Adding the costs over all edges used in the cycle, we can check whether the total value of the group will increase after redistributing the presents.

What we need now is to find the shortest cycle of positive cost in the graph. This can be done with a modification of Floyd-Warshall's algorithm: for every pair of nodes, find the path of highest total value using exactly 2 edges, then 3 edges, and so on. As soon as the cycle from a_i back to a_i has total value greater than zero for one or more i, we calculate and return the maximal cost among these cycles. See tomek's solution for a short and clean implementation.
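Given the edge-weight matrix w built as described above, the cycle search can be sketched as follows (the function name, the NEG "no edge" sentinel, and the matrix representation are my own assumptions):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

const long long NEG = -1000000000000000LL;   // sentinel: no edge / no path

// best[i][j] holds the maximum total value of a path from i to j using
// exactly k edges; we grow k until some cycle i -> i turns positive, then
// return the maximal value among the cycles of that (shortest) length.
long long bestShortestPositiveCycle(const vector<vector<long long>>& w) {
    int n = w.size();
    vector<vector<long long>> best = w;      // k = 1 (single edges)
    for (int k = 2; k <= n; ++k) {
        vector<vector<long long>> next(n, vector<long long>(n, NEG));
        for (int i = 0; i < n; ++i)
            for (int m = 0; m < n; ++m)
                if (best[i][m] > NEG)
                    for (int j = 0; j < n; ++j)
                        if (w[m][j] > NEG)
                            next[i][j] = max(next[i][j], best[i][m] + w[m][j]);
        best = next;
        long long top = 0;
        for (int i = 0; i < n; ++i) top = max(top, best[i][i]);
        if (top > 0) return top;             // smallest improvable group found
    }
    return 0;                                // no positive cycle: keep as is
}
```

The zero-cost self-edges from the construction can never form a positive cycle on their own, so the search effectively starts at cycles of length 2.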

By NeverMore
TopCoder Member