
Match Editorial 
SRM 184, Tuesday, February 17, 2004
Match summary
Both divisions faced a harder set of problems than usual in SRM 184. Most notable was the Division 1 hard, which only tomek was able to solve
successfully, and for only 322 points, but it was still enough to give him the win for the match. There were plenty of challenges to be had on that problem, and although
bladerunner's solution was not successful, 4 of his challenges were. Division 2 coders also had their work cut out for them, with a combinatorial
medium problem that had a significantly lower submission accuracy than the hard problem. bl came out on top in Division 2, with only a handful of
coders within reach.
The Problems
RaceApproximator
Used as: Division Two  Level One:
Value: 250
Submission Rate: 232 / 285 (81.40%)
Success Rate: 186 / 232 (80.17%)
High Score: NeverMore for 240.74 points (5 mins 36 secs)
Average Score: 172.94 (for 186 correct submissions)

Though there was a long list of notes and constraints for this problem, very little of the actual coding was complicated.
Simply plug the values into the equation that is given, and format the output. It was important to typecast some values to floating point
numbers when plugging them into the given equation, since a/b can have different answers depending on whether a and
b are integers or floating point numbers. Beyond this, formatting seconds into hours, minutes, and seconds
is a familiar problem to many. If n is the total number of seconds, then the number of hours is h=n/3600, the number of minutes
is m=(n-3600*h)/60, and the number of seconds is s=n-3600*h-60*m. If the function
approxTime(d1,t1,d2,t2,d) is the function described in the problem statement, then the following code will split the value into hours, minutes,
and seconds:
int time=approxTime(d1,t1,d2,t2,d);
int hours=time/3600;
int minutes=(time-3600*hours)/60;
int seconds=time-3600*hours-60*minutes;
All that is left is formatting, which can be done differently depending on your language and preference.
Here is a simple Java solution.
String ret=hours+":";
if (minutes<10) ret+="0";
ret+=minutes+":";
if (seconds<10) ret+="0";
ret+=seconds;
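Putting the pieces together, a minimal self-contained sketch (the class and method names here are mine, not from the problem statement) that splits a number of seconds and formats it with zero-padded minutes and seconds:

```java
// Illustrative sketch: split a total number of seconds into h:mm:ss.
// Names are my own; the problem's approxTime step is assumed to have
// already produced the integer "time" passed in here.
public class TimeFormatSketch {
    public static String formatTime(int time) {
        int hours = time / 3600;
        int minutes = (time - 3600 * hours) / 60;
        int seconds = time - 3600 * hours - 60 * minutes;
        String ret = hours + ":";
        if (minutes < 10) ret += "0";   // pad minutes to two digits
        ret += minutes + ":";
        if (seconds < 10) ret += "0";   // pad seconds to two digits
        ret += seconds;
        return ret;
    }
}
```

For example, 3661 seconds formats as "1:01:01".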
BagOfHolding
Used as: Division Two  Level Two:
Value: 500
Submission Rate: 106 / 285 (37.19%)
Success Rate: 40 / 106 (37.74%)
High Score: Skippy1313 for 476.76 points (6 mins 19 secs)
Average Score: 305.76 (for 40 correct submissions)

This problem could have been solved by trying every way of putting items into the bag, and seeing which ones worked, but there
is another solution that requires a bit more insight.
Call the total number of
items n, the number of items that will not obstruct the item in question s, and the number of items that will obstruct
it b. Clearly b+s+1=n. Those b items can be ordered in b! different ways, and the item in question must be placed in the bag after those first
b items. So we know that when only considering these b+1 items that there are b! ways of ordering them in the bag such that
the item in question can be removed. The s smaller items can be put into the bag at any point and will not affect anything that we are concerned with.
If we think of those b+1 items in some order and consider adding the s items to that ordering one by one, the first of those s items can
go in one of b+2 places, the next one can go in one of b+3 places, and so on. So the number of ways the s items can be inserted into an
ordering of the b+1 items is (b+s+1)!/(b+1)!=n!/(b+1)!. The total number of orderings which do not
obscure the item in question then is b!n!/(b+1)!=n!/(b+1). The odds that a random ordering will be one of these acceptable
orderings is the number of acceptable orderings divided by the number of possible orderings, (n!/(b+1))/n!=1/(b+1).
Once we have this result the coding is pretty easy, just loop through the items and find the number that can obscure the item in question, then return 1/(b+1).
public double oddsReachable(int[] sizes, int item) {
  int b=0;
  for (int i=0;i<sizes.length;i++)
    if (sizes[i]>=sizes[item] && i!=item)
      b++;
  return 1.0/(double)(b+1);
}
TeamBuilder
Used as: Division Two  Level Three:
Value: 1000
Submission Rate: 41 / 285 (14.39%)
Success Rate: 30 / 41 (73.17%)
High Score: sharprose for 911.50 points (9 mins 1 secs)
Average Score: 711.41 (for 30 correct submissions)

This problem statement was a silly way of not mentioning the word 'graph'. What we have is a directed graph where the locations are vertices, and the
paths between them are directed edges. In this problem we want to find the number of vertices that can reach all other vertices, and the number of
vertices that can be reached by all other vertices. What we need to determine is the transitive closure of this graph. The transitive closure
of a graph G has the same set of vertices as G and has an edge from u to v if there is a path from u to v in G.
Once we have this it is easy to calculate the values we are after, and lucky for us there is an incredibly easy-to-code algorithm for finding the transitive
closure of a graph. First we need the adjacency matrix of the graph, which is actually what we are given. For a graph G of n vertices, the adjacency
matrix is an n x n matrix with the element G[u][v]=1 if there is an edge from u to v, and
G[u][v]=0 otherwise. The following algorithm, also known as Floyd-Warshall, will find the transitive closure of a graph.
public void transitiveClosure(int[][] paths) {
  for (int k=0;k<paths.length;k++)
    for (int i=0;i<paths.length;i++)
      for (int j=0;j<paths.length;j++)
        if (paths[i][k]==1 && paths[k][j]==1) paths[i][j]=1;
}
Now that we have this, any vertex that can reach all other vertices will produce a row of '1's in the adjacency matrix of the transitive closure,
except that the value on the diagonal (G[u][u]) may be a '0'.
Similarly, any vertex that can be reached by all other vertices will produce a column of '1's in the adjacency matrix of the transitive closure, again
except possibly for the value on the diagonal.
It is now a simple matter to count these and return the values.
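The counting step can be sketched as follows (class, method, and variable names are mine; paths is assumed to already hold the transitive closure produced by the algorithm above):

```java
// Illustrative sketch of the final counting step. paths[][] is assumed
// to be the transitive closure: paths[u][v]==1 iff u can reach v.
public class TeamBuilderSketch {
    public static int[] specialLocations(int[][] paths) {
        int n = paths.length;
        int canReachAll = 0, reachableByAll = 0;
        for (int u = 0; u < n; u++) {
            boolean reachesAll = true, reachedByAll = true;
            for (int v = 0; v < n; v++) {
                if (u == v) continue;                    // diagonal may be 0
                if (paths[u][v] == 0) reachesAll = false;   // row check
                if (paths[v][u] == 0) reachedByAll = false; // column check
            }
            if (reachesAll) canReachAll++;
            if (reachedByAll) reachableByAll++;
        }
        return new int[]{canReachAll, reachableByAll};
    }
}
```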
RaceCalculator
Used as: Division One  Level One:
Value: 250
Submission Rate: 184 / 206 (89.32%)
Success Rate: 168 / 184 (91.30%)
High Score: Ruberik for 234.44 points (7 mins 25 secs)
Average Score: 163.37 (for 168 correct submissions)

This problem used the same equation as the Division 2 easy, but this time coders were required to follow a very specific algorithm to determine which of
a runner's races was the best race. Here most of the difficulty came from following the prescribed algorithm to the letter. The first task was to
calculate the badness of each race; the following code demonstrates this:
double[] badness = new double[dist.length];
for (int i=0;i<badness.length;i++)
  badness[i]=-1.0/0; //negative infinity, so any real error will replace it
//loop over all races, and then over all pairs of
//other races to calculate badness of that race
for (int k=0;k<dist.length;k++)
  for (int i=0;i<dist.length;i++)
    for (int j=0;j<dist.length;j++) {
      if (i==j || j==k || i==k) continue;
      double expected=approxTime(dist[i],times[i],dist[j],times[j],dist[k]);
      double err=(times[k]-expected)/expected;
      badness[k]=Math.max(err,badness[k]);
    }
Now that we know the badness of each race, we just go through them all and return the index of the race with the lowest badness.
int best=0;
for (int i=0;i<badness.length;i++)
if (badness[i]<badness[best]) best=i;
return best;
BagOfDevouring
Used as: Division One  Level Two:
Value: 500
Submission Rate: 84 / 206 (40.78%)
Success Rate: 63 / 84 (75.00%)
High Score: tomek for 466.12 points (7 mins 46 secs)
Average Score: 330.07 (for 63 correct submissions)

Though the problem statement began much like the Division 2 medium problem, this ended up being a much different problem.
It may be quite obvious that the solution will be done with dynamic programming or recursion with memoization, but probability
can easily trip up many people. I find that memoization usually makes for an easier to understand solution, so that is what I
will present here.
First we need a clear understanding of how this bag works. At each step we can choose
which item to remove from the bag, and in each of these cases we get a different expected yield, since items have different probabilities
of being devoured depending on the contents of the bag.
If we choose to remove a particular item from the bag, then the yield will be the value of the item we just removed plus the sum of the expected yields of each possible state of our bag
(i.e. the contents of our bag, considering that one of the remaining items may be devoured) each weighted by the probability of that
state occurring. We write a recursive function to solve this that takes a set of items and returns the expected yield of that set of items. That is, it
will determine the expected yield for all choices of what item to remove next, and return the yield for the case in which the yield was the highest. The function
calls itself when necessary to determine what the expected yields of smaller subsets of items are. The base case of this function is when there is
one or zero items, in which case the expected yield is the value of that item, or zero if the bag is empty. To make this run quickly, of course, we
need to memoize. That is, once we calculate the result
for a certain set of items, we store that value so we never have to calculate it again. Since there are n items, there are
2^n subsets, so an array of size 2^n works fine since the input size, 15, is small.
double yield(int mask) {
  //if we've gotten an answer for mask before, return it
  if (memo[mask]!=-1) return memo[mask];
  double best=0;
  //try removing each item, save the best result
  for (int i=0;i<values.size();i++) {
    if (!(mask & (1<<i))) continue;
    double current=values[i];
    double totalweight=100;
    //sum up the weights in the bag + 100
    for (int j=0;j<weights.size();j++)
      if (i!=j && (mask & (1<<j)))
        totalweight+=weights[j];
    //each item might get devoured, add the best yield
    //after that, weighted by how likely it is to happen
    for (int j=0;j<weights.size();j++)
      if (i!=j && (mask & (1<<j)))
        current+=weights[j]/totalweight*yield(mask-(1<<i)-(1<<j));
    //the leftover chance is how likely it is that
    //nothing is devoured, weight that as well
    current+=100.0/totalweight*yield(mask-(1<<i));
    if (current>best) best=current;
  }
  memo[mask]=best;
  return best;
}
Aside from this, all that remains is to initialize the base cases in
memo. For zero items, and for exactly one item, we know
the result, so store those in the memo before starting. Now all we need to do is call this function with
mask set to
indicate that all the items are available.
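Putting the recursion together with this base-case initialization, a self-contained sketch could look like the following (class, field, and method names are mine, adapted to Java from the C++-style snippet above; the probability model mirrors that snippet):

```java
import java.util.Arrays;

// Illustrative sketch of the full memoized solution. Names are my own;
// the devouring chances mirror the snippet above: an item of weight w
// is devoured with probability w / (100 + total weight in the bag).
public class BagOfDevouringSketch {
    int[] values, weights;
    double[] memo;

    public double expectedYield(int[] values, int[] weights) {
        this.values = values;
        this.weights = weights;
        int n = values.length;
        memo = new double[1 << n];
        Arrays.fill(memo, -1);            // -1 marks "not yet computed"
        memo[0] = 0;                      // empty bag yields nothing
        for (int i = 0; i < n; i++)
            memo[1 << i] = values[i];     // a lone item is always retrievable
        return yield((1 << n) - 1);       // start with every item in the bag
    }

    double yield(int mask) {
        if (memo[mask] != -1) return memo[mask];
        double best = 0;
        for (int i = 0; i < values.length; i++) {
            if ((mask & (1 << i)) == 0) continue;
            double current = values[i];
            double totalweight = 100;
            for (int j = 0; j < weights.length; j++)
                if (i != j && (mask & (1 << j)) != 0)
                    totalweight += weights[j];
            for (int j = 0; j < weights.length; j++)
                if (i != j && (mask & (1 << j)) != 0)
                    current += weights[j] / totalweight
                             * yield(mask - (1 << i) - (1 << j));
            current += 100.0 / totalweight * yield(mask - (1 << i));
            if (current > best) best = current;
        }
        return memo[mask] = best;
    }
}
```

For instance, with two items of value 10 and weight 100 each, removing either item first gives 10 plus a 50% chance of salvaging the other, for an expected yield of 15.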
TeamBuilding
Used as: Division One  Level Three:
Value: 1000
Submission Rate: 40 / 206 (19.42%)
Success Rate: 1 / 40 (2.50%)
High Score: tomek for 322.07 points (51 mins 36 secs)
Average Score: 322.07 (for 1 correct submission)

This problem proved to be deceptively difficult, which tomek apparently noticed in enough time to resubmit at the end of the round and get
the only working solution to this problem. As with the Division 2 hard problem this is a directed graph. Our problem here is to find the minimum
number of edges to add to this graph such that every vertex belongs to some cycle, even if that cycle is just an edge from the vertex to itself.
The solution that occurred to many coders, but was incorrect, was to take the number of vertices with no incoming edges, the
number of vertices with no outgoing edges, and return the maximum of those two values. The thinking is that as long as all vertices have incoming and
outgoing edges, all vertices will belong to some cycle, but this condition only guarantees that some vertices will belong to some cycle.
To show a counter example to this solution, consider two disjoint cycles and an edge from one of the cycles to a vertex, and another edge from that vertex to
the other cycle. In this case all of the vertices have incoming and outgoing edges, but that vertex between the two cycles does not belong to any cycle.
For our directed graph, G, find a subset of the vertices, source, such that every vertex in G can be reached by at least one vertex
in source. Also find a subset of the vertices, sink, such that every vertex in G can reach at least one vertex in sink. Now for
each vertex in sink, if there are any vertices in source that cannot reach it, add an edge from the vertex in sink to one of those
vertices in source. Also, for every vertex in source, if there are any vertices in sink that it cannot reach, add an edge from one
of them to the vertex in source. The most edges that can be added in this fashion is the maximum of the cardinalities of source and sink.
After these edges have been added, every vertex will belong to some cycle. Consider any vertex, from this vertex there is a path to some vertex in sink.
There is an edge leading from this vertex in sink to some vertex in source. Now from here there is either a path to our original vertex, or there is
a path to a different node in sink than the one we were previously at. From this new sink vertex we can get to a different source vertex. We
can continue on in this fashion until we reach a source vertex that leads to our original vertex. The edges we added ensure that this is the case because
when we add edges from sink vertices to source vertices in the event that a particular source vertex couldn't get to a particular sink
vertex, we ensure that when traveling from that vertex in sink to the one in source that we will be able to reach a different sink vertex than
the one we started at. When we add edges from sink to source when a particular sink vertex cannot reach a particular source vertex,
we are ensuring that a cycle will occur (not one cycle for every one of these such edges added, but at least one cycle will be added after all of these edges have been added).
Now, since we know that adding edges in this manner will cause all vertices to belong to a cycle, all we need to do is find the minimum possible cardinalities
for source and sink. In addition, when choosing source and sink, we don't require that a vertex that is already part of a cycle is able to
reach any of the vertices in sink, and we also do not require that it can be reached from a vertex in source.
Unfortunately the best way I know of doing this is also known as Minimum Set Cover, and is an NP-complete problem.
Basically all there is to do at this point is when calculating sink, try all 2^n combinations of vertices and take the smallest valid set. Find
source in the same way. Now the return value is what was stated before, the maximum of the cardinality of these two sets. It is more difficult to show that
with the minimum source and sink sets, the maximum of the cardinalities of these two sets is not only sufficient but also necessary; I will leave that proof
as an exercise for the reader. I'll admit my proof here
isn't ironclad (and I am only mostly sure that it is correct), and I'm not sure that this problem is NP-complete, but perhaps that is something we can figure out on the
round tables. For code that solves this problem in the way I have described here, look at the writer's solution in the practice room.
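The brute-force search for a smallest sink set might be sketched as follows (names are mine; reach[u][v] is assumed to be the transitive closure, and this simplified version omits the refinement for vertices that already belong to a cycle):

```java
// Illustrative sketch: find the minimum-size "sink" set by trying all
// 2^n subsets. reach[u][v] is assumed true iff u can reach v in the
// transitive closure; the "source" set can be found the same way on
// the transposed matrix. This ignores the already-on-a-cycle refinement.
public class MinSinkSketch {
    public static int minSinkSize(boolean[][] reach) {
        int n = reach.length;
        int best = n; // the full vertex set is always a valid sink set
        for (int set = 0; set < (1 << n); set++) {
            boolean valid = true;
            for (int u = 0; u < n && valid; u++) {
                boolean ok = false; // can u reach some vertex in this set?
                for (int v = 0; v < n; v++)
                    if ((set & (1 << v)) != 0 && reach[u][v]) ok = true;
                if (!ok) valid = false;
            }
            if (valid) best = Math.min(best, Integer.bitCount(set));
        }
        return best;
    }
}
```

For a path 0 -> 1 -> 2, the single vertex {2} is a valid sink set, so the minimum size is 1.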
By
Running Wild
TopCoder Member