Match Editorial
SRM 107
Wednesday, August 7, 2002

The Problems

Division-II, Level 1: (250pt)
Submission rate - 92% (158/172)
Success rate - 79% (124/158)
High score: n0vice, 243.79 points

This problem's hardest aspect was its confusing wording. In essence, it's asking you to find a string that's optimal in some way, put it at the beginning, and then write all the others out in series. So the solution is pretty easy. Walk through the array and measure how "good" each item is. A simple "quality" value does the job: for each item, start quality at 0. Add 1000 if the string is even-length (thus ensuring that even-length strings always beat odd-length strings). Then subtract the string's length (giving shorter strings the advantage). And if the result is greater than or equal to your current "best", make it your new best.

Note that the "equal to" part is important - since we're doing this in order, and it wants the one that appears latest, we have to replace our best if it gets matched.

From there, it's easy. Start with an empty string and append your "best" component. Now iterate through the array again - if you're currently on the "best", skip it (since you've already got it); otherwise, append it.

Now return that.

And that's all.
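Putting those steps together, a minimal C++ sketch might look like this (the function name and signature here are my own, not the problem's actual ones):

```cpp
#include <string>
#include <vector>
using namespace std;

// Hypothetical sketch: pick the "best" string (even length beats odd,
// shorter beats longer, later beats earlier on ties), put it first,
// then append everything else in order.
string buildResult(const vector<string>& items) {
    int bestIdx = 0, bestQuality = -1000000;
    for (int i = 0; i < (int)items.size(); i++) {
        int quality = 0;
        if (items[i].size() % 2 == 0) quality += 1000; // even always beats odd
        quality -= (int)items[i].size();               // shorter is better
        if (quality >= bestQuality) {                  // >= keeps the latest tie
            bestQuality = quality;
            bestIdx = i;
        }
    }
    string result = items[bestIdx];
    for (int i = 0; i < (int)items.size(); i++)
        if (i != bestIdx) result += items[i];          // the rest, in order
    return result;
}
```

With {"abc", "de", "fg"}, both even-length strings tie on quality, so the later one ("fg") wins and goes first, giving "fgabcde".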

Division-II, Level 2: (450pt)
Submission rate - 91% (156/172)
Success rate - 58% (91/156)
High score: n0vice, 447.23 points

I have to be impressed by derelikt's score on this one, which is quite probably the highest percentage of the possible points I've ever seen from someone who isn't cheating. Five lines of code. This particular problem is much easier in Java than in C++, since you can use StringTokenizer to divide the string into runs of A's, add all their lengths to a HashSet, and return its size.

On the other hand, it's not really hard in C++ either. Keep a counter - "how many A's you've seen in a row" - starting at 0. Iterate through the string. If the current character is D, add the counter's value to a set< int > and reset the counter to 0. If it's A, increment the counter. Remember to add the counter's final value at the end, then remove 0 from the set< int > (you might not have a 0, so you can't just subtract one from the size, but myset.erase( 0 ); isn't hard to write) and return the size.

I honestly don't know which of these would work better in C#, but if C# doesn't have a clone of Java's StringTokenizer, the C++ technique will work with whatever C#'s equivalent of set is.
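The C++ version sketched above, assuming the input is a string of 'A's and 'D's (the function name is mine):

```cpp
#include <set>
#include <string>
using namespace std;

// Hypothetical sketch: count the distinct nonzero run lengths of A's.
int distinctRuns(const string& s) {
    set<int> lengths;
    int count = 0;                 // how many A's seen in a row
    for (char c : s) {
        if (c == 'A') count++;
        else {                     // a D ends the current run
            lengths.insert(count);
            count = 0;
        }
    }
    lengths.insert(count);         // don't forget the final run
    lengths.erase(0);              // zero-length runs don't count
    return (int)lengths.size();
}
```

For "AADAAADA" the runs have lengths 2, 3, and 1, so this returns 3.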

Division-II, Level 3: (1000pt) & Division-I, Level 1: (300pt)

Div-II
Submission rate - 42% (73/172)
Success rate - 41% (30/73)
High score: biomass, 823.56 points

Div-I
Submission rate - 76% (80/105)
Success rate - 78% (62/80)
High score: kv, 284.83 points

There are really two major cases you have to look at - the one where the axis of symmetry goes straight through a column, and the one where it goes between two columns. Neither of them is really hard, though.

In the end, the algorithm comes down to "scan the entire logo, and its mirror image, and add one for every time the squares on both sides aren't identical". Do that for each vertical axis - remembering both possible kinds - and return the smallest value you can achieve.

The actual implementation could be done in quite a few ways. Some people copied the data into an array, translated along some axis so that the "mirror point" was always at, say, location 50. In that case you can write the data in verbatim for "axis goes through a column", and shift everything on the left side of the mirror point one more space to the left for "axis goes between columns".

radeye set his up to do the reading directly from the input data, with some logic to avoid running out of bounds.

I, personally, used a map< int, map< int, bool > >, which is nice because it mimics an infinitely-sized 2D array, defaulting to false. So I'd write the data in, then just scan 0,0 to 50,50, along with the reflection, then clear it for the next one.
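The mismatch-count algorithm can be sketched as follows, under some assumptions of mine: the logo is a vector<string> with '#' for filled squares and '.' for empty ones, the axis is indexed in half-column units (even values go through a column, odd values between columns), and everything outside the grid counts as empty:

```cpp
#include <algorithm>
#include <climits>
#include <string>
#include <vector>
using namespace std;

// Hypothetical sketch: for every vertical axis, count cells that don't
// match their mirror image, and return the smallest count found.
int minMismatches(const vector<string>& grid) {
    int w = (int)grid[0].size();
    int best = INT_MAX;
    for (int axis = 0; axis <= 2 * (w - 1); axis++) {  // half-column units
        int cost = 0;
        for (const string& row : grid) {
            for (int c = 0; c * 2 < axis; c++) {       // each pair counted once
                int m = axis - c;                      // mirror column
                char mirror = (m < w) ? row[m] : '.';  // outside grid is empty
                if (row[c] != mirror) cost++;
            }
            for (int c = axis + 1; c < w; c++)         // mirror falls off the
                if (row[c] != '.') cost++;             // left edge of the grid
        }
        best = min(best, cost);
    }
    return best;
}
```

For example, "##." is already symmetric about the boundary between its first two columns, so the sketch reports 0 mismatches for it.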

Division-I, Level 2: (550pt)
Submission rate - 37% (39/105)
Success rate - 77% (30/39)
High score: ZorbaTHut, 452.58 points

I honestly don't know of anyone who did this the clever way - I ended up talking with an admin afterwards, and he explained the good solution. So here it is.

Instead of thinking of it as a series of resistors, just think of the nodes. Right now our task isn't to figure out which resistors are redundant; it's to figure out which nodes are redundant. It's pretty easy to see that if a node is redundant, any resistors linking to it are likewise redundant.

First build your handy adjacency matrix. Now you've got a big loop for each point.

First, remove the given point from the adjacency matrix. Then run Floyd's algorithm on the whole thing. Now, for each point remaining, check whether it can still reach A or B. If it can reach neither, every such route went through the removed point, so it's not part of a simple path.

Repeat for the next point, accumulating the simple paths.

In the end it's O(n^4), and you have a set of nodes that aren't part of simple paths. Run through your original input and, if either side of a resistor isn't part of a simple path, add that resistor to your list. Sort and return.
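A sketch of that node-removal loop (the adjacency encoding and names are my own; a full solution would still have to map resistors back onto these nodes):

```cpp
#include <vector>
using namespace std;

// Hypothetical sketch: if, after removing some node v, a node u can reach
// neither A nor B, then every route from u ran through v - so u can't be
// on any simple path from A to B. Overall O(n^4).
vector<bool> simplePathNodes(const vector<vector<bool>>& adj, int A, int B) {
    int n = (int)adj.size();
    vector<bool> onPath(n, true);
    for (int v = 0; v < n; v++) {              // remove each node in turn
        if (v == A || v == B) continue;
        vector<vector<bool>> r = adj;
        for (int i = 0; i < n; i++) r[v][i] = r[i][v] = false;
        for (int i = 0; i < n; i++) r[i][i] = true;
        for (int k = 0; k < n; k++)            // Floyd-Warshall closure
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (r[i][k] && r[k][j]) r[i][j] = true;
        for (int u = 0; u < n; u++)            // cut off from both ends?
            if (u != v && !r[u][A] && !r[u][B])
                onPath[u] = false;
    }
    return onPath;
}
```

For instance, with A = 0, B = 1, node 2 on the path between them, and node 3 dangling off node 2, removing node 2 strands node 3 - so node 3 is flagged as off every simple path.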

Now here's a somewhat kludgy, hacked-together way that does, in fact, work :)

The way I ended up figuring out is (of course) a recursive algorithm. Keep an array of which nodes are in a simple path from A to B (all starting at "false", of course) and an array of which nodes you've been through. Now just do a depth-first search. If you find a path to B, you can set A, B, and all the nodes in your path to "true". If you run into a node that's already set to "true" from a different direction, you can set all the nodes in the current path to "true" also. However, if you run into a piece of the same path you're already on, you can't take a node more than once, so don't set anything to true.

It's a bit hard to explain - you may wish to look at my source. gd[] is the "which nodes are in a simple path" array ("good"), and tk[] is the "which nodes are in the path you're currently exploring" array ("taken").

A lot of people did clever things to try to embed the actual number of connections inside the adjacency matrix. I didn't bother. I built an adjacency matrix like normal, then just went through the connections (snagging them out of the input string a second time) and tested them all. Sort them as you go, then sort the final array and return it.

Division-I, Level 3: (800pt)
Submission rate - 10% (10/105)
Success rate - 40% (4/10)
High score: Yarin, 491.65 points

Oddly enough, this problem was actually brute-forceable today, and I believe brute force may be its only solution. In general, what you wanted to do was generate every combination of rules, make a really enormous array with the start pattern in the middle, and run every combination of rules on it, seeing which gives you the best result.

The actual implementation was a little trickier. It's easy to calculate "lightspeed", the fastest speed at which anything in the middle can affect anything on the edges. It's one cell per iteration, since each cell only pays attention to the cells on either side. That means that, after 300 iterations, with a 50-cell starting pattern, your pattern will be, at most, 650 cells wide.

Of course, it's not *quite* that simple. Since you can have rules that generate life where there is no life - say, for example, DDD - some algorithms might start generating fringe effects on the side. My implementation kept both cells on the very edges "dead", meaning that invalid data could start creeping in from the sides at lightspeed also. So I would need another 300 cells on each side to prevent that. In the end, to keep from getting tripped up by an inevitable stupid off-by-one error, I made my entire array 2000 cells wide and started the "initial pattern" at location 1000. This meant that locations 0-300 were garbage, 700-1350 were important, and 1700-2000 were garbage.

The next thing that tripped me up was another problem, this time hinging on "infinite life" on either side. However, with the way I'd set it up, locations 500 and 1500 would be in this "infinite" area, if there was one. No problem! I made a loop to start at 500 and find the first dead cell, and another loop to start at 1500 and go backwards to find the first dead cell, and then my main "diversity finder" just went from one side to the other.

Finding diversity wasn't really hard. Keep a counter of "how many alive cells have you found in a row". Start from the first side and go to the end. If the cell you're on is alive, increment the counter. If the cell you're on is dead, add the current value of the counter to a list, then set the counter to 0. Once you've iterated all the way through just scan the list for duplicates (and zeros) and eliminate them all, then return the size of the list. Note that C++ STL's set class does the job quite nicely, though you have to remove 0 manually (not hard - myset.erase( 0 ); ).

The only remaining part is "run every combination of rules", and generating those combinations isn't hard either. You can think of it as an array of the eight possible rules, with a flag for whether each condition brings life or death - in fact, that's exactly how I implemented it. Then it's just a matter of trying every combination of those flags one after another - 256 tests, overall.
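One way to sketch that machinery: encode a rule set as an eight-bit mask, where bit k is the new state for neighborhood pattern k (this particular encoding is my assumption, not necessarily the author's):

```cpp
#include <vector>
using namespace std;

// Hypothetical sketch: advance the automaton one generation. Bit k of
// `rules` gives the new cell state for neighborhood pattern
// k = left*4 + self*2 + right.
vector<int> step(const vector<int>& cells, int rules) {
    int n = (int)cells.size();
    vector<int> next(n, 0);
    for (int i = 1; i + 1 < n; i++) {      // the very edge cells stay dead
        int pattern = cells[i - 1] * 4 + cells[i] * 2 + cells[i + 1];
        next[i] = (rules >> pattern) & 1;
    }
    return next;
}

// Trying every rule set is then just:
//   for (int rules = 0; rules < 256; rules++) { /* run 300 generations */ }
```

As a sanity check, mask 90 makes each cell the XOR of its neighbors (the classic rule 90), so a lone live cell splits into two.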

Of course, you need very efficient code for all of this, or it will time out. My solution runs even the worst cases in under two seconds - look at it if you want hints.

Author
By ZorbaTHut
TopCoder Member