# An MVP’s Thoughts on Marathon Match 100

**Introduction**

Marathon Match 100 was a special circumstance: the 100th marathon match and cool prizes! Vintage MM t-shirts were offered to the top 50 old-school competitors as well as cash prizes for the top scorers.

… and it was a special match indeed! In total, 287 Topcoder members competed, by far the highest participation in the last 3 years. Another noteworthy fact was that four contestants finished the provisional leaderboard tied for first, with a perfect score of 1.0 on each of the 100 seeds!

**Problem**

The problem itself was also quite interesting: easy to approach and get a working solution, but also allowing multiple approaches, even for placing in the top 10 – a rarity, since usually most of the top 10 does more or less the same thing. I invite you to read the complete problem statement here: SameColorPairs

Short version: You’re given an N x M grid of tiles, each of a given color *1 … C*, with *1 <= N <= 100, 1 <= M <= 100, 1 <= C <= 6*.

You can remove a pair of tiles if:

1. They have the same color, let’s call it *tile_color*

2. The bounding rectangle determined by the two tiles (the rectangle with the smallest area which contains both tiles) doesn’t contain tiles of other colors. That is, the rectangle must contain only empty cells (tiles which were removed in a previous step) or tiles of color *tile_color*

You want to remove as many tiles as possible by successive application of the move above.

Your score for a seed is the percentage of cleared tiles (with a maximum score of 1.0 for clearing the whole grid).

The leaderboard scoring was relative, meaning you gained points for each competitor you beat on each seed. This meant that a 5% improvement in absolute score would not necessarily translate into a leaderboard gain.

**Approaches**

Experimenting a bit with the problem, it was apparent that clearing most of the grid was easy for every seed. Indeed, seeds with 2 to 4 colors were almost trivial, so the real challenge was scoring high on the seeds with 5 or 6 colors.

For this kind of perfect-information problem with simple mechanics, three types of approaches usually work really well: Simulated Annealing, Beam Search, and iterated greedy / random search.

In this problem, both iterated random search and Simulated Annealing could be made to work.

I tried Beam Search, but couldn’t make it work due to the lack of a good (and efficient) state evaluation function. It’s hard to tell how good a configuration (a grid with some tiles removed) is without knowing how many more moves are possible. It’s a bit similar to Chess, where it’s hard to tell if a Knight sacrifice will work without actually simulating all the possible moves.

A. Iterated Random Search

The solution I ended up using (and which was used in various forms by a lot of the competitors) was Iterated Random Search.

It consists of two parts: RandomSearch + iterating by reusing a prefix of the best found solution

RandomSearch:

1. Pick a random ordering of the cells, *order*
2. For each color, pick a random ordering of the cells of that color, *color_order[]*
3. For each *cell1* in *order*: for each *cell2* in *color_order[color_of_cell(cell1)]*: if *move(cell1, cell2)* is valid, apply the move (erase *cell1* and *cell2*) and go to step 1
4. No more moves possible: exit

*RandomSearch* will always produce a solution which can’t be extended (maximal).

Because it runs quite quickly, we can iterate on it by replaying a portion of the best_solution found so far and then applying RandomSearch.

Example: The first call of RandomSearch produces a solution of length 100 (clears 200 cells).

Take a prefix of this solution (let’s say the first 50 moves), replay it.

From that point onward, redo RandomSearch again, hopefully finding a longer solution.

Because of the randomness involved, it’s unlikely that two runs will be the same.

The length of the prefix should be tied to the time remaining (when getting close to the end, reuse larger prefixes of the best solution).
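As a rough illustration, here is a minimal Python sketch of the scheme above. The contest solutions were C++; names such as `valid_move` and `iterated_random_search` are mine, and the search scans all remaining pairs instead of keeping per-color orderings, so it simplifies the pseudocode:

```python
import random

EMPTY = None  # erased cells

def valid_move(grid, a, b):
    """A pair is removable iff both tiles share a color and their bounding
    rectangle contains only that color or empty cells."""
    (r1, c1), (r2, c2) = a, b
    color = grid[r1][c1]
    if color is EMPTY or grid[r2][c2] != color:
        return False
    for r in range(min(r1, r2), max(r1, r2) + 1):
        for c in range(min(c1, c2), max(c1, c2) + 1):
            if grid[r][c] is not EMPTY and grid[r][c] != color:
                return False
    return True

def random_search(grid):
    """Apply random valid moves until none remain: a maximal solution."""
    moves = []
    while True:
        cells = [(r, c) for r in range(len(grid)) for c in range(len(grid[0]))
                 if grid[r][c] is not EMPTY]
        random.shuffle(cells)
        found = False
        for i, a in enumerate(cells):
            for b in cells[i + 1:]:
                if valid_move(grid, a, b):
                    moves.append((a, b))
                    grid[a[0]][a[1]] = EMPTY
                    grid[b[0]][b[1]] = EMPTY
                    found = True
                    break
            if found:
                break          # reshuffle and start over, as in step 1
        if not found:
            return moves

def iterated_random_search(start_grid, iterations=200):
    """Iterate RandomSearch, replaying a random prefix of the best solution."""
    best = []
    for _ in range(iterations):
        grid = [row[:] for row in start_grid]
        prefix = best[:random.randrange(len(best) + 1)] if best else []
        for a, b in prefix:    # replay the prefix, then search from there
            grid[a[0]][a[1]] = EMPTY
            grid[b[0]][b[1]] = EMPTY
        moves = prefix + random_search(grid)
        if len(moves) > len(best):
            best = moves
    return best
```

A real solution would pick the prefix length based on the remaining time, as described above, rather than uniformly at random.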

This solution clears all seeds with C in 1 … 4, and most of the ones with C = 5.

I got an average absolute score of 99.90 over 100 seeds.

eldidou used the same general approach with two modifications (in his words):

- For the prefix I don’t use only the best solution seen so far, but a pool of the N best solutions.
- The length of the prefix is not linear to the time spent, more time is spent exploring the last cells to remove, when iterations are very fast to compute.

Thinking about it after the contest, the reason the first point helps is that the problem with iterated random search is the high variance between runs. By keeping multiple solutions around, the solution explores multiple paths at the same time, instead of being stuck with just one. This reduces the variance and increases the average score by a lot.

Iterated Random Search + reuse the prefix of the best solution would give around ~99.84 absolute score over 100 seeds.

Iterated Random Search + reuse the prefix of top N solutions was enough for a ~99.96 absolute score over 100 seeds.
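A hedged sketch of the pool idea, with the solver itself left out since only the bookkeeping is being illustrated (`POOL_SIZE`, `update_pool` and `pick_prefix` are hypothetical names, not eldidou's code):

```python
import random

POOL_SIZE = 5  # illustrative; the actual pool size N is a tuning parameter

def update_pool(pool, solution):
    """Insert a solution and keep only the POOL_SIZE longest ones."""
    pool.append(solution)
    pool.sort(key=len, reverse=True)
    del pool[POOL_SIZE:]

def pick_prefix(pool):
    """Pick a random prefix of a random pool member to replay next."""
    if not pool:
        return []
    base = random.choice(pool)
    return base[:random.randrange(len(base) + 1)]
```

The driver loop would call `pick_prefix`, replay the prefix, run RandomSearch from that state, and feed the result back through `update_pool`.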

B. Simulated Annealing

Simulated Annealing is usually a very powerful technique for this kind of problem. The issue in this case is that the number of possible moves (pairs of tiles to be removed) is quite high. For a 100×100 grid with 6 colors, there are around ~1,667 tiles of each color, which means that the expected number of possible moves for a single color is (100 * 100 / 6) ^ 2 / 2 ≈ 1,400,000. Granted, only a few of all possible moves are valid at any given time, and making a move invalidates a lot of other moves (since one of their tiles got erased). However, it’s not easy to use these constraints efficiently.

To get around this, contestants such as [handle]kimiyuki[/handle] and [handle]hakomo[/handle] used another feature of this problem: locality. Instead of tackling the whole grid at once, why not break it into small sub-pieces and tackle those pieces one at a time!

A video is worth a thousand images 🙂

kimiyuki’s solution

hakomo’s solution

A potential reason why this works so well is that even if a sub-grid (sub-piece) is not cleared completely, the remaining tiles will most likely be cleared while tackling the next subgrid.

Finally, because of the small size of the subgrids, the number of possible moves becomes small enough for Simulated Annealing to become feasible. Multiple methods of mutating (changing) the solution worked, such as swapping two moves (‘A-B C-D’ becomes ‘A-C B-D’) or dropping a move and trying another instead. I would hypothesize that even iterated random search would have produced good results, since the number of possible moves for each subgrid was so small.
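A toy sketch of annealing with the swap-two-moves mutation, under my own assumptions (not any contestant's code): the state is an ordering of a fixed set of candidate moves, replay skips moves that are invalid in the current order, the energy is the number of moves that apply, and cooling is linear.

```python
import math
import random

EMPTY = None

def valid_move(grid, a, b):
    """Same-color pair whose bounding rectangle holds only that color / empty."""
    (r1, c1), (r2, c2) = a, b
    color = grid[r1][c1]
    if color is EMPTY or grid[r2][c2] != color:
        return False
    for r in range(min(r1, r2), max(r1, r2) + 1):
        for c in range(min(c1, c2), max(c1, c2) + 1):
            if grid[r][c] is not EMPTY and grid[r][c] != color:
                return False
    return True

def applied_moves(start_grid, order):
    """Replay moves in order, skipping invalid ones; return how many apply."""
    grid = [row[:] for row in start_grid]
    count = 0
    for a, b in order:
        if valid_move(grid, a, b):
            grid[a[0]][a[1]] = EMPTY
            grid[b[0]][b[1]] = EMPTY
            count += 1
    return count

def anneal(start_grid, candidate_moves, steps=2000, t0=2.0):
    order = list(candidate_moves)
    score = applied_moves(start_grid, order)
    best = score
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9         # linear cooling
        i = random.randrange(len(order))
        j = random.randrange(len(order))
        order[i], order[j] = order[j], order[i]       # swap-two-moves mutation
        new_score = applied_moves(start_grid, order)
        if new_score >= score or random.random() < math.exp((new_score - score) / t):
            score = new_score
            best = max(best, score)
        else:
            order[i], order[j] = order[j], order[i]   # undo rejected swap
    return best
```

On the 1×4 grid `1 2 2 1`, the pair of 1s only becomes removable after the pair of 2s, so annealing has to discover the right ordering.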

**Being efficient**

There is a final ingredient missing: how to efficiently compute whether a move is valid, that is, how to quickly answer the question ‘does a given rectangle contain only tiles of a single color (plus empty cells)?’

There are two aspects which need to be taken into account: query time and update time (checking whether a move is valid, and updating the configuration after erasing tiles).

I will present a subset of the possible approaches.

In the following, we will be talking about a *NxM* grid, with C colors.

*Method 1: Trivial*

Check all cells in the rectangle. Worst-case query time is O(N*M), which happens when a move is valid (since you need to check all N*M cells to be sure). Erasing tiles takes O(1) – nothing to update.

Alternatively, this method can be flipped around to O(1) per query and O(N*M) per update by storing sum[col][r][c] = the number of tiles of color col in the (0, 0) – (r, c) rectangle. Then, the number of tiles in an arbitrary subrectangle (r1, c1) – (r2, c2) can be found using this sum array – left as an exercise for the reader :). We also need to store this for the erased tiles (treat them as a different color).
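The exercise spelled out, as a Python sketch under my own naming (erased cells counted as color 0); the inclusion-exclusion in `count_in_rect` is the standard 2D prefix-sum query:

```python
def build_prefix(grid, num_colors):
    """sum_[col][r][c] = tiles of color col in rectangle (0,0)-(r-1,c-1);
    color 0 stands for erased cells. Rebuilding costs O(N*M) per color."""
    n, m = len(grid), len(grid[0])
    sum_ = [[[0] * (m + 1) for _ in range(n + 1)] for _ in range(num_colors + 1)]
    for col in range(num_colors + 1):
        s = sum_[col]
        for r in range(n):
            for c in range(m):
                s[r + 1][c + 1] = (s[r][c + 1] + s[r + 1][c] - s[r][c]
                                   + (1 if grid[r][c] == col else 0))
    return sum_

def count_in_rect(sum_, col, r1, c1, r2, c2):
    """Number of cells of color col in (r1,c1)-(r2,c2), inclusive, in O(1)."""
    s = sum_[col]
    return s[r2 + 1][c2 + 1] - s[r1][c2 + 1] - s[r2 + 1][c1] + s[r1][c1]

def rect_is_clearable(sum_, col, r1, c1, r2, c2):
    """Valid iff every cell in the rectangle is of color col or erased."""
    area = (r2 - r1 + 1) * (c2 - c1 + 1)
    return (count_in_rect(sum_, col, r1, c1, r2, c2)
            + count_in_rect(sum_, 0, r1, c1, r2, c2)) == area
```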

*Method 2: Bitmasks*

This allows for O(N) query time and O(C) update time. I used this during the contest, so I’ll go a bit more in-depth.

For each color and each row, I stored **bitmask[col][r]**, where bit *c* is set iff tile *(r, c)* is *of color col or empty*. Instead of storing a 3D matrix, we can just store a 2D array of bitmasks. I used __int128 in C++.

Now, we can check whether an entire row contains only a given color by & (and-ing) it with a bitmask corresponding to the columns present in the rectangle.

For updating, whenever we erase a tile we need to set the bit corresponding to the erased tile’s position to 1 for all colors (since it just became empty, it is now valid for all colors).

*Example:*

For the toy grid with H=2, W=3, C=2:

```
111
__1
```

where the _ denotes an erased cell

We have the following bitmasks:

bitmask[1][0] = 111

bitmask[1][1] = 111 (because erased tiles count as a 1 for all colors)

Suppose we wanted to check the move (1, 0) – (0, 2). This would involve checking whether the rectangle (0, 0) – (1, 2) consists only of tiles of color 1. The bitmask for columns 0 to 2 is 111 (let’s call it column_bitmask, note that it’s the same for all rows).

For row 0: bitmask[1][0] & column_bitmask == column_bitmask? Yes

For row 1: bitmask[1][1] & column_bitmask == column_bitmask? Yes

Therefore, the move is valid.

Now let’s change the toy grid to

```
111
221
```

while keeping the query we’re interested in the same.

The bitmasks now are:

bitmask[1][0] = 111

bitmask[1][1] = 001 (because tiles with color 2 are not valid for color 1)

For row 0: bitmask[1][0] & column_bitmask == column_bitmask? Yes

For row 1: bitmask[1][1] & column_bitmask == column_bitmask? No, because 001 & 111 != 111

Therefore, the move is invalid.
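The worked example above, transcribed to Python (the author used __int128 in C++; Python's arbitrary-precision ints play the same role, and the function names here are mine):

```python
def build_bitmasks(grid, num_colors):
    """Bit c of bm[col][r] is 1 iff tile (r, c) is of color col or erased (0)."""
    n, m = len(grid), len(grid[0])
    bm = [[0] * n for _ in range(num_colors + 1)]
    for col in range(1, num_colors + 1):
        for r in range(n):
            for c in range(m):
                if grid[r][c] == col or grid[r][c] == 0:
                    bm[col][r] |= 1 << c
    return bm

def rect_valid(bm, col, r1, c1, r2, c2):
    """True iff (r1,c1)-(r2,c2) holds only color col / empty cells: O(N) rows,
    each checked in O(1) by AND-ing with the column bitmask."""
    column_bitmask = ((1 << (c2 + 1)) - 1) ^ ((1 << c1) - 1)  # bits c1..c2
    return all(bm[col][r] & column_bitmask == column_bitmask
               for r in range(r1, r2 + 1))

def erase(bm, num_colors, r, c):
    """An erased tile becomes valid for every color: set its bit everywhere."""
    for col in range(1, num_colors + 1):
        bm[col][r] |= 1 << c
```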

*Method 3: Store the next non-empty tile above / below each tile*

eldidou writes:

*For the implementation, I keep for each cell, the position of the next non empty cell above it (and the one below), so that checking if a cell can be removed is in O(M), and removing it is in O(M), instead of respectively O(N*M) and O(1).*
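A sketch of the query side of this idea, assuming only downward pointers for brevity (names are mine, and the pointer updates after a removal, which eldidou also does in O(M), are omitted):

```python
def build_pointers(grid):
    """down[r][c] = smallest row r' >= r with a non-empty cell in column c
    (len(grid) if there is none). None marks an erased cell."""
    n, m = len(grid), len(grid[0])
    down = [[n] * m for _ in range(n + 1)]
    for c in range(m):
        for r in range(n - 1, -1, -1):
            down[r][c] = r if grid[r][c] is not None else down[r + 1][c]
    return down

def rect_valid(grid, down, col, r1, c1, r2, c2):
    """Walk only the non-empty cells of each column in the rectangle."""
    for c in range(c1, c2 + 1):
        r = down[r1][c]
        while r <= r2:
            if grid[r][c] != col:
                return False
            r = down[r + 1][c]
    return True
```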

For all methods, we could also store the number of empty cells in each row of the grid and then just skip fully empty rows when checking. This helps a lot towards the latter part of the solution, when the grid becomes sparse.

Methods 2 and 3 can be further optimized by choosing whether to go row-wise or column-wise depending on the dimensions of the grid (go row-wise for grids with N < M, column-wise otherwise).

**Closing thoughts**

It’s quite rare for a problem to have joint winners using different solutions, so this problem was the epitome of allowing multiple approaches!

Also, be sure to check (or add to 🙂) the thread here: Post your approach.

There are a lot of other details and solutions which I didn’t mention, and you can find those there. A particularly interesting one is the priority idea of battyone.

**Thank yous**

Thank you to nickolas for the excellent problem, and to gorbunov for letting me write this article on such short notice, as well as for proof-reading it.