## February 17, 2021 Single Round Match 800 Editorials

#### TIEFighterAssembly

For each interesting component count the number of pieces we have, then use simple comparisons to calculate how many full units we can build. If your programming language doesn’t have a nice function to count the occurrences of a character in a string, implement your own and then reuse it multiple times, as shown below.

```java
int countOccurrences(String haystack, char needle) {
    int answer = 0;
    for (char straw : haystack.toCharArray()) if (straw == needle) ++answer;
    return answer;
}

public int assemble(String salvagedParts) {
    int solarWings = countOccurrences( salvagedParts, '|' );
    int wingBraces = countOccurrences( salvagedParts, '-' );
    int centralCabins = countOccurrences( salvagedParts, 'O' );
    return Math.min( centralCabins, Math.min( solarWings/2, wingBraces/2 ) );
}
```

#### SlotMachineHacking

The *state* of the slot machine is an array of numbers: for each reel the index of the currently visible fruit. If we know the current state, we can easily compute the next one: new_state[i] = (current_state[i] + steps[i]) modulo length( reels[i] ).
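This transition can be sketched directly (the method name is ours, for illustration):

```java
// Sketch: compute the state after one more game is played.
// state[i] is the index of the fruit currently visible on reel i.
static int[] nextState(String[] reels, int[] steps, int[] state) {
    int[] next = new int[state.length];
    for (int i = 0; i < state.length; ++i)
        next[i] = (state[i] + steps[i]) % reels[i].length();
    return next;
}
```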

There are only finitely many possible states: their total count C is simply the product of all reel lengths. Note that in general some of those C states may never actually be reached, but that doesn’t really change anything. Sooner or later (at the latest after playing C+1 rounds) some state will repeat. As soon as that happens, the machine has entered a loop.

Solution 1: Simulate C+1 games. If you find a jackpot, report it. After C+1 games report that you are in a jackpotless cycle.

The above solution was good enough to get accepted, but we can think about the problem some more.

Another nice property of this process is its *reversibility*. If I show you the current state, you can compute not just the next state but also the *previous* state uniquely (by rotating reel i by steps[i] steps in the opposite direction, essentially). What this means is that the first state that repeats must be the initial state in which all reels are at index zero. Reversible processes cannot have a pre-period that is never reentered: if the first state to repeat were some other state, we would reach it from two different states and that contradicts reversibility. Thus, we get:
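The inverse step can also be sketched (method name ours): rotate each reel backwards, keeping the result non-negative.

```java
// Sketch: compute the state one game earlier by rotating each reel
// backwards. Adding len before the final modulo keeps the index non-negative,
// since Java's % can return negative values for negative operands.
static int[] previousState(String[] reels, int[] steps, int[] state) {
    int[] prev = new int[state.length];
    for (int i = 0; i < state.length; ++i) {
        int len = reels[i].length();
        prev[i] = ((state[i] - steps[i]) % len + len) % len;
    }
    return prev;
}
```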

Solution 2: Simulate games. If you hit a jackpot, report it. If you return to all reels at offset 0 without ever seeing a jackpot, report that it cannot be reached.

```java
boolean haveIWon(String[] reels, int[] offsets) {
    for (int i=0; i<reels.length; ++i) {
        if (reels[i].charAt( offsets[i] ) != 'c') return false;
    }
    return true;
}

boolean amIBackAtStart(int[] offsets) {
    for (int x : offsets) if (x != 0) return false;
    return true;
}

public int win(String[] reels, int[] steps) {
    int N = reels.length;
    int X = 0;
    int[] offsets = new int[N];
    while (true) {
        for (int n=0; n<N; ++n) {
            offsets[n] = (offsets[n] + steps[n]) % reels[n].length();
        }
        if (haveIWon(reels,offsets)) return X;
        ++X;
        if (amIBackAtStart(offsets)) return -1;
    }
}
```

The problem also has a much faster solution that wasn’t necessary due to the small constraints. For details, look up the Chinese Remainder Theorem. Using this theorem we have an efficient way to compute the offset and period of the jackpot for much larger machines than those that were valid inputs in our problem.
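The core step of that faster approach can be sketched as follows (this helper is our illustration, not part of the reference solution): merge two congruences, with moduli that need not be coprime, into a single one.

```java
// Sketch: merge x ≡ a1 (mod m1) and x ≡ a2 (mod m2) into a single
// congruence x ≡ a (mod lcm(m1,m2)), or return null if no solution exists.
// extgcd returns {g, p, q} such that a*p + b*q = g = gcd(a,b).
static long[] extgcd(long a, long b) {
    if (b == 0) return new long[]{a, 1, 0};
    long[] r = extgcd(b, a % b);
    return new long[]{r[0], r[2], r[1] - (a / b) * r[2]};
}

static long[] merge(long a1, long m1, long a2, long m2) {
    long[] e = extgcd(m1, m2);
    long g = e[0];
    if ((a2 - a1) % g != 0) return null; // no common solution
    long m2g = m2 / g;
    // solve m1 * t ≡ a2 - a1 (mod m2) for t
    long t = ((a2 - a1) / g % m2g) * (e[1] % m2g) % m2g;
    t = (t % m2g + m2g) % m2g;
    long lcm = m1 / g * m2;
    return new long[]{ (a1 + m1 * t) % lcm, lcm };
}
```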

#### PoisonedSwamp

The key observation is that we cannot simply treat cells of the swamp as nodes of our graph. The amount of poison we have is also relevant. Thus, we need to build a new graph: the state space. The nodes of this graph are all states we can be in – i.e., triples (r,c,p) meaning that we are in row r, column c, and we have p poison. Directed edges in this graph represent possible actions we can take (steps) and they lead to nodes that represent new states. E.g., if I can step downwards from cell (r,c) into cell (r+1,c) and that cell gives 3 poison, one of the edges in the state space graph will lead from the node (r,c,4) to the node (r+1,c,7).

In this graph we want to go from the state (0,0,0) to any of the states (R-1,C-1,p) with p < 10. Once we have properly built the state space graph, this question can easily be answered using a simple graph traversal (e.g., BFS or DFS) in O(RC) time.

Below we show an even lazier implementation that runs in O(R^2 C^2) time: mark the starting state as reachable, and then repeatedly iterate over all states and mark all neighbors of each reachable state as reachable. This was still fast enough, by a wide margin.

```java
int[] DR = {-1,1,0,0};
int[] DC = {0,0,-1,1};

public String cross(String[] swamp) {
    int R = swamp.length, C = swamp[0].length();
    boolean[][][] reachable = new boolean[R][C][10];
    reachable[0][0][0] = true;
    while (true) {
        boolean changes = false;
        for (int r=0; r<R; ++r) for (int c=0; c<C; ++c) for (int p=0; p<10; ++p)
            if (reachable[r][c][p])
                for (int d=0; d<4; ++d) {
                    int nr = r + DR[d], nc = c + DC[d];
                    if (nr < 0 || nr >= R || nc < 0 || nc >= C) continue;
                    char type = swamp[nr].charAt(nc);
                    if (type == '#') continue;
                    int np = p;
                    if (type >= '1' && type <= '9') np += (type - '0');
                    if (type == 'S') np = 0;
                    if (np < 10 && !reachable[nr][nc][np]) {
                        reachable[nr][nc][np] = true;
                        changes = true;
                    }
                }
        if (!changes) break;
    }
    for (int p=0; p<10; ++p) if (reachable[R-1][C-1][p]) return "possible";
    return "impossible";
}
```

#### MinMaxGame

If we play a few games, we can get a feeling that the numbers at the ends of the array matter.

If we only have two values (which are, at that moment, the two endpoints of the array) the person whose turn it is gets the worse of those two values.

Without loss of generality, suppose Nam will be that person at the end of the game. Nam knows that at the end he will be shown some array {x,y} and he will get min(x,y). Thus, he would like to keep the array endpoints as large as possible. How can he do that? Simply: by avoiding touching them and making arbitrary valid moves elsewhere. This is because Nam cannot increase an endpoint of the array. If he ever selects it, there are only two possible outcomes: either he removes some other value from the array (in which case the endpoint does not change), or he removes the endpoint (in which case we know that the other value must have been smaller, and that smaller value is now the new endpoint).

Nam also knows that he does not mind if Quang touches the ends of the array: due to symmetry, whenever Quang touches them, he can only keep them the same or make them larger.

Thus, Nam can look at the initial array A and say: “If I never touch the ends of the array, except for the very last move of the game, I can guarantee that the outcome will be **at least** min( A[0], A[n-1] ).”

Quang can now argue similarly. As we explained above, it is also in Quang’s best interest to avoid touching the ends of the array, and he can freely do so until the array becomes of length 3. Now Quang sees some array {x,z,y}. What should he do? Here, Quang should select the middle element and the larger of the two endpoints. This way the smaller endpoint will remain untouched and the minimum of the two endpoints will remain the same.

Thus, Quang can look at the initial array A and say: “If I follow the above strategy, I can guarantee that the outcome will be **at most** min( A[0], A[n-1] ).”

And that settles the question: the outcome if both of them play optimally must be exactly min( A[0], A[n-1] ) because each of them has a strategy that makes sure that the opponent cannot get anything better from their point of view.

The situation is exactly reversed if Quang is the one to make the last move, and therefore in this situation the outcome of the game is max( A[0], A[n-1] ).
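Put together, the whole solution collapses to a single comparison. (The boolean flag below is our placeholder: which player makes the final move follows from the actual rules of the game, e.g. from the parity of the number of moves, which this editorial does not restate.)

```java
// Sketch: the value of the game once we know who makes the last move.
static int outcome(int[] A, boolean namMovesLast) {
    int small = Math.min(A[0], A[A.length - 1]);
    int large = Math.max(A[0], A[A.length - 1]);
    return namMovesLast ? small : large; // Nam ends on min, Quang on max
}
```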

One final note: An interesting way to approach this problem is to binary search the answer. If we are asking the question whether the answer is at least X, we can replace all elements of A with zeros and ones: elements that are at least X become 1 and elements that are less than X become 0. Then, Nam wants the outcome 1 and Quang wants the outcome 0. The analysis of this new game is a bit simpler (pun intended) and it’s easier to discover the solution than in the general case.

#### MaximumPenalty

A slower solution that is easy to discover and implement is to binary search the answer. If we fix some value X and we ask whether it’s possible to have maximum penalty <= X, we are simply asking whether there is a schedule in which each individual job has penalty at most X. We can verify this by going backwards in time and being greedy. We know when the last job should be finished. If we have no job that has a small enough penalty at that time, we know that no valid schedule exists. Otherwise, we can place any such job there: all jobs that can end here can also end arbitrarily sooner, so their order does not matter. In this way we go back in time until we either run out of cheap jobs at some point, or construct one valid schedule with all penalties not exceeding the X we are testing.
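The feasibility check can be sketched as follows, under the assumption (ours, for illustration; the problem's exact interface may differ) that each job's penalty is a nondecreasing function of its finishing time, passed in as a callback. Binary searching on X with this check gives the full slower solution.

```java
// Sketch: can all jobs be scheduled so that no job's penalty exceeds X?
// lengths[j] is job j's duration; penalty.applyAsLong(j, t) is job j's
// penalty if it finishes at time t (assumed nondecreasing in t).
static boolean canAchieve(int[] lengths,
                          java.util.function.LongBinaryOperator penalty,
                          long X) {
    int n = lengths.length;
    long t = 0;
    for (int len : lengths) t += len; // the last job must end here
    boolean[] used = new boolean[n];
    for (int placed = 0; placed < n; ++placed) {
        int pick = -1; // any unplaced job cheap enough at time t will do
        for (int j = 0; j < n; ++j)
            if (!used[j] && penalty.applyAsLong(j, t) <= X) { pick = j; break; }
        if (pick == -1) return false; // ran out of cheap jobs
        used[pick] = true;
        t -= lengths[pick]; // the previous job ends where this one starts
    }
    return true;
}
```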

A nicer and somewhat faster solution can be built upon the following observation: As above, we can determine the time t = sum(length) when the last job has to end. Let J be the job that has the smallest penalty at time t. The claim is that there is an optimal solution in which J is the last job.

Proof: Take any schedule where J isn’t last. Suppose this schedule ends J, K1, …, Ki. What happens if we change this schedule by moving J to the end? Each of the jobs K1, …, Ki now ends sooner, so its penalty is at most as big as before. And the new penalty of job J is at most the same as the old penalty of the job Ki. (This follows from the way J was selected.) Thus, the new maximum penalty is at most the same as the old maximum penalty. In other words, for any schedule there is an equal or better schedule that ends with J, q.e.d.

This gives us a straightforward O(n^2) algorithm in which we greedily construct one optimal solution by repeatedly applying the above result.
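A sketch of that O(n^2) greedy, again assuming (for illustration) a penalty callback that is nondecreasing in the finishing time:

```java
// Sketch: repeatedly place, at the current end of the schedule, the job
// that is cheapest there; return the maximum penalty actually incurred.
static long minMaxPenalty(int[] lengths,
                          java.util.function.LongBinaryOperator penalty) {
    int n = lengths.length;
    long t = 0;
    for (int len : lengths) t += len;
    boolean[] used = new boolean[n];
    long worst = Long.MIN_VALUE;
    for (int placed = 0; placed < n; ++placed) {
        int best = -1; // unplaced job with the smallest penalty at time t
        for (int j = 0; j < n; ++j)
            if (!used[j] && (best == -1
                    || penalty.applyAsLong(j, t) < penalty.applyAsLong(best, t)))
                best = j;
        worst = Math.max(worst, penalty.applyAsLong(best, t));
        used[best] = true;
        t -= lengths[best];
    }
    return worst;
}
```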

#### MaxOfMin

Imagine that we are building the permutation by placing the values 1, 2, … into an empty array.

At any moment, we can easily tell whether the partial permutation we have matches the values f(i) we were given. The only things that matter are the lengths of the empty parts of the array between the values we have already placed. For example, suppose that f(7) = 2 and f(6) > 2. What this means is that after placing the 2 the longest empty segment of the array must have length exactly 6: each subarray of length 7 must contain one of the already placed values and there must be at least one subarray of length 6 that does not.

Note that we do not care about the exact placements of the values we already have, or about the relative placements of the empty subarrays between them. All that matters are the lengths of those empty subarrays. E.g., both partial permutations (_,1,_,_,_,_,2,_,_) and (_,_,2,_,1,_,_,_,_) correspond to the same state in which the empty segments have lengths 1, 2, and 4.
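This canonicalization can be sketched directly (the representation is ours): the state is just the sorted multiset of empty-run lengths, so both example arrays above map to the same state.

```java
// Sketch: reduce a partial permutation (null = still empty) to its state,
// the sorted list of lengths of maximal runs of empty cells.
static java.util.List<Integer> state(Integer[] partial) {
    java.util.List<Integer> gaps = new java.util.ArrayList<>();
    int run = 0;
    for (Integer x : partial) {
        if (x == null) { ++run; }
        else { if (run > 0) gaps.add(run); run = 0; }
    }
    if (run > 0) gaps.add(run);
    java.util.Collections.sort(gaps);
    return gaps;
}
```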

For each such state we can ask the question in how many ways can the partial solution be finished. We can answer all these questions using recursion with memoization. For each state we try all possible ways of placing the next number, for each of them we verify whether the new partial permutation still matches the input, and if it does, we make a recursive call to count all ways in which we can continue.

The states correspond to partitions of an integer at most n into parts, and one well-known fact about integer partitions is that their count grows quite slowly (subexponentially), so for n=50 the actual number of reachable states is still very small. (For example, the number 50 has only 204226 different partitions, and that’s counting partitions into all possible numbers of parts.)
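The quoted count can be checked with a standard partition-counting DP:

```java
// Count p(n): the number of partitions of n into any number of parts.
// p[total] accumulates the number of ways to build `total` using parts
// of size at most `part`.
static long countPartitions(int n) {
    long[] p = new long[n + 1];
    p[0] = 1; // one way to partition zero: the empty partition
    for (int part = 1; part <= n; ++part)
        for (int total = part; total <= n; ++total)
            p[total] += p[total - part];
    return p[n];
}
```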

**misof**