
Round Overview
SRM 602 was the last match of 2013, and thousands of coders were looking to end the year on a high note. The problem setter was snuke. In division 1, the easy problem was a standard dynamic programming one with a twist that required coders to find a quick fix to the algorithmic complexity. The medium problem required equal amounts of math and data structure knowledge and was solved by around 30 coders; Voover set the impressive speed record here. The highly analytical and mathematical hard problem was so tough and interesting that only one coder could solve it: lyrically, a frequent solver of tough div1 hards. This earned lyrically first place. Second and third places went to the top div1 medium solvers: Voover and yeputons. Division 2 was also challenging: there were only 5 submissions to the hard problem. Congratulations to 657228726 for winning division 2.
TypoCoderDiv2
Used as: Division Two - Level One:
| Value | 250 |
| Submission Rate | 1243 / 1284 (96.81%) |
| Success Rate | 1144 / 1243 (92.04%) |
| High Score | amanagarwal007 for 249.98 points (0 mins 16 secs) |
| Average Score | 228.80 (for 1144 correct submissions) |
A color change happens whenever:
Old rating (rating[i-1]) was less than 1200 and new rating (rating[i]) is at least 1200.
or
Old rating (rating[i-1]) is at least 1200 and new rating (rating[i]) is less than 1200.
An exception is when `i = 0`: here the old rating is fixed to 500, because rating[i-1] does not exist. One way to handle this is a special if-then-else for this case. For `i = 0`, the only way the color changes is if rating[0] is at least 1200 (from 500 to at least 1200 is a color change).
The problem statement just asks for the number of indices `i` such that one of those conditions is true. We just need a loop:
int count(vector<int> rating) {
    int c = 0;
    if (rating[0] >= 1200) {
        c = 1;
    }
    for (int i = 1; i < (int)rating.size(); i++) {
        if ((rating[i - 1] < 1200) && (rating[i] >= 1200)) {
            c++;
        }
        if ((rating[i - 1] >= 1200) && (rating[i] < 1200)) {
            c++;
        }
    }
    return c;
}
In easy problems, it is useful to have a mindset that simplifies code, so that there is less chance of bugs and so that the code is faster to implement (speed is a key component in these problems).
So let’s wonder about how to optimize this. For example, we can use the condition `(x >= 1200)` to determine if the current color is brown. A color change happens when the condition is false for the past rating and true for the new one, or vice versa. In other words, the truth values must differ: `((y >= 1200) != (x >= 1200))`, where `x` is the new rating and `y` the old one. This `x` and `y` logic also allows us to treat the special `i = 0` case: just initialize `y = 500`.
int count(vector<int> rating) {
    int x = 500, c = 0;
    for (int y: rating) {
        if ((y >= 1200) != (x >= 1200)) {
            c++;
        }
        x = y;
    }
    return c;
}
Another idea is to insert 500 as the first rating. For example, in Python:
class TypoCoderDiv2:
    def count(self, rating):
        # rating is initially a tuple and not a list, so we convert it and prepend 500:
        rating = [500] + list(rating)
        # get the colors: 0 = ciel, 1 = brown:
        color = [(x >= 1200) for x in rating]
        # count the number of times the color changes:
        return sum(color[i] != color[i + 1] for i in range(len(color) - 1))
PilingRectsDiv2
Problem Details
Used as: Division Two - Level Two:
| Value | 500 |
| Submission Rate | 444 / 1284 (34.58%) |
| Success Rate | 237 / 444 (53.38%) |
| High Score | ashashwat for 488.39 points (4 mins 24 secs) |
| Average Score | 274.35 (for 237 correct submissions) |
We’ll start with some important observations about how the rectangles should be placed.
There is a condition that each rectangle you place should overlap with the previous one. This condition is actually unnecessary. Remember that the final intersection of all rectangles must have an area of at least limit. This means that every time a new rectangle is added, the intersection of this rectangle with the intersection of all previous rectangles must have an area of at least limit; likewise, its intersection with all the future rectangles must have an area of at least limit too. In effect, the order doesn’t matter; what matters is that the final intersection is correct.
While order doesn’t matter, note that there are still mutually exclusive choices: a `10 x 10` square and a `1 x 11` rectangle can never intersect with an area greater than 10.
Each rectangle we place must have the best possible rotation and position to maximize the intersection. It is best to align the left and top sides of all the rectangles we place; this makes the intersection as large as possible. For example, a `10 x 10` square and a `20 x 5` rectangle can be aligned in many ways, but matching two of the sides is never a bad idea (even though there are other ways to reach the maximum intersection, this method is guaranteed to do it):
Maximizing the intersection will make it easier to reach the minimum limit.
So we just need to pick a set of rectangles and their rotations so that the intersection has an area of at least limit. How about we do it in reverse? What is the maximum number of rectangles we can pick such that an intersection of dimensions `w times h` is possible? As long as `wh >= limit`, this number of rectangles is a valid answer. So just pick the values `(w, h)` and count the number of rectangles that can contain a rectangle of dimensions `w times h`. If you pick many rectangles that can each contain a `w times h` rectangle, then their intersection will have an area greater than or equal to `wh`.
Since we align the rectangles regardless of the rotation, we can conclude that, in order for a rectangle of dimensions `x[i] times y[i]` to contain a `w times h` rectangle, one or both of the following conditions must be true:
`(w <= x[i]) & (h <= y[i])`
`(h <= x[i]) & (w <= y[i])` (rotate it)
If one of those conditions is true, then we can simply use the rectangle. In other words, just count the number of valid, big enough rectangles. We should find the `(w,h)` such that `wh >= limit` and the number of valid rectangles is as large as possible.
When picking the `(w,h)` pairs, it might be complicated to know how to find them or when to stop finding them. My suggestion is to first fix `w`, then find the smallest `h` such that `wh >= limit`. Any `h` larger than that value yields a larger required intersection and thus a possibly smaller number of valid rectangles. This way we can try values of `w = 1, 2, … limit`. To calculate `h`, we can just use division:
`min h : wh >= limit`
`min h : h >= limit / w`
What is the minimum integer `h` greater than or equal to `limit / w`? Note that `limit / w` is not necessarily an integer. We should round up: `h = ceil(limit / w)`.
The solution goes as follows: Pick `w`, find the minimum `h`, count the number of rectangles such that `w times h` intersection fits. Return the maximum:
int getmax(vector<int> X, vector<int> Y, int limit) {
    int mx = -1;  // -1 if no single rectangle is big enough
    for (int w = 1; w <= limit; w++) {
        int h = (limit + w - 1) / w;  // smallest h with w * h >= limit
        // count the number of rectangles that a w x h rectangle fits in
        int r = 0;
        for (int i = 0; i < (int)X.size(); i++) {
            if ((X[i] >= w && Y[i] >= h) || (Y[i] >= w && X[i] >= h)) {
                r++;
            }
        }
        if (r != 0) {
            mx = std::max(mx, r);
        }
    }
    return mx;
}
There is an interesting discussion about this problem and some wrong approaches in the forums.
BlackBoxDiv2
Used as: Division Two - Level Three:
| Value | 1000 |
| Submission Rate | 16 / 1284 (1.25%) |
| Success Rate | 5 / 16 (31.25%) |
| High Score | 657228726 for 676.01 points (22 mins 1 secs) |
| Average Score | 493.64 (for 5 correct submissions) |
We can interpret the problem as counting the number of ways to paint cells of a board such that some specific columns and rows (the ones with ‘.’) contain absolutely no painted cells and each of the other columns and rows (the ones with ‘B’) contains at least one painted cell.
We can just leave the whole contents of the rows and columns that must not contain any painted cells untouched. Thus we only count the number of ways to paint the other cells. This reduces the problem to: In a `w times h` board, count the number of ways to paint its cells such that each column and row contains at least one painted cell. Note that `w` is the number of 'B's in `front` and `h` is the number of 'B's in `side`.
If you are new to dynamic programming, we highly recommend the tutorial Dynamic Programming: From Novice to Advanced by Dumitru.
We start with a board of `w` columns and `h` rows. Imagine we paint some of the cells in the right-most column. This column must contain at least one painted cell, so the number of cells we paint, `s`, must be at least 1. There are `C(h, s)` ways to paint `s` cells in this column, where `C(n, k)` is the binomial coefficient. Let’s say we picked any of them. The following happens:
If we pick any `s` cells, the result is that `s` rows will now be already-painted. We should treat these `s` rows differently when filling the remaining columns. Due to symmetry, it doesn’t matter which `s` cells we pick, the number of ways to paint the remaining columns will be the same.
This takes us to a dynamic programming idea. Let `f(x,y)` be the number of ways to paint the left-most `x` columns if `y` of the rows still do not contain any painted cell.
Base case: `f(x = 0, y = 0)`. This means that there is nothing else to do. All the columns and rows contain at least one painted cell. Result is 1 (One valid way to finish).
`f(x = 0, y != 0)`. There are no more columns to paint, but `y` of the rows are still missing a painted cell. There is no valid way to paint those `y` rows. Result is 0.
Else we pick a number `s` of cells to paint in column #`x` (1-indexed). This is the number of cells we’ll paint that come from the `y` free rows. However, there are also `h - y` rows that already contain a painted cell but can still contain a painted cell in this column. So we also pick the number `r` of already-painted rows that will be re-painted. Since the column must contain at least one painted cell: `(r + s >= 1)`. There are `C(y, s)` ways to pick the unpainted rows to paint and `C(h-y, r)` ways to pick the rows to repaint. The number of columns decreases by 1 and the number of unpainted rows by `s`:
`C(y, s) * C(h-y, r) * f(x - 1, y - s)`
Just add up this value for each `(s,r)` pair where `(s + r > 0)`.
This recurrence relation has `O(wh)` combinations of parameters (e.g: number of `x` values is `O(w)` ). In each of them we need an `O(h^2)` loop to pick `(r,s)`. If we use dynamic programming to evaluate each state of the function once, the total complexity is: `O(wh^3)`.
Problems that ask you to return the result modulo 1,000,000,007 (or a similar number) usually do so to allow us to solve problems that have large results without actually using those large numbers in calculations. We can handle these numbers with modular arithmetic. Read this recipe for more info.
We can easily pre-calculate all binomial coefficients modulo 1,000,000,007 with `n, k <= 50` using Pascal’s triangle.
int count(string front, string side) {
    int w = 0, h = 0;
    for (char ch: front) {
        w += (ch == 'B');
    }
    for (char ch: side) {
        h += (ch == 'B');
    }
    const int MOD = 1000000007;
    // Pascal's triangle:
    int C[51][51];
    memset(C, 0, sizeof(C));
    for (int i = 0; i <= 50; i++) {
        C[i][0] = 1;
        for (int j = 1; j <= i; j++) {
            C[i][j] = (C[i - 1][j] + C[i - 1][j - 1]) % MOD;
        }
    }
    long dp[w + 1][h + 1];
    for (int x = 0; x <= w; x++) {
        for (int y = 0; y <= h; y++) {
            long & res = dp[x][y];
            if (x == 0) {
                // no more columns to paint:
                res = ((y == 0) ? 1 : 0);
            } else {
                res = 0;
                for (int r = 0; r <= h - y; r++) {
                    for (int s = 0; s <= y; s++) {
                        if (r + s >= 1) {
                            long p = C[y][s];
                            p = (p * C[h - y][r]) % MOD;
                            res += (p * dp[x - 1][y - s]) % MOD;
                        }
                    }
                }
                res %= MOD;
            }
        }
    }
    return (int) dp[w][h];
}
O(wh^2) improvement
There is a simple, yet unnecessary, improvement of the complexity to `O(wh^2)`. Note that we don’t really need to care about which specific rows are repainted, nor loop over the number `r` of such rows. There is only one constraint: if `s = 0`, the number of repainted rows must be at least 1. There are `2^(h - y)` ways to pick any set of repainted rows (including the empty one) and `2^(h - y) - 1` ways to pick a set of repainted rows that includes at least one.
MOD = 1000000007
class BlackBoxDiv2:
    def count(self, front, side):
        w = front.count('B')
        h = side.count('B')
        mem = dict()
        from math import factorial
        C = [[factorial(n) // factorial(k) // factorial(n - k) for k in range(n + 1)]
             for n in range(h + 1)]
        def f(x, y):
            if (x, y) in mem:
                return mem[(x, y)]
            res = 0
            if x == 0:
                res = (y == 0)
            else:
                for s in range(y + 1):
                    # p is 2^(h - y) if (s > 0) else 2^(h - y) - 1
                    p = 2 ** (h - y) - (s == 0)
                    res += p * C[y][s] * f(x - 1, y - s)
            mem[(x, y)] = res
            return res
        return f(w, h) % MOD
TypoCoder
Used as: Division One - Level One:
| Value | 250 |
| Submission Rate | 650 / 791 (82.17%) |
| Success Rate | 577 / 650 (88.77%) |
| High Score | sentinel45 for 243.00 points (4 mins 50 secs) |
| Average Score | 164.07 (for 577 correct submissions) |
Let’s solve it using recurrences. Let `f(i, x)` be the optimal number of rating changes if your current rating is `x` after you went through the first `i` matches.
Base case: `f(i = n, x)`. This means that we already went through all matches; there can be no rating changes afterwards, so the result is 0.
Else we have two options:
We can increase rating by `D_i`. Result can be `1 + f(i+1, x + D_i)` (if color changed with this move) or `f(i+1, x + D_i)` (if color doesn’t change).
We can decrease rating by `min(x, D_i)`. Result can be `1 + f(i+1, max(0, x - D_i))` or `f(i+1, max(0, x - D_i))`, depending on whether the color changes or not.
Pick the maximum of the two. Also make sure that if `x >= 2200`, the new value of `x` after increasing or decreasing the rating cannot be greater than 2199.
The issue with this approach is that the values of `D_i` can be very large. So `x` can reach very large values and thus the number of `(x, D_i)` parameter combinations is too large even if we use dynamic programming.
The trick is to notice that the rating cannot be brown twice in a row. The rating is not brown initially, so to reach a brown rating we must first increase `x` by some `D_i`.
If `x + D_i >= 2200`, and `i` is the last match, we can just end the process, there is exactly one color change.
Otherwise, if `x + D_i >= 2200` and `i` is not the last match, the rating must go back to ciel after match `i+1`; this means we must decrease the rating in match `i+1`: `(x + D_i - D_(i+1) < 2200)`. We can use this to our advantage: if we handle matches `i` and `i+1` in the same recurrence step, we get 2 rating changes in total and the rating becomes `max(0, x + D_i - D_(i+1))`. After this change, the value of `x` in the recurrence can never exceed 2199.
The recurrence relation now can have only `2200 times (n+1)` pairs `(x, i)`. We can now use dynamic programming to implement it and it will be quite fast:
int getmax(vector<int> D, int X) {
    const int BROWN = 2200;
    int f[D.size() + 1][BROWN];
    for (int i = D.size(); i >= 0; i--) {
        for (int x = 0; x < BROWN; x++) {
            int & res = f[i][x];
            if (i == (int)D.size()) {
                // base case: no more matches
                res = 0;
            } else {
                // { x < 2200 }
                res = 0;
                // up:
                if (x + D[i] >= BROWN) {
                    if (i + 1 < (int)D.size()) {
                        // become brown, must be "ciel" again after the next match
                        if (x + D[i] - D[i + 1] < BROWN) {
                            // can do it: two rating changes, handle two matches at once
                            res = 2 + f[i + 2][std::max(0, x + D[i] - D[i + 1])];
                        }
                    } else {
                        // one final rating change:
                        res = 1;
                    }
                } else {
                    res = f[i + 1][x + D[i]];
                }
                // down:
                res = std::max(res, f[i + 1][std::max(0, x - D[i])]);
            }
        }
    }
    return f[0][X];
}
PilingRectsDiv1
Problem Details
Used as: Division One - Level Two:
| Value | 550 |
| Submission Rate | 107 / 791 (13.53%) |
| Success Rate | 35 / 107 (32.71%) |
| High Score | Voover for 318.52 points (29 mins 7 secs) |
| Average Score | 239.71 (for 35 correct submissions) |
Even if we knew how to split the rectangles between the bags, we should also know a strategy to choose the best placement and rotation for each of the rectangles in a single bag. Let’s start with this: Given a bag of rectangles, how do you maximize the area of the intersection of them all?
The first observation is one we also made in PilingRectsDiv2, the division 2 version of the problem: you should place all rectangles aligned at the same corner. Like in division 2, the restriction that each rectangle should overlap with the previous one is meaningless, because we need to maximize the intersection anyway.
The second observation is one that wasn’t required in division 2 but is very important now that we need to maximize the area of the intersection (and will also simplify the assignment of rectangles to bags). It is about the rotation to pick for the rectangles.
Let’s try two rectangles of dimensions `a times b` and `c times d`. Assume that `(a <= b)` and `(c <= d)` (we can rotate the rectangles if needed). Also, one of the rectangles must have the smallest dimension overall, so we can assume `(a <= c)` (if not, just swap the rectangles). Due to symmetry, there are only two distinct ways to choose the rotations:
Pair the `a` dimension with `c` and `b` with `d`.
Pair the `a` dimension with `d` and `b` with `c`.
It was very useful to formalize this sub-problem. Let’s verify that the first option is never worse than the second; we want to show:
`min(a,c) min(b,d) >= min(a,d) min(b,c)`
`a min(b,d) >= a min(b,c)` (since `a <= c` and `a <= d`, both `min(a,c)` and `min(a,d)` equal `a`)
`min(b,d) >= min(b,c)`
Because `a <= c <= d`, the original inequality is equivalent to `min(b,d) >= min(b,c)`, which is true after some case-checking. Remember that `c <= d`:
If `b <= c <= d`, the inequality becomes: `b >= b`.
If `c <= b <= d`, then the inequality becomes: `b >= c`.
If `c <= d <= b`, then the inequality becomes: `d >= c`.
In each of those three cases, the inequality is true. This means that the original inequality: `min(a,c) min(b,d) >= min(a,d) min(b,c)` is true. In other words, if you have two rectangles, it is always a good idea to pair up the smallest sides and the largest sides.
After you do it with two rectangles, the intersection will also be a rectangle that can be intersected with the third rectangle, and so on. Across all rectangles involved, we should always match the smallest dimensions together and the largest ones together. From now on, assume that `X[i] <= Y[i]` for all rectangles (for those that don’t satisfy this, just swap their dimensions). The optimal area for a bag is `min(X[i]) * min(Y[i])` over all rectangles `i` in the bag.
Being able to treat the `X` and `Y` dimensions differently will be very helpful. Let’s first sort all the rectangles in non-decreasing order of `X`. Then `X[0]` is the minimum `X` among the rectangles. One of the two bags will contain this `X[0]`; let’s call the set of rectangles in this bag “set 0” (and the other set “set 1”). The minimum `X[]` in this set is `X[0]`. The other bag will have some minimum `X` too. Consider the `i` such that `X[i]` is the minimum `X` in set 1. There are `O(N)` options for this `i`, so let’s try them all.
If `X[i]` is the minimum `X` in set 1, this means that all rectangles with index `(j < i)` belong to set 0. Rectangle `i` belongs to set 1. The remaining `2N - i - 1` rectangles have to be distributed between the two bags. Call this set `R`.
`N - i` rectangles from set `R` must be chosen for set 0. It follows that `i` is at most `N`.
The other `2N - i - 1 - (N - i) = N - 1` rectangles from `R` go to set 1.
We know the `X` dimensions of the two resulting rectangles: `X[0]` and `X[i]`. We just need to optimize the `Y` values.
Let’s ignore for now that there are up to 200000 rectangles and that we are already inside an `O(N)` loop fixing `X[i]`. Can we find the optimal `Y` values for the two bags, even if it takes `O(N)` time or worse?
Without a time constraint, consider the list `y` of values `Y[j]` from set `R`. We can sort these values in `O(N log N)` time. Then `y[0]`, the smallest value, must go to either set 0 or set 1.
If we pick set 0 for `y[0]`, then the minimum `Y` value in this set will be `y[0]`. Since this value is the smallest possible, any values `Y` for the remaining `(N - i - 1)` rectangles we put in set 0 will be ignored. Since these values will be ignored and we want to maximize the result, let us put the `(N - i - 1)` smallest values. In total, we put the `(N - i)` smallest values of `y` in set 0. The remaining values will be those greater than or equal to `y[N-i]`, if we place them all in set 1, `y[N-i]` becomes the minimum `y` in this set. In total, the optimal result in case we put `y[0]` in set 0 is:
`X[0]*min(y[0], Y[0], Y[1], …, Y[i-1]) + X[i]*min(y[N-i], Y[i])`
Similarly, if we pick set 1 for `y[0]`, then we would like the `(N - 1)` smallest `y` values to be added to set 1. The result is:
`X[0]*min(y[N-1], Y[0], Y[1], …, Y[i-1]) + X[i]*min(y[0],Y[i])`
The maximum between those two results gives the optimal area if `X[i]` is the smallest `X` in set 1.
I am including code just to show how the logic works. This solution is `O(N^2log(N))` and clearly too slow, but it will be a good starting point for the optimizations that follow:
/* Assume we already generated X and Y, and sorted the rectangles in non-decreasing X[i] order */
long getmaxSortedSlow(int N,
                      const vector<long> & X, const vector<long> & Y) {
    if (N == 1) {
        /* Special case: one rectangle per bag */
        return X[0] * Y[0] + X[1] * Y[1];
    }
    long res = 0;
    for (int i = 1; i <= N; i++) {
        long x0 = X[0]; /* X[0] to set 0 */
        long x1 = X[i]; /* X[i] to set 1 */
        long y0 = Y[0]; /* preliminary smallest Y[j] for set 0; find minimum Y[j], j < i: */
        for (int j = 0; j < i; j++) {
            y0 = std::min(y0, Y[j]);
        }
        long y1 = Y[i]; /* preliminary smallest Y[j] for set 1 */
        /* Get the y[] values of the remaining rectangles: */
        vector<long> y;
        for (int j = i + 1; j < 2 * N; j++) {
            y.push_back(Y[j]);
        }
        sort(y.begin(), y.end());
        int t0 = N - i; // rectangles from R that go to set 0
        int t1 = N - 1; // rectangles from R that go to set 1
        if (t0 > 0) {
            // put the t0 smallest y values in set 0...
            long sum1 = min(y0, y[0]) * x0 + min(y1, y[t0]) * x1;
            // ... or put the t1 smallest y values in set 1
            long sum2 = min(y0, y[t1]) * x0 + min(y1, y[0]) * x1;
            // keep the maximum:
            res = std::max({ res, sum1, sum2 });
        } else {
            // The first N rectangles go to set 0, the rest to set 1:
            res = std::max(res, x0 * y0 + min(y1, y[0]) * x1);
        }
    }
    return res;
}
The idea is to use a data structure; everything comes down to the list `y[]`. The trick is to notice that in step `i`, `y` contains all `Y[j]` where `(j > i)`. So each time `i` is increased, an element is removed from `y[]`. Perhaps it is easier to go in reverse. Initially, we can solve the case `i = N` and build `y[]` for it in `O(N)`.
Then for each `i = N-1, N-2, … , 1`, we add `Y[i+1]` to `y[]`. In each step `i`, we need to find two things in sublinear time:
The minimum `Y[j]` where `(j <= i)`. We can precalculate this for each `i` in `O(N)` time.
The k-th smallest element in `y[]`, where `k` can be `0`, `N-i`, or `N-1`.
All we need now is a data structure to represent `y[]`. It needs to be able to do two operations in sublinear time:
Insert elements.
Get the k-th smallest element of those inserted.
There are various kinds of tree data structures that can do this: segment trees, binary indexed trees… A good question is whether teaching data structures falls within the scope of this editorial, but there are some things specific to this problem worth mentioning:
Assume you have a node that stores the count of elements inside an interval `[a,b)` and has two children for the intervals `[a,c)` and `[c,b)` with their respective counts. How can we find the k-th element? If the count of elements in `[a,c)` is larger than `k`, then the answer is the k-th element of `[a,c)`. Else the answer is the `(k - count([a,c)))`-th element of `[c,b)`.
The bounds for `Y` are up to `10^9`, but the maximum number of elements is up to `2 times 10^5`. Since the tree only needs the order relationship between the elements, you can actually store their indexes (in a sorted list containing all the possible values) instead of the values themselves. This is useful if you are using a BIT, which requires `O(MAX_ELEMENT)` space.
Once you find a suitable data structure that you like to implement, we can write the final code:
long getmaxSorted(int N,
                  const vector<long> & X,
                  const vector<long> & Y) {
    // Index the Y[] values (coordinate compression):
    map<long, int> yIndex;
    set<long> yValues(Y.begin(), Y.end());
    int lower = 0;
    vector<long> sortedY(yValues.size());
    for (long y: yValues) {
        yIndex[y] = lower;
        sortedY[lower] = y;
        lower++;
    }
    //------------------------------------------
    vector<long> minY = Y; // minY[i] holds the minimum Y[j] : j <= i
    for (int i = 1; i < (int)Y.size(); i++) {
        minY[i] = std::min(minY[i - 1], Y[i]);
    }
    // init the tree:
    fill(treesum, treesum + 2 * MAX, 0);
    // solve for i = N
    long x0 = X[0];
    long x1 = X[N];
    long y0 = minY[N - 1];
    long y1 = Y[N];
    if (N == 1) {
        return x0 * y0 + x1 * y1;
    }
    for (int j = N + 1; j < 2 * N; j++) {
        insert(yIndex[Y[j]]);
    }
    // the first N rectangles go to set 0, the rest to set 1:
    long res = x0 * y0 + min(y1, sortedY[getKth(0)]) * x1;
    for (int i = N - 1; i >= 1; i--) {
        y0 = minY[i - 1];
        x1 = X[i];
        y1 = Y[i];
        // add Y[i+1] to y[]
        insert(yIndex[Y[i + 1]]);
        int t0 = N - i; // rectangles from R that go to set 0
        int t1 = N - 1; // rectangles from R that go to set 1
        /* getKth(k) gets the index of the k-th smallest y[] */
        /* sortedY[getKth(k)] gets that k-th smallest y[] */
        // put the t0 smallest y values in set 0...
        long sum1 = min(y0, sortedY[getKth(0)]) * x0 +
                    min(y1, sortedY[getKth(t0)]) * x1;
        // ... or put the t1 smallest y values in set 1
        long sum2 = min(y0, sortedY[getKth(t1)]) * x0 +
                    min(y1, sortedY[getKth(0)]) * x1;
        // keep the maximum:
        res = std::max({ res, sum1, sum2 });
    }
    return res;
}
BlackBox
Used as: Division One - Level Three:
| Value | 1000 |
| Submission Rate | 2 / 791 (0.25%) |
| Success Rate | 1 / 2 (50.00%) |
| High Score | lyrically for 396.00 points (59 mins 29 secs) |
| Average Score | 396.00 (for 1 correct submission) |
This is quite an analysis-heavy problem. The solution requires us to make many reductions. Let’s start with a formal viewpoint.
Model the situation as a graph. Take the following mirror:
When the laser enters the cell facing a specific direction, the mirror makes it exit the cell facing another direction, likely entering another cell in that direction. So we can consider this a graph of (cell, direction) pairs.
Something important about this graph is that all (cell, direction) pairs have an out-degree of exactly 1 (although sometimes the exit point is not a cell but the outside of the box). The same happens with the in-degree: exactly 1. This will help us prove some useful properties. Let’s start with one that is mentioned in the statement: the laser will always find an exit. Use a proof by contradiction: assume the laser does not find an exit. This can only happen if the laser is stuck in a cycle. However, since the laser entered the box, there must be a path of mirrors from the outside of the box into the cycle. But no (cell, direction) pair in the cycle can have two different entrance points (its in-degree is 1). A cycle is simply impossible within any cells that are reachable from outside the box: after visiting a (cell, direction) pair, you cannot return to it.
Consider the mirror that gets removed. Any laser path that does not visit the mirror’s cell can be ignored. If no laser paths visit the cell, then everything is possible. However, if one or more laser paths visit the cell, we need the laser path(s) to have the same exit point as before.
Assume only one laser path visits the cell. This means that one face of the mirror is not touched by any laser path that starts from outside the box.
Note that, if you remove the mirror, there is only one path the laser can take after visiting this cell that will take it to the original exit (once again, because all (cell, direction) pairs have in-degree and out-degree equal to 1). After modifying the path, the laser must once again reach the same cell in the same direction as before:
After removing this mirror, the new path of the laser must include a cycle that starts at that cell and returns to it. This cycle already exists before removing the mirror.
Technically, there are two laser paths; we will amend this by considering the reverse path to be the same path as the original one. Note that reversing the path does not alter the requirement: the required cycle will visit exactly the same cells and mirrors, but in reverse:
If two different paths (completely different, not just reverses of each other) visit the same mirror, it is impossible for the two paths to reach the same destinations as before after removing this mirror. The analysis is similar to the previous one, but this time two cycles are required; those two required cycles would overlap with the two laser paths, which is impossible.
However, that doesn’t mean it is impossible to remove the mirror when it is visited twice by the same path. Take a look at this:
This case is nice because if you remove the mirror, it is guaranteed that the laser exit will not change:
The part of the path visited after reaching the mirror’s cell for the first time is simply reversed. The way mirrors work allows this to happen.
From these two cases, we can conclude that when removing the mirror we need one of the following two things to be true:
The mirror is part of a cycle. Like the following:
This is a real example of the imaginary cycle we assumed in the first case above. If we remove that mirror, the laser will eventually reach the same exit point.
The mirror would be part of a cycle if its direction changes:
This is the reality behind the second case. If the same mirror is visited twice by the same laser, it means that there is something close to a cycle in its path. Just rotating the relevant mirror makes it a cycle.
When the mirror is not visited by the laser, we can remove it with no issue. However, notice that in order for this to happen, both of the paths through the mirror must belong to cycles, which falls under the first requirement.
If we want to be able to remove a mirror, it is necessary that the mirror is part of a cycle or that rotating the mirror makes it part of a cycle. Let’s now show that it is sufficient that at least one of these cycles or almost-cycles exists in the board for us to be able to remove at least one mirror.
If the mirrors in the cycle or almost-cycle are never visited by any laser path from the outside, this is true. So let’s focus on cases where at least one laser path visits one of their mirrors.
Any mirror that is part of a cycle can be removed and the exit won’t change (first case we analysed).
If there is an almost-cycle (we need to rotate only one mirror for there to be a cycle), then we can remove this mirror. If the mirror is not visited by any laser path, the removal is trivially valid. If the mirror is visited by a laser path, then the path will forcefully travel through all the mirrors in the almost-cycle, and it becomes the second case we analyzed.
This reduces the problem to counting the number of mirror settings that contain one cycle or that contain at least one mirror that can be rotated to make a cycle appear.
It would appear that we have done all the analysis necessary to solve the problem. In reality we are barely halfway there. Solving the current version of the problem, even though simplified, is still difficult without further reductions.
My advice this time is to draw many combinations of mirrors until you start noticing a pattern.
The pattern is that you cannot draw a cycle or almost-cycle of mirrors without using any of the following shapes:
The four shapes are really the rotation of the same pattern:
It is not possible to make a cycle, or an almost-cycle that needs at most one rotation to become a cycle, without using that pattern at least once. A formal argument seems complicated to make, but if you try it on paper, you will find it difficult to make the laser turn enough times without using one of those patterns. If you do manage to make a cycle without using them, it will be quite large and include some empty space inside. Since this empty space is enclosed, all the (cell, direction) pairs in this reduced space must belong to a cycle, and it will be even harder to draw a cycle in this reduced space without using the pattern:
It is impossible to fill this without using the pattern. If gaps tend to require the pattern, we need a large cycle that does not use the pattern and does not leave gaps. For it not to leave gaps, we need to use all the cells inside the cycle. And if you need to fill all the space in an area, you will eventually need a small 2x2 corner to turn the laser, which needs the pattern.
If we assume that the pattern is necessary for there to be at least one cycle or almost-cycle, then we just need to notice that the pattern (a single 2x2 part of the board containing it) is sufficient to fulfill the condition. This is a big clue from the sole image in the problem statement:

The first shape is a cycle, the other ones need just one mirror to be rotated to become a cycle.
Therefore, it is worth giving this approach a try: count the total number of boards of mirrors that contain that pattern (or any rotation of it).
It is actually very easy for an `N times M` board to contain at least one of those patterns. So what if we count the number of boards without the pattern instead? There are `2^(NM)` boards in total; subtract the number of invalid (pattern-free) boards and we get the number of valid ones.
Relatively few boards do not contain this pattern. The approach here is to do further analysis and find the following closed formula for their count:
const long MOD = 1000000007LL;
long two[40010];
int count(int N, int M) {
    // powers of two:
    two[0] = 1;
    for (int i = 0; i < 40000; i++) {
        two[i + 1] = two[i] * 2 % MOD;
    }
    long res = two[M * N] - two[N] * M * M - two[M] * N * N
             + (long)(N - 1) * (N - 2) * (M - 1) * (M - 2) / 2
             + 2L * N * M * (N + M) - 5L * N * M + 3L * (N + M) - 3;
    return (int)((res % MOD + MOD) % MOD);
}