
Round Overview
ManySquares
Used as: Division Two - Level One:
| Value | 250 |
| Submission Rate | 1003 / 1062 (94.44%) |
| Success Rate | 849 / 1003 (84.65%) |
| High Score | biswarupb for 249.98 points (0 mins 15 secs) |
| Average Score | 223.66 (for 849 correct submissions) |
The important constraint is that each stick can be used in at most one square and each square must consist of exactly 4 sticks of the same length.
If all the sticks were the same length, it would be simple: just split the sticks into groups of 4 and make a square out of each group. If the number of sticks of length L is x, then we will be able to create ⌊x / 4⌋ squares, where ⌊x⌋ means rounding x down, ignoring the fractional part (in case 4 does not divide x evenly).
When there are many stick lengths, it is the same idea, but we should first group the sticks by length: for each length L, count the number x of sticks of that length and add ⌊x / 4⌋ to the total.
In C++:
int howManySquares(vector<int> sticks) {
    int sum = 0;
    // for each possible length:
    for (int L = 1; L <= 1000; L++) {
        int x = 0; // count sticks of this length
        for (int y : sticks) {
            if (y == L) {
                x++;
            }
        }
        sum += x / 4; // add x/4 to the result (integer division rounds down)
    }
    return sum;
}
Python one-liner
class ManySquares:
    def howManySquares(self, sticks):
        return sum(sticks.count(x) // 4
                   for x in set(sticks))
A more efficient idea is to calculate all counts in a single loop through all the elements.
int howManySquares(vector<int> sticks) {
    map<int, int> cnt;
    for (int L : sticks) {
        cnt[L]++;
    }
    int res = 0;
    for (auto x : cnt) {
        res += x.second / 4;
    }
    return res;
}
HappyLetterDiv2
Used as: Division Two - Level Two:
| Value | 500 |
| Submission Rate | 852 / 1062 (80.23%) |
| Success Rate | 612 / 852 (71.83%) |
| High Score | pseudogenius for 499.98 points (0 mins 12 secs) |
| Average Score | 384.41 (for 612 correct submissions) |
If we take a look at the examples, it appears that whenever there is a happy letter, it is the one with the most repetitions in the string. Intuitively, appearing many more times than the other letters makes it easier for that letter to win. Let's be more formal: let t be the number of times a letter ch appears in a string of length n, so the other letters appear n - t times in total. If n - t is too large (n - t >= t), it is always possible for the other letters to dominate ch: pair each of the t ch letters with one of the other n - t letters, removing all ch letters. So (n - t < t) is a necessary condition.
It turns out (n - t < t) is also a sufficient condition. Suppose (n - t < t) held and we wanted some letter other than ch to win. We would try to remove the t ch letters, but each removal consumes one of the n - t letters different from ch, so we would run out of those letters first.
Since the condition is necessary and sufficient, all we need to do is find the most frequent letter ch. If the number of times it appears is t and (n - t < t), then ch is the happy letter.
char getHappyLetter(string letters) {
    pair<int, char> top = {-1, '!'};
    // Find the letter ch that appears the most in the string:
    // save it in a pair; the first element is the number of appearances.
    for (char ch : letters) {
        top = std::max(top, {(int)count(letters.begin(), letters.end(), ch), ch});
    }
    return ((int)letters.size() < 2 * top.first) ? top.second : '.';
}
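For instance (a quick hand-checked illustration of the criterion, assuming the function above is in scope):
#include <iostream>
int main() {
    std::cout << getHappyLetter("aabbb") << std::endl; // 'b': t = 3, n - t = 2 < 3
    std::cout << getHappyLetter("aabb") << std::endl;  // '.': t = 2, n - t = 2 is not < 2
}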
BubbleSortWithReversals
Used as: Division Two - Level Three:
| Value | 1000 |
| Submission Rate | 70 / 1062 (6.59%) |
| Success Rate | 18 / 70 (25.71%) |
| High Score | sdad120 for 750.39 points (17 mins 39 secs) |
| Average Score | 542.96 (for 18 correct submissions) |
It helps to know how to predict the number of swaps that will be needed. The key is the concept of inversions: the number of swaps bubble sort performs is equal to the number of inversions in the array, i.e., the number of pairs (i, j) such that (i < j) and A[i] > A[j].
Keep this way of counting in mind. Consider the number of inversions contributed by index j: the number of indices i < j such that A[i] > A[j]. The important observation is that, for a given j, the order of the previous elements with index (i < j) does not change the number of inversions contributed by j.
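If the swaps-equal-inversions fact is new to you, it is easy to check empirically. The following standalone snippet (an illustration, not part of the original solutions) counts the swaps performed by bubble sort and compares them with a direct O(n^2) inversion count:
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int bubbleSortSwaps(vector<int> A) {
    int swaps = 0;
    for (bool changed = true; changed; ) {
        changed = false;
        for (int i = 0; i + 1 < (int)A.size(); i++) {
            if (A[i] > A[i + 1]) {
                swap(A[i], A[i + 1]);
                swaps++;
                changed = true;
            }
        }
    }
    return swaps;
}

int inversions(const vector<int>& A) {
    int c = 0;
    for (int j = 0; j < (int)A.size(); j++) {
        for (int i = 0; i < j; i++) {
            if (A[i] > A[j]) {
                c++;
            }
        }
    }
    return c;
}

int main() {
    vector<int> A = {3, 1, 4, 1, 5, 9, 2, 6};
    cout << bubbleSortSwaps(A) << " " << inversions(A) << endl; // prints "8 8"
}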
Imagine a function f(x, k) that returns the minimum total number of inversions contributed by indexes greater than or equal to x, given that we may still reverse at most k non-overlapping substrings within those indexes. f(0, K) is the answer to the problem.
Base case: x = n. There are no indexes greater than or equal to x, so the minimum number of contributed inversions is 0.
Otherwise we have two options:
Make x part of no reversal, so A[x] stays in position x. Calculate the number of inversions contributed by x: the indexes i less than x will contain the elements A[i] (possibly in a different order, which we do not care about, because the order does not change the result), so we just count the indexes i such that i < x and A[i] > A[x]. In addition, we need the remaining number of inversions, so we add f(x + 1, k).
If k > 0, we can make index x part of a reversal. We can simply try all possible indexes y > x for the right end of the reversal. The substring {A[x], A[x+1], ..., A[y-1], A[y]} gets reversed and becomes {A[y], A[y-1], ..., A[x+1], A[x]}. For each index x <= i <= y, we need to calculate the contributed inversions. As in the previous case, all indexes j < x whose element is greater than the element now at position i count as inversions (the order of the elements with index smaller than x does not matter). However, the order does matter for the contributions inside the reversed part. What we can do is create the sequence B = {A[0], A[1], ..., A[x-1], A[y], A[y-1], ..., A[x+1], A[x]} and count the inversions contributed by indexes greater than or equal to x in this sequence. Afterwards, we still need the contributions of indexes greater than y, so we add f(y + 1, k - 1), because we used up one reversal.
Pick the minimum result among all those cases.
This function has O(nK) pairs of arguments and is acyclic, so we can use memoization or iterative dynamic programming to evaluate each state at most once. In the implementation below, each state takes O(n^3) operations (for each of the O(n) choices of y, counting inversions costs O(n^2)), so the total complexity is O(K n^4); with n, K <= 50 this is still fast enough.
Iterative C++ version:
int countInversions(const vector<int>& B, int x) {
    /* Count the number of inversions in B contributed by indexes >= x */
    int c = 0;
    for (int i = x; i < (int)B.size(); i++) {
        for (int j = 0; j < i; j++) {
            if (B[j] > B[i]) {
                c++;
            }
        }
    }
    return c;
}
int getMinSwaps(vector<int> A, int K) {
    int F[51][51];
    int n = A.size();
    // Initialize the base case:
    for (int k = 0; k <= K; k++) {
        F[n][k] = 0;
    }
    for (int x = n - 1; x >= 0; x--) {
        for (int k = 0; k <= K; k++) {
            // Case 1: Do not reverse A[x].
            // Make B containing the first x+1 elements of A:
            vector<int> B(A.begin(), A.begin() + x + 1);
            F[x][k] = countInversions(B, x) + F[x + 1][k];
            // Case 2: Reverse something
            if (k >= 1) {
                for (int y = x + 1; y < n; y++) {
                    // Make B, containing:
                    // * The first x elements of A
                    // * The remaining y - x + 1 elements of A, but reversed
                    vector<int> B(A.begin(), A.begin() + y + 1);
                    reverse(B.begin() + x, B.begin() + y + 1);
                    F[x][k] = std::min(F[x][k], countInversions(B, x) + F[y + 1][k - 1]);
                }
            }
        }
    }
    return F[0][K];
}
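A couple of quick sanity checks, hand-verified against the recurrence (assuming the function above is in scope):
#include <iostream>
int main() {
    std::cout << getMinSwaps({2, 1}, 0) << std::endl; // 1: one swap, no reversal allowed
    std::cout << getMinSwaps({2, 1}, 1) << std::endl; // 0: reversing the whole array sorts it
}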
Python with memoization:
class BubbleSortWithReversals:
    def getMinSwaps(self, A, K):
        A = tuple(A)  # make sure A is a tuple so we can concatenate slices below
        n = len(A)

        # calculate the number of inversions of indexes >= x in sequence B:
        def bubbleCost(B, x):
            res = 0
            for i in range(x, len(B)):
                for j in range(i):
                    if B[j] > B[i]:
                        res += 1
            return res

        @MemoizedFunction
        def f(x, k):
            if x == n:
                return 0
            # 1. don't do a reversal
            res = bubbleCost(A[:x] + (A[x],), x) + f(x + 1, k)
            # 2. do a reversal:
            if k > 0:
                for y in range(x + 1, n):
                    # B = A[0 .. x-1] followed by A[x .. y] reversed
                    B = A[:x] + tuple(reversed(A[x:y + 1]))
                    p = bubbleCost(B, x)
                    q = f(y + 1, k - 1)
                    res = min(res, p + q)
            return res

        return f(0, K)

# A function decorator that does memoization for us:
def MemoizedFunction(f):
    mem = dict()
    def g(*args):
        try:
            return mem[args]
        except KeyError:
            x = f(*args)
            mem[args] = x
            return x
    return g
HappyLetterDiv1
Used as: Division One - Level One:
| Value | 250 |
| Submission Rate | 656 / 686 (95.63%) |
| Success Rate | 447 / 656 (68.14%) |
| High Score | moonlight131 for 248.23 points (2 mins 24 secs) |
| Average Score | 182.10 (for 447 correct submissions) |
We can try each of the letters and ask if it is a possible winning letter. This reduces the problem to finding out if it is possible to have a specific letter ch as the winner.
Is it possible to have a letter ch as the winner? We can approach this by identifying the cases in which it is impossible. Imagine we knew an optimal strategy that always yields ch as the winner whenever that is possible. If this strategy fails, we can conclude the case is impossible. So we need to find the optimal strategy.
Imagine we have t letters equal to ch and r letters different from ch. If t > r, letter ch has already won: we can make r pairs, each with a single ch letter, and then t - r ch letters will remain. When (t <= r), however, this strategy makes us lose. Our only hope is to find pairs among the letters different from ch. If all those letters are the same, we cannot remove them. If there are distinct letters among them, we may have many options for which pair to pick.
For all letters distinct from ch, try to pair them together until none or only one of them remains. If one of those letters remains, its number of repetitions should be as small as possible (because it has to be smaller than the number of ch letters).
Since we want to minimize the final number of letters, an idea is to always pick the pair that removes one of each of the two most frequent letters. Can we prove it? Consider letters x, y, z and w with counts c_x >= c_y > c_z >= c_w. We need to prove that removing x and y is never worse than the other options:
Removing (x,y) is not worse than removing (z,w): if removing (z,w) is part of the optimal strategy, we will still be able to do it after removing (x,y).
Removing (x,y) is not worse than removing (x,z): if we remove (x,y), the counts become (c_x - 1 >= c_y - 1 >= c_z >= c_w), and (x,z) is still a possible move. (Note that if a > b then a - 1 >= b.)
For the other cases, (x,w), (y,z) and (y,w), the argument is similar.
So we can just repeat this strategy. If no letters different from ch remain, or if the only remaining letter different from ch appears fewer times than ch, then ch is a possible winning letter.
class HappyLetterDiv1:
    def getHappyLetters(self, letters):
        res = ''
        # for each letter ch, can we make it win?
        for ch in sorted(set(letters)):
            # get a dict of the other distinct letters and their counts
            other = {
                x: letters.count(x) for x in letters
                if x != ch
            }
            while len(other) > 1:  # at least two different letters remain
                # sort by frequency
                least = sorted((other[x], x) for x in other)
                # remove one letter of each of the two most frequent:
                a = least[-1][1]
                b = least[-2][1]
                def dec(y):
                    other[y] -= 1
                    if other[y] == 0:
                        other.pop(y)
                dec(a)
                dec(b)
            if len(other) == 0 or next(iter(other.values())) < letters.count(ch):
                res += ch
        return res
(C++ code coming soon).
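In the meantime, here is one possible C++ translation of the Python solution above (a sketch of the same greedy, not official code):
#include <algorithm>
#include <map>
#include <string>
#include <utility>
using namespace std;

string getHappyLetters(string letters) {
    string res;
    // the distinct letters, in sorted order:
    string alphabet = letters;
    sort(alphabet.begin(), alphabet.end());
    alphabet.erase(unique(alphabet.begin(), alphabet.end()), alphabet.end());
    for (char ch : alphabet) {
        // count the letters different from ch:
        map<char, int> other;
        for (char x : letters) {
            if (x != ch) {
                other[x]++;
            }
        }
        // repeatedly remove one letter from each of the two most frequent:
        while (other.size() > 1) {
            pair<int, char> a = {-1, '?'}, b = {-1, '?'};
            for (auto& kv : other) {
                pair<int, char> p = {kv.second, kv.first};
                if (p > a) { b = a; a = p; }
                else if (p > b) { b = p; }
            }
            for (char y : {a.second, b.second}) {
                if (--other[y] == 0) {
                    other.erase(y);
                }
            }
        }
        int t = count(letters.begin(), letters.end(), ch);
        if (other.empty() || other.begin()->second < t) {
            res += ch;
        }
    }
    return res;
}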
GraphInversions
Used as: Division One - Level Two:
| Value | 500 |
| Submission Rate | 286 / 686 (41.69%) |
| Success Rate | 192 / 286 (67.13%) |
| High Score | K.A.D.R for 456.44 points (8 mins 57 secs) |
| Average Score | 285.87 (for 192 correct submissions) |
This problem might appear complex - How would you be able to predict what the sequence will look like if you take a specific path? A key realization is that the graph is special (undirected, connected, exactly n edges). Remember that a tree is an undirected, connected graph with n -1 edges. Therefore the problem’s graph will be almost like a tree, but it will have exactly one extra edge. This extra edge will add a cycle to the graph, exactly one cycle. What is nice about a graph with few cycles is that there are few simple paths connecting two vertices u, v. In a graph with exactly one cycle, there will be at most 2 distinct simple paths connecting each pair of vertices. There are O(n^2) pairs of vertices and for each of them there are at most 2 simple paths, this means that there are O(n^2) distinct simple paths in the graph.
Since the number of simple paths is not very large, we can try to enumerate them all. This can be done by constructing the paths with a backtracking algorithm: start at each vertex x, then repeatedly add an unvisited neighbor to the simple path and recurse, like in this pseudocode:
backtrack(x, path) {
    add x to path
    if (path has K or more elements) {
        [calculate inversions of path, remember the minimum]
    }
    for each neighbor y of x {
        if (y is not in path) {
            backtrack(y, path)
        }
    }
    remove x from path
}
for each vertex u {
    backtrack(u, empty path)
}
This will find all O(n^2) simple paths. The problem is that for each of them we have to calculate the inversions, and the algorithm we know so far for counting inversions is O(n^2), giving O(n^4) overall. We need a faster idea.
Consider a sequence S: we count one inversion for each pair (i < j) with S[i] > S[j], so for each j we only care about smaller indices. In the backtracking, we add one new element to the sequence at each step, so we can calculate the contribution of each new vertex to the number of inversions at the time it is added: specifically, the number of elements already in S that are greater than the new element.
backtrack(x, path, S, inv) {
    oldinv = inv
    add x to path
    add V[x] to S
    inv = inv + [count of elements in S greater than V[x]]
    if (path has K or more elements) {
        [inv is a valid number of inversions, remember the minimum]
    }
    for each neighbor y of x {
        if (y is not in path) {
            backtrack(y, path, S, inv)
        }
    }
    remove x from path
    remove V[x] from S
    inv = oldinv
}
for each vertex u {
    backtrack(u, empty path, empty S, 0)
}
At first glance, it appears that we didn't improve the algorithm much. We still need to count the number of elements in S that are greater than V[x], and we repeat this in every step, so the algorithm is O(n^3), which is still not fast enough.
We can think of S not as a plain vector/list but as a data structure. What we want is a data structure that can add elements, remove elements, and count the stored elements that are greater than a given value. A data structure doing each of these operations in O(log(n)) time lets us solve the problem in O(n^2 log(n)). We can, for example, use a binary indexed tree (BIT). As you can see, an interesting part of this problem is that we update the data structure as we do the backtracking.
Sample code without including code for the data structure:
int getMinimumInversions(vector<int> A, vector<int> B, vector<int> V, int K) {
    int N = V.size();
    vector<list<int>> g(N);
    for (int i = 0; i < (int)A.size(); i++) {
        g[A[i]].push_back(B[i]);
        g[B[i]].push_back(A[i]);
    }
    const int INF = 20000000;
    int res = INF;
    initialize_tree();
    for (int s = 0; s < N; s++) {
        vector<bool> visited(N, false);
        int t = 0;
        int inv = 0;
        // A C++11 lambda is used merely to be able to nest the function
        // and capture the surrounding variables.
        std::function<void(int)> backtrack = [&](int x) {
            visited[x] = true;
            t++;
            int oinv = inv;
            inv += tree_counthigher(V[x]);
            tree_add(V[x]);
            if (t >= K) {
                res = std::min(res, inv);
            }
            for (int y : g[x]) {
                if (!visited[y]) {
                    backtrack(y);
                }
            }
            tree_remove(V[x]);
            inv = oinv;
            t--;
            visited[x] = false;
        };
        backtrack(s);
    }
    return (res >= INF) ? -1 : res;
}
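The tree functions (initialize_tree, tree_add, tree_remove, tree_counthigher) were left out above. Here is a minimal binary indexed tree sketch providing that interface, assuming the stored values are integers in [1, MAXV] (shift or compress them first if 0 can appear):
#include <algorithm>
using namespace std;

const int MAXV = 100000; // assumed upper bound on the values V[i]
int bit[MAXV + 1];
int total = 0; // number of elements currently stored

void initialize_tree() {
    fill(bit, bit + MAXV + 1, 0);
    total = 0;
}
// Internal helper: add delta to the count of value v.
void update(int v, int delta) {
    for (; v <= MAXV; v += v & (-v)) {
        bit[v] += delta;
    }
}
void tree_add(int v) { update(v, +1); total++; }
void tree_remove(int v) { update(v, -1); total--; }
// Internal helper: how many stored elements are <= v.
int prefixCount(int v) {
    int c = 0;
    for (; v > 0; v -= v & (-v)) {
        c += bit[v];
    }
    return c;
}
// Count the stored elements that are strictly greater than v.
int tree_counthigher(int v) {
    return total - prefixCount(v);
}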
LaserTowersDiv1
Used as: Division One - Level Three:
| Value | 1000 |
| Submission Rate | 8 / 686 (1.17%) |
| Success Rate | 7 / 8 (87.50%) |
| High Score | tourist for 645.40 points (24 mins 2 secs) |
| Average Score | 487.00 (for 7 correct submissions) |
The key idea is to use a minimum cut algorithm. Think about two towers shooting at their best possible targets in such a way that the two lasers intersect:
We can see this intersection as a connection between the lasers. So, roughly, the objective of the problem is to shoot the lasers in such a way that no pair of them is connected, while of course maximizing the score.
If we want a minimum cut approach, we usually need a starting vertex (source) and a final vertex (sink), and the cut must disconnect the two. How can we translate the towers into this concept?
The constraints say that no tower will face towards the direction of another tower. This means that the only intersections between lasers will happen between a horizontal and a vertical laser. What we can do is think of vertical towers as the starting points of the paths that we want to cut and horizontal towers as the ending points.
Take the following example intersections:
We can translate each of them into paths from vertical towers to horizontal ones:
Thus we will try to make the problem about making sure there are no paths that start on vertical towers and end in horizontal towers.
With those directions in mind, we should try to think of a graph. In this initial idea, each cell (including those containing towers) is a node in the graph. The positions of the towers determine the directions of the edges connecting the cells.
Starting with any vertical tower, we create edges connecting all nodes towards the tower’s direction. For horizontal towers, we do the opposite.
Another issue is that we want a minimum cut, but the problem asks us to maximize the sum of the values in the targeted cells.
Usually the costs are associated with edges: removing an edge has a cost. We need to envision how reducing the length of a laser translates into removing edges from the graph. Imagine that removing an edge means that the laser starting at the tower will target the cell just before the edge:
Note that removing the edge right next to a tower means that the tower does not shoot at all.
For horizontal towers, the relationship between laser length and cut edges is the same; the edges simply point towards the tower instead of away from it.
The cost to remove the edges is another issue. Since the original problem is a maximization, we can turn it into a minimization by assuming that each tower initially targets the maximum-value cell in its direction. Removing edges then means changing the target to another cell, possibly reducing the targeted value, and this reduction is the cost:
In the first example, the maximum value is 8. Not shooting the laser at all yields 0, so the edge connecting the tower with the first cell in front of it costs 8 to remove. If we remove the second edge (counting from the tower's position), the value will be 2 instead of 8, so the cost is 6.
In the second example, note that the last cell in the tower's direction is not necessarily the best one; this generates an edge of cost 0.
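To make the cost assignment concrete, here is a tiny standalone computation (the row of cell values is made up for illustration, since the original figures are not reproduced here):
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    // cells[0] is the tower's own cell; the laser travels towards cells[3].
    vector<int> cells = {0, 2, 8, 5};
    int mx = *max_element(cells.begin(), cells.end()); // best possible target: 8
    // Edge i sits between cells[i - 1] and cells[i]; cutting it makes the
    // laser stop at cells[i - 1], so the lost value is mx - cells[i - 1].
    for (int i = 1; i < (int)cells.size(); i++) {
        cout << "edge " << i << ": cost " << mx - cells[i - 1] << endl;
    }
    // prints: edge 1: cost 8, edge 2: cost 6, edge 3: cost 0
}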
We are almost there. It may even appear that we have a correct solution: just build this graph network, assign the costs and find the minimum cut. Unfortunately, there is a problem with the current design of the graph.
Every path from a vertical tower to a horizontal tower will be cut, even some that do not actually represent an intersection of lasers. Consider the following case:
There is a path from a vertical tower to a horizontal one that is strange:
This is a problem because the minimum cut will make sure to cut all possible paths, even that one. For example, imagine the following cuts were already made:
This is equivalent to this laser layout:
The layout is valid, but the graph still contains paths, so the minimum cut algorithm would cut more than needed.
The final idea is to realize that all paths that correspond to a real laser intersection contain exactly one change of direction, from vertical to horizontal; the invalid paths contain more changes. We can ensure that paths change direction only once by creating two separate subgraphs, one for the horizontal edges and one for the vertical edges, and connecting each node in the vertical subgraph with the node of the same cell in the horizontal subgraph. This connecting edge must not be removable, so we give it infinite cost.
The final step is to use minimum cut. We build the graph taking care of the four tower directions, use a single source connected to each of the vertical towers and a single sink connected to each of the horizontal towers, generate the edges with their removal costs, and run max flow using those costs as the edge capacities.
int countMaxEnemies(vector<string> board) {
    // Initialize a graph network class that solves max flow for us.
    // The network class has these methods:
    // * addVertex(): returns the vertex id of a new vertex.
    // * addEdge(u, v, f): adds an edge from u to v with capacity f.
    // * g->source will be the source and g->sink, the sink:
    unique_ptr<MaxFlow::network> g(new MaxFlow::network);
    // Initialize the vertices. Each cell has two vertices associated with
    // it: one in the vertical subgraph and one in the horizontal subgraph.
    int n = board.size(), m = board[0].size();
    vector<vector<int>> nodesH(n, vector<int>(m));
    vector<vector<int>> nodesV(n, vector<int>(m));
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            nodesV[i][j] = g->addVertex();
            nodesH[i][j] = g->addVertex();
            // There is always an edge between vertical and horizontal,
            // representing the only allowed change of direction.
            g->addEdge(nodesV[i][j], nodesH[i][j], MaxFlow::INF);
        }
    }
    g->source = g->addVertex();
    g->sink = g->addVertex();
    int maxSum = 0;
    // Some logic regarding the four tower kinds and their directions:
    char towerKinds[4] = {'A', 'V', '>', '<'};
    int towerX[4] = {-1, 1, 0, 0};
    int towerY[4] = {0, 0, 1, -1};
    bool towerReverse[4] = {0, 0, 1, 1}; // horizontal towers use reversed edges
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            for (int k = 0; k < 4; k++) {
                if (towerKinds[k] == board[i][j]) {
                    // We detected a tower. Connect it to the source or the
                    // sink, depending on the tower type:
                    if (towerReverse[k]) {
                        g->addEdge(nodesH[i][j], g->sink, MaxFlow::INF);
                    } else {
                        g->addEdge(g->source, nodesV[i][j], MaxFlow::INF);
                    }
                    // Find the maximum cell value in the tower's direction:
                    int mx = 0;
                    int x = i, y = j;
                    while ((0 <= x && x < n) && (0 <= y && y < m)) {
                        int enemies = 0;
                        if ('0' <= board[x][y] && board[x][y] <= '9') {
                            enemies = board[x][y] - '0';
                        }
                        mx = std::max(mx, enemies);
                        x += towerX[k];
                        y += towerY[k];
                    }
                    // Knowing the maximum, add the costs to the edges:
                    maxSum += mx;
                    x = i;
                    y = j;
                    while (true) {
                        int nx = x + towerX[k], ny = y + towerY[k];
                        if ((0 <= nx && nx < n) && (0 <= ny && ny < m)) {
                            int enemies = 0;
                            if ('0' <= board[x][y] && board[x][y] <= '9') {
                                enemies = board[x][y] - '0';
                            }
                            // Cutting this edge makes the laser stop at
                            // cell (x, y), losing mx - enemies:
                            if (towerReverse[k]) {
                                g->addEdge(nodesH[nx][ny], nodesH[x][y], mx - enemies);
                            } else {
                                g->addEdge(nodesV[x][y], nodesV[nx][ny], mx - enemies);
                            }
                            x = nx;
                            y = ny;
                        } else {
                            break;
                        }
                    }
                }
            }
        }
    }
    return maxSum - g->maxFlow();
}