

Div2 250: ForgetfulAddition
Problem Details
For an “expression” of nine digits, like 123456789, there are 8 distinct sums that we could use:
1 + 23456789
12 + 3456789
123 + 456789
1234 + 56789
12345 + 6789
123456 + 789
1234567 + 89
12345678 + 9
With `n` digits there are only `n-1` places to put the + sign (8 in the example above), so we can try each of these options and return the minimum result out of them all. This is really an implementation problem. We need a loop for the position in which to place the + sign: 1, 2, 3, … `n-1`; string manipulation to extract the two summands; then we convert the strings to integers and add them up. Thanks to the small constraints, we do not need to worry about overflow. The constraints also rule out complications such as leading zeros (no zero digits are allowed).
int minNumber(string expression) {
    int res = 2000000000; // a large number to initialize
    for (int i = 1; i < (int) expression.size(); i++) {
        string left = expression.substr(0, i);
        string right = expression.substr(i);
        int a = stoi(left); // std::stoi is a C++11 feature :)
        int b = stoi(right);
        res = std::min(res, a + b);
    }
    return res;
}
Python one-liner:
class ForgetfulAddition:
    def minNumber(self, s):
        return min(int(s[:i]) + int(s[i:]) for i in range(1, len(s)))
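As a quick sanity check (my own example, not from the problem statement), the best split of the nine-digit expression above is `12345 + 6789`:

```python
def min_number(s):
    # try every split position and keep the smallest sum;
    # mirrors the one-liner above as a standalone function
    return min(int(s[:i]) + int(s[i:]) for i in range(1, len(s)))

print(min_number("123456789"))  # 12345 + 6789 = 19134
```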
Div2 Medium: LightSwitchingPuzzle
Problem Details
Imagine the first switch, #1. If this switch is pressed, all lights will toggle. The most important light in this case is light #1. No other switch can modify the state of light #1. Therefore, if light #1 is on, we MUST press switch #1 to turn it off. If light #1 is off, then we must definitely not press switch #1. No other switch can turn this light on.
So the state of light #1 forces the action to take on switch #1. We have no liberty here. The switch must be pressed if and only if light #1 is on.
Imagine we have already analyzed switch #1, and we made the decision to press it or not. Now we have an interesting sub-problem: we cannot press switch #1 any more, light #1 is off, and we should decide what to do with the remaining switches.
Switch #2 will toggle light #2. Once again, there is no other switch that can toggle light #2 (switch #1 could, but pressing it would turn light #1 back on; to fix this mistake, we would have to press switch #1 again, putting us back at the start having wasted two presses). Once again, switch #2 should be pressed if and only if light #2 is on.
Let’s generalize. Assume that we have already processed the first `i-1` switches. All first `i-1` lights are off. We need to decide what to do with switch #i. If light `i` is off, then we shouldn’t touch switch `i` and should move on to light `i+1`. If light `i` is on, we are forced to press switch `i`.
Switch `i` should be pressed if and only if light #i is on. We can repeat this rule for all `n` switches. Pressing switch `i` needs a loop to toggle the states of all lights whose indices are multiples of `i`: `i`, `2i`, `3i`, … This loop takes `O(n/i)` steps, so the total work is bounded by the harmonic sum, `O(n log n)`.
We can just repeat this for every `i` in ascending order. Just note that in the algorithm, for each `i` all of the switches before `i` have already been decided and their corresponding lights are off.
int minFlips(string state) {
    int n = state.size();
    int steps = 0;
    for (int i = 1; i <= n; i++) {
        if (state[i - 1] == 'Y') {
            // must turn off (and toggle all the multiples)
            steps++;
            for (int j = i - 1; j < n; j += i) {
                state[j] = ((state[j] == 'N') ? 'Y' : 'N');
            }
        }
    }
    return steps;
}
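To gain confidence in the greedy argument, here is a small cross-check I added (not part of the original editorial): a Python port of the greedy, compared against brute force over all subsets of switches for tiny inputs. Note that toggles commute, so the order of presses never matters.

```python
from itertools import product

def min_flips(state):
    # greedy: scan lights left to right; press switch i iff light i is still on
    lights = [c == 'Y' for c in state]
    n = len(lights)
    steps = 0
    for i in range(1, n + 1):
        if lights[i - 1]:
            steps += 1
            for j in range(i - 1, n, i):  # toggle lights i, 2i, 3i, ...
                lights[j] = not lights[j]
    return steps

def brute_force(state):
    # try every subset of presses; since toggles commute, order is irrelevant
    n = len(state)
    best = None
    for presses in product([0, 1], repeat=n):
        lights = [c == 'Y' for c in state]
        for i in range(1, n + 1):
            if presses[i - 1]:
                for j in range(i - 1, n, i):
                    lights[j] = not lights[j]
        if not any(lights):
            cost = sum(presses)
            best = cost if best is None else min(best, cost)
    return best
```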
Div2 Hard: TallShoes
Problem Details
One way to solve this problem is to first ask: “Is height `h` possible?”. If we can answer this question we can find the maximum allowed height with a search, or can we? The maximum allowed cost `B` is up to `10^15`. If `h` is possible, it means there is a path in which all roads used have height at least `h`. The maximum initial height is `10^6`, so we should think of the maximum possible height a road can reach. If we add `x` to a road’s height, the cost is `x^2`, so the most we can add to any single road is `x = sqrt(10^15) ~= 31622776`. The maximum useful road height is therefore about `32622776`, which is a rather large number: a simple linear search trying all candidate road heights will likely be too slow.
Imagine we knew that height `H` was possible; this means a path in which all roads have at least height `H` exists. Then all heights `h < H` are also possible: for each such `h`, we can just reuse a path of height at least `H`. This monotonicity means we can use a binary search for the maximum allowed `h`.
We need to find if it is possible to have a path of roads of height at least `h`. The cost of incrementing the roads as necessary should be at most `B`. So how about we find the minimum cost to have a path of roads that have a valid height? If even the minimum cost is larger than `B`, no path would be possible.
The useful idea is to think of the cost to use a given road. If the initial height of that road is `x`, and we need the height to be at least `h`, then we must add `h - x` units to the height, which adds `(h-x)^2` to the cost (this is only if `x < h`; otherwise the road is free to use).
So now we just need to pick a path from one vertex to another and using each road introduces a cost to the path. This is just the minimum-cost path problem. We can use any standard algorithm like Dijkstra’s.
int N;
vector<int> X, Y, height;
long B;

bool possibleHeight(long h) {
    // Dijkstra
    // first we have a set of all vertices that we didn't visit yet:
    vector<bool> visited(N, false);
    // fill distances with an unrealistically large one:
    vector<long> dist(N, B + 1);
    // We start at vertex 0, distance from 0 to 0 is 0:
    dist[0] = 0;
    while (true) {
        // pick the vertex in the unvisited set that has
        // the smallest distance from 0
        int x = -1;
        for (int y = 0; y < N; y++) {
            if (!visited[y] && ((x == -1) || (dist[y] < dist[x]))) {
                x = y;
            }
        }
        if (x == -1) {
            break;
        }
        visited[x] = true;
        // for each edge connecting x with another vertex y:
        for (int i = 0; i < (int) X.size(); i++) {
            if (Y[i] == x) {
                swap(X[i], Y[i]);
            }
            if (X[i] == x) {
                int y = Y[i];
                long cost = 0; // find the cost to use this road
                if (height[i] < h) {
                    cost = (h - height[i]) * (h - height[i]);
                }
                // there is a path from 0 to y with distance dist[x] + cost
                dist[y] = std::min(dist[y], dist[x] + cost);
            }
        }
    }
    // minimum cost must be <= B:
    return (dist[N - 1] <= B);
}

int maxHeight(int N, vector<int> X, vector<int> Y, vector<int> height, long B) {
    this->N = N;
    this->X = X;
    this->Y = Y;
    this->height = height;
    this->B = B;
    // binary search
    long lo = 0; // we know height 0 is possible
    long hi = 32622777; // we know this height is impossible
    while (lo + 1 < hi) {
        long ha = (lo + hi) / 2;
        if (possibleHeight(ha)) {
            lo = ha;
        } else {
            hi = ha;
        }
    }
    return (int) lo;
}
This is an STL/lambda intensive version of the same idea, this time using a more efficient version of Dijkstra that runs in `O((|E| + |V|) log |V|)`.
// Library code
// takes lo, hi, P() such that:
//   P(lo) is true
//   P(hi) is false
//   If P(x) is true and (x > y) then P(y) is true
//   If P(x) is false and (x < y) then P(y) is false
// returns the maximum x such that P(x) is true
template <class D> D BinarySearch(D lo, D hi, std::function<bool(D)> P) {
    while (lo + 1 < hi) {
        D ha = lo + (hi - lo) / 2;
        if (P(ha)) {
            lo = ha;
        } else {
            hi = ha;
        }
    }
    return lo;
}

int maxHeight(int N, vector<int> X, vector<int> Y, vector<int> height, long B) {
    // initialize the graph, as a vector of lists of adjacent vertices
    // initialHeight saves the height of each road too.
    vector<list<int>> g(N);
    vector<vector<long>> initialHeight(N, vector<long>(N, B + 1));
    for (int i = 0; i < (int) X.size(); i++) {
        int u = X[i], v = Y[i];
        for (int k = 0; k < 2; k++) {
            initialHeight[u][v] = height[i];
            g[u].push_back(v);
            swap(u, v);
        }
    }
    auto possibleHeight = [&](long h) {
        vector<long> dist(N, B + 1);
        set<pair<long, int>> Q;
        for (int i = 0; i < N; i++) {
            Q.insert({dist[i], i});
        }
        // push(x, d): record that x is reachable with distance d (decrease-key)
        auto push = [&](int x, long d) {
            if (d < dist[x]) {
                Q.erase({dist[x], x});
                dist[x] = d;
                Q.insert({dist[x], x});
            }
        };
        push(0, 0);
        while (!Q.empty()) {
            // remove the closest unprocessed vertex from the set:
            auto it = Q.begin();
            int x = it->second;
            Q.erase(it);
            // for each neighbor of x:
            for (int y : g[x]) {
                long c = std::max<long>(0, h - initialHeight[x][y]);
                // There's a path from 0 to y with cost: dist[x] + c*c
                push(y, dist[x] + c * c);
            }
        }
        return dist[N - 1] <= B;
    };
    return BinarySearch<long>(0, 32622777, possibleHeight);
}
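For illustration, the same binary search over the answer plus Dijkstra can also be sketched compactly in Python with `heapq` (my own port, mirroring the parameter names of the C++ above):

```python
import heapq

def max_height(N, X, Y, height, B):
    # adjacency list: for each vertex, (neighbor, initial road height)
    adj = [[] for _ in range(N)]
    for x, y, h in zip(X, Y, height):
        adj[x].append((y, h))
        adj[y].append((x, h))

    def possible(h):
        # Dijkstra: cheapest total raising cost to reach N-1
        # when every used road must be raised to height >= h
        INF = B + 1
        dist = [INF] * N
        dist[0] = 0
        pq = [(0, 0)]
        while pq:
            d, x = heapq.heappop(pq)
            if d > dist[x]:
                continue  # stale queue entry
            for y, road_h in adj[x]:
                c = max(0, h - road_h)  # units we must add to this road
                nd = d + c * c          # squared cost, as in the problem
                if nd < dist[y]:
                    dist[y] = nd
                    heapq.heappush(pq, (nd, y))
        return dist[N - 1] <= B

    lo, hi = 0, 32622777  # known possible / known impossible, from the analysis
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if possible(mid):
            lo = mid
        else:
            hi = mid
    return lo
```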
Div1 250: WaitingForBus
Problem Details
One way to frame this problem is with recursion: “What is the expected wait time IF we know the first time a bus leaves the station is at time `t`?”. We would call this a function `f(t)`. `f(0)` solves the main problem.
If `t >= s` , then we can be certain that the wait time will be `t - s`. This will be our base case.
Otherwise, we need to think in terms of the next bus arrival time. If we decide a bus `i` will be the next bus, then we can consider the problem as if `t + “time”[i]` will be the first bus arrival (ignoring the previous arrivals). These events are decided by a random process, however. We need to find the expected value of the wait time. This is equal to the sum of the product between each wait time and its probability. So we know that for each potential bus `i`, there is a probability `“prob”[i]` that the wait time will be: `f(t + “time”[i])`, we can just multiply each of the times with its respective probability and add them up: `sum(“prob”[i] * f(t + “time”[i]))`.
The final issue is that implementing this recursive logic in the straightforward way would introduce an exponential amount of unneeded branching. Since each `f(t)` only calls instances `f(t+…)` with strictly larger arguments, the recurrence is acyclic, so we can use memoization to ensure that each state is evaluated at most once. In addition, we only need to memoize states with `t < s`. So this approach needs `O(s)` memory and, with `n` bus types, `O(s * n)` time.
vector<double> dp; // for memoization

double whenWillBusArrive(vector<int> &time, vector<int> &prob, int s, int t = 0) {
    if (t == 0) { // start the memoization
        dp.assign(s, -1.0); // assign (not resize), so repeated calls reset the memo
    }
    if (t >= s) {
        return t - s;
    }
    if (dp[t] < -0.5) {
        double res = 0;
        for (int i = 0; i < (int) time.size(); i++) {
            res += whenWillBusArrive(time, prob, s, t + time[i]) * prob[i] / 100;
        }
        dp[t] = res;
    }
    return dp[t];
}
Another way is to first view the problem as calculating the probability that there will be a bus arrival at a given time `t`.
For a bus to arrive at time `t`, there needs to be a bus `i` that departs at time `t - “time”[i]`; that in turn means a bus must have arrived at time `t - “time”[i]`. So we consider the probabilities of both of those events and multiply. This is the probability that bus `i` arrives at time `t`; the sum of this over all buses `i` gives the probability for time `t`.
To calculate the expected wait time, use the expected value formula. For each possible time `t >= s`, we need the probability a bus will arrive at that time. However, we need to consider only buses such that `t - “time”[i] < s` (else a prior bus arrival happened and the user did not wait for bus `i`).
double whenWillBusArrive(vector<int> time, vector<int> prob, int s) {
    int n = time.size();
    const int MAX = s + *max_element(time.begin(), time.end());
    vector<double> probEnd(s + 1); // probEnd[i] = probability a bus arrives at time i
    probEnd[0] = 1.0;
    for (int t = 1; t < s; t++) {
        probEnd[t] = 0.0;
        for (int i = 0; i < n; i++) {
            if (t >= time[i]) {
                probEnd[t] += (probEnd[t - time[i]] * prob[i]) / 100;
            }
        }
    }
    double res = 0.0;
    for (int t = s; t < MAX; t++) {
        double p = 0.0;
        for (int i = 0; i < n; i++) {
            if (0 <= t - time[i] && t - time[i] < s) {
                p += (probEnd[t - time[i]] * prob[i]) / 100;
            }
        }
        res += p * (t - s);
    }
    return res;
}
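The two approaches should agree. As an extra cross-check (my own sketch, not from the editorial), here are both in Python: the memoized recursion on the first bus arrival time, and the forward probability DP:

```python
from functools import lru_cache

def expected_wait_memo(time, prob, s):
    # f(t): expected wait if the first bus arrival happens at time t
    time, prob = tuple(time), tuple(prob)

    @lru_cache(maxsize=None)
    def f(t):
        if t >= s:
            return t - s  # base case: the user waits t - s
        # condition on which bus type comes next
        return sum(f(t + time[i]) * prob[i] / 100 for i in range(len(time)))

    return f(0)

def expected_wait_forward(time, prob, s):
    MAX = s + max(time)
    # probEnd[t] = probability a bus arrives at time t (only needed for t < s)
    probEnd = [0.0] * (MAX + 1)
    probEnd[0] = 1.0
    for t in range(1, s):
        probEnd[t] = sum(probEnd[t - time[i]] * prob[i] / 100
                         for i in range(len(time)) if t >= time[i])
    res = 0.0
    for t in range(s, MAX):
        # the user boards bus i only if its departure happened before time s
        p = sum(probEnd[t - time[i]] * prob[i] / 100
                for i in range(len(time)) if 0 <= t - time[i] < s)
        res += p * (t - s)
    return res
```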
Div1 500: TaroCutting
Problem Details
A vital observation is that we should not use devices on a single tree more than once. Imagine we first used device `a` at time `t_1` and then device `b` at time `t_2` on the same tree. If we were able to use device `b`, then the tree’s height was larger than `b`’s height at `t_2`. If we skipped using device `a`, the height at time `t_2` could only be larger than or equal to what it was, so we could still use device `b`. The final height stays the same, but we used fewer device slots, and device `a` is free to be used on some other tree at time `t_1`.
If we assume that each tree gets hit by a device at most once, then the only decision for each tree consists of when to use a device and which device. A device can be used on multiple trees, but not at the same exact time. In fact, we can assume that there are `t * m` different “devices” (where `m` is the number of devices in the input and `t` the total number of days). Each of these “devices” represents using a specific device at a given moment of time; so device `(j,k)` represents using device `j` at time `k`. We can see the problem as assigning one of these `(j,k)` devices to each tree `i`.
Let’s find what happens if we assign device `(j,k)` to tree `i`. At time `k`, the height of the tree will be `“height”[i] + k*“add”[i]`. If this is greater than `“device”[j]`, then it is possible to use device `j` at time `k`, and the height becomes `“device”[j]`. The tree then grows for the remaining `“time” - k` days, so its final height will be `“device”[j] + “add”[i]*(“time” - k)`. In this problem, we want to minimize the final sum of heights, so we can consider each candidate final height to be the cost of assigning device `(j,k)` to tree `i`. This turns the problem into a classical assignment problem.
There is one wrinkle in treating this as a plain assignment problem, and it is rather simple: sometimes it is better not to use a device at all on a tree. The cost of not assigning a device is equal to the total height the tree reaches without cutting. We can handle this by using an imaginary device that can be used by any number of trees and has that assignment cost.
To make this modification, we take the usual minimum cost flow reduction of the assignment problem. This time, however, besides the normal minimum cost matching, we add a vertex for the imaginary device (not using any device). The flow network would look like this for 3 trees, 2 devices and 3 days:
This exposes another issue with the assignment problem: there are `O(mt)` pairs `(j,k)` acting as virtual devices, which makes a direct minimum cost flow solution too slow for the time constraints.
There are several workarounds to this issue, but a very interesting one still relies on minimum cost flow by modifying the network slightly. Remember the cost to assign tree `i` to device `j` at time `k`:
`“device”[j] + “add”[i]*(t - k)`
An issue we have is that this cost is not always valid. If the height at time `k` is too small, we cannot use the device… or can we? Imagine we just used this cost with blatant disregard for the condition; this would pretend we can cut a tree that is shorter than `“device”[j]` at time `k`. But remember that we have a whole vertex for “do not use a device on this tree”, and whenever the assignment is invalid (`“height”[i] + k*“add”[i] < “device”[j]`), the no-device cost `“height”[i] + t*“add”[i]` is strictly smaller than the invalid cost. So none of these invalid assignments would be used by the minimum cost solution anyway. This is helpful because we can now separate the cost into two parts:
`“device”[j]`
`“add”[i]* (t - k)`
This brings a very nice insight: The `“device”[j]` part depends solely on how many times you use the device. It doesn’t really change depending on which tree you assign to the device. The `“add”[i]* (t - k)` part depends only on which time you assign to each tree, the device is not relevant.
So imagine this solution idea: For each tree, you decide to use a device on it or not. If you decide to use a device on it, then you pick a time for it. For each situation in which a device is assigned to a tree, you need to pick a device. You can only pick each device at most `t` times. From this perspective, we can reach the following flow network:
There is an edge with capacity 1, cost 0 connecting the source and each tree.
There is an edge with capacity 1 connecting each tree with the “no Device” node. The cost is equal to the final height of the tree if no device is used.
There is an edge with capacity 1 connecting each tree `i` to each possible time `k`. The cost is: `“add”[i] * (t - k)`.
There is an edge of capacity 1 connecting each time `k` with a device `j`. The cost is `“device”[j]`.
There is an edge of capacity `t` connecting each device to the sink. Cost 0.
Why does this work? Each flow unit that visits the day nodes represents a tree that was assigned a device (as opposed to the “No Device” node). For each of these flow units, we need to assign a device, and each device can be assigned at most `t` times (at most once per day). So at maximum flow, each tree has either a day and a device assigned, or no device at all, and the minimum cost flow will yield the minimum total of heights.
This reduces the number of vertices to `O(n + m + t)` and the number of edges to `O(nt + mt)`, which makes the minimum cost flow solution viable.
/* Since the objective of the editorial is not teaching how to implement
   min cost max flow, that part of the code is not included. This solution
   uses a network class with the following methods:
   * addVertex() adds a vertex and returns an index for that vertex
   * addEdge(i, j, k, c) adds an edge from i to j with capacity k and cost c
   * minCostMaxFlow() runs the minimum cost flow algorithm and stores the
     minimum cost in the totalCost field. */
int getNumber(vector<int> height, vector<int> add, vector<int> device, int t) {
    unique_ptr<MinCostFlow::network> g(new MinCostFlow::network());
    int n = height.size();
    int m = device.size();
    vector<int> treeVertex(n);
    vector<int> timeVertex(t);
    vector<int> deviVertex(m);
    for (int i = 0; i < n; i++) {
        treeVertex[i] = g->addVertex();
    }
    for (int i = 0; i < t; i++) {
        timeVertex[i] = g->addVertex();
    }
    for (int i = 0; i < m; i++) {
        deviVertex[i] = g->addVertex();
    }
    g->source = g->addVertex();
    g->sink = g->addVertex();
    int noDevice = g->addVertex();
    g->addEdge(noDevice, g->sink, n, 0);
    for (int i = 0; i < n; i++) {
        g->addEdge(g->source, treeVertex[i], 1, 0);
        g->addEdge(treeVertex[i], noDevice, 1, height[i] + add[i] * t);
        for (int j = 0; j < t; j++) {
            // tree -> time
            g->addEdge(treeVertex[i], timeVertex[j], 1, add[i] * (t - (j + 1)));
        }
    }
    for (int j = 0; j < t; j++) {
        // time -> device
        for (int k = 0; k < m; k++) {
            g->addEdge(timeVertex[j], deviVertex[k], 1, device[k]);
        }
    }
    for (int k = 0; k < m; k++) {
        // device -> sink
        g->addEdge(deviVertex[k], g->sink, n, 0);
    }
    g->minCostMaxFlow();
    return g->totalCost;
}
Div1 800: WheelOfFortune
Problem Details
It actually doesn’t matter which sector Alice picks. All sectors are equally likely to be picked in each round, and no matter what Alice picked, Bob can pick a sector at any distance to the left or the right of Alice’s sector because the wheel is cyclic. So let’s assume Alice always picks sector 0.
Bob does receive relevant information in the score of the sector picked by Alice. We can imagine that if Alice scores a very high number, then sectors close to sector 0 might also have higher scores (because sectors are picked contiguously in each round). So the problem becomes knowing which sector Bob should pick.
What we need is a conditional expected value: what is the expected value of sector `i`, given that the value of sector 0 is `x`? Let’s name this `f(x,i)`. For each `x`, we can calculate the probability that the score of sector 0 is `x`: `P(x)`. When Alice scores `x` (with probability `P(x)`), the total starts at `x` and Bob adds the best conditional expectation `f(x,i)` over all possible `i`. So the answer is the sum over all `x` of `P(x) * (x + max_i f(x,i))`.
We need to find `f(x,i)` and `P(x)`. We can do this using dynamic programming.
Understand each round: there are four relevant probabilities:
The probability that neither sector 0 nor sector `i` is incremented.
The probability that sector 0 is incremented but not sector `i`.
The probability that sector `i` is incremented but not sector 0.
The probability that both sector 0 and sector `i` are incremented.
Because of symmetry, the two probabilities in which only one sector is picked will be equal. So we only need to find the first and last probability.
There are two gaps between sector 0 and sector `i`. In order to pick neither sector 0 nor `i`, the chain of `n` contiguous sectors must fit entirely inside one of the gaps. Calculate the number of valid starting sectors in each gap (this depends on the gap length), add the two results together for the total number of valid starting sectors, and divide by `N` to find the probability.
For the probability that both are picked, you can negate the problem: if you pick `n` contiguous sectors, there are `N - n` contiguous sectors that are not picked. So find the probability that sector 0 and `i` are both missed by that complementary chain; this reduces to the previous sub-problem.
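The gap counting above can be sketched and verified by brute force (my own illustration; here `n` is the chain length for the round and `d` the distance between sector 0 and sector `i`):

```python
def round_probs(N, n, d):
    # q0: the n-chain avoids both sectors -> it fits inside one of the two
    # gaps, of lengths d-1 and N-d-1; a gap of length g admits
    # max(g - n + 1, 0) starting positions for the chain
    q0 = (max(d - n, 0) + max(N - d - n, 0)) / N
    # q2: both sectors are hit -> the complementary chain of N-n sectors
    # misses both; same sub-problem with n replaced by N-n
    q2 = (max(d - (N - n), 0) + max(N - d - (N - n), 0)) / N
    # by symmetry, "only sector 0" and "only sector d" are equally likely
    q1 = (1 - q0 - q2) / 2
    return q0, q1, q2

def brute_probs(N, n, d):
    # try all N starting positions of the chain and count hits directly
    hits = [0, 0, 0]
    for start in range(N):
        covered = {(start + k) % N for k in range(n)}
        hits[(0 in covered) + (d in covered)] += 1
    return hits[0] / N, hits[1] / (2 * N), hits[2] / N
```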
When attempting to use dynamic programming to solve this part of the problem, we run into an issue: how do you compute a conditional expected value in a recursive way? Perhaps it’s best not to do that directly. Consider the expected value of a certain sector `i` given that the value of sector 0 is `x`:
`E(i | x) = P(0 | x) * 0 + P(1 | x) * 1 + … + P(M | x) * M`
We can use the definition of conditional probability:
`E(i | x) = ( (P(0 ∩ x)) / (P(x)) ) * 0 + ( (P(1 ∩ x)) / (P(x)) ) * 1 + … + ( (P(M ∩ x)) / (P(x)) ) * M`
`P(y ∩ x)` is the probability that sector `i` scores `y` AND sector 0 scores `x`. Note that whichever best expected value is picked, we will multiply it by `P(x)`, so at the end this will be the result:
`E(i | x) * P(x) = (P(0 ∩ x)) * 0 + (P(1 ∩ x)) * 1 + … + (P(M ∩ x)) * M`
From this we jump to the dynamic programming part. For each possible `i` (distance between the sector and sector 0), we need to simulate `M` rounds. Let `g(x,k)` be `E(i|x)*P(x)` considering only the first `k` rounds:
`g(x,k) = P_k(0 ∩ x) * 0 + P_k(1 ∩ x) * 1 + … + P_k(k ∩ x) * k`
`P_k(y ∩ x)` is the probability that sector `i` scores `y` AND sector 0 scores `x` in the first `k` rounds.
And we want to build `k+1` from it:
`g(x,k+1) = P_(k+1)(0 ∩ x) * 0 + P_(k+1)(1 ∩ x) * 1 + … + P_(k+1)(k ∩ x) * k + P_(k+1)((k+1) ∩ x) * (k+1)`
When considering `k+1` rounds, there is a new round with index `k` to consider. Remember it brings 3 probabilities:
`q_0`: Probability neither sector 0 nor sector `i` is picked.
`q_1`: Probability exactly one of sector 0 and sector `i` is picked (by symmetry, each of the two cases has probability `q_1`).
`q_2`: Probability both sectors are picked.
Consider each `P_(k+1)(y ∩ x)`; depending on what happens in the new round we have:
`P_(k+1)(y ∩ x) = q_0 P_k(y ∩ x) + q_1 P_k((y-1) ∩ x) + q_1 P_k(y ∩ (x-1)) + q_2 P_k((y-1) ∩ (x-1))`
The idea is to replace each of the terms of `g(x,k+1)` like that:
`g(x,k+1) = `
`( q_0 P_k(0 ∩ x) + q_1 P_k((-1) ∩ x) + q_1 P_k(0 ∩ (x-1)) + q_2 P_k((-1) ∩ (x-1)) ) * 0 + `
`( q_0 P_k(1 ∩ x) + q_1 P_k(0 ∩ x) + q_1 P_k(1 ∩ (x-1)) + q_2 P_k(0 ∩ (x-1)) ) * 1 + `
`⋮`
`( q_0 P_k((k+1) ∩ x) + q_1 P_k(k ∩ x) + q_1 P_k((k+1) ∩ (x-1)) + q_2 P_k(k ∩ (x-1)) ) * (k+1)`
After distributing the factors and reorganizing, each column can be summed separately. The `q_0` column is:
`q_0 ( P_k(0 ∩ x) * 0 + P_k(1 ∩ x) * 1 + … + P_k((k+1) ∩ x) * (k+1) )`
It is impossible for the sector to score `k+1` in only `k` rounds, so the last term vanishes:
`q_0 ( P_k(0 ∩ x) * 0 + P_k(1 ∩ x) * 1 + … + P_k(k ∩ x) * k ) = q_0 * g(x,k)`
Let’s try the `q_2` column:
`q_2 ( P_k((-1) ∩ (x-1)) * 0 + P_k(0 ∩ (x-1)) * 1 + … + P_k(k ∩ (x-1)) * (k+1) )`
It is impossible to score -1, and that term is multiplied by 0 anyway. The issue is we are left with:
`q_2 ( P_k(0 ∩ (x-1)) * 1 + P_k(1 ∩ (x-1)) * 2 + … + P_k((k-1) ∩ (x-1)) * k + P_k(k ∩ (x-1)) * (k+1) )`
This looks similar to `g(x-1, k)`, with one important difference: the probability of scoring 0 is multiplied by 1, the probability of scoring 1 by 2, and so on. We can fix this by splitting each factor `y+1` into `y + 1`:
`q_2 ( P_k(0 ∩ (x-1)) * 0 + P_k(1 ∩ (x-1)) * 1 + … + P_k((k-1) ∩ (x-1)) * (k-1) + P_k(k ∩ (x-1)) * k`
`+`
`P_k(0 ∩ (x-1)) + P_k(1 ∩ (x-1)) + … + P_k((k-1) ∩ (x-1)) + P_k(k ∩ (x-1)) )`
Which is equal to:
`q_2 ( g(x-1, k) + sum(P_k(j ∩ (x-1))) ) = q_2 ( g(x-1,k) + “PROB”_k(x-1) )`
Where `“PROB”_k(x-1)` is the probability that sector 0 scores `x-1` in the first `k` rounds (we summed the joint probabilities over every score `j` from 0 to `k` for sector `i`, so the sum collapses to the marginal probability for `x-1`). This marginal probability can be found easily with dynamic programming. The other two columns (both with factor `q_1`) are solved in a similar way. The result is a clean recurrence relation we can use in the following solution:
double f[310][310];
double g[310][310];
double PROB[310][310];

double maxExpectedValue(int N, vector<int> s) {
    int M = s.size();
    // d is the distance from sector 0 to the sector we are calculating for
    for (int d = 1; d < N; d++) {
        for (int i = 0; i <= M; i++) {
            for (int j = 0; j <= M; j++) {
                g[i][j] = PROB[i][j] = 0.0;
            }
        }
        PROB[0][0] = 1.0;
        for (int k = 0; k < M; k++) {
            double q0 = (double)(max(d - s[k], 0) + max(N - d - s[k], 0)) / N;
            double q2 = (double)(max(d - N + s[k], 0) + max(N - d - N + s[k], 0)) / N;
            double q1 = (1.0 - q2 - q0) / 2.0;
            for (int j = 0; j <= k; j++) {
                PROB[k + 1][j] += PROB[k][j] * (q0 + q1);
                PROB[k + 1][j + 1] += PROB[k][j] * (q1 + q2);
            }
            for (int j = 0; j <= k; j++) {
                g[k + 1][j] += g[k][j] * q0;
                g[k + 1][j] += (g[k][j] + PROB[k][j]) * q1;
                g[k + 1][j + 1] += g[k][j] * q1;
                g[k + 1][j + 1] += (g[k][j] + PROB[k][j]) * q2;
            }
        }
        for (int x = 0; x <= M; x++) {
            f[x][d] = g[M][x];
        }
    }
    double ans = 0.0;
    for (int i = 0; i < M; i++) {
        // the expected value for Alice's score:
        ans += (double) s[i] / N;
    }
    for (int x = 0; x <= M; x++) {
        // the expected value for Bob's score:
        ans += *std::max_element(f[x], f[x] + N);
    }
    return ans;
}