# November 26, 2019 TCO19 Algorithm Finals

#### TradeForTriples

Intuitively, we want to maximize the number of pairs in our hand, then the number of single cards. This suggests a dp approach: let dp[x][y] be the expected number of steps needed given that we have x pairs and y singles in hand.

We have dp[x][y] = 1 + sum prob[(x,y) -> (nx,ny)] * dp[nx][ny] over all valid transitions from (x,y) -> (nx,ny). Transitions where we complete a triple end the game, so they contribute zero to the sum; thus, let's only consider transitions where we don't immediately win.

That means we can never draw a card matching one of our pairs, we can draw at most one card matching any single we hold, and we can draw at most two copies of any other card. If we fix the number of draws of each of these three types, we can greedily form new pairs and discard our hand down to the right size.
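
As a sanity check on this counting (ours, not part of the original solution), a small brute force can compare exhaustive enumeration of draws against the closed-form product used in the code: choose which hand singles get matched, which unseen values become new pairs or new singles, and which physical copies realize each choice.

```python
from itertools import combinations
from math import comb

n, k = 3, 4                          # n copies of each of k values
cpairs, csingles = 1, 1              # hand: one pair (value 0), one single (value 1)
a = 3                                # cards drawn
singles, pairs, leftover = 1, 1, 0   # target draw profile

# Deck of physical cards left after removing the hand.
deck = []
for v in range(k):
    held = 2 if v < cpairs else (1 if v < cpairs + csingles else 0)
    deck += [(v, c) for c in range(held, n)]

def matches(draw):
    cnt = [0] * k
    for v, _ in draw:
        cnt[v] += 1
    if any(cnt[v] != 0 for v in range(cpairs)):   # nothing may match our pairs
        return False
    hand_single_hits = sorted(cnt[v] for v in range(cpairs, cpairs + csingles))
    if hand_single_hits != sorted([1] * singles + [0] * (csingles - singles)):
        return False
    unseen = sorted(cnt[v] for v in range(cpairs + csingles, k))
    return unseen == sorted([2] * pairs + [1] * leftover
                            + [0] * (k - cpairs - csingles - pairs - leftover))

brute = sum(matches(d) for d in combinations(deck, a))
unchosen = k - cpairs - csingles
formula = (comb(csingles, singles) * (n - 1) ** singles
           * comb(unchosen, pairs + leftover) * comb(pairs + leftover, pairs)
           * comb(n, 2) ** pairs * n ** leftover)
assert brute == formula
```

Both counts come out to 12 here; dividing by C(decksize, a) gives the transition probability, which is exactly what the solution computes (in modular arithmetic, via factorials and inverse factorials).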

We also need to be sure to handle the case where we transition back to the same state.
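
Concretely, writing p_0 for the probability of the self-loop transition (x,y) -> (x,y), the recurrence rearranges as

```latex
dp[x][y] = 1 + p_0 \cdot dp[x][y] + \sum_{(nx,ny) \ne (x,y)} p_{(x,y) \to (nx,ny)} \cdot dp[nx][ny]
\implies dp[x][y] = \frac{1 + \sum_{(nx,ny) \ne (x,y)} p_{(x,y) \to (nx,ny)} \cdot dp[nx][ny]}{1 - p_0}
```

so after accumulating the non-self transitions, the code multiplies by the modular inverse of 1 - p_0.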

```java
public class TradeForTriples {
    int mod = 998244353;
    long[] fact, ifact;

    long[][] getFIF(int max, int mod) {
        long[] fact = new long[max];
        long[] ifact = new long[max];
        long[] inv = new long[max];
        inv[1] = 1;
        for (int i = 2; i < max; i++) {
            inv[i] = (mod - mod / i) * inv[mod % i] % mod;
        }
        fact[0] = 1;
        ifact[0] = 1;
        for (int i = 1; i < max; i++) {
            fact[i] = fact[i - 1] * i % mod;
            ifact[i] = ifact[i - 1] * inv[i] % mod;
        }
        return new long[][] {fact, ifact, inv};
    }

    public long comb(int n, int k) {
        if (k < 0 || k > n) return 0;
        return fact[n] * ifact[n - k] % mod * ifact[k] % mod;
    }

    long exp(long b, long e) {
        long r = 1;
        while (e > 0) {
            if (e % 2 == 1) r = r * b % mod;
            b = b * b % mod;
            e >>= 1;
        }
        return r;
    }

    public int findExpectation(int n, int k, int a, int b) {
        long[][] x = getFIF(101010, mod);
        fact = x[0];
        ifact = x[1];
        long[][] dp = new long[b + 1][b + 1];
        for (int cpairs = b / 2; cpairs >= 0; cpairs--) {
            for (int csingles = Math.min(k - cpairs, b - cpairs * 2); csingles >= 0; csingles--) {
                long psame = 0;
                dp[csingles][cpairs] = 1;
                int unchosen = k - cpairs - csingles;
                int handsize = cpairs * 2 + csingles;
                int decksize = n * k - handsize;
                long denom = exp(fact[decksize] * ifact[decksize - a] % mod, mod - 2);
                for (int singles = 0; singles <= a && singles <= csingles; singles++) {
                    for (int pairs = 0; pairs <= Math.min((a - singles) / 2, unchosen); pairs++) {
                        int leftover = a - singles - pairs * 2;
                        long prob = fact[a] * exp(n - 1, singles) % mod * exp(n, leftover) % mod *
                                exp(n * (n - 1) / 2, pairs) % mod * comb(csingles, singles) % mod *
                                comb(unchosen, pairs + leftover) % mod * comb(pairs + leftover, pairs) % mod * denom % mod;
                        int npairs = Math.min(b / 2, cpairs + pairs + singles);
                        int nsingles = Math.min(b, cpairs * 2 + csingles + a) - npairs * 2;
                        if (npairs == cpairs && nsingles == csingles) {
                            psame = (psame + prob) % mod;
                        } else {
                            dp[csingles][cpairs] = (dp[csingles][cpairs] + prob * dp[nsingles][npairs]) % mod;
                        }
                    }
                }
                dp[csingles][cpairs] = dp[csingles][cpairs] * exp(mod + 1 - psame, mod - 2) % mod;
            }
        }
        return (int) dp[0][0];
    }
}
```


#### AnotherTokenGame

If Lucas has a configuration like L, a gap of a cells, L, a gap of b cells, L, where a + b > 0, then he can always guarantee a tie by only moving the middle L. If Lucas can make such a configuration on his first move, then he won't lose. Thus, Greg must place tokens so that they surround blocks of >= 3 consecutive L tokens.

Next, we claim that it is optimal to surround any block of 2 of Lucas's tokens. Consider the state where Lucas is unable to make a move: those two tokens must be next to each other, and they need to be surrounded anyway, so we spend two tokens regardless of where we place them. We might as well place them around the blocks of two at the very beginning. This reduces the problem to one where all of Lucas's tokens are in blocks of 1.

The answer is then almost just the number of remaining tokens that can still move, but we have cases like L.L.L: such a closed alternating chain requires one token fewer, so here it's optimal to place just two tokens instead of three.
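
To make the counting concrete, here is a Python sketch of the greedy just described (the function name `min_tokens` and the exact bookkeeping are ours): surround each block of at least two L's, charge one token per remaining L that can still move, and charge one token fewer for each closed alternating chain #L.L...L#.

```python
def min_tokens(s):
    """Greedy token count; 'R' marks a cell where Greg places a token."""
    s = list(s)
    n = len(s)

    def at(i, ch):
        return 0 <= i < n and s[i] == ch

    answer = 0
    # Surround every block of >= 2 consecutive L's.
    i = 1
    while i + 1 < n:
        if s[i] == 'L' and s[i + 1] == 'L':
            if s[i - 1] == '.':
                s[i - 1] = 'R'
                answer += 1
            while at(i + 1, 'L'):
                i += 1
            if i + 1 < n:
                s[i + 1] = 'R'
                answer += 1
        i += 1
    for i in range(n):
        # One token per remaining L that can still move.
        if s[i] == 'L' and (at(i - 1, '.') or at(i + 1, '.')):
            answer += 1
        # A closed alternating chain #L.L...L# needs one token fewer.
        if s[i] == 'L' and not at(i - 1, '.'):
            j = i
            while at(j, 'L') and at(j + 1, '.'):
                j += 2
            if j != i and at(j, 'L'):
                answer -= 1
    return answer
```

On the examples from the text, `min_tokens("L.L.L")` gives 2 (the chain case) and `min_tokens(".LL.")` gives 2 (surrounding a block of two).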

```cpp
int getSmallest(string s) {
    int n = s.length();
    int answer = 0;
    auto this_is = [&](int i, char ch) {
        return 0 <= i && i < n && s[i] == ch;
    };
    // Surround every block of >= 2 consecutive L's with a token on each open side.
    for (int i = 1; i + 1 < n; ++i) {
        if (s[i] == 'L' && s[i+1] == 'L') {
            if (s[i-1] == '.') {
                s[i-1] = 'R';
                ++answer;
            }
            while (this_is(i + 1, 'L')) {
                ++i;
            }
            if (i + 1 < n) {
                assert(s[i+1] == '.');
                s[i+1] = 'R';
                ++answer;
            }
        }
    }
    for (int i = 0; i < n; ++i) {
        // One token per remaining L that can still move.
        if (s[i] == 'L' && (this_is(i - 1, '.') || this_is(i + 1, '.'))) {
            ++answer;
        }
        // #L.L.L# — a closed alternating chain needs one token fewer.
        if (s[i] == 'L' && !this_is(i - 1, '.')) {
            int j = i;
            while (this_is(j, 'L') && this_is(j + 1, '.')) {
                j += 2;
            }
            if (j != i && this_is(j, 'L')) {
                --answer;
            }
        }
    }
    return answer;
}
```


#### CakeForSix

For a given angle x, there is a unique line in that direction that cuts the polygon's area in half; it can be found using a binary search. Let l(x) be this line, and let r(l(x)) be the region to the left of this line (the complement of r(l(x)) is r(l(x+180))).
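
This halving primitive can be sketched as follows (our own illustration, not the contest code, assuming a convex polygon): fix a direction, then binary-search the line's offset along its normal until the area on its left is exactly half.

```python
import math

def area(poly):
    # Shoelace formula.
    s = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        s += x1 * y2 - y1 * x2
    return abs(s) / 2

def clip_left(poly, p, d):
    # Keep the part of poly to the left of the directed line through p along d.
    def side(q):
        return d[0] * (q[1] - p[1]) - d[1] * (q[0] - p[0])
    out = []
    m = len(poly)
    for i in range(m):
        a, b = poly[i], poly[(i + 1) % m]
        sa, sb = side(a), side(b)
        if sa >= 0:
            out.append(a)
        if sa * sb < 0:
            t = sa / (sa - sb)
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out

def halving_line(poly, angle):
    # Binary search the offset of a line with direction `angle`
    # until the area to its left is half the polygon's area.
    d = (math.cos(angle), math.sin(angle))
    nrm = (-d[1], d[0])   # left normal of d
    lo = min(x * nrm[0] + y * nrm[1] for x, y in poly)
    hi = max(x * nrm[0] + y * nrm[1] for x, y in poly)
    half = area(poly) / 2
    for _ in range(60):
        mid = (lo + hi) / 2
        p = (nrm[0] * mid, nrm[1] * mid)
        if area(clip_left(poly, p, d)) < half:   # left area shrinks as mid grows
            hi = mid
        else:
            lo = mid
    return (nrm[0] * lo, nrm[1] * lo), d

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
point, direction = halving_line(square, 0.0)
```

For the unit square and a horizontal direction, the search converges to the line y = 0.5, and the remaining binary searches in the full solution are built on top of this primitive.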

Let f(x) be the angle such that the area of r(l(x)) intersect r(l(f(x))) is 1/3 of the total, and let g(x) be the angle such that the area of r(l(x)) intersect r(l(g(x))) is 2/3. Both f(x) and g(x) can be found with binary searches as well.

We want to find an x so that l(x), l(f(x)) and l(g(x)) all intersect at one point.

Let F and G be the intersection points of l(f(x)) and l(g(x)) with l(x), respectively. Since l(x+180) is the same line as l(x) but with F and G swapped, at some point while rotating from x to x+180 the points F and G must coincide. We can find this with a binary search as well (namely, the lo endpoint of the search has F "below" G, while the hi endpoint has F "above" G).

We can solve this with three nested binary searches.

```cpp
#include <bits/stdc++.h>
using namespace std;
#define sim template < class c
#define ris return * this
#define dor > debug & operator <<
#define eni(x) sim > typename \
enable_if<sizeof dud<c>(0) x 1, debug&>::type operator<<(c i) {
sim > struct rge { c b, e; };
sim > rge<c> range(c i, c j) { return rge<c>{i, j}; }
sim > auto dud(c* x) -> decltype(cerr << *x, 0);
sim > char dud(...);
struct debug {
#ifdef LOCAL
    ~debug() { cerr << endl; }
    eni(!=) cerr << boolalpha << i; ris; }
    eni(==) ris << range(begin(i), end(i)); }
    sim, class b dor(pair < b, c > d) {
        ris << "(" << d.first << ", " << d.second << ")";
    }
    sim dor(rge<c> d) {
        *this << "[";
        for (auto it = d.b; it != d.e; ++it)
            *this << ", " + 2 * (it == d.b) << *it;
        ris << "]";
    }
#else
    sim dor(const c&) { ris; }
#endif
};
#define imie(...) " [" << #__VA_ARGS__ ": " << (__VA_ARGS__) << "] "
template<typename T> T K(T a) { return a * a; }
#define K(a) K(1LL * (a))
typedef long double ll; // the template's usual 'long long' is intentionally long double here
typedef long double ld;
// const ld PI = 2 * acos(0);
const ld eps = 1e-12;
#pragma GCC diagnostic ignored "-Wnarrowing"
struct P {
    ll x, y;
    P operator +(P he) const { return P{x + he.x, y + he.y}; }
    P operator -(P he) const { return P{x - he.x, y - he.y}; }
    P operator *(ld mul) const { return P{x * mul, y * mul}; }
    ld operator *(P he) const { return x * he.y - y * he.x; }
    ll dot(P b) { return x * b.x + y * b.y; }
    ld len() { return sqrt(K(x) + K(y)); }
    P scaleTo(ld to) { return *this * (to / len()); }
    ld dist(P & b) { return (*this - b).len(); }
    P rotate90() { return P{-y, x}; }
    ld angle() { return atan2(y, x); }
    P rotate(ld ang) {
        ld c = cos(ang), s = sin(ang);
        return P{x * c - y * s, x * s + y * c};
    }
};
debug& operator <<(debug& dd, P p) {
    dd << "(" << p.x << ", " << p.y << ")";
    return dd;
}
struct L2 {
    P one, two;
    // P p[2]; P & operator [](int i) { return p[i]; }
    // const P & operator [](int i) const { return p[i]; }
    P dir() { return two - one; }
    P normal() { return dir().rotate90(); }
    ld dist(P he) {
        return abs((he - one) * (he - two)) / one.dist(two);
    }
    ld segDist(P he) { // epsilon not needed, but it would work too
        if((he - two) * normal() < 0 && normal() * (he - one) < 0)
            return dist(he);
        return min(one.dist(he), two.dist(he));
    }
    P inter(L2 he) {
        P A = dir(), B = he.dir();
        ll den = A * B;
        assert(abs(den) > eps); // parallel, maybe equal
        return (A * (he.one * he.two) - B * (one * two)) * (1.0 / den);
        // A = (x1*y2-y1*x2)*(x3-x4)-(x1-x2)*(x3*y4-y3*x4)
        // A'= (x1*y2-y1*x2)*(y3-y4)-(y1-y2)*(x3*y4-y3*x4)
        // B = (x1-x2)*(y3-y4)-(y1-y2)*(x3-x4)
        // return P{A / B, A' / B};
    }
    P project(P he) {
        P unit_normal = normal().scaleTo(1);
        return he + unit_normal * unit_normal.dot(one - he);
    }
    P reflect(P he) { return project(he) * 2 - he; }
    ld product_with(P p) {
        return (p - one) * (two - one);
    }
};
const ld PI = 2 * acosl(0);
int n;
vector<P> polygon;
int nxt(int i) { return (i + 1) % n; }
int prv(int i) { return (i + n - 1) % n; }
P get_vector(ld dir_angle) {
    return P{1, 0}.rotate(dir_angle);
}
ld get_area(vector<P> they) {
    ld s = 0;
    for(int i = 0; i < (int) they.size(); ++i) {
        int j = (i + 1) % they.size();
        s += they[i] * they[j];
    }
    return abs(s) / 2;
}
ld get_ratio(L2 line) {
    int i = 0;
    while(line.product_with(polygon[i]) <= 0) i = nxt(i);
    while(line.product_with(polygon[i]) > 0) i = nxt(i);
    vector<P> half;
    half.push_back(line.inter(L2{polygon[i], polygon[prv(i)]}));
    while(line.product_with(polygon[i]) <= 0) {
        half.push_back(polygon[i]);
        i = nxt(i);
    }
    half.push_back(line.inter(L2{polygon[i], polygon[prv(i)]}));
    return get_area(half) / get_area(polygon);
}
ld ratio_of_quarter(L2 diag1, L2 diag2) { // diag2 is rotated more to the left
    P mid = diag1.inter(diag2);
    int i = 0;
    while(diag1.product_with(polygon[i]) <= 0) i = nxt(i);
    while(diag1.product_with(polygon[i]) > 0) i = nxt(i);
    vector<P> quarter;
    quarter.push_back(diag1.inter(L2{polygon[i], polygon[prv(i)]}));
    while(diag2.product_with(polygon[i]) > 0) {
        quarter.push_back(polygon[i]);
        i = nxt(i);
    }
    quarter.push_back(diag2.inter(L2{polygon[i], polygon[prv(i)]}));
    quarter.push_back(mid);
    return get_area(quarter) / get_area(polygon);
}
L2 split_half(ld dir_angle) {
    P dir = get_vector(dir_angle);
    P leftmost = polygon[0], rightmost = polygon[0];
    for(P p : polygon) {
        if(p * dir < leftmost * dir) leftmost = p;
        if(p * dir > rightmost * dir) rightmost = p;
    }
    // debug() << imie(leftmost) imie(rightmost) imie(dir);
    ld low = 0, high = 1;
    L2 line;
    for(int rep = 0; rep < 50; ++rep) {
        ld ratio = (low + high) / 2;
        P maybe = leftmost + (rightmost - leftmost) * ratio;
        line = L2{maybe, maybe + dir};
        if(get_ratio(line) < 0.5) low = ratio;
        else high = ratio;
    }
    line.two = line.one + (dir * 5123123);
    line.one = line.one - (dir * 5123123);
    return line;
}
ld memo_rotation;
L2 diag2_to_get_ratio(ld dir_angle, ld need_ratio) {
    L2 diag1 = split_half(dir_angle);
    ld low = 0, high = PI;
    L2 diag2;
    for(int rep = 0; rep < 50; ++rep) {
        ld rotation_angle = (low + high) / 2;
        memo_rotation = rotation_angle;
        diag2 = split_half(dir_angle + rotation_angle);
        if(ratio_of_quarter(diag1, diag2) < need_ratio) low = rotation_angle;
        else high = rotation_angle;
    }
    return diag2;
}
ld ratio_on_segment(ld dir_angle, ld need_ratio) {
    L2 diag1 = split_half(dir_angle);
    L2 diag2 = diag2_to_get_ratio(dir_angle, need_ratio);
    P p = diag1.inter(diag2);
    return diag1.one.dist(p) / diag1.one.dist(diag1.two);
}
ld master_value(ld dir_angle) {
    return ratio_on_segment(dir_angle, 1. / 6) - ratio_on_segment(dir_angle, 2. / 6);
}
struct CakeForSix {
    vector<double> cut(vector<int> in_x, vector<int> in_y) {
        n = in_x.size();
        polygon.clear();
        for(int i = 0; i < n; ++i) {
            polygon.push_back(P{in_x[i], in_y[i]});
        }
        ld low = 0, high = PI;
        for(int rep = 0; rep < 50; ++rep) {
            ld mid = (low + high) / 2;
            if((master_value(low) > 0) == (master_value(mid) > 0)) low = mid;
            else high = mid;
        }
        debug() << imie(master_value(low));
        ld angle1 = low;
        L2 diag1 = split_half(angle1);
        L2 diag2 = diag2_to_get_ratio(angle1, 1. / 6);
        ld angle2 = low + memo_rotation;
        /*L2 diag3 =*/ diag2_to_get_ratio(angle1, 2. / 6);
        ld angle3 = low + memo_rotation;
        P p = diag1.inter(diag2);
        return {p.x, p.y, angle1, angle2, angle3};
    }
};
```


Harshit Mehta

Sr. Community Evangelist
