Wednesday, October 8, 2008

Match summary

In SRM 421, coders from Div1 saw quite a tough problem set. In the easy problem they had to implement a binary search algorithm, and the only trick was accurate search termination. The medium problem could be solved with a greedy algorithm, which was not easy to see and to prove. The hard problem asked for a not so obvious algorithm based on dynamic programming on a tree, which was additionally quite hard to implement because of many different cases. Only 3 coders managed to solve the hard problem, and this allowed them to take the top 3 positions. The fastest overall among them was ahyangyi, who gains his second match win; our congratulations. Second and third places were taken by Gluk and ftc.

In Div2 coders were given a problem set of moderate difficulty. As a result, 30 coders solved the hard problem and 12 of them solved all the problems. The match was won by LightZero. Coders De_nis and rzukow took the rest of the podium.

The Problems

GymTraining
Used as: Division Two - Level One:
As with many Div II Easy problems, this one turns out to be a plain simulation. If you can't train during the current minute, the only thing you can do is rest. Otherwise you have a choice: to train or to rest. However, after a bit of thinking it becomes obvious that there's no real choice, because it's always better to train. Earlier training allows you to complete the required number of training minutes faster, and you don't lose anything by training instead of resting, because it's always possible to shift a rest minute later and gain at least the same (or even larger) decrease of your heart pulse rate. The only question left is when -1 should be returned. Note that if you can train during one minute, then you can train during an arbitrarily large number of minutes (nothing prevents you from resting until your pulse falls back to the minimum and continuing your training after that). So -1 should be returned only if you can't do training even initially (when your pulse is minimal), i.e. if minPulse + trainChange > maxPulse. Please check the reference solution for an example of a Java implementation:

    public class GymTraining {
        public int trainingTime(int need, int min, int max, int train, int rest) {
            if (min + train > max) return -1;
            int cur = min;
            int time = 0;
            while (need > 0) {
                time++;
                if (cur + train <= max) {
                    cur += train;
                    need--;
                } else {
                    cur = Math.max(min, cur - rest);
                }
            }
            return time;
        }
    }

EquilibriumPoints
Used as: Division Two - Level Two:
Used as: Division One - Level One:
In order to solve the problem, let's first identify N-1 segments on which the equilibrium points are located. It's not hard to see that no equilibrium point P is located to the left of all input points, because all input points would force such a point P to the right and no points would force it to the left. For a similar reason, no equilibrium point is located to the right of all input points. This means all equilibrium points are located within the segment [x[0], x[N-1]], where N is the number of elements in x. Note that the input points are sorted in ascending order of their coordinates, and this is actually a small hint. Consider two adjacent points x[i] and x[i+1], 0 ≤ i < N-1. Let P be a point between these two points and let its coordinate be x_0. Denote the total value of the gravitational forces directed to the left from P as L(x_0) and the total value of the gravitational forces directed to the right from P as R(x_0). In order for P to be an equilibrium point we should have L(x_0) = R(x_0) or, equivalently, L(x_0) - R(x_0) = 0. Let's investigate the behaviour of the functions L and R. When x_0 is increased, the distances between P and each of the input points located to the left of P increase, and therefore all gravitational forces between P and these points decrease. Therefore L(x_0) is a monotonically decreasing function, and similarly R(x_0) is a monotonically increasing function. When x_0 tends to x[i] from the right, the distance between P and the i-th point tends to +0, and therefore L(x_0) tends to positive infinity. The same argument shows that R(x_0) tends to positive infinity when x_0 tends to x[i+1] from the left. If we sum up all these properties, we'll see that L(x_0) - R(x_0) is a monotonically decreasing function which tends to positive infinity when x_0 tends to x[i]+0 and tends to negative infinity when x_0 tends to x[i+1]-0.
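To make the argument concrete, here is a minimal bisection sketch based on the monotonicity properties above. It assumes unit masses and an inverse-square force law, and the class and method names are purely illustrative, not taken from the problem statement:

```java
public class EquilibriumPointsSketch {
    // L(p) - R(p): the net pull to the left minus the net pull to the right
    // at coordinate p, assuming unit masses and force 1/d^2.
    static double balance(double[] x, double p) {
        double f = 0;
        for (double xi : x) {
            double d = p - xi;
            f += (xi < p) ? 1.0 / (d * d) : -1.0 / (d * d);
        }
        return f;
    }

    // One equilibrium point lies strictly between each pair of adjacent
    // input points; each is found by a fixed number of bisection steps.
    static double[] equilibrium(double[] x) {
        double[] res = new double[x.length - 1];
        for (int i = 0; i + 1 < x.length; i++) {
            double lo = x[i], hi = x[i + 1];
            for (int it = 0; it < 100; it++) {      // terminate by iteration count
                double mid = (lo + hi) / 2;
                if (balance(x, mid) > 0) lo = mid;  // balance decreasing: root is to the right
                else hi = mid;
            }
            res[i] = (lo + hi) / 2;
        }
        return res;
    }
}
```

Note that the loop runs a fixed 100 iterations rather than stopping when the balance value gets small, which is exactly the termination pitfall discussed in this writeup.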
This means that the equation L(x_0) - R(x_0) = 0 has exactly one root on the segment [x[i], x[i+1]], i.e. there's exactly one equilibrium point between each pair of adjacent input points. As usual in cases when we need to find the root of a monotonically increasing/decreasing function, binary search can be used to do the job. If you're not familiar with binary search, this tutorial will surely help you fill in this gap. The most popular error made by contestants was early search termination. Note that you can safely terminate the search either after a sufficiently large number of iterations (for example, 100) or when the length of the searched segment becomes sufficiently small (i.e. less than 1e-9), but not after the value of L(x_0) - R(x_0) becomes less than 1e-9. For more information regarding this, please check the following thread. For a clean implementation of the described approach, please check the fastest correct submission on this problem by JongMan.

FloorIndicator
Used as: Division Two - Level Three:
This problem was a good coding exercise and required both strong implementation skills and the ability to divide the problem into a few smaller and easier ones. First of all, we should learn to check whether a field F of the indicator can represent digit D. After thinking for a while, we can work out the following criterion: "field F can represent digit D if and only if there is no lit lamp in F which corresponds to an unlit lamp in D." The brute-force solution of the problem is to check all floor numbers from 0 to 10^N - 1. However, it is too slow for the given constraints. Suppose now we want to calculate the average of K N-digit numbers a_1, ..., a_K, where a_i = d_{i1}*10^(N-1) + d_{i2}*10^(N-2) + ... + d_{iN}. (Here d_{ij} is the j-th digit of a_i.) Then the required average is: (a_1 + ... + a_K)/K = (d_{11} + ... + d_{K1})/K * 10^(N-1) + (d_{12} + ... + d_{K2})/K * 10^(N-2) + ... + (d_{1N} + ... + d_{KN})/K. Note that we can construct the answer using the average digit represented by each of the indicator fields, because any combination of representable digits gives a representable floor number, so the digits in different positions are independent. This can be done by checking the criterion mentioned above only 10*N times. Obviously, the floor indicator can represent no floor number if and only if there is a field F which can represent no digit at all. In this case we should return -1. See LightZero's short and clear solution for the exact implementation.

CakesEqualization
Used as: Division One - Level Two:
In this writeup we'll first consider a solution which is quite simple to come up with and to prove, but is a bit slow, and then we'll describe how this solution can be optimized and transformed into a fast greedy solution. Obviously, if we have cake pieces and make some cuts, each piece will be cut into some number of subpieces.

Lemma 1. If we are going to cut a cake piece of mass M into X subpieces, then it is always optimal to make all subpieces equal, i.e. of mass M/X.

Proof. Suppose we make these subpieces different. This means there is at least one subpiece of mass < M/X and at least one subpiece of mass > M/X. Therefore the minimum subpiece weight m is < M/X and the maximum subpiece weight W is > M/X. Now, if we replace these subpieces by X equal subpieces of mass M/X, then m will increase (if one of these subpieces had the minimal weight) or remain the same, but certainly won't decrease (m won't become larger than M/X, and no subpiece with weight less than M/X decreased its weight). Similarly, W will decrease or remain the same. So, after this change the difference W - m will decrease or remain the same, and it can't hurt to make all subpieces equal. End of proof.

The proven lemma shows that in order to solve the problem we only need to determine, for each initial cake piece, the number of subpieces X it will be cut into. As cutting into X subpieces requires X - 1 cuts, the sum of all X's must be not greater than maxCuts + N (where N is the number of input pieces). Now, given a number M, let's try to answer the following questions. Is it possible to construct a solution where the maximum subpiece weight is exactly M? In case of a positive answer, what is the minimum difference between the maximum and minimum subpiece weights that can be achieved? The answers appear to be quite easy. The i-th piece must be cut into a number of subpieces X_i such that weight[i] / X_i ≤ M, which is equivalent to X_i ≥ weight[i] / M.
In order to minimize the number of necessary cuts and at the same time maximize the minimum subpiece weight, we should make X_i as small as possible, i.e. X_i = ceil(weight[i] / M), where ceil(x) is the smallest integer ≥ x. After all X_i are calculated, if their sum is more than maxCuts + N, or if none of the weight[i] / X_i equals M, then the answer to the first question is negative. And if the answer is positive, the optimal difference is M - min{weight[i] / X_i}. If we apply the described procedure to all possible values of M and choose the best among the constructed solutions, the problem will be solved. Of course, it makes sense to try only the numbers weight[i] / j, 0 ≤ i < N, 1 ≤ j ≤ maxCuts + 1, as possible candidates for M. The complexity of such a solution is O(N^2 * maxCuts), which is about 2.5*10^8 operations, but when properly implemented it passes system tests. For an example of such an implementation, you can check Gluk's solution. Note that he calculates ceil(weight[i] / M) using integer division to avoid potential precision problems. Of course, O(N^2 * maxCuts) is quite a large complexity for such constraints, so it would be nice to find something faster. Consider the following greedy algorithm:

    Set res := +infinity
    Set all X_i := 1
    Test(X)
    For cut := 1 to maxCuts
        Choose i such that weight[i] / X_i is maximal
        Set X_i := X_i + 1
        Test(X)
    End For
    Return res

    Procedure Test(X)
        Set M := max{weight[i] / X_i}
        Set m := min{weight[i] / X_i}
        Set res := min(res, M - m)
    End Procedure

In other words, our algorithm stores the number of subpieces for each cake piece (initially 1). For each cut, it chooses the piece which currently has the maximum subpiece weight and increases the number of subpieces for this piece by 1. Among all constructed solutions, the best one is chosen and returned. It appears that this greedy algorithm always gives the optimal solution.
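The pseudocode above translates almost line by line into Java. This is only a sketch with illustrative names (the real problem's method signature may differ); it returns the best difference found over all maxCuts + 1 tested states:

```java
import java.util.Arrays;

public class CakesGreedySketch {
    // Greedy from the editorial: repeatedly split the piece whose current
    // subpiece weight is the largest, tracking the best (max - min) seen.
    static double minDifference(int[] weight, int maxCuts) {
        int n = weight.length;
        int[] x = new int[n];              // number of subpieces per piece
        Arrays.fill(x, 1);
        double best = Double.POSITIVE_INFINITY;
        for (int cut = 0; cut <= maxCuts; cut++) {
            double max = 0, min = Double.POSITIVE_INFINITY;
            int argmax = 0;
            for (int i = 0; i < n; i++) {  // inline Test(X)
                double w = (double) weight[i] / x[i];
                if (w > max) { max = w; argmax = i; }
                min = Math.min(min, w);
            }
            best = Math.min(best, max - min);
            x[argmax]++;                   // spend one cut on the heaviest subpiece
        }
        return best;
    }
}
```

The inner scan makes this O(N * maxCuts); replacing it with a priority queue keyed by current subpiece weight gives the O(maxCuts * log N) version mentioned below.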
In order to prove this, let's make a small assumption (*): all values weight[i] / j, 0 ≤ i < N, 1 ≤ j ≤ maxCuts + 1, are distinct. This assumption doesn't really change anything: if some of these values coincide, we can always perturb the numbers weight[i] by arbitrarily small amounts (the answer to the problem will also change by an arbitrarily small amount) to make them all distinct. Recall the previous solution, where, given a number M, we constructed a solution in which the maximum subpiece weight was M and the difference between the maximum and the minimum subpiece weights was minimal. Let's call such a solution an optimal solution for the given maximum subpiece weight M.

Lemma 2. When the procedure Test is called, the X_i describe the optimal solution for the given maximum subpiece weight M = max{weight[i] / X_i}.

Proof. By the definition of M, weight[i] / X_i ≤ M for all i. So we only need to prove that X_i is the minimum integer such that weight[i] / X_i ≤ M. Suppose it's not so, and for some i0, weight[i0] / (X_{i0} - 1) ≤ M. Let i1 be the index such that M = weight[i1] / X_{i1}. Obviously, weight[i0] / (X_{i0} - 1) > weight[i0] / X_{i0}, which means i0 ≠ i1. By assumption (*), weight[i0] / (X_{i0} - 1) < M. On the other hand, when X_{i0} was changed from X_{i0} - 1 to X_{i0}, weight[i0] / (X_{i0} - 1) was the maximum subpiece weight. As this maximum only decreases during the runtime of our algorithm, weight[i0] / (X_{i0} - 1) > M. Contradiction. End of proof.

Lemma 3. If A and B are the optimal solutions for the given maximum subpiece weights M1 and M2, where M1 < M2, then A requires more cuts than B.

Proof. In solution A, X_i(A) = ceil(weight[i] / M1), and in solution B, X_i(B) = ceil(weight[i] / M2), so X_i(A) ≥ X_i(B). For at least one i, X_i(A) > X_i(B), as otherwise A and B would be the same solution. Therefore sum(X_i(A)) > sum(X_i(B)). End of proof.

Corollary 1.
For each number of cuts, there exists only one optimal solution for a given maximum weight which requires this number of cuts.

Corollary 2. Our greedy algorithm lists all possible optimal solutions for given maximum weights which require at most maxCuts cuts, and therefore is correct.

For an example of an implementation of this greedy algorithm, take a look at the fastest submission on this problem by LayCurse. Note that his implementation has a complexity of O(N * maxCuts), which is more than enough to pass the system tests. However, you can use a priority queue to obtain an even better complexity of O(maxCuts * log N). Check Chmel_Tolstiy's solution for a reference.

TeamManagement
Used as: Division One - Level Three:
The problem can be reformulated in terms of graph theory. We are given a tree where vertices correspond to players, and two vertices are connected by an edge if the corresponding players are friends. A subset S of K vertices is possible if for each vertex V from S there exists a loyal vertex V' (possibly V itself) such that the path from V to V' contains only vertices from S. In order to solve the problem, let's first calculate the total number of possible sets. To do this we fix some vertex as a root and transform the unrooted tree into a rooted one. Note that we have quite a useful constraint: each player has at most 3 friends. If we choose a leaf vertex (i.e. a vertex of degree 1) as the root, then each vertex in our rooted tree will have at most two sons. Now we want to calculate the following function on our tree: F(V, cnt, typ), where V is a vertex, cnt is a number from 0 to K, inclusive, and typ is an integer 0, 1 or 2. This function equals the number of possible sets in the subtree rooted at vertex V containing exactly cnt vertices. In addition, the definition of "possible" slightly changes based on the value of typ:

typ = 0: the usual definition applies: every vertex of the set must be connected, through vertices of the set, to a loyal vertex inside the subtree.
typ = 1: the vertices of the set that are connected to V through the set (including V itself, if it is taken) are not required to reach a loyal vertex inside the subtree; they may be justified later through V's parent.
typ = 2: in addition to the usual definition, V itself must belong to the set and must be connected through the set to a loyal vertex inside the subtree (so that V can justify its parent).
Let's find the relations which allow us to calculate the values of F. First, assume that V is a leaf vertex, i.e. has no sons. Then cnt must be no more than 1, otherwise F = 0. When cnt ≤ 1, the possible values of F are described by the following constants:

    V is loyal:           V is not loyal:
              typ                    typ
    cnt     0  1  2       cnt     0  1  2
     0      1  1  0        0      1  1  0
     1      1  1  1        1      0  1  0

Now let's take a non-leaf vertex V. It can have 1 or 2 sons. Let's assume it has 2 sons, V1 and V2 (the case of 1 son is treated similarly and is simpler). There can be two possibilities:

1. V is not taken into the set. Then the parts of the set lying in the subtrees of V1 and V2 are not connected to each other, so each of them must be possible on its own with typ = 0, and we sum the products F(V1, j, 0) * F(V2, cnt - j, 0) over all splits j of cnt between the two subtrees.

2. V is taken into the set. If V is loyal, or if typ = 1 (so everything connected to V will be justified through V's parent), the sets in the two subtrees may each be counted with typ = 1, since the chains reaching V through the set are justified by V or above it. Otherwise V must be justified through one of its sons, i.e. at least one of the two subtree sets must have typ = 2 while the other has typ = 1; by inclusion-exclusion this gives F(V1, j, 1) * F(V2, cnt - j - 1, 2) + F(V1, j, 2) * F(V2, cnt - j - 1, 1) - F(V1, j, 2) * F(V2, cnt - j - 1, 2).
Ok, we managed to calculate the total number of possible sets in the given tree: it's just T = F(R, K, 0), where R is the root of the whole tree. Now, to solve our problem, we can use the following trick. Assume that it's forbidden to include some vertices into a possible set. Under this constraint we can still calculate the total number of possible sets using almost the same DP approach (just some cases become impossible for some vertices). For each i, let's calculate T_i: the number of possible sets when it's forbidden to take vertex i. Note that T - T_i gives the number of possible sets which contain vertex i, and therefore the probability that vertex i is included in a random possible set is (T - T_i) / T. A possible Java implementation of this approach is provided below:

    import java.util.*;

    public class TeamManagement {
        int N, K;
        boolean[][] adj;
        boolean[] isLoyal;
        boolean[] canTake;
        long[][][] cnt;

        void solve(int v, int p) {
            // find all subtrees and solve the problem for them
            int sub = 0;
            for (int i = 0; i < N; i++)
                if (i != p && adj[v][i]) sub++;
            int[] num = new int[sub];
            int pos = 0;
            for (int i = 0; i < N; i++)
                if (i != p && adj[v][i]) {
                    num[pos++] = i;
                    solve(i, v);
                }
            // leaf vertex
            if (sub == 0) {
                if (isLoyal[v]) {
                    cnt[v][0][0] = 1; cnt[v][1][0] = (canTake[v] ? 1 : 0);
                    cnt[v][0][1] = 1; cnt[v][1][1] = (canTake[v] ? 1 : 0);
                    cnt[v][0][2] = 0; cnt[v][1][2] = (canTake[v] ? 1 : 0);
                } else {
                    cnt[v][0][0] = 1; cnt[v][1][0] = 0;
                    cnt[v][0][1] = 1; cnt[v][1][1] = (canTake[v] ? 1 : 0);
                    cnt[v][0][2] = 0; cnt[v][1][2] = 0;
                }
                return;
            }
            // don't take V
            for (int i = 0; i <= K; i++)
                if (sub == 1) {
                    cnt[v][i][0] += cnt[num[0]][i][0];
                    cnt[v][i][1] += cnt[num[0]][i][0];
                } else {
                    for (int j = 0; j <= i; j++) {
                        long add = cnt[num[0]][j][0] * cnt[num[1]][i - j][0];
                        cnt[v][i][0] += add;
                        cnt[v][i][1] += add;
                    }
                }
            // take V
            if (!canTake[v]) return;
            for (int i = 1; i <= K; i++)
                for (int typ = 0; typ < 3; typ++)
                    if (typ == 1 || isLoyal[v]) {
                        // case a
                        if (sub == 1)
                            cnt[v][i][typ] += cnt[num[0]][i - 1][1];
                        else
                            for (int j = 0; j <= i - 1; j++)
                                cnt[v][i][typ] += cnt[num[0]][j][1] * cnt[num[1]][i - j - 1][1];
                    } else {
                        // case b
                        if (sub == 1)
                            cnt[v][i][typ] += cnt[num[0]][i - 1][2];
                        else
                            for (int j = 0; j <= i - 1; j++)
                                cnt[v][i][typ] += cnt[num[0]][j][1] * cnt[num[1]][i - j - 1][2]
                                                + cnt[num[0]][j][2] * cnt[num[1]][i - j - 1][1]
                                                - cnt[num[0]][j][2] * cnt[num[1]][i - j - 1][2];
                    }
        }

        public double[] getDistribution(int N, int K, String[] fr, String loy) {
            // parse input tree
            this.N = N;
            this.K = K;
            isLoyal = new boolean[N];
            for (int i = 0; i < N; i++) isLoyal[i] = loy.charAt(i) == 'Y';
            adj = new boolean[N][N];
            int[] deg = new int[N];
            for (int i = 0; i < fr.length; i++) {
                int A = Integer.parseInt(fr[i].split(" ")[0]);
                int B = Integer.parseInt(fr[i].split(" ")[1]);
                adj[A][B] = adj[B][A] = true;
                deg[A]++;
                deg[B]++;
            }
            // choose correct root
            int R = 0;
            while (deg[R] > 1) R++;
            // calculate T
            cnt = new long[N][K + 1][3];
            canTake = new boolean[N];
            Arrays.fill(canTake, true);
            solve(R, -1);
            long cAll = cnt[R][K][0];
            double[] res = new double[N];
            for (int i = 0; i < N; i++) {
                // calculate T_i
                for (int j = 0; j < N; j++)
                    for (int k = 0; k <= K; k++)
                        Arrays.fill(cnt[j][k], 0);
                canTake[i] = false;
                solve(R, -1);
                res[i] = 1.0 - cnt[R][K][0] * 1.0 / cAll;
                canTake[i] = true;
            }
            return res;
        }
    }
