Match Editorial
SRM 421
Wednesday, October 8, 2008

Match summary

In SRM 421, Div1 coders faced quite a tough problem set. In the easy problem they had to implement a binary search, and the only trick was accurate search termination. The medium problem could be solved with a greedy algorithm, which was not easy to see and to prove. The hard problem required a rather non-obvious algorithm based on dynamic programming on a tree, which was additionally quite hard to implement because of the many different cases. Only 3 coders managed to solve the hard problem, and this allowed them to take the top three positions. The fastest among them was ahyangyi, who earned his second match win; congratulations! Second and third places were taken by Gluk and ftc.

Div2 coders received a problem set of moderate difficulty. As a result, 30 coders solved the hard problem and 12 of them solved all three problems. The match was won by LightZero, while De_nis and rzukow took the rest of the podium.

The Problems

GymTraining
Used as: Division Two - Level One:
Value 250
Submission Rate 761 / 856 (88.90%)
Success Rate 643 / 761 (84.49%)
High Score FaroukMohamed for 247.51 points (2 mins 51 secs)
Average Score 191.02 (for 643 correct submissions)

As with many Div-II Easy problems, this one turns out to be a plain simulation. If you can't train during the current minute, the only thing you can do is rest. Otherwise you have a choice: to train or to rest. However, after a bit of thinking it becomes obvious that there's no real choice, because it's always better to train. Earlier training allows you to complete the required number of training minutes faster, and you don't lose anything by training instead of resting, because it's always possible to shift a rest minute later and gain at least the same (or even larger) decrease of your heart pulse rate.

The only question left is when -1 should be returned. Note that if you can train during one minute, then you can train during an arbitrarily large number of minutes (nothing prevents you from resting until your pulse falls back to the minimum and then continuing your training). So -1 should be returned only if you can't do training even initially (when your pulse is minimal), i.e., if minPulse + trainChange > maxPulse.

Please check the reference solution for an example of a Java implementation:

public class GymTraining {
  public int trainingTime(int need, int min, int max, int train, int rest) {
    if (min + train > max) return -1;
    int cur = min;
    int time = 0;
    while (need > 0) {
      time++;
      if (cur + train <= max) {
        cur += train;
        need--;
      } else cur = Math.max(min, cur - rest);
    }
    return time;
  }
}
EquilibriumPoints
Used as: Division Two - Level Two:
Value 500
Submission Rate 240 / 856 (28.04%)
Success Rate 91 / 240 (37.92%)
High Score Maximus4 for 451.10 points (9 mins 33 secs)
Average Score 286.39 (for 91 correct submissions)
Used as: Division One - Level One:
Value 250
Submission Rate 582 / 632 (92.09%)
Success Rate 431 / 582 (74.05%)
High Score JongMan for 246.27 points (3 mins 30 secs)
Average Score 193.80 (for 431 correct submissions)

In order to solve the problem, let's first identify the N-1 segments on which the equilibrium points are located. It's not hard to see that no equilibrium point P is located to the left of all input points, because all input points would force such a point P to the right and no points would force it to the left. For a similar reason, no equilibrium point is located to the right of all input points. This means all equilibrium points are located within the segment [x[0], x[N-1]], where N is the number of elements in x.

Note that input points are sorted in ascending order of their coordinates and this is actually a small hint. Consider some two adjacent points x[i] and x[i+1], 0 ≤ i < N-1. Let P be a point between these two points and its coordinate be x0. Denote the total value of gravitational forces directed to the left from P as L(x0) and the total value of gravitational forces directed to the right from P as R(x0). In order for P to be an equilibrium point we should have L(x0) = R(x0) or, equivalently, L(x0) - R(x0) = 0.

Let's investigate the behaviour of the functions L and R. When x0 increases, the distance between P and each input point located to the left of P increases, and therefore each gravitational force between P and these points decreases. Therefore L(x0) is a monotonically decreasing function, and similarly R(x0) is a monotonically increasing function. When x0 tends to x[i] from the right, the distance between P and the i-th point tends to +0 and therefore L(x0) tends to positive infinity. The same argument shows that R(x0) tends to positive infinity when x0 tends to x[i+1] from the left.

If we sum up all these properties, we'll see that L(x0) - R(x0) is a monotonically decreasing function which tends to positive infinity as x0 tends to x[i] from the right and to negative infinity as x0 tends to x[i+1] from the left. This means the equation L(x0) - R(x0) = 0 has exactly one root on the segment [x[i], x[i+1]], i.e. there's exactly one equilibrium point between each pair of adjacent input points. As usual when we need to find the root of a monotonically increasing/decreasing function, binary search does the job. If you're not familiar with binary search, this tutorial will surely help you to fill in this gap.

The most popular error among contestants was early search termination. Note that you can safely terminate the search either after a sufficiently large number of iterations (for example, 100) or when the length of the search segment becomes sufficiently small (i.e., less than 1e-9), but not when the value of |L(x0) - R(x0)| becomes less than 1e-9. For more information regarding this, please check the following thread.
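The approach can be sketched as follows. This is a minimal illustration, assuming unit masses and inverse-square forces; the class and method names are hypothetical and not taken from any actual submission:

```java
public class EquilibriumSketch {
  // L(x0) - R(x0): total pull to the left minus total pull to the right.
  static double balance(double[] x, double x0) {
    double sum = 0;
    for (double xi : x) {
      double d = x0 - xi;
      // points to the left of x0 pull left (+), points to the right pull right (-)
      sum += (d > 0 ? 1.0 : -1.0) / (d * d);
    }
    return sum;
  }

  // The unique equilibrium point on (x[i], x[i+1]). A fixed iteration count
  // (rather than a threshold on |L - R|) terminates the search safely.
  static double equilibrium(double[] x, int i) {
    double lo = x[i], hi = x[i + 1];
    for (int iter = 0; iter < 100; iter++) {
      double mid = (lo + hi) / 2;
      if (balance(x, mid) > 0) lo = mid;  // still pulled towards x[i]: root lies to the right
      else hi = mid;
    }
    return lo;
  }
}
```

Since L - R is monotonically decreasing on the open segment, the sign of balance(mid) alone tells which half contains the root, and 100 halvings shrink the segment far below any reasonable precision requirement.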

For clean implementation of the described approach, please check the fastest correct submission on this problem by JongMan.

FloorIndicator
Used as: Division Two - Level Three:
Value 1000
Submission Rate 81 / 856 (9.46%)
Success Rate 30 / 81 (37.04%)
High Score tujf for 713.02 points (19 mins 46 secs)
Average Score 539.01 (for 30 correct submissions)

This problem was a good coding exercise and required both strong implementation skills and the ability to divide the problem into a few smaller and easier ones. First of all, we should learn to check whether a field F of the indicator can represent digit D. After thinking for a while, we can work out the following criterion: «field F can represent digit D if, and only if, there is no lit lamp in F which corresponds to an unlit lamp in D». The brute-force solution is to check all floor numbers from 0 to 10^N - 1, but it is too slow for the given constraints. Suppose now we want to calculate the average of K N-digit numbers a1, …, aK, where ai = di1*10^(N-1) + di2*10^(N-2) + … + diN. (Here dij is the j-th digit of ai.) The required average is:

(a1+…+aK)/K = (d11+…+dK1)/K * 10^(N-1) + (d12+…+dK2)/K * 10^(N-2) + … + (d1N+…+dKN)/K.

Note that we can construct the answer using the average digit represented by each of the indicator fields. This can be done by checking the criterion mentioned above only 10*N times. Obviously, the floor indicator can represent no floor number if and only if there is a field F which can represent no digits. In this case we should return -1. See LightZero's short and clear solution for exact implementation.
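The averaging step can be sketched as follows. This is a hypothetical helper, not LightZero's actual code; it assumes digitsPerField[i] already holds the digits that field i can represent, computed with the lit-lamp criterion above:

```java
public class IndicatorSketch {
  // Combine per-field digit averages into the overall average floor number;
  // digitsPerField[i] lists the digits field i can represent.
  static double averageFloor(int[][] digitsPerField) {
    int n = digitsPerField.length;
    double res = 0, pow = 1;  // pow = 10^(n-1-i), built from the last field backwards
    for (int i = n - 1; i >= 0; i--) {
      if (digitsPerField[i].length == 0) return -1;  // some field fits no digit at all
      double sum = 0;
      for (int d : digitsPerField[i]) sum += d;
      res += sum / digitsPerField[i].length * pow;   // average digit times its place value
      pow *= 10;
    }
    return res;
  }
}
```

For example, if the first field can show 1 or 3 and the second can show 0 or 2, the represented numbers are 10, 12, 30, 32, and the per-field averages 2 and 1 combine into 2*10 + 1 = 21, matching the direct average.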

CakesEqualization
Used as: Division One - Level Two:
Value 500
Submission Rate 201 / 632 (31.80%)
Success Rate 85 / 201 (42.29%)
High Score LayCurse for 474.10 points (6 mins 42 secs)
Average Score 314.13 (for 85 correct submissions)

In this writeup we'll first consider a solution which is quite simple to come up with and to prove, but is a bit slow, and then describe how this solution can be optimized and transformed into a fast greedy solution.

Obviously, if we have cake pieces and make some cuts, then each piece will be cut into some number of subpieces.

Lemma 1. If we are going to cut a cake piece of mass M into X subpieces, then it is always optimal to make all subpieces equal, i.e. of mass M/X.

Proof. Suppose the subpieces are not all equal. Then there is at least one subpiece of mass < M/X and at least one subpiece of mass > M/X. Therefore the minimum subpiece weight m is < M/X and the maximum subpiece weight W is > M/X. Now, if we replace these subpieces by X equal subpieces of mass M/X, then m will increase (if one of these subpieces had the minimal weight) or remain the same, but certainly won't decrease (m won't become larger than M/X, and no subpiece with weight less than M/X decreased its weight). Similarly, W will decrease or remain the same. So, after this change the difference W - m will decrease or remain the same, and it never hurts to make all subpieces equal. End of proof.

The lemma shows that in order to solve the problem we only need to determine, for each initial cake piece, the number of subpieces X it will be cut into. As cutting into X subpieces requires X-1 cuts, the sum of all Xs must not exceed maxCuts + N (where N is the number of input pieces).

Now, given a number M, let's try to answer the following questions. Is it possible to construct a solution where the maximum subpiece weight is exactly M? And if so, what is the minimum achievable difference between the maximum and minimum subpiece weights?

The answers appear to be quite easy. The i-th piece must be cut into a number of subpieces Xi such that weight[i] / Xi ≤ M, which is equivalent to Xi ≥ weight[i] / M. In order to minimize the number of necessary cuts and at the same time maximize the minimum subpiece weight, we should make Xi as small as possible, i.e. Xi = ceil(weight[i] / M), where ceil(x) is the smallest integer ≥ x. After all Xi are calculated, if their sum is more than maxCuts + N or if none of the weight[i] / Xi equals M, then the answer to the first question is negative. And if the answer is positive, the optimal difference is M - min{weight[i] / Xi}.

If we apply the described procedure to all possible values of M and choose the best among the constructed solutions, the problem is solved. Of course, it makes sense to try only the numbers weight[i] / j, 0 ≤ i < N, 1 ≤ j ≤ maxCuts+1, as candidates for M. The complexity of this solution is O(N^2 * maxCuts), which is about 2.5*10^8 operations, but when properly implemented, it passes system tests. For an example of such an implementation, you can check Gluk's solution. Note that he calculates ceil(weight[i] / M) using integer division to avoid potential precision problems.
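This candidate-checking approach could look like the sketch below (hypothetical names, not Gluk's actual code). Each candidate M = weight[i] / j is handled entirely in integer arithmetic: ceil(weight[k] / M) = ceil(weight[k] * j / weight[i]).

```java
public class CandidateSketch {
  // Try every candidate maximum weight M = weight[i] / j and keep the best
  // difference achievable within maxCuts cuts.
  static double bestDifference(int[] weight, int maxCuts) {
    double best = Double.MAX_VALUE;
    int n = weight.length;
    for (int i = 0; i < n; i++)
      for (int j = 1; j <= maxCuts + 1; j++) {
        long cuts = 0;
        double minSub = Double.MAX_VALUE, maxSub = 0;
        for (int k = 0; k < n; k++) {
          // X_k = ceil(weight[k] * j / weight[i]), integer-only to avoid precision issues
          long x = ((long) weight[k] * j + weight[i] - 1) / weight[i];
          cuts += x - 1;
          double sub = (double) weight[k] / x;
          minSub = Math.min(minSub, sub);
          maxSub = Math.max(maxSub, sub);
        }
        if (cuts <= maxCuts) best = Math.min(best, maxSub - minSub);
      }
    return best;
  }
}
```

Note the sketch compares maxSub - minSub for every feasible candidate instead of checking that the maximum equals M exactly; any assignment it evaluates is a valid solution, and the optimal one is still reached at the candidate equal to its true maximum, so the minimum over all candidates is unchanged.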

Of course, O(N^2 * maxCuts) is quite a large complexity for such constraints, so it would be nice to find something faster. Consider the following greedy algorithm:

Set res := +infinity
Set all Xi := 1
Test(X)
For cut:= 1 to maxCuts
    Choose i such that weight[i] / Xi is maximal
    Set Xi := Xi + 1
    Test(X)
End For
Return res

Procedure Test(X)
    Set M := max(weight[i] / Xi)
    Set m := min(weight[i] / Xi)
    Set res := min(res, M - m)
End Procedure

In other words, our algorithm stores the number of subpieces for each cake piece (initially 1). For each cut, it chooses the piece which currently has the maximum subpiece weight and increases the number of subpieces for this piece by 1. Among all constructed solutions, the best one is chosen and returned.

It appears that this greedy algorithm always gives the optimal solution. In order to prove this, let's make a small assumption (*): all values weight[i] / j, 0 ≤ i < N, 1 ≤ j ≤ maxCuts+1, are distinct. This assumption doesn't really change anything -- if some of these values coincide, we can always perturb the numbers weight[i] by arbitrarily small amounts (the answer to the problem also changes by an arbitrarily small amount) to make them all distinct.

Recall the previous solution, where given a number M, we constructed a solution where the maximum subpiece weight was M and the difference between the maximum and the minimum subpiece weights was minimal. Let's call such solution an optimal solution for the given maximum subpiece weight M.

Lemma 2. When the procedure Test is called, the values Xi describe the optimal solution for the given maximum subpiece weight M = max{weight[i] / Xi}.

Proof. By the definition of M, weight[i] / Xi ≤ M for all i. So we only need to prove that each Xi is the minimum integer such that weight[i] / Xi ≤ M. Suppose it's not so, and for some i0, weight[i0] / (Xi0 - 1) ≤ M. Let i1 be the index such that M = weight[i1] / Xi1. Obviously, weight[i0] / (Xi0 - 1) > weight[i0] / Xi0, which means i0 ≠ i1. By assumption (*), weight[i0] / (Xi0 - 1) < M. On the other hand, when Xi0 was changed from Xi0 - 1 to Xi0, weight[i0] / (Xi0 - 1) was the maximum subpiece weight. As this maximum only decreases during the runtime of our algorithm, weight[i0] / (Xi0 - 1) > M. Contradiction. End of proof.

Lemma 3. If A and B are the optimal solutions for the given maximum subpiece weights M1 and M2, where M1 < M2, then A requires more cuts than B.

Proof. In solution A, Xi(A) = ceil(weight[i] / M1), and in solution B, Xi(B) = ceil(weight[i] / M2), so Xi(A) ≥ Xi(B). For at least one i, Xi(A) > Xi(B), as otherwise A and B would be the same solution. Therefore sum(Xi(A)) > sum(Xi(B)). End of proof.

Corollary 1. For each number of cuts, there exists at most one optimal solution for a given maximum weight which requires this number of cuts.

Corollary 2. Our greedy algorithm lists all possible optimal solutions for the given maximum weight, which require at most maxCuts cuts, and therefore is correct.

For an example implementation of this greedy algorithm, take a look at the fastest submission on this problem by LayCurse. Note that his implementation has a complexity of O(N * maxCuts), which is more than enough to pass the system tests. However, you can use a priority queue to obtain an even better complexity of O(maxCuts * log N). Check Chmel_Tolstiy's solution for a reference.
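The priority-queue variant could be sketched as follows (hypothetical names, not Chmel_Tolstiy's actual code). The heap orders pieces by their current subpiece weight; since cutting only makes subpieces lighter, the running minimum never increases and can be maintained in O(1) per step:

```java
import java.util.PriorityQueue;

public class GreedySketch {
  static double bestDifference(int[] weight, int maxCuts) {
    // max-heap of {current subpiece weight, piece index, current part count}
    PriorityQueue<double[]> pq =
        new PriorityQueue<>((a, b) -> Double.compare(b[0], a[0]));
    double minSub = Double.MAX_VALUE;
    for (int i = 0; i < weight.length; i++) {
      pq.add(new double[]{weight[i], i, 1});
      minSub = Math.min(minSub, weight[i]);
    }
    double best = pq.peek()[0] - minSub;  // the zero-cut solution
    for (int cut = 0; cut < maxCuts; cut++) {
      double[] top = pq.poll();                  // piece with maximum subpiece weight
      int i = (int) top[1], parts = (int) top[2] + 1;
      double sub = weight[i] / (double) parts;   // cut it into one more subpiece
      pq.add(new double[]{sub, i, parts});
      minSub = Math.min(minSub, sub);            // the minimum only decreases
      best = Math.min(best, pq.peek()[0] - minSub);
    }
    return best;
  }
}
```

Each of the maxCuts iterations does one poll and one add, both O(log N), which gives the stated O(maxCuts * log N) bound.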

TeamManagement
Used as: Division One - Level Three:
Value 1000
Submission Rate 10 / 632 (1.58%)
Success Rate 3 / 10 (30.00%)
High Score Gluk for 546.82 points (32 mins 8 secs)
Average Score 521.98 (for 3 correct submissions)

The problem can be reformulated in terms of graph theory. We are given a tree whose vertices correspond to players, and two vertices are connected by an edge if the corresponding players are friends. A subset S of K vertices is possible if for each vertex V from S there exists a loyal vertex V' (possibly V itself) such that the path from V to V' contains only vertices from S.

In order to solve the problem, let's first calculate the total number of possible sets. To do this we fix some vertex as a root and transform the unrooted tree into a rooted one. Note that we have quite a useful constraint -- each player has at most 3 friends. If we choose a leaf vertex (i.e. a vertex of degree 1) as the root, then each vertex in our rooted tree will have at most two sons.

Now we want to calculate the following function on our tree: F(V, cnt, typ), where V is a vertex, cnt is a number from 0 to K, inclusive, and typ is an integer 0, 1 or 2. This function gives the number of possible sets in the subtree rooted at vertex V that contain exactly cnt vertices. The definition of "possible" slightly changes based on the value of typ:

  1. typ = 0: usual definition;
  2. typ = 1: in addition to the usual definition we assume that V is already connected to some loyal vertex outside the considered subtree, so vertices of a possible set can be connected not only to loyal vertices, but also to vertex V;
  3. typ = 2: in addition to the usual definition we require that the set contains a path from some loyal vertex to vertex V consisting only of vertices from the set.

Let's find the relations which allow us to calculate the values of F. First, assume that V is a leaf vertex, i.e. has no sons. Then cnt must be at most 1, otherwise F = 0. When cnt ≤ 1, the possible F values are given by the following tables:

V is loyal:               V is not loyal:

        typ                      typ
cnt     0   1   2         cnt    0   1   2
------------------        ------------------
  0  |  1   1   0          0  |  1   1   0
  1  |  1   1   1          1  |  0   1   0

Now let's take a non-leaf vertex V. It can have 1 or 2 sons. Let's assume it has 2 sons, V1 and V2 (the case of 1 son is treated similarly and is simpler). There are two possibilities:

  1. We don't include V into the constructed possible set. This is only possible when typ=0 or typ=1; in case typ=2 we must include V. If we don't include V, then the subtrees rooted at V1 and V2 become independent and we should construct possible sets in them independently. If the subtree rooted at V1 contains a set of i vertices, then the subtree rooted at V2 contains a set of cnt-i vertices. The number of ways to construct these sets is F(V1, i, 0) * F(V2, cnt-i, 0). To obtain the total number of possible sets we sum this up over all i between 0 and cnt, inclusive.
  2. We include V into the set. Here we again have two possibilities:
    • a. typ = 1 or vertex V is loyal. Here for both subtrees at V1 and V2 we can assume that their roots are already connected to some loyal vertex (either to V if V is loyal or to some outer vertex if typ=1). So the number of possibilities is the sum of F(V1, i, 1) * F(V2, cnt-i-1, 1) over all i's.
    • b. typ ≠ 1 and vertex V is not loyal. As we included a non-loyal vertex V into the set, we must connect it to a loyal vertex from the set. Thus, in one of the subtrees rooted at V1 and V2, the root must be connected to a loyal vertex inside that subtree (typ = 2), and in the other subtree we can assume the root is already connected to a loyal vertex in the opposite subtree (typ = 1). So a first sketch of the number of possibilities is the sum of F(V1, i, 2) * F(V2, cnt-i-1, 1) + F(V1, i, 1) * F(V2, cnt-i-1, 2) over all i's. But, in fact, the cases where in both subtrees the root is connected to a loyal vertex inside its own subtree (typ = 2) are counted twice and need to be subtracted once. I.e., the correct number of possibilities is the sum of F(V1, i, 2) * F(V2, cnt-i-1, 1) + F(V1, i, 1) * F(V2, cnt-i-1, 2) - F(V1, i, 2) * F(V2, cnt-i-1, 2) over all i's.

OK, we managed to calculate the total number of possible sets in the given tree -- it's just T = F(R, K, 0), where R is the root of the whole tree. Now, to solve our problem we can use the following trick. Assume that it's forbidden to include some vertices into a possible set. Under these constraints we can still calculate the total number of possible sets using almost the same DP approach (some cases just become impossible for some vertices). For each i, let's calculate Ti -- the number of possible sets when it's forbidden to take vertex i. Note that T - Ti gives the number of possible sets which contain vertex i, and therefore the probability that vertex i is included into a random possible set is (T - Ti) / T.

Possible Java implementation of this approach is provided below:

import java.util.*;

public class TeamManagement {
  int N, K;
  boolean[][] adj;
  boolean[] isLoyal;
  boolean[] canTake;
  long[][][] cnt;

  void solve(int v, int p) {
    // find all subtrees and solve the problem for them
    int sub = 0;
    for (int i=0; i < N; i++)
      if (i != p && adj[v][i]) sub++;
    int[] num = new int[sub];
    int pos = 0;
    for (int i=0; i < N; i++)
      if (i != p && adj[v][i]) {
        num[pos++] = i;
        solve(i, v);
      }

    // leaf vertex    
    if (sub == 0) {
      if (isLoyal[v]) {
        cnt[v][0][0] = 1; cnt[v][1][0] = (canTake[v]?1:0);
        cnt[v][0][1] = 1; cnt[v][1][1] = (canTake[v]?1:0);
        cnt[v][0][2] = 0; cnt[v][1][2] = (canTake[v]?1:0);
      } else {
        cnt[v][0][0] = 1; cnt[v][1][0] = 0;
        cnt[v][0][1] = 1; cnt[v][1][1] = (canTake[v]?1:0);
        cnt[v][0][2] = 0; cnt[v][1][2] = 0;
      }
      return;
    }
        
    // don't take V
    for (int i=0; i<=K; i++)
      if (sub == 1) {
        cnt[v][i][0] += cnt[num[0]][i][0];
        cnt[v][i][1] += cnt[num[0]][i][0];
      } else {
        for (int j=0; j<=i; j++) {
          long add = cnt[num[0]][j][0] * cnt[num[1]][i-j][0];
          cnt[v][i][0] += add;
          cnt[v][i][1] += add;
        }
      }
        
    // take V
    if (!canTake[v]) return;
    for (int i=1; i<=K; i++)
      for (int typ=0; typ<3; typ++)
        if (typ == 1 || isLoyal[v]) {
          // case a
          if (sub == 1)
            cnt[v][i][typ] += cnt[num[0]][i-1][1];
          else
            for (int j=0; j<=i-1; j++)
              cnt[v][i][typ] += cnt[num[0]][j][1] * cnt[num[1]][i-j-1][1];
        } else {
          // case b
          if (sub == 1)
            cnt[v][i][typ] += cnt[num[0]][i-1][2];
          else
            for (int j=0; j<=i-1; j++)
              cnt[v][i][typ] += cnt[num[0]][j][1] * cnt[num[1]][i-j-1][2] +
                                cnt[num[0]][j][2] * cnt[num[1]][i-j-1][1] -
                                cnt[num[0]][j][2] * cnt[num[1]][i-j-1][2];
        }
  }

  public double[] getDistribution(int N, int K, String[] fr, String loy) {
    // parse input tree
    this.N = N; this.K = K;
    isLoyal = new boolean[N];
    for (int i=0; i < N; i++)
      isLoyal[i] = loy.charAt(i) == 'Y';
    adj = new boolean[N][N];
    int[] deg = new int[N];
    for (int i=0; i < fr.length; i++) {
      int A = Integer.parseInt(fr[i].split(" ")[0]);
      int B = Integer.parseInt(fr[i].split(" ")[1]);
      adj[A][B] = adj[B][A] = true;
      deg[A]++; deg[B]++;
    }
    // choose correct root
    int R = 0;
    while (deg[R] > 1) R++;
    // calculate T
    cnt = new long[N][K+1][3];
    canTake = new boolean[N];
    Arrays.fill(canTake, true);
    solve(R, -1);
    long cAll = cnt[R][K][0];
    double[] res = new double[N];
    for (int i=0; i < N; i++) {
      // calculate Ti
      for (int j=0; j<N; j++)
        for (int k=0; k<=K; k++)
          Arrays.fill(cnt[j][k], 0);
      canTake[i] = false;
      solve(R, -1);
      res[i] = 1.0 - cnt[R][K][0] * 1.0 / cAll;
      canTake[i] = true;
    }
    return res;
  }
}

Author
By ivan_metelsky
TopCoder Member