

Round Overview
The first priority is to play T songs from each album. Since there is a time limit, it is best to play the T shortest songs of each album. We are forced to play these songs, so we can simply sort the durations and add up the T shortest from each album. If this sum exceeds the time limit (minutes * 60, because the limit is given in minutes and the durations in seconds), then it is impossible to play T songs from each album and we return -1. The other failure condition is not having enough songs: if either album contains fewer than T songs, we also return -1.
After playing the T shortest songs from each album, there might still be some time available, so we can play extra songs. Since we want to maximize the number of songs played, we should again prefer the shortest songs, but this time we can pick from either album, excluding the songs already played. Save the unplayed songs in a vector v, sort it, and greedily try each song from shortest to longest: if the song fits without exceeding the time limit, play it and increase the song count (initially 2T) by 1.
int listen(vector<int> durations1, vector<int> durations2, int minutes, int T) {
    int n = durations1.size(), m = durations2.size();
    if (T > n || T > m) {
        return -1;
    }
    // sort durations
    sort(durations1.begin(), durations1.end());
    sort(durations2.begin(), durations2.end());
    // T smallest songs of each album. Find their duration sum:
    int s = accumulate(durations1.begin(), durations1.begin() + T, 0) +
            accumulate(durations2.begin(), durations2.begin() + T, 0);
    if (s > minutes * 60) {
        return -1;
    } else {
        // unplayed songs:
        vector<int> v;
        for (int i = T; i < n; i++) {
            v.push_back(durations1[i]);
        }
        for (int i = T; i < m; i++) {
            v.push_back(durations2[i]);
        }
        // sort them
        sort(v.begin(), v.end());
        int c = 2 * T; // current song count
        // s has the current total length of played songs
        for (int i = 0; i < (int)v.size(); i++) {
            // if adding the song to the total duration exceeds the time limit,
            // don't play it:
            if (s + v[i] <= minutes * 60) {
                s += v[i];
                c++;
            }
        }
        return c;
    }
}
ContestScoreboard
Problem Details
This problem is mostly an implementation problem. We can pick a candidate time t, parse the score data to calculate each team's score at that time, find the winning team (breaking ties by lexicographically-earliest team name), and mark that team as able to win. The challenge is knowing which times are good candidates: we need to test enough times to cover all possibilities, but due to the time limit we cannot test every time from 0 to 1000000001.
Recognize that the winning team can only change right after a submission. So the only times we need to test are 0 and t + 1 for each submission time t in the input. The total number of submissions is small: with at most 100 characters per team's data and each submission needing at least 4 characters, there are easily fewer than 25 submissions per team.
vector<int> findWinner(vector<string> scores) {
    // grab all the possible times at which the winning team might change;
    // note that the lexicographically-earliest team wins at time 0:
    set<int> times = {0};
    for (string s : scores) {
        // parse the team score data; we can use an istringstream in C++'s case:
        istringstream st(s);
        string name;
        st >> name;
        char slash;
        int score, time;
        while (st >> score >> slash >> time) { // extract each submission's score and time:
            // time + 1 is a good candidate
            times.insert(time + 1);
        }
    }
    vector<int> won_ever(scores.size(), 0);
    // for each candidate time:
    for (int t : times) {
        // find the winning team: largest score, lexicographically earliest.
        int best_team_id = -1;
        string best_team_name = "none";
        int best_team_score = -1;
        // for each team, parse its data and calculate the total score:
        for (int i = 0; i < (int)scores.size(); i++) {
            istringstream st(scores[i]);
            string name;
            st >> name;
            char slash;
            int score, time;
            int score_sum = 0;
            // sum the scores of submissions made before time t:
            while (st >> score >> slash >> time) {
                if (time < t) {
                    score_sum += score;
                }
            }
            bool winner = false;
            if (score_sum > best_team_score) {
                winner = true;
            } else if ((score_sum == best_team_score) && (name < best_team_name)) {
                winner = true;
            }
            if (winner) {
                best_team_score = score_sum;
                best_team_name = name;
                best_team_id = i;
            }
        }
        // team best_team_id won!
        won_ever[best_team_id] = 1;
    }
    return won_ever;
}
ForbiddenStreets
Problem Details
Let’s interpret the city as a graph. For each edge (road) we need to find whether the shortest distance between each pair of vertices (intersections) increases or becomes impossible after deleting that edge (placing the parade there). Due to the higher-than-usual time limit, the straightforward approach works: first calculate the shortest distances between every pair of intersections; then, for each road, calculate the shortest distances again while ignoring that road. Since we need the distances between all pairs, the Floyd-Warshall shortest-paths algorithm fits this problem very well. This yields an O(M * N^3) approach.
vector<int> find(int N, vector<int> A, vector<int> B, vector<int> time) {
    const int INF = 1000000000;
    int M = A.size();
    // initialize the distance matrix for Floyd-Warshall:
    vector<vector<int>> d(N, vector<int>(N, INF));
    for (int i = 0; i < N; i++) {
        d[i][i] = 0;
    }
    for (int i = 0; i < M; i++) {
        d[A[i]][B[i]] = time[i];
        d[B[i]][A[i]] = time[i];
    }
    // initial Floyd-Warshall:
    for (int k = 0; k < N; k++) {
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                d[i][j] = std::min(d[i][j], d[i][k] + d[k][j]);
            }
        }
    }
    vector<int> res(M);
    for (int r = 0; r < M; r++) {
        // another Floyd-Warshall, this time ignoring road r
        // (native arrays instead of vectors, to avoid overhead):
        int nd[100][100];
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                nd[i][j] = INF;
            }
            nd[i][i] = 0;
        }
        for (int i = 0; i < M; i++) {
            if (i != r) {
                nd[A[i]][B[i]] = time[i];
                nd[B[i]][A[i]] = time[i];
            }
        }
        for (int k = 0; k < N; k++) {
            for (int i = 0; i < N; i++) {
                for (int j = 0; j < N; j++) {
                    if (nd[i][k] + nd[k][j] < nd[i][j]) {
                        nd[i][j] = nd[i][k] + nd[k][j];
                    }
                }
            }
        }
        // for each pair whose distance changed, increase the road's count:
        for (int i = 0; i < N; i++) {
            for (int j = i + 1; j < N; j++) {
                if (nd[i][j] > d[i][j]) {
                    res[r]++;
                }
            }
        }
    }
    return res;
}
Although this approach is correct, the time limit might be very tight for it, so it is necessary to avoid overhead. In C++'s case, for example, it is necessary to use native arrays; vectors might be too slow.
If you are interested in less straightforward approaches, we can actually use our knowledge of Floyd-Warshall’s algorithm to calculate the answer more quickly. Normally Floyd-Warshall records the shortest distance between all pairs of vertices; we can additionally record the list of critical roads between each pair. Between vertices i and j, the critical edges are those whose removal is certain to increase the shortest distance between i and j. In other words, an edge is critical if it is present in every possible shortest path between i and j.
The trick is in the update step. In Floyd-Warshall, after considering an intermediate vertex k, the recorded distance between i and j either improves or stays the same. These are the two moments at which we need to update the critical edges:
If moving from i to k and then from k to j yields a better distance than the current distance between i and j, then we have found a new shortest path that is (currently) unique. Every critical edge between i and k and between k and j is critical for this path as well, so we take the union of the critical-edge lists of i -> k and k -> j.
If moving from i to k and then from k to j matches the known shortest distance, then we have found an alternative shortest path. The critical edges of this path are the same union as in the previous case. However, since the currently known shortest paths remain valid, we must take the intersection of the current list of critical edges with the new union: the intersection describes exactly the edges that are common to all shortest paths, old and new.
Floyd-Warshall still needs O(N^3) checks to work, but each check might need O(N) iterations to maintain the lists of critical edges (note that a shortest path never has more than N-1 edges), so the approach is O(N^4). In practice you will want to use a set data structure, whose operations might add a logarithmic factor, for a complexity of O(N^4 log(N)). The following code takes 0.1 seconds on the worst system test case on TopCoder’s servers:
vector<int> find(int N, vector<int> A, vector<int> B, vector<int> time) {
    const int INF = 1000000000;
    int M = A.size();
    vector<vector<int>> d(N, vector<int>(N, INF));
    // crit[i][j]: the set of critical roads between i and j.
    // set_union / set_intersection are assumed to be helpers that take
    // and return whole set<int>s (the std:: algorithms of the same name
    // operate on iterator ranges):
    vector<vector<set<int>>> crit(N, vector<set<int>>(N));
    for (int i = 0; i < N; i++) {
        d[i][i] = 0;
    }
    for (int i = 0; i < M; i++) {
        // initially, the direct roads connecting two intersections are critical:
        d[A[i]][B[i]] = time[i];
        crit[A[i]][B[i]] = { i };
        d[B[i]][A[i]] = time[i];
        crit[B[i]][A[i]] = { i };
    }
    for (int k = 0; k < N; k++) {
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                if (d[i][k] + d[k][j] <= d[i][j]) {
                    if (d[i][k] + d[k][j] < d[i][j]) {
                        // strictly better path: new crit
                        crit[i][j] = set_union(crit[i][k], crit[k][j]);
                        d[i][j] = d[i][k] + d[k][j];
                    } else {
                        // equally good alternative path:
                        // intersect the two critical lists
                        auto tm = set_union(crit[i][k], crit[k][j]);
                        crit[i][j] = set_intersection(crit[i][j], tm);
                    }
                }
            }
        }
    }
    vector<int> res(M);
    for (int i = 0; i < N; i++) {
        for (int j = i + 1; j < N; j++) {
            for (int x : crit[i][j]) {
                res[x]++;
            }
        }
    }
    return res;
}
FiringEmployees
Problem Details
The company is a tree, so let’s think recursively. Each employee who is also a manager is a node with some children: the employees who must be fired in case the manager is fired. Let’s define f(x) as the maximum profit we can make with regard to employee x.
In case the employee is not the CEO, we can fire them. In this case the profit is 0, and we also fire all of x’s subtrees of employees, which is the same as setting them all to 0.
Or we can decide to keep the employee. In this case the profit is productivity[x] - salary[x], but we also need to consider the profit from the employees they manage (if any), each of whom can independently be fired or kept. It follows that we add up f(y) for each employee y who is a direct subordinate of x.
The maximum of those two cases is the result for f(x). This is a simple recursion, and we can use iterative dynamic programming to calculate it even for the maximum number of employees (2501). It is very useful that the input is topologically sorted, which means that whenever someone manages someone else, the manager’s index is smaller than the subordinate’s. So we can fill in f[x] in decreasing order of x until we reach x = 0, always using the f[y] already computed for employees with larger indices.
int fire(vector<int> manager, vector<int> salary, vector<int> productivity) {
    int n = manager.size() + 1;
    vector<list<int>> g(n);
    for (int i = 1; i < n; i++) {
        g[manager[i - 1]].push_back(i);
    }
    vector<int> f(n); // the dynamic programming table
    for (int x = n - 1; x >= 0; x--) {
        f[x] = -1000000000;
        if (x != 0) {
            // fire everyone (don't fire the CEO, probably a bad idea)
            f[x] = 0;
        }
        // keep this employee; we can do something about those who depend on them
        int prof = 0;
        if (x != 0) {
            prof = productivity[x - 1] - salary[x - 1];
        }
        for (int y : g[x]) {
            prof += f[y];
        }
        // use the maximum
        f[x] = std::max(f[x], prof);
    }
    return f[0];
}