
May 9, 2020 Single Round Match 785 Editorials

Match Overview

Did you know that, in addition to being Europe Day, the 9th of May is also the world’s Moscato day *and* bellydance day? Well, neither did I, but I had to open this editorial somehow, right? (Having this knowledge, one cannot avoid wondering whether it is a coincidence they fall on the same day…)

A few minutes before the match, people reported problems logging into the arena (I had such problems too), but it was too late to do anything, so we decided to continue with the match. Hopefully enough people managed to get in (with several tries it seemed to work)! We had a total of 829 registrants, among them 10 current targets. The match promised to be interesting!

Most people in Div1 started from the Easy (275) problem, but a few (approximately one in every 25 people or so) decided to go for the Medium (500) and Hard (950). Almost all of the Div2 participants, on the other hand, followed the usual solving pattern: Easy -> Medium -> Hard.

Around two minutes after the start of the competition, the first submissions in Div2 started pouring in. Div1 participants needed more time to deal with their Easy, with the first submission coming around 4:30 into the match. Around 10 minutes after the start we saw the first submissions to the Div2 Medium as well. Interestingly enough, the first submission to the Div1 Hard came just a couple of minutes after the first submissions to the Medium – around 17 minutes into the match (the author of that submission, however, had to re-submit later on). Around 5 minutes later, tourist opened his first problem. Around 38 minutes into the match, lyrically submitted their final problem (going Hard -> Easy -> Medium). Let’s see if they’ll pass!

Gennady (tourist) had a totally different strategy: come fashionably late to the competition (starting around 20 minutes after everyone else), solve all three problems, and finish first. This seemed like a good strategy until he had to resubmit his solution to the Hard, then again 51 seconds before the end of the match. He did, however, pick up 3 successful challenges.

The challenge phase proved to be rather eventful, with many 275s and 950s falling in Div1, and also quite a few 500s in Div2.

Div2 250: EllysPalMulDiv2

Used as: Division Two – Level One:

Value 250
Submission Rate 255 / 292 (87.33%)
Success Rate 242 / 255 (94.90%)
High Score cybertron1609 for 249.86 points (0 mins 41 secs)
Average Score 221.06 (for 242 correct submissions)

The first problem in the SRM 785 set was: given an integer X, find another integer Y such that the product X * Y is a palindrome. This was an easier version of Div1’s EllysPalMul, the only difference being that the allowed range for Y was only [1, 1000]. Because of that, it was quite feasible to simply try all values of Y and find the first one that makes X * Y a palindrome.

In terms of code, we need a function to check if a number is a palindrome, which isn’t that hard (especially in Python) and in Java/C++ can be done using the following code:

boolean isPal(int num) {
    int rev = 0;
    for (int tmp = num; tmp > 0; tmp /= 10)
        rev = rev * 10 + tmp % 10;
    return num == rev;
}

The rest of the problem is iterating over all possible values of Y and finding the first one that works (or returning -1 if none of them do):

public int getMin(int X) {
    for (int Y = 1; Y <= 1000; Y++) {
        if (isPal(X * Y)) {
            return Y;
        }
    }
    return -1;
}

See the Div1’s EllysPalMul for a harder version if you like mind challenges!

Div2 500: EllysConjectureDiv2


Used as: Division Two – Level Two:

Value 500
Submission Rate 211 / 292 (72.26%)
Success Rate 74 / 211 (35.07%)
High Score cybertron1609 for 490.35 points (4 mins 0 secs)
Average Score 311.44 (for 74 correct submissions)

There actually isn’t a Div1 version of this problem, but I wanted the naming to be consistent. The problem is as follows. First, it gives the following algorithm: start with a number X; if it is even, divide it by two; if it is odd, add three to it; repeat until reaching a number you’ve already seen (which we call “the result” for this X). The task was to find the sum of the results for all starting numbers X in some interval [L, R], with L and R up to 1,000,000,000.

This time, iterating over all X and simulating the process wouldn’t cut it – although simulating the procedure for a single number is relatively fast, doing it for the whole interval [1, 1000000000] would be way too slow (taking around 10-15 minutes).

Although the Collatz conjecture is still unproven, Elly’s conjecture is much simpler to prove. (By the way, the only difference from the original one is that in the odd case we multiply by 1 and add 3, instead of multiplying by 3 and adding 1.) While the numbers in the Collatz conjecture can grow arbitrarily, the ones in Elly’s conjecture quickly become very small. In fact, starting from any X, we can never exceed X + 3.

The main observation is that in every two steps we do at least one division (thus, the number roughly halves). Indeed, if the number is even, we have a division right away; if it is odd, adding 3 makes it even, forcing a division in the next step. So, starting from any X, we can find the result for this X in O(log(X)) steps.

Let’s see what the possible results are. We claim that the result can be at most 6. Among the numbers 1…6, the only one that leaves this range is 5, which yields 8, which becomes 4 in the next step – and we are back in the [1, 6] range again. Numbers larger than 6 strictly decrease every two steps (true, since we have at least one division in those two steps, and the number can grow by at most 3) – thus, they eventually fall into the [1, 6] range as well.

Okay, having said that, the actual solution I expected from most participants was to print the answers for small values of X and find a pattern. The first 20 results (for X = 1…20) are: 1, 2, 3, 4, 4, 6, 4, 4, 6, 4, 4, 6, 4, 4, 6, 4, 4, 6, 4, 4. Do you see it? If we exclude the results for 1, 2, and 3, the results for numbers larger than 3 are the repeated pattern (4, 4, 6). Thus, for any X > 3 we have result(X) = 6 if X % 3 == 0 and result(X) = 4 otherwise. This is actually true for all X in [4, 1000000000].
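This pattern is easy to reproduce with a direct simulation. A minimal sketch (the class and method names here are my own, purely for illustration):

```java
import java.util.HashSet;
import java.util.Set;

public class EllysPattern {
    // Simulate Elly's process for a single X: halve if even, add three if odd,
    // and stop at the first number we have already seen -- "the result" for X.
    static int result(int x) {
        Set<Integer> seen = new HashSet<>();
        while (seen.add(x))
            x = (x % 2 == 0) ? x / 2 : x + 3;
        return x;
    }

    public static void main(String[] args) {
        // Print the results for X = 1..20; after the first three values
        // the sequence settles into the repeated (4, 4, 6) pattern.
        for (int x = 1; x <= 20; x++)
            System.out.print(result(x) + (x < 20 ? ", " : "\n"));
    }
}
```

Running it reproduces the sequence quoted above, starting 1, 2, 3, 4, 4, 6, 4, 4, 6, …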

One should take care handling the small cases (X = 1…3), as not doing so can easily lead to a failed problem (and, quite likely, +50 points for someone else).

public long getSum(int L, int R) {
    long ans = 0;
    for (; L <= R && (L < 6 || L % 3 != 0); L++)
        ans += L < 4 ? L : 4;
    long cnt6 = (R - L + 3) / 3;
    long cnt4 = (R - L + 1) - cnt6;
    return ans + cnt4 * 4 + cnt6 * 6;
}

As you can see, the final code is even simpler than the one for the 250 – however, this does not include the code used to print the results for small inputs and discover the pattern.

Coming up with a mathematical proof for this is one way to be sure our solution is correct. During time-constrained competitions, however, it is often quicker to have another approach – so let’s learn that instead!

First, you can verify your observation for all numbers below some bound you feel comfortable with (say, 1,000,000), which can be done quite quickly (a few seconds), and submit. After that pesky timer has stopped decreasing your score, run the check for all numbers in [1, 1000000000] (which finishes in several minutes) to be sure your solution is correct – or to have a very evil test case for the challenge phase if it isn’t.
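As a sketch of that verification loop (the bound, class name, and error message are my own choices), one might write:

```java
import java.util.HashSet;
import java.util.Set;

public class ConjectureCheck {
    // The same brute-force simulation: halve if even, add three if odd,
    // until reaching the first repeated number.
    static int simulate(int x) {
        Set<Integer> seen = new HashSet<>();
        while (seen.add(x))
            x = (x % 2 == 0) ? x / 2 : x + 3;
        return x;
    }

    public static void main(String[] args) {
        // Cross-check the conjectured closed form against the simulation
        // for every X up to 1,000,000 (runs in a few seconds).
        for (int x = 1; x <= 1000000; x++) {
            int expected = (x <= 3) ? x : (x % 3 == 0 ? 6 : 4);
            if (simulate(x) != expected)
                throw new AssertionError("Pattern breaks at X = " + x);
        }
        System.out.println("Pattern verified up to 1,000,000");
    }
}
```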

Div2 1000: EllysNimDiv2

Used as: Division Two – Level Three:

Value 1000
Submission Rate 24 / 292 (8.22%)
Success Rate 16 / 24 (66.67%)
High Score Redemptioner for 809.91 points (14 mins 29 secs)
Average Score 580.66 (for 16 correct submissions)

This was also an easier (much easier) version of the Div1 hard problem. There were two differences: N was up to 50 (instead of 100) and all numbers Ai were up to 1000 (instead of 1,000,000,000). This allowed many things to work (like the backtrack-greedy solution which is described in the Div1’s EllysNim analysis). However, the expected most-common solution for this version of the problem was based on Dynamic Programming.

First things first, let’s decode the problem statement. In order for the second player to win a “normal-play” game of Nim (which is what Elly and Kris are playing), the bitwise XOR of the numbers must be zero. In Division 2 this was a much bigger blocker than in Div1, as I expected many people would not know this (but it can easily be found online, as the game of Nim is *very* fundamental and popular).
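For readers meeting this for the first time, the well-known condition can be illustrated in a few lines (the helper name below is my own):

```java
public class NimSum {
    // In normal-play Nim, the second player wins exactly when the XOR
    // (the "Nim-sum") of all pile sizes is zero.
    static boolean secondPlayerWins(int[] piles) {
        int nimSum = 0;
        for (int p : piles)
            nimSum ^= p;
        return nimSum == 0;
    }

    public static void main(String[] args) {
        System.out.println(secondPlayerWins(new int[]{1, 2, 3}));  // true: 1^2^3 == 0
        System.out.println(secondPlayerWins(new int[]{1, 2, 4}));  // false: 1^2^4 == 7
    }
}
```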

With this knowledge, the problem statement can be restated as follows: given a set of N numbers A[i], increment some of them (potentially none, or all) so that their XOR becomes zero. Do this with the least possible total sum of increments.

We need a dynamic programming solution which keeps track of the following information (state):

  1. Which element of A we are currently at (we have decided by how much to increase all of the previous ones).
  2. What is the current XOR of all previous (already-decided) elements.

It doesn’t make sense to increment the elements too much – in fact, we can prove that none of them is increased by more than 1024 in an optimal solution. This follows from the fact that none of the input numbers has the 2^10 = 1024 bit set initially (they are all at most 1000). If we increment an element enough so that this bit becomes 1, then all of its lower bits become zero. This allows a solution costing strictly less than 1024 for the rest of the problem, so we never need to make the current element any larger (see Div1’s EllysNim analysis for why this is true).

Now that we know every element stays at most 2024 (1000 + 1024) in an optimal solution, we can have a DP table with state [50][2048]. The first dimension is the index of the current number we are changing, and the second one is the XOR of the previous numbers. Please note that although the elements can be at most 2024, their XOR can be larger – up to 2047, inclusive.

Inside the DP function, we need to check all possible increments for the current number (0, 1, …, 1024) and call the DP further on with the next index and the updated XOR.

After the last element, we check if the XOR is zero – if it is, then the current configuration of increments is good and we return 0 (no more increments to do). If it is not, then the current configuration is not good and we return infinity (to indicate we cannot finish the task with a zero XOR).

In terms of code, the whole solution looks as follows:

import java.util.Arrays;

public class EllysNimDiv2 {
    private final int LIM = 2048;

    int[] a;
    int[][] dyn;

    int recurse(int idx, int cur) {
        if (idx == a.length)
            return cur == 0 ? 0 : 1000000001;
        if (dyn[idx][cur] != -1)
            return dyn[idx][cur];

        int ans = 1000000001;
        for (int add = 0; add <= 1024; add++) {
            ans = Math.min(ans, recurse(idx + 1, cur ^ (a[idx] + add)) + add);
        }
        return dyn[idx][cur] = ans;
    }

    public int getMin(int[] A) {
        a = A;
        dyn = new int[a.length][LIM];
        for (int i = 0; i < a.length; i++)
            Arrays.fill(dyn[i], -1);
        return recurse(0, 0);
    }
}
In terms of complexity, we have O(N * maxA) states, with O(maxA) operations for each state. Thus, the total complexity is O(N * maxA^2). In terms of operations, that is 50 * 2048 * 1024 = around 100 million operations – low enough to fit nicely in the time limit.

Div1 275: EllysPalMul

Used as: Division One – Level One:

Value 275
Submission Rate 243 / 289 (84.08%)
Success Rate 101 / 243 (41.56%)
High Score lyrically for 267.67 points (4 mins 43 secs)
Average Score 180.28 (for 101 correct submissions)

The problem EllysPalMul presented meet-in-the-middle in a very simple form. Instead of iterating over all 1 billion options for Y, we will exploit the fact that X * Y should be a palindrome, and palindromes are… well, we can kinda split them in the middle.

Since X is in [1, 100000] and Y should be in [1, 1000000000], the maximal palindrome we can get for *any* input is less than 100,000,000,000,000, thus having at most 14 digits. But since the palindromes are determined by the first half of their digits, we have no more than 10,000,000 palindromes which are potential answers for our X * Y value. Ten million isn’t that much, so we can try them all! For each candidate palindrome P we need to see if it is divisible by X, and if it is – whether P / X = Y is in the [1, 1000000000] interval. We find the lowest Y that does the trick and we are done!

In terms of code, this can be implemented in the following way:

public class EllysPalMul {
    private final int INF = 1000000001;

    private int tryPal(int x, int left, int right) {
        long pal = left;
        for (; right > 0; right /= 10)
            pal = pal * 10 + right % 10;
        if (pal % x == 0 && pal / x >= 1 && pal / x <= 1000000000)
            return (int)(pal / x);
        return INF;
    }

    public int getMin(int X) {
        int ans = INF;
        for (int half = 1; half <= 10000000; half++) {
            ans = Math.min(ans, tryPal(X, half, half));
            ans = Math.min(ans, tryPal(X, half, half / 10));
        }
        return ans == INF ? -1 : ans;
    }
}

Div1 500: EllysTwoRatings

Used as: Division One – Level Two:

Value 500
Submission Rate 100 / 289 (34.60%)
Success Rate 88 / 100 (88.00%)
High Score lyrically for 468.04 points (7 mins 31 secs)
Average Score 328.16 (for 88 correct submissions)

This was a relatively standard problem. Most people in Div1 should almost immediately recognize the dynamic programming in it. There was some optimization involved (both for time and memory), but other than that it was standard.

The problem, in short, is as follows: given two integers A and B between 1 and 1000, inclusive, we start changing them (alternatingly) by at most 100 in either direction, keeping them in the [1, 1000] range. Each allowed new value is chosen with equal probability. What is the probability of them becoming equal at some point during the 2*N changes?

The most trivial DP will have a state of [2*N][1000][1000]. The first dimension is which iteration we are at. The second dimension is the current value of A. The third dimension is the current value of B.

Since we need the answer with double precision, this requires around 800 megabytes of memory, which is over the allowed 256MB. Luckily, we always change the value in the first dimension by one (incrementing the current iteration), so an easy fix is to make the solution iterative and use a [2][1000][1000] state instead (re-using the memory for each sequential iteration).

For each of the 2 * N * 1000 * 1000 states we need to change either A or B to some new value, which is up to 100 away from the current one. We can do this with a for-loop; however, this would be too slow, as 2 * 52 * 1000 * 1000 * 200 is around 20 billion operations – way more than we can do in the 2 seconds we have (we can safely run around 100-200 million operations in that timeframe).

Okay, how to make it faster? Without loss of generality, let’s say at the current iteration we are changing A. To fill the DP table, for any value A in [1, 1000] (which we’ll call curA) we need to consider all values newA in [max(1, curA-100), min(1000, curA+100)]. For example:

  • For curA = 123 we need the sum of DP[iter-1][newA][B] for all newA in [23, 223].
  • For curA = 124 we need the sum of DP[iter-1][newA][B] for all newA in [24, 224].

But 198 of the elements in the intervals [23, 223] and [24, 224] overlap! We can surely re-use that. For curA = 1, 2, …, 499, 500, 501, … 999, 1000 we’ll have the intervals [1, 101], [1, 102], …, [399, 599], [400, 600], [401, 601], … [899, 1000], [900, 1000]. Instead of calculating these sums over and over again, we’ll start with the interval [1, 101] for curA = 1 and add/remove (up to) one element from the back/front each time we go to the next curA, completely removing the inner cycle.

With this optimization (“inner cycle optimization”) we achieve amortized constant complexity for each state, needing roughly 104,000,000 operations – fitting quite nicely in the time limit!

One implementation of this (sorry for my Java) is as follows:

public class EllysTwoRatings {
    final static private int RAT = 1001;

    public double getChance(int N, int A, int B) {
        double[] dp = new double[RAT * RAT * 2];
        for (int i = 0; i < RAT; i++)
            dp[i * RAT + i] = dp[RAT * RAT + i * RAT + i] = 1.0;
        for (int rem = 1; rem <= N; rem++) {
            for (int who = 1; who >= 0; who--) {
                int whoOff = who == 0 ? RAT * RAT : 0;
                for (int other = 1; other < RAT; other++) {
                    double sum = 0, cnt = 0;
                    for (int r = 1; r <= 100; r++) {
                        sum += dp[whoOff + other * RAT + r];
                        cnt += 1.0;
                    }
                    for (int my = 1; my < RAT; my++) {
                        if (my - 101 >= 1) {
                            sum -= dp[whoOff + other * RAT + my - 101];
                            cnt -= 1.0;
                        }
                        if (my + 100 < RAT) {
                            sum += dp[whoOff + other * RAT + my + 100];
                            cnt += 1.0;
                        }
                        dp[who * RAT * RAT + my * RAT + other] = my == other ? 1.0 : sum / cnt;
                    }
                }
            }
        }
        return Math.round(dp[A * RAT + B] * 1000000000000.0) / 1000000000000.0;
    }
}

Please note that compressing the multi-dimensional array into a single dimension reduces the runtime significantly in Java, hence the implementation above.

Div1 950: EllysNim

Used as: Division One – Level Three:

Value 950
Submission Rate 25 / 289 (8.65%)
Success Rate 1 / 25 (4.00%)
High Score mochalka for 787.68 points (13 mins 29 secs)
Average Score 787.68 (for 1 correct submission)

This was an odd task. It seemed standard, then not. It seemed harder than it is, then much easier. In the end it turns out to be somewhere in the middle – after figuring out a very special set of test cases.

The first thing the contestants had to realize was that in order for the second player to be able to win this version of Nim (“normal play”), the XOR of the numbers of stones in the piles had to be zero. This should be fairly well-known to people in Div1, so it was more of a formality there (less so for Div2).

With this knowledge, the problem statement can be restated as follows: given a set of N numbers A[i], increment some of them (potentially none, or all) so that their XOR becomes zero. Do this with the least possible total sum of increments.

If the XOR of the given numbers is already zero we’re done and can return 0.

If it isn’t, then there is at least one bit which is set in an odd number of the elements A[i]. Let’s take any element which *doesn’t* have it set. We can increment that element until the bit becomes one, fixing the XOR at that bit. We potentially change the XOR of the less-significant bits, but we don’t change that of the more-significant ones. Thus, if we start from the most-significant bit with non-zero XOR and proceed to less-significant ones, this will solve the problem. We still don’t guarantee this is the optimal solution, but it is a solution.

What happens if there is more than one element of A which has a zero in the respective bit? Which one should we increment? It turns out it is always optimal to choose the largest one, ignoring all bits more significant than the current one. This is the greedy part of the task. Why is this true? Take two elements with values X and Y (ignoring higher bits), both of which have 0 at the current bit. If X == Y, it doesn’t matter which one we choose. Without loss of generality, let X < Y. If we decide to increment X, at some point it will become equal to Y. At that point we could have continued from Y instead, achieving the same but without having changed X – so changing X can be sub-optimal, since we can no longer change X to some other number K < Y at a later point (and we have also used more increments than we might have needed).

Finally, why did we choose elements with a zero in this bit – why not change elements which have a 1? Well, since we iterate and “fix” bits from most-significant to least-significant, all higher bits are already “okay” – there is an even number of 1s in each more-significant bit. If we increment one of the numbers with a 1 in the current bit so that it becomes zero, then one (or more) of the higher bits becomes broken and has to be fixed again. There is always a way to achieve an optimal answer without ever changing a 1 to a 0 in the currently evaluated bit.

Or, at least, that would be true if N were even. If it is not, it is not *always* true – it is mostly true, except in the case when N is odd and there is a bit in which *all* numbers have a 1. This is the special set of cases which makes the problem much harder. In that case there is no element with a zero which we can increment. What do we do then?

One option (when facing an all-ones bit with N odd) is to try changing each of the numbers to have a 0 in this bit and recursively solve with the same greedy+backtrack. This exploits another observation – once we “fix” such an all-ones bit, the element we fixed it with has zeroes in all of its lower bits – thus, the same situation cannot occur in *any* of the lower bits, as we’ll have an element with a zero in all of them!

The bad news is that it *can*, however, happen in more-significant bits. Specially-crafted test cases can make such backtracking exponential. For example:

253 (base 10) = 11111101 (base 2)
251 (base 10) = 11111011 (base 2)
247 (base 10) = 11110111 (base 2)
239 (base 10) = 11101111 (base 2)
223 (base 10) = 11011111 (base 2)
191 (base 10) = 10111111 (base 2)
127 (base 10) = 01111111 (base 2)

Although all more-significant bits have an even number of ones (and we don’t need to change them), the least-significant bit is of “the bad type” – it has an odd number of ones and no zero to fix it with. No matter which of the numbers we choose to change, we create a more-significant bit of “the bad type”.

Okay, what do we do then? We said that once we change a bit (no matter whether from 0 to 1 or from 1 to 0) in an element, *all* of its less-significant bits become zero and we no longer have the nasty scenario with all-ones bits there. We need to exploit that observation further. Since the initial XOR is not zero, there is a most-significant bit we need to change. Well, if we brute-force which bit that is (having O(N * log(1,000,000,000)) options), then in the rest of the problem we either:

  1. Don’t have the nasty scenario, which we already know how to solve.
  2. We have the nasty scenario, which means the bit we changed is not the most-significant one and we can abandon it directly.

After all the thinking, the implementation turns out to be rather simple – we brute-force over all numbers and all bits, looking for the most-significant bit we need to change, then run a simple greedy. Nice, right?

import java.util.Arrays;

public class EllysNim {

    private long greedy(int[] A) {
        A = Arrays.copyOf(A, A.length);
        long ret = 0;
        for (int bit = 30; bit >= 0; bit--) {
            int idx = A.length - 1;
            while (idx >= 0 && (A[idx] & (1 << bit)) != 0)
                A[idx--] &= (1 << bit) - 1;
            if ((A.length - idx) % 2 == 0) {
                if (idx < 0)
                    return 1000000000000000001L;
                ret += (1 << bit) - A[idx];
                A[idx] = 0;
            }
        }
        return ret;
    }

    public long getMin(int[] A) {
        long ans = greedy(A);
        for (int i = 0; i < A.length; i++) {
            for (int bit = 0; bit <= 30; bit++) {
                if ((A[i] & (1 << bit)) == 0) {
                    int add = (1 << bit) - (A[i] & ((1 << bit) - 1));
                    A[i] += add;
                    ans = Math.min(ans, greedy(A) + add);
                    A[i] -= add;
                }
            }
        }
        return ans;
    }
}

We have O(N) for trying all numbers, with O(log(Ai)) for each of their bits (this is the brute-force for the most-significant bit we need to change). Each of these spawns an instance of the greedy. The greedy can be implemented in several ways, but since we don’t care much about efficiency here, there is a relatively simple O(N * log(Ai)) implementation, with which the total complexity becomes O(N^2 * log^2(Ai)).


Guest Blogger

