Google Code Jam Qualification Round Summary
One of the largest algorithm competitions started last weekend!
Google Code Jam Qualification Round was held from Friday, April 6, 2018, 23:00 UTC to Sunday, April 8, 2018, 02:00 UTC. There were 27 hours to solve 4 problems. The fastest participant managed to solve all of them in under 1 hour.
The contest gathered 24,589 participants from all around the world. Every contestant with >= 25 points advanced to the next round - 14,081 in total. Compared to 2017, which had 25,288 participants (probably more in reality, since people with zero score do not seem to have been counted) and 18,331 advancers, this year's turnout is slightly smaller. However, this may be an insignificant random fluctuation.
What really changed this year are the contest rules, as well as the user interface. I will give a short summary of the changes and how they affect contestants. You can read the official rules here:
https://codejam.withgoogle.com/codejam/resources/faq
https://codejam.withgoogle.com/codejam/terms
In 2008-2017 GCJ had a peculiar contest flow. You had to download test cases, run your solution manually on your own computer, and send the output back. Judge results for the “small” cases were given right away, whereas the “large” verdicts were hidden until the end of the contest. An 8-minute timer started once you downloaded the “large” input; if you failed to submit the answer file in time, you lost the ability to submit the “large” output for that problem altogether.
The 2018 rules are much more like ACM ICPC rules. You submit your code, not the answers. The code is compiled and run on Google’s servers. Each submission is tested on both the “small” and “large” test sets, but only the “small” verdict is shown. There is no limit on the number of submissions, aside from “no more than 10 attempts in any 10-minute window”. Penalty is calculated as 4 * <number of incorrect attempts> + <time of the latest accepted submission>.
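For example, under my reading of these rules, a contestant whose last accepted submission came at the 90-minute mark after two rejected attempts would end up with a penalty of 4 * 2 + 90 = 98 minutes.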
Notable consequences are:
Only a limited set of languages can be used. The most widespread ones are supported: C++, Java, Python, Go. I’ve heard that last year someone used Excel to solve problems; I also saw Rust and OCaml in 2017;
One cannot see tests during the round;
There are time limits;
It seems (although the rules are rather silent on the matter) that one can no longer use threading or parallelization to squeeze suboptimal solutions through. However, the only evidence I saw of someone using such tricks is in Eryx’s code (see first place in https://codejam.withgoogle.com/codejam/contest/5304486/scoreboard):
// This solution includes hidden routines to solve test cases in separate
// processes in order to make it faster. I will update them to run on a
// cluster if I get one ;)
Servers take more time to judge a solution, because it is more than just diff-ing two text files;
Participants with slow internet connections benefit from not needing to download “large” test sets (which can be several megabytes);
There is no rush with the 8-minute timer. I remember a graph problem where I implemented a depth-first search, but my solution crashed on my own computer because of the stack depth limit; the clock was ticking, I was very stressed, and I did not manage to fix it in time (an explicit-stack DFS, sketched right after this list, would have avoided the issue);
The scoreboard is less accurate during the round than under the 2017 rules, because even a contestant who does not intend to solve the “large” test set still submits code for the whole problem and gets points for it;
Interactive problems can now exist.
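As a footnote to the stack-depth story above, here is a minimal sketch of the kind of fix that would have saved me back then: a depth-first search driven by an explicit stack instead of recursion, so the stack depth limit never comes into play. The graph (a path of a million vertices) is invented purely as a worst case for recursion depth; this is an illustration, not code from any actual contest solution.

#include <bits/stdc++.h>
using namespace std;

// Iterative DFS with an explicit stack: no recursion, hence no stack depth limit.
void dfs(const vector<vector<int>>& adj, int start, vector<bool>& visited) {
    vector<int> stack = {start};
    while (!stack.empty()) {
        int v = stack.back();
        stack.pop_back();
        if (visited[v]) continue;
        visited[v] = true;
        for (int u : adj[v])
            if (!visited[u]) stack.push_back(u);
    }
}

int main() {
    int n = 1000000;                      // a long path graph: the worst case for recursion depth
    vector<vector<int>> adj(n);
    for (int i = 0; i + 1 < n; ++i) {
        adj[i].push_back(i + 1);
        adj[i + 1].push_back(i);
    }
    vector<bool> visited(n, false);
    dfs(adj, 0, visited);                 // a recursive DFS would overflow the default stack here
    printf("%d\n", (int)count(visited.begin(), visited.end(), true));   // prints 1000000
    return 0;
}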
The new contest platform has some limitations compared to the old one:
It is not possible to download solutions from the scoreboard;
It is not possible to download tests;
It is not possible to track your friends’ scores;
There were issues with problem submission, but they are fixable and I hope they won’t persist.
The qualification round featured one interactive problem.
You control a robot that can dig soil on a 1000x1000 grid. You tell the robot coordinates <x, y>, and it digs a random cell <x’, y’> such that |x - x’| <= 1 and |y - y’| <= 1. Given an integer A, your task is to dig all the cells of some subrectangle of area A, and nothing outside it, using no more than 1000 orders.
Participants had to submit a program that issued orders of the form <x, y>. Since the orders were not known to the judges beforehand, the problem was interactive: the judges’ program answered with the corresponding <x’, y’> on the fly.
The small test set featured only A = 20; the large test set had A = 200.
8,317 participants managed to solve A = 20, and 6,481 solved the whole problem.
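To make the interactive format more concrete, below is a rough C++ sketch of one natural strategy: cover the target area with 3x3 blocks and keep aiming at the centre of the current block until all nine of its cells are dug, so no order can ever touch a cell outside the chosen rectangle. I am reconstructing the protocol from memory, so the details (reading the number of test cases and A, a “0 0” reply on success, a negative reply on error) and the assumption that digging ceil(A / 9) * 9 >= A cells is acceptable should all be checked against the official statement; this is an illustration of the interaction loop, not a verified reference solution. Note the output flush after every order; forgetting it makes an interactive solution deadlock.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);                              // number of test cases (assumed)
    while (t--) {
        int a;
        scanf("%d", &a);                          // required area, 20 or 200 (assumed)
        int blocks = (a + 8) / 9;                 // how many 3x3 blocks cover the area
        set<pair<int, int>> dug;                  // cells the judge reported as dug
        bool solved = false;
        for (int b = 0; b < blocks && !solved; ++b) {
            int cx = 2, cy = 2 + 3 * b;           // centre of the current 3x3 block
            while (!solved) {
                bool full = true;                 // is this block completely dug?
                for (int dx = -1; dx <= 1; ++dx)
                    for (int dy = -1; dy <= 1; ++dy)
                        if (!dug.count({cx + dx, cy + dy})) full = false;
                if (full) break;                  // move on to the next block
                printf("%d %d\n", cx, cy);        // issue an order...
                fflush(stdout);                   // ...and flush, or the judge stalls
                int x, y;
                scanf("%d %d", &x, &y);           // judge answers with the dug cell
                if (x == 0 && y == 0) solved = true;   // assumed "task complete" signal
                else if (x < 0) return 0;              // assumed error signal
                else dug.insert({x, y});
            }
        }
    }
    return 0;
}

In expectation each block behaves like a coupon-collector over its nine cells and needs about 9 * (1 + 1/2 + ... + 1/9), roughly 25 orders, so even the 23 blocks needed for A = 200 fit comfortably within the 1000-order budget.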
The other 3 problems were more conventional. The first two involved bubble sort, and the last one (also the toughest: only 1,158 people solved the large test set) was about 3D geometry. You can read the full analysis here.
Overall, the round went well. I believe the rules were changed for the better, and we will see more interesting problems.
Good luck to all contestants and see you in Round 1A/B/C!