
Challenge Overview

Introduction

We have run a series of quantum computing learning challenges. Now, let's go beyond those toy problems and tackle a real-world one. We are looking for efficient solutions to the Max-Cut problem using the provided Fujitsu Digital Annealer (DA) platform! We hope to receive wonderful solutions.
Good Luck!

Requirements to Win a Prize

In order to receive a prize, you must do the following:
  1. Achieve a score in the top 5, according to system test results calculated using the test dataset. The score must be higher than 0.0. See the “Data Description and Scoring” section below.
  2. Within 7 days from the announcement of the contest winners, submit a complete 2-page (minimum) report that: (1) outlines your final algorithm and (2) explains the logic behind and steps of your approach. 
If you place in the top 5 but fail to do all of the above, you will not receive a prize; it will be awarded to the contestant with the next best performance who completed the submission requirements above.

Objective

We ran a Max-Cut problem in a previous learning challenge (https://tc3-japan.github.io/DA_tutorial/tutorial-3-max-cut.html). In the Max-Cut problem, there are N nodes and M weighted edges. The goal is to partition the nodes into two parts so as to maximize the total weight of the edges crossing between the two parts.
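To make the objective concrete, here is a small illustrative helper (not part of the challenge API) that evaluates the cut value of a partition; the `side` and `edges` representations are assumptions for this example:

```python
# Hypothetical helper: side[i] is 0 or 1 depending on which part node i
# is in; edges is a list of (u, v, weight) tuples. The cut value is the
# total weight of edges whose endpoints lie in different parts.
def cut_value(side, edges):
    return sum(w for u, v, w in edges if side[u] != side[v])

# Tiny example: a triangle with unit weights. Putting node 0 alone on
# one side cuts the two edges incident to it.
edges = [(0, 1, 1), (1, 2, 1), (0, 2, 1)]
print(cut_value([0, 1, 1], edges))  # -> 2
```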

As mentioned, DA can solve problems with up to 8,192 bits of solution space; for this challenge, use is limited to 1,024 bits.
A typical Max-Cut solver on DA requires n binary variables, each representing whether the corresponding node belongs to the first part. The tutorial challenge covered the Max-Cut problem at a small scale that fits into this solution space. In the real world, however, the number of nodes can grow far beyond what fits into DA's solution space.
In this challenge, we are going to solve the Max-Cut problem on much larger graphs of 2,000 and 5,000 nodes! This requires an algorithm that can handle a number of nodes exceeding the limited solution space, naturally calling the DA API multiple times.
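The binary-variable formulation described above can be sketched as a QUBO matrix. With x_i marking membership in the first part, the cut equals the sum over edges of w_ij * (x_i + x_j - 2 x_i x_j), so minimizing the negated expression maximizes the cut. The dense-matrix convention below (linear terms on the diagonal) is an illustrative assumption; the tutorial's BinaryQuadraticPolynomial builds the same polynomial term by term:

```python
import numpy as np

# Sketch of the standard Max-Cut QUBO. Minimizing x^T Q x (with x_i^2
# = x_i, so diagonal entries act as linear terms) is equivalent to
# maximizing the cut value.
def build_qubo(n_node, edges):
    Q = np.zeros((n_node, n_node))
    for u, v, w in edges:
        Q[u][u] -= w       # linear term  -w * x_u
        Q[v][v] -= w       # linear term  -w * x_v
        Q[u][v] += 2 * w   # quadratic term +2w * x_u * x_v
    return Q
```

For the 2,000- and 5,000-node graphs in this challenge, such a matrix no longer fits into DA's 1,024-bit limit, which is exactly why a decomposition strategy with multiple solveDA calls is needed.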

The goal is to obtain as large a cut value as possible within the timeout window of 3 minutes. If two submissions find the same cut value, the faster execution time wins.

Implementation

Implementation details will be the same as the tutorial challenge #3 - Max Cut. Please review the tutorial at this URL: https://tc3-japan.github.io/DA_tutorial/tutorial-3-max-cut.html 

In brief:
  1. You are asked to implement a function main(n_node, edges) that returns 2 groups of nodes.
  2. You can call the function solveDA(rule, da_params) multiple times, with request and response as described in the tutorial.
  3. Note that your Python script must also contain the BinaryQuadraticPolynomial class in addition to the functions above.
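A minimal skeleton of the required entry point might look like the following. The solveDA function and BinaryQuadraticPolynomial class come from the tutorial and are not reproduced here; the trivial even/odd split is only a placeholder showing the return contract, not a real strategy:

```python
# Hypothetical skeleton: main must return two groups of node indices
# that together cover all n_node nodes. A real solution would build
# sub-problems, call solveDA (from the tutorial) repeatedly, and merge
# the partial partitions.
def main(n_node, edges):
    group1 = [i for i in range(n_node) if i % 2 == 0]
    group2 = [i for i in range(n_node) if i % 2 == 1]
    return group1, group2
```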

Python (3.6) is the only language allowed for this Marathon Match. You may use only the default libraries plus NumPy (1.15.4) and SciPy (1.2.1), which are installed on the execution system.
Create a single Python script file, zip it, and submit it to the Topcoder platform during the submission phase. The platform will run your script and return the results via email. Make sure that the email address registered with Topcoder is accurate and up to date.

Scoring

Scores range from 0.0 (worst) to 100.0 (best). For each test case, we want to maximize the cut value you find. If cut values are equal, the smaller execution time wins.
The time limit for each test case is 3 minutes (180 seconds).
Suppose you find a cut value in T seconds, and the maximum known cut value for the test data set is MAX; the score for that test case is then:
 
score = (your_cut_value - T/180) / MAX * 100.0
 
The total score will be the average of the scores on all tested graphs.
 
In the Provisional Test, we will use 2 graphs, with 2,000 and 5,000 nodes, meaning your main function will be called 2 times, each with a 180-second time limit.
In the System Test, we will use different graphs with 2,000 and 5,000 nodes.
The graphs for all datasets are randomly generated, with the same edge density across all 2,000-node sets and likewise across all 5,000-node sets. All edge weights are 1.
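Since the test graphs are random with unit weights, one way to test locally is to generate similar graphs; the `density` value is a guess, as the actual test density is not disclosed:

```python
import random

# Generate a random unit-weight graph: each of the n*(n-1)/2 possible
# edges is included independently with the given probability.
def random_graph(n_node, density, seed=0):
    rng = random.Random(seed)
    return [(u, v, 1)
            for u in range(n_node)
            for v in range(u + 1, n_node)
            if rng.random() < density]
```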
 
Note:
  • As you can imagine, there is a small amount of pre- and post-processing when calling the DA API that your script does not control. To make scoring fair, this overhead is excluded from the execution time: T = (actual execution time of your script) - (DA API wait time) + (only the annealing time within DA).
  • Also, we will use a slightly higher value than the actual maximum known cut value for the MAX mentioned above, so there is headroom in case you find a better value!
  • Due to how the scoring platform is connected to the DA API, scripts are run one at a time along with other participants' scripts. This means your script is placed in a FIFO queue to run, and you might not get results back immediately.

General Notes

  • This match is rated
  • The allowed programming language is Python only
  • The memory limit is 128 MB
  • The source code size limit is 6 MB
  • Each script needs to run within 180 sec. for each test case
  • Each member can make a maximum of 30 submissions for this challenge
  • You can only submit once per 60 minutes
  • The script must cut the given graph into 2 parts
  • Once you have submitted the script, the system will send you multiple emails. After your script has been executed against DA, the system will send a final email with results from the "TC3-QuantumScorer" reviewer. It contains the overall result and a unique URL from which you can download a file with all the requests/responses of the DA API calls
  • You can include ONLY open source code in your submission, provided it is free for you to use and would be for the client as well. Code from open source libraries under the BSD or MIT licenses will be accepted. Other open source licenses may be accepted too, just be sure to ask us.
  • Use the match forum to ask general questions or report problems, but please do not post comments and questions that reveal information about the problem itself, possible solution techniques or related to data analysis.