Challenge Overview
Problem Statement
NOTE for former competitors: This contest is a repost; however, we have made changes to the problem statement. Please read it carefully.

Background

The customer is seeking to optimize a dynamic fluid algorithm to better model and predict characteristic efficiencies for solutions that are ultimately tested and brought to market. The current solution exists in Octave/Matlab, and the customer is looking for a refactored algorithm in an alternative software language that computes at a significantly higher speed while remaining accurate.

Prizes

1st: $7,000
2nd: $4,000
3rd: $2,000
4th: $1,000

Milestone prizes:
You can win multiple milestone prizes, even with one submission. Please be aware that evaluation of the submissions is a manual process for this contest; if you make multiple submissions between two evaluations, only your last submission date counts.

Task

Your task in this contest is to improve the performance of the customer's benchmark solution while maintaining its overall accuracy. The simulation is of a certain class of fluid-flow problems. Therefore, the simulation needs to match the benchmark solution in several variables (e.g. concentration, velocity) at several time points. Competitors do not have to implement the "relative stopping criteria" and can ignore any references to "stoprel" or "plotfn".

The algorithm that needs to be tuned is proprietary code written in Octave/Matlab. The simulation utilizes the Lattice Boltzmann Method (LBM) of fluid flow. You can find the following resources useful:

The solution should be GPU-optimized code. Evaluation will be done on Standard_NC12 Azure virtual machines.

Specification of the docker host VM:
- Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1035-azure x86_64)
- Docker version 18.09.0, build 4d60db4
- NVIDIA Docker 2.0 installed
- NVIDIA Driver Version: 410.79
- CUDA Version: 10.0

Benchmark solution

You can download the current Octave/Matlab solution from here. You can check out the current solution by following these steps:
octave --no-gui --eval "lbmst('<<test case name>>.json')" --no-line-editing -V -f
Input files

You can find the training data here. The .json file is the main input deck specifying the dimensions of the domain, boundary conditions, material characteristics, and reporting information. Among the details are domain specifications, which are in named binary files described next. A detailed description of the .json input file specification can be found in the APPENDIX at the end of the problem statement.

All .dt files are binary files containing the domain values at each lattice point. All binaries are arrays of double-precision floats whose dimensions depend on the variables (see the JSON file definition above). For a complete understanding of the input files, please refer to the benchmark solution.

Output files

At specified time increments, the current states of the field variables (e.g. density) are written to disk as binary files in the same format as the initial conditions in the .dt files. At every time increment (or less often, depending on the sampling frequency), various summary metrics are calculated and written to a CSV file. You may find this a convenient first check as you implement new capability. Both CSV and DT files need to be created. For a complete understanding of the output files, please refer to the benchmark solution.

Submission

There will be 22 training test cases, 14 provisioning test cases, and 7 system testing test cases. System testing will include both provisioning and system testing test cases. The timeout for running provisioning test cases is 50 minutes; for system testing it is 70 minutes. The benchmark score for training data is 1,245,589. For provisioning it is 2,182,534, and for system testing it is 1,387,444. See scoring details below in the Scoring section.

For manual evaluation

Your solution must be able to be evaluated by building and running a docker image.
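As a quick sanity check on the .dt format described above, here is a minimal Python sketch of reading such a file as a flat array of doubles. The little-endian byte order and the row/column ordering of the array are assumptions; the benchmark Octave code (its fread/fwrite calls) is the authoritative reference for the exact layout.

```python
import struct

def read_dt(path, count):
    """Read 'count' double-precision floats from a .dt binary file.

    Assumes little-endian IEEE 754 doubles packed back to back;
    verify against the benchmark solution's fread/fwrite calls.
    """
    with open(path, "rb") as f:
        data = f.read()
    return list(struct.unpack("<%dd" % count, data[: count * 8]))

# Round-trip demo with a tiny synthetic file:
import os, tempfile
vals = [0.0, 1.5, -2.25, 3.75]
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".dt")
tmp.write(struct.pack("<4d", *vals))
tmp.close()
assert read_dt(tmp.name, 4) == vals
os.unlink(tmp.name)
```

A reader like this is handy for diffing your solver's output .dt files against the benchmark's while developing.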
We're going to invoke the following docker command:

docker build -t "$YOUR_DOCKER_IMAGE" "$YOUR_SUBMISSION_FOLDER"

For each test case we will invoke:

docker run --runtime=nvidia --name eval -v "$TESTCASE_FOLDER":/test:ro -v "$RESULT_FOLDER":/result "$YOUR_DOCKER_IMAGE"

Running time is going to be extracted the following way:

START=$(docker inspect --format='{{.State.StartedAt}}' eval)
STOP=$(docker inspect --format='{{.State.FinishedAt}}' eval)
START_TIMESTAMP=$(date --date=$START +%s%N)
STOP_TIMESTAMP=$(date --date=$STOP +%s%N)
RESULT=$((($STOP_TIMESTAMP-$START_TIMESTAMP)/1000000))
echo "$RESULT" milliseconds
docker container rm eval

The content of the /test folder is as follows:

|-- test
    |-- *.dt
    |-- <<test case name>>.json

The content of the /result folder should be as follows:

|-- result
    |-- *.dt
    |-- <<test case name>>.csv

For the TopCoder platform

You're required to implement one method, getURL(), that returns a String corresponding to the URL of your answer (.zip) for direct download. The zip file must contain everything we need to build a docker image from your code and evaluate test cases as described above.

|-- submission.zip
    |-- Dockerfile
    |-- *.* (files needed for your solution to build and run)

You may upload your .zip file to a cloud hosting service such as Dropbox, which can provide a direct link to your .zip file. To create a direct sharing link in Dropbox, right-click on the uploaded file and select Share. You should be able to copy a link to this specific file, which ends with the tag "?dl=0". This URL will point directly to your file if you change this tag to "?dl=1". You can then use this link in your getURL() function. You can use any other way to share your result file, but make sure the link you provide opens the file stream directly.

Scoring

Both CSV and DT files need to be created. They are both validated to be correct within a tolerance.
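Since scoring depends on total running time, it is worth understanding the timing extraction above: docker inspect reports container start/finish as RFC 3339 timestamps with nanosecond fractions, and the shell converts both to nanoseconds and divides the difference by 1,000,000 to get milliseconds. A Python sketch of the same computation (the sample timestamps are illustrative, not from a real run):

```python
from datetime import datetime, timezone

def elapsed_ms(started_at, finished_at):
    """Milliseconds between two docker-inspect RFC 3339 timestamps.

    Docker reports nanosecond fractions; we trim to microseconds,
    the finest resolution Python's datetime represents.
    """
    def parse(ts):
        ts = ts.rstrip("Z")
        head, _, frac = ts.partition(".")
        frac = (frac + "000000")[:6]  # keep at most 6 fractional digits
        dt = datetime.strptime(head + "." + frac, "%Y-%m-%dT%H:%M:%S.%f")
        return dt.replace(tzinfo=timezone.utc)
    delta = parse(finished_at) - parse(started_at)
    return int(delta.total_seconds() * 1000)

print(elapsed_ms("2020-01-01T00:00:00.000000000Z",
                 "2020-01-01T00:50:30.500000000Z"))  # 3030500
```

The practical takeaway is that container startup and teardown are inside the measured window, so a lean docker image matters as well as a fast solver.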
The DT files must each have both relative and absolute error less than 1e-6 for each value. The CSV file is checked to produce the same number of lines/steps and the same columns in the same order, and also for the accuracy of all the columns. The CSV file can have a maximum of 1e-2 relative and absolute error for each column, because it may have lower precision.

The following formula is used to check the accuracy:

|x - x̂| < tolerance  OR  (|x| > 1e-9 AND |x - x̂| / |x| < tolerance)

where x is the benchmark value and x̂ is the output of the contestant's solution.

The final score is calculated as follows: if we fail to build or run your submission, you get a score of -1. If your submission fails to produce the right output or fails the accuracy check on any test case, your final score is 10,000 * number_of_test_cases_passing_accuracy_and_output_check. Otherwise, your final score is 5,000,000 - total_time_needed_to_run_all_the_test_cases_in_milliseconds.

Evaluating your submission

Evaluation is going to be done in two parts:
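The accuracy formula above translates directly into a self-check you can run against your own output before submitting; a minimal sketch, with the tolerance values taken from the statement (1e-6 for DT values, 1e-2 for CSV columns):

```python
def within_tolerance(x, x_hat, tol):
    """Accuracy check from the problem statement: absolute error
    under tol, OR (|x| > 1e-9 and relative error under tol),
    where x is the benchmark value and x_hat is yours.
    """
    abs_err = abs(x - x_hat)
    return abs_err < tol or (abs(x) > 1e-9 and abs_err / abs(x) < tol)

# DT values use tol = 1e-6; CSV columns use tol = 1e-2.
assert within_tolerance(1.0, 1.0 + 5e-7, 1e-6)       # passes DT tolerance
assert not within_tolerance(1.0, 1.01, 1e-6)         # fails DT tolerance
assert within_tolerance(1.0, 1.005, 1e-2)            # passes CSV tolerance
```

Note the relative-error branch is guarded by |x| > 1e-9, so near-zero benchmark values are judged on absolute error only.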
Requirements to win a prize

In order to receive a prize, you must do all of the following after system testing is done:
APPENDIX

JSON file format

The JSON file has the following key:value pairs:
This problem statement is the exclusive and proprietary property of TopCoder, Inc. Any unauthorized use or reproduction of this information without the prior written consent of TopCoder, Inc. is strictly prohibited. (c)2020, TopCoder, Inc. All rights reserved.