
Challenge Overview

Welcome to a Marathon Match sponsored by Asahi Kasei Corporation, a major Japanese chemical company with operations around the world.

In this challenge we aim to create a model that can predict the shape deformation and internal stress of a molded product, with high accuracy, given a 3D structural model.

Various simulation techniques have been used to evaluate the deformation behavior of plastics, but these calculations are slow and require repetitive manual work. To save time, some research has explored proxy models based on machine learning, but few such models draw on a large variety of accumulated three-dimensional structural data.

Problem Description

In this problem you will be given a 3D shape, a hollow rectangular box, with walls of some thickness and with some holes cut out of it. Your code should read a model file describing this shape, and then estimate how it will be affected when pressed by a moveable bar. In addition to the displacement, you will also be asked to estimate the maximum stress experienced at each node.

The ultimate output of your code should be a CSV file, which matches the format of the ground truth files, where each line is of the form “node_id,dx,dy,dz,max_stress”. The values “dx”, “dy” and “dz” indicate the node displacement, while “max_stress” indicates maximum principal stress.

Data Description

3D model overview

The model is a rectangular box, with walls of some thickness, and with some holes cut out of it (see figure 1). It is completely fixed on the bottom (figure 2). 
[figure 1]          [figure 2]

This box is pushed by a bar, which moves 7mm with constant force (figure 3).  When this happens, the box deforms (figure 4) and the walls experience stress (figure 5).

[figure 3]          [figure 4]          [figure 5]

Stress is a physical quantity that expresses the magnitude and direction of the force generated inside an object. It is used to examine how heavily the object is burdened, which can cause deformation or breakage.

Data files

3D data file

The 3D data is stored in JSON files. Once parsed, each file yields an object with several keys.
  • nodes: Nodes that make up the 3D model. Each node has a value of “node_id”, “x”, “y” and  “z”. The units of “x”, “y” and “z” are mm.
  • push_elements: Triangular elements describing the surface of the push bar. Each element is composed of the three nodes at the triangle corners. Each element record has values for “element_id”, “idx” and “node_id”; each element_id appears 3 times, once for each index (“idx”), with the corresponding “node_id”.
  • surf_elements: Triangular elements of the box. Each element is composed of 6 nodes, 3 for the points and 3 for the midpoints of each edge. Each element has a value of “element_id”, “idx”, “node_id”, with each element_id appearing 6 times, once for each “idx” as above.
  • nset_fix: The list of node IDs that are fixed in place and will not move. Each list entry has a value for “node_id”.
  • nset_osibou: The node IDs that are part of the push bar that contacts the box. Each list entry has a value for “node_id”.
  • surf_plate: The element IDs of the box that are contacted by the push bar. The data has a value of “element_id” and “command”. The command is always “SPOS”, which means upward.
  • move_node_id: The master node of the push bar, which follows the 7mm push trajectory.
  • config: The thickness of the box walls, in millimeters.
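Putting the key descriptions above together, parsing a data file might look like the sketch below. The miniature inline JSON is purely illustrative (real files are loaded with json.load), and the variable names are ours:

```python
import json

# A miniature example in the documented layout (values are illustrative);
# real files would be read with json.load(open(f"{name}.json")).
raw = """{
  "nodes": [
    {"node_id": 1, "x": 0.0, "y": 0.0, "z": 0.0},
    {"node_id": 2, "x": 1.0, "y": 0.0, "z": 0.0},
    {"node_id": 3, "x": 0.0, "y": 1.0, "z": 0.0}
  ],
  "surf_elements": [
    {"element_id": 10, "idx": 1, "node_id": 1},
    {"element_id": 10, "idx": 2, "node_id": 2},
    {"element_id": 10, "idx": 3, "node_id": 3}
  ],
  "nset_fix": [{"node_id": 1}]
}"""
data = json.loads(raw)

# Node coordinates keyed by node_id (units: mm).
coords = {n["node_id"]: (n["x"], n["y"], n["z"]) for n in data["nodes"]}

# Group surf_elements rows back into elements: each element_id appears
# once per "idx", with the corresponding node_id.
elements = {}
for rec in data["surf_elements"]:
    elements.setdefault(rec["element_id"], {})[rec["idx"]] = rec["node_id"]

# IDs of the nodes that are fixed in place.
fixed = {rec["node_id"] for rec in data["nset_fix"]}
```

The same grouping pattern applies to push_elements (3 rows per element) and to surf_elements in the real files (6 rows per element: 3 corners plus 3 edge midpoints).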

Ground truth

The ground truth CSV file has 5 columns: “node_id”, “dx”, “dy”, “dz” and “max_stress”. The “node_id” corresponds to node_id from the 3D data. The values “dx”, “dy” and “dz” indicate the displacement of the node (mm), and “max_stress” indicates the maximum principal stress (N/mm^2).



Data sets

There are 3136 examples in total, broken down into 4 sets: sample, train, pred and test. The split is as follows:

  • sample: 384
  • train: 1684 (+ sample data)
  • pred: 532
  • test: 536

After registration we send you an email with a link to your dedicated page (we call it the “controller page”). You can download the sample data from there.

Secret data for training and prediction

You can only download the sample data in this challenge. The rest is kept in our secret environment.

To start you will want to use the sample data to develop your solution in your own environment. When you are ready to submit, we will build your solution in our environment, execute the training phase with the secret training data (train), and then execute the prediction phase with the secret provisional data (pred).

The training phase may save models or other results for use in the prediction phase. You can also submit a solution that does not benefit from the training phase, in which case you can skip that phase.


Scoring

Solutions will be scored based on their difference from the ground truth. The per-case score is based on the root mean squared error (RMSE) across all nodes, computed separately for displacement and stress. Mathematically, using a hat to indicate estimates, and with i running over the n nodes:

$$\mathrm{RMSE}_{\mathrm{disp}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[(dx_i-\widehat{dx}_i)^2+(dy_i-\widehat{dy}_i)^2+(dz_i-\widehat{dz}_i)^2\right]}$$

$$\mathrm{RMSE}_{\mathrm{stress}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(s_i-\hat{s}_i\right)^2}$$

The two summarized errors are combined at the end. Since stress and displacement have different units, the stress error is re-weighted so that its median across cases matches that of displacement. This weight w is approximately 0.016.


The per-case score is then transformed as 1 / (1 + RMSE), which maps very high error to scores near 0 and zero error to a score of 1. The overall score is the average of the per-case scores, multiplied by 100. Mathematically, now using j to index the m test cases:

$$\mathrm{score}_j = \frac{1}{1+\mathrm{RMSE}_{\mathrm{disp},j}+w\cdot\mathrm{RMSE}_{\mathrm{stress},j}}$$

$$\mathrm{Score} = \frac{100}{m}\sum_{j=1}^{m}\mathrm{score}_j$$
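The scoring procedure can be sketched as follows. The function names are ours, and the exact placement of the ~0.016 stress weight is a reconstruction of the description above, not the official scorer:

```python
import math

STRESS_WEIGHT = 0.016  # approximate re-weighting stated in the problem

def case_score(truth, pred):
    """Per-case score 1/(1+RMSE); truth and pred are lists of
    (dx, dy, dz, max_stress) tuples, aligned by node."""
    n = len(truth)
    # Displacement error: sum squared differences over dx, dy, dz.
    disp_sq = sum((t[k] - p[k]) ** 2
                  for t, p in zip(truth, pred) for k in range(3))
    # Stress error: squared differences of max_stress.
    stress_sq = sum((t[3] - p[3]) ** 2 for t, p in zip(truth, pred))
    rmse = math.sqrt(disp_sq / n) + STRESS_WEIGHT * math.sqrt(stress_sq / n)
    return 1.0 / (1.0 + rmse)

def overall_score(case_scores):
    """Average of per-case scores, scaled to the 0-100 range."""
    return 100.0 * sum(case_scores) / len(case_scores)
```

A perfect prediction gives a per-case score of exactly 1.0, and an overall score of 100 when every case is perfect.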

In the event of a tie, the submission with the shorter prediction time wins.



You can use Python and shell scripts in this challenge. We will accept code written in other languages (Java, C++, R, etc.), but it will not be eligible for prizes or TCO points.

Example submission

We have provided a working example submission to get you started, available for download as a zip file. It follows all of the requirements of a working submission, including the model handoff from the training phase to the prediction phase. The rest of this section describes the submission structure in more detail.

Prepare shell scripts

When we run your Docker image, we call the following commands. Your image needs to make sure these exist and are executable:

  • For training: /code/
  • For prediction: /code/

We run your Docker container with the scripts above. If you need to save files in one phase and read them in another, save them under /tmp; when we run your container, we mount a local volume at /tmp.

If you want to submit a pre-trained model and don’t need to train on the secret data, you don’t need to implement the training process in the training script.

Mounted volumes and files

We mount secret data directories to the following paths for your training and prediction scripts to read from and write to.

For the training phase you should:
  • Read input 3D data for training from: /data/input/train
  • Read ground truth for training from: /data/gt/train
  • Save models for use in prediction to: /tmp
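A minimal training script along these lines might look like the sketch below. The function name train, the handoff file name model.pkl, and the stand-in "model" dict are all illustrative, not part of the official interface; the default paths are the mount points listed above:

```python
import csv
import json
import os
import pickle

def train(in_dir="/data/input/train", gt_dir="/data/gt/train",
          model_dir="/tmp"):
    """Pair each 3D data file with its ground-truth CSV, fit a model,
    and save it under model_dir for the prediction phase."""
    samples = []
    for fname in sorted(os.listdir(in_dir)):
        name = os.path.splitext(fname)[0]
        with open(os.path.join(in_dir, fname)) as f:
            model3d = json.load(f)
        # Ground truth rows: node_id,dx,dy,dz,max_stress.
        with open(os.path.join(gt_dir, name + ".csv")) as f:
            gt = list(csv.DictReader(f))
        samples.append((model3d, gt))
    # ... fit a real model on `samples` here ...
    model = {"n_train_cases": len(samples)}  # stand-in for a trained model
    with open(os.path.join(model_dir, "model.pkl"), "wb") as f:
        pickle.dump(model, f)
    return model
```

Whatever is written to /tmp here (a pickle, checkpoint files, etc.) is what the prediction phase will see mounted at the same path.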

Prediction phase

For the prediction phase you should:
  • Read models from training from: /tmp
  • Read input 3D data for prediction from: /data/input/pred
  • Write output predictions to: /data/output/pred

For each data file named $NAME (e.g., 00001), we provide the following data:
  • $NAME.json (e.g., 00001.json): the 3D data file.
  • $NAME.csv (e.g., 00001.csv): the ground truth, only available during training.

Predicted data

You must output your prediction in the following format:

  • Prediction file must be named /data/output/pred/$NAME.csv
  • The CSV should have the same 5 columns as the ground truth files, and only those columns. It should also include the header line “node_id,dx,dy,dz,max_stress”.
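A minimal prediction loop that honors this format might look like the following sketch. The names predict_case and run are illustrative, and the zero-filled predictor is only a placeholder for your real model; the default paths are the mount points described earlier:

```python
import csv
import json
import os

# Required header line for every output CSV.
HEADER = ["node_id", "dx", "dy", "dz", "max_stress"]

def predict_case(model3d):
    # Stand-in predictor: zero displacement and stress for every node.
    return [[n["node_id"], 0.0, 0.0, 0.0, 0.0] for n in model3d["nodes"]]

def run(in_dir="/data/input/pred", out_dir="/data/output/pred"):
    for fname in sorted(os.listdir(in_dir)):
        if not fname.endswith(".json"):
            continue
        name = os.path.splitext(fname)[0]
        with open(os.path.join(in_dir, fname)) as f:
            model3d = json.load(f)
        # Write /data/output/pred/$NAME.csv with exactly the 5 columns.
        with open(os.path.join(out_dir, name + ".csv"), "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(HEADER)
            writer.writerows(predict_case(model3d))
```

The key points are the output file name ($NAME.csv, matching the input $NAME.json) and the exact 5-column header line.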

Submission Format

There are 2 ways to submit your solution. The first is to submit a Dockerfile, which we will build for you. The second is to push a Docker image to our registry directly. Instructions follow for each approach.


1. Write your solution.
2. Create a Dockerfile to build it. For example:


FROM tensorflow/tensorflow:1.15.0-gpu-py3

RUN pip install tensorflow_datasets && \
    mkdir /code
COPY your_code.tar.gz /code
RUN cd /code && tar xfz your_code.tar.gz && rm your_code.tar.gz

3. Zip the Dockerfile and your code.
Create a zip file including the Dockerfile and your code. We recommend following the directory structure from the example submission:


    Dockerfile
    your_code

4. Submit the ZIP file on this page.

Docker image

You should see the following items in your controller page:

  • Docker registry endpoint URL to push your image
  • The Submit button

1. If you choose this option, create your Docker image from a Docker container or from a Dockerfile on your machine.

2. Tag your submission:

docker tag {YOUR_IMAGE} {YOUR_REPOSITORY_NAME}:latest

3. Push your image to the registry endpoint:

docker push {YOUR_REPOSITORY_NAME}:latest

4.Click the “Submit” button on your controller page.

Check your submission status

You can see your latest submission status on your controller page. It will show your:

  • Submission status: Running, Complete, Success, Failed, etc.
  • Submission execution times for training and prediction, displayed separately.

Submission requirements

Submissions are subject to the following limitations:

  • You may not submit more often than once every 6 hours, and a new submission can’t be made until the previous one has completed.
  • A full submission run must not take longer than 10 hours.
  • The training phase must not take longer than 6 hours.
  • The prediction phase must not take longer than 4 hours.
  • You must not include external datasets in your submission.

Execution Environment


We run your solution on an instance with a GPU, which will provide a computational advantage if you can use it. Since this challenge uses Docker as described above, we recommend you set up Docker 19.03 or later on your computer or a cloud instance with an NVIDIA GPU. If your solution needs CUDA, use 10.0 or later.

If you’d like help setting up Docker and the NVIDIA Container Toolkit, refer to the following document, in which we describe how to set up Docker on a GPU instance.

Execution instance spec

The contest will be run on instances with the following specs:

  • CPU : 32 vCPU
  • Memory : 244 GB
  • GPU : NVIDIA Tesla M60 * 2, 16GB memory
  • NVIDIA Driver version : 430.50

At most 10 instances run simultaneously. If all instances are busy, your submission will be queued.

Solution Description

The top-scoring contestants, in order to be eligible to receive a prize payment, must submit the following:

  • A complete archive including their code and any pre-trained models or supplementary data that is used for the computation.
  • In the case that any pre-trained models are required, the code for training and instructions for using such code should also be included.
  • Instructions for running the code, such that results similar to what were submitted during the contest can be replicated. (It is understood that some algorithms rely on randomness and thus may produce small variations.)
  • Within 7 days from the announcement of the contest winners, submit a report at least 2 pages long describing the solution, including most of the following:
    • How the code works
    • Why the given approach was used
    • Limitations of the approach
    • Other approaches that were tried but did not work
    • Any information that you feel may be of use
    • Use of any open source libraries and their URLs/sources

    The report may be written in English or Japanese.

If you place in the top 5 but fail to do any of the above, you will not receive a prize, and it will be awarded to the contestant with the next best performance who did all of the above.