This Marathon Match is part of the March Madness Marathon Match Series and will have the following prizes and bonuses:
1st place - $11,000
2nd place - $6,000
3rd place - $3,500
4th place - $1,500
5th place - $1,000
In addition to the main prizes, there are special prizes of $250 each for the highest scoring:
first-time marathon participant,
participant who hasn't submitted since 2016, and
first-time sponsored marathon participant.
In addition, we will give $500 to each of the first 3 contestants who achieve a provisional test score strictly greater than 30.
Summarization refers to the task of producing a shortened version of a document that contains the document’s key points. The need for automated document summarization arises from the abundance of text data and the lack of sufficient resources for manual review.
In this competition, you will be provided a dataset of news articles and corresponding summaries. Using this dataset, you are asked to develop a model that can produce summaries that capture the key points of a given input text.
We are using a public dataset in this challenge. The full dataset consists of approximately 312,085 online news articles (CNN and Daily Mail) and corresponding summaries. On average, the articles are 781 tokens in length and the summaries are 56 tokens in length. Reference: http://nlpprogress.com/english/summarization.html.
We have cleaned and tokenized the validation and test sets. You can download them here: https://www.dropbox.com/s/t56lo75efsczfny/to_contestant.zip?dl=1. Every line in each file is one document. There are 13,367 articles in the validation set and 11,489 articles in the test set.
To download the training data, please visit https://github.com/abisee/cnn-dailymail. It contains a link to a repository with processed data. You may download the processed data, but you must use ONLY train.bin to train your model.
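If the processed train.bin follows the length-prefixed tf.Example record format used by the pointer-generator code linked from that repository (an assumption you should verify against the linked code), it can be read along the following lines:

# Hedged sketch: assumes train.bin stores 8-byte length-prefixed, serialized
# tf.Example records with "article" and "abstract" features, as in the
# pointer-generator code base linked from the cnn-dailymail repository.
# Requires tensorflow; verify the format against the repo before relying on it.
import struct
from tensorflow.core.example import example_pb2

def read_examples(path):
    """Yield (article, abstract) string pairs from a .bin file."""
    with open(path, "rb") as reader:
        while True:
            len_bytes = reader.read(8)
            if not len_bytes:
                break  # end of file
            str_len = struct.unpack("q", len_bytes)[0]
            example = example_pb2.Example.FromString(reader.read(str_len))
            article = example.features.feature["article"].bytes_list.value[0].decode()
            abstract = example.features.feature["abstract"].bytes_list.value[0].decode()
            yield article, abstract

for article, abstract in read_examples("train.bin"):
    pass  # feed each pair into your own preprocessing / training pipeline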
We expect your model to produce two txt files (val_summary.txt and test_summary.txt). Each file contains one row per article. Every row is a sequence of tokens summarizing the corresponding line of val_article.txt or test_article.txt, respectively. <s> and </s> mark the start and end of each sentence.
Here is an example of val_summary.txt:
<s> a man in suburban boston is selling snow online to customers in warmer states . </s> <s> for $ 89 , he will ship 6 pounds of snow in an insulated styrofoam box . </s>
<s> the 20th mls season begins this weekend . </s> <s> league has changed dramatically since its inception in 1996 . </s> <s> some question whether rules regarding salary caps and transfers need to change . </s>
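As a sanity check before submitting, a short script along the following lines can verify that an output file has exactly one summary line per article and uses the sentence markers consistently (this is a hypothetical helper, not the official tester):

# Hypothetical format check: one summary line per article, balanced <s>/</s> markers.
def check_output(article_path, summary_path):
    with open(article_path, encoding="utf-8") as f:
        n_articles = sum(1 for _ in f)
    with open(summary_path, encoding="utf-8") as f:
        summaries = f.read().splitlines()
    assert len(summaries) == n_articles, "one summary line per article is required"
    for i, line in enumerate(summaries):
        tokens = line.split()
        assert tokens.count("<s>") == tokens.count("</s>"), f"unbalanced markers on line {i}"

check_output("val_article.txt", "val_summary.txt")
check_output("test_article.txt", "test_summary.txt")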
This match uses a combination of the "submit data" and "submit code" submission styles. Your submission must be a single ZIP file with the following content:
/solution/val_summary.txt and /solution/test_summary.txt are the output your algorithm generates on the provisional test set. The format of these files is described above in the Output file section.
/code contains a dockerized version of your system that will be used to reproduce your results in a well defined, standardized way. This folder must contain a Dockerfile that will be used to build a docker container that will host your system during final testing. How you organize the rest of the contents of the /code folder is up to you, as long as it satisfies the requirements listed below in the Final testing section.
During provisional testing only your val_summary.txt file will be used for scoring; however, the tester tool will verify that your submission file conforms to the required format. This means that at least the /code/Dockerfile must be present from day 1, even if it doesn't describe any meaningful system to be built. However, we recommend that you keep working on the dockerized version of your code as the challenge progresses, especially if you are at or close to a prize winning rank on the provisional leaderboard.
You must not submit more often than once every 4 hours. The submission platform does not enforce this limitation; it is your responsibility to comply with it. Not observing this rule may lead to disqualification.
During final testing your last submission file will be used to build your docker container.
Make sure that the contents of the /solution and /code folders are in sync, i.e. your val_summary.txt and test_summary.txt files contain the exact output of the current version of your code.
To speed up the final testing process the contest admins may decide not to build and run the dockerized version of each contestant's submission. It is guaranteed, however, that if there are N main prizes then at least the top 2*N ranked submissions (based on the provisional leaderboard at the end of the submission phase) will be final tested.
During scoring your summarization output files (as contained in your submission file during provisional testing, or generated by your docker container during final testing) will be compared against expected ground truth data using the following algorithm.
If your solution is invalid (e.g. if the tester tool can't successfully parse its content, or if it contains an unknown filename), you will receive a score of 0.
Otherwise, we will calculate the ROUGE scores for every document and then take the average. Specifically, we will compute ROUGE-1, ROUGE-2, and ROUGE-L F1 scores. The details of these scores can be found here: https://rxnlp.com/how-rouge-works-for-evaluation-of-summarization-tasks/#.XJUo4RNKiL8.
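If you want to estimate your score locally, a sketch along these lines can approximate the metric against whatever reference summaries you hold out from the training data. It uses the open-source rouge-score package and simple whitespace tokenization; the file names are hypothetical, and the official tester's implementation, preprocessing, and aggregation may differ.

# Approximate local scoring: average per-document ROUGE-1/2/L F1,
# using the open-source rouge-score package (pip install rouge-score).
# The official tester may tokenize and aggregate differently.
from rouge_score import rouge_scorer

def strip_markers(line):
    return " ".join(t for t in line.split() if t not in ("<s>", "</s>"))

def average_rouge(reference_path, candidate_path):
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    totals = {"rouge1": 0.0, "rouge2": 0.0, "rougeL": 0.0}
    with open(reference_path, encoding="utf-8") as f:
        refs = f.read().splitlines()
    with open(candidate_path, encoding="utf-8") as f:
        cands = f.read().splitlines()
    for ref, cand in zip(refs, cands):
        scores = scorer.score(strip_markers(ref), strip_markers(cand))
        for key in totals:
            totals[key] += scores[key].fmeasure
    return {key: value / len(refs) for key, value in totals.items()}

# hypothetical file names: a held-out reference file and your generated summaries
print(average_rouge("val_reference.txt", "val_summary.txt"))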
This section describes the final testing workflow and the requirements on the /code folder of your submission. You may ignore this section until you start to prepare your system for final testing.
To be able to successfully submit your system for final testing, some familiarity with Docker is required. If you have not used this technology before, you may first check this page and other learning material linked from there. To install Docker, follow these instructions. In this contest it is very likely that you will work with GPU-accelerated systems, especially in the training stage; see how to install nvidia-docker here.
Contents of the /code folder
The /code folder of your submission must contain:
All your code (training and inference) that is needed to reproduce your results.
A Dockerfile (named Dockerfile, without extension) that will be used to build your system.
All data files that are needed during training and inference, with the exception of:
the contest’s own training and testing data (you may assume that the contents of the /training and /testing folders, as described in the Input files section, will be available on the machine where your docker container runs, with zip files already unpacked), and
large data files that can be downloaded automatically either while building or while running your docker script.
Your trained model file(s). Alternatively your build process may download your model files from the network. Either way, you must make it possible to run inference without having to execute training first.
The tester tool will unpack your submission, and the
nvidia-docker build -t <id> .
command will be used to build your docker image (the final ‘.’ is significant), where <id> is your TopCoder handle.
The build process must run out of the box, i.e. it should download and install all necessary 3rd party dependencies and either download from the internet or copy from the unpacked submission all necessary external data files, model files, etc.
Your container will be started by the
nvidia-docker run -v <local_data_path>:/data:ro -v <local_writable_area_path>:/wdata -it <id>
command (single line), where the -v parameter mounts the contest’s data to the container’s /data folder. This means that all the raw contest data will be available for your container within the /data folder. Note that your container will have read only access to the /data folder. You can store large temporary files in the /wdata folder.
Training and test scripts
Your container must contain a train script and a test (a.k.a. inference) script having the following specifications:
train.sh <data-folder> should create any data files that your algorithm needs for running test.sh later. The supplied <data-folder> parameter points to a folder containing training files and annotation data in the same structure as is available to you during the coding phase. The allowed time limit for the train.sh script is 4 days. You may assume that the data folder path will be under /data.
As its first step, train.sh must delete the home-built models shipped with your submission.
Some algorithms may not need any training at all. It is a valid option to leave train.sh empty, but the file must exist nevertheless.
Training must be possible using only the raw contest training data and publicly available external data. This means that this script should perform all the preprocessing and training steps that are necessary to reproduce your complete training workflow.
A sample call to your training script (single line):
In this case you can assume that the training data looks like this:
test.sh <data-folder> <output_path> should run your inference code on new, unlabeled data and should generate the output summary files, as specified by the problem statement. The allowed time limit for the test.sh script is 24 hours. The testing data folder contains data in the same structure as is available to you during the coding phase. The final testing data will be similar in size and in content to the provisional testing data. You may assume that the data folder path will be under /data.
Inference should be possible to do without running training first, i.e. using only your prebuilt model files.
It should be possible to execute your inference script multiple times on the same input data or on different input data. You must make sure that these executions don't interfere with each other, i.e. each execution leaves your system in a state in which further executions are possible.
A sample call to your testing script (single line):
./test.sh /data/test/ val_summary.txt test_summary.txt
In this case you can assume that the testing data looks like this:
Your training and inference scripts must output progress information. This may be as detailed as you wish but at the minimum it should contain the number of epochs processed so far.
Your testing code must process the test and validation data the same way; that is, it must not contain any conditional logic based on whether it is working on documents/sentences/tokens that you have already downloaded or on unseen data.
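For concreteness, the inference code that test.sh invokes could follow a pattern like the sketch below. Everything in it is hypothetical: the script name, and the trivial lead-2 "summarizer" (copying the first two sentences of each article) stand in for your own model; only the one-summary-line-per-article output format with <s>/</s> markers comes from the problem statement.

# Hypothetical inference entry point that test.sh could invoke, e.g.:
#   python infer.py /data/test/val_article.txt val_summary.txt
# The lead-2 baseline below is only a placeholder for your own summarizer.
import sys

def lead_baseline(article_tokens, max_sentences=2):
    """Split a tokenized article on '.' and return the first few sentences."""
    sentences, current = [], []
    for token in article_tokens:
        current.append(token)
        if token == ".":
            sentences.append(" ".join(current))
            current = []
            if len(sentences) == max_sentences:
                break
    if current and len(sentences) < max_sentences:
        sentences.append(" ".join(current))
    return sentences

def main(article_path, output_path):
    with open(article_path, encoding="utf-8") as fin, \
         open(output_path, "w", encoding="utf-8") as fout:
        for line in fin:
            sentences = lead_baseline(line.split())
            fout.write(" ".join(f"<s> {s} </s>" for s in sentences) + "\n")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])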
During final testing the following steps are executed:
1. First, test.sh is run on the provisional test set to verify that the results of your latest online submission can be reproduced. This test run uses your home-built models.
2. Then test.sh is run on the final validation dataset, again using your home-built models. Your final score is the one that your system achieves in this step.
3. Next, train.sh is run on the full training dataset to verify that your training process is reproducible. After the training process finishes, further executions of the test script must use the models generated in this step.
4. Finally, test.sh is run on the final validation dataset (or on a subset of that), using the models generated in the previous step, to verify that the results achieved in step #2 above can be reproduced.
A note on reproducibility: we are aware that it is not always possible to reproduce the exact same results. E.g. if you do online training then differences in the training environments may result in a different number of iterations, meaning different models. Also, you may have no control over random number generation in certain 3rd party libraries. In any case, the results must be statistically similar, and in case of differences you must have a convincing explanation of why the same result cannot be reproduced.
Your docker image will be built and run on a Linux AWS instance, having this configuration:
Please see here for the details of this instance type.
This match is rated.
Use the match forum to ask general questions or report problems, but please do not post comments and questions that reveal information about the problem itself or possible solution techniques.
Teaming is not allowed. You must develop your solution on your own. Any communication between members beyond what is allowed by the forum rules is strictly forbidden.
In this match you may use any programming language and libraries, including commercial solutions, provided Topcoder is able to run it free of any charge. You may also use open source languages and libraries, with the restrictions listed in the next section below. If your solution requires licenses, you must have these licenses and be able to legally install them in a testing VM (see “Requirements to Win a Prize” section). Submissions will be deleted/destroyed after they are confirmed. Topcoder will not purchase licenses to run your code. Prior to submission, please make absolutely sure your submission can be run by Topcoder free of cost, and with all necessary licenses pre-installed in your solution. Topcoder is not required to contact submitters for additional instructions if the code does not run. If we are unable to run your solution due to license problems, including any requirement to download a license, your submission might be rejected. Be sure to contact us right away if you have concerns about this requirement.
You may use open source languages and libraries provided they are equally free for your use, use by another competitor, or use by the client.
If your solution includes licensed software (e.g. commercial software, open source software, etc.), you must include the full license agreements with your submission. Include your licenses in a folder labeled “Licenses”. Within the same folder, include a text file labeled “README” that explains the purpose of each licensed software package as it is used in your solution. Please make sure all these licenses allow us to use the software for commercial purposes.
External data sets and pre-trained models are allowed for use in the competition provided the following are satisfied:
The external data and pre-trained models are unencumbered by legal restrictions that conflict with their use in the competition.
The data source or data used to train the pre-trained models is defined in the submission description.
In order to receive a final prize, you must do all the following:
Achieve a score in the top five according to final system test results. See the "Final testing" section above.
Once the final scores are posted and winners are announced, the prize winner candidates have 7 days to submit a report outlining their final algorithm and explaining the logic behind their approach and the steps to reproduce it. You will receive a template that helps you create your final report.
If you place in a prize winning rank but fail to do any of the above, then you will not receive a prize, and it will be awarded to the contestant with the next best performance who did all of the above.