Challenge Overview

Problem Statement

Intel Movidius Challenge

Prize Distribution

  • 1st - $8,000
  • 2nd - $4,000
  • 3rd - $3,000
  • 4th - $2,000
  • 5th - $1,000
  • Checkpoint Review - $200 x 10
  • Total - $20,000

The checkpoint review prizes will be awarded based on the submissions as of 02/12/18 at 9:00 PM (4 weeks after submissions open) to the top 10 contestants, ranked by tests on a hidden review dataset. These prizes will be subject to a manual review to ensure legitimacy.

Background

Market research estimates there will be as many as 20 billion connected devices in the market by 2020. These devices are expected to generate billions of petabytes of data traffic between cloud and edge devices. In 2017 alone, 8.4 billion connected devices are expected in the market, which is sparking a strong need to pre-process data at the edge. This has led many IoT device manufacturers, especially those working on vision-based devices like smart cameras, drones, robots, and AR/VR, to bring intelligence to the edge.

Through the recent addition of the Movidius VPU technology to its existing AI edge solutions portfolio, Intel is well positioned to provide solutions that help developers and data scientists pioneer the low-power intelligent edge devices segment. The Intel Movidius Neural Compute Stick (NCS) and Neural Compute SDK (NCSDK) together form a developer kit that aims to lower the barrier to entry for developers and data scientists developing and prototyping intelligent edge devices.

In this challenge you will be pushing your network training skills to their limits by fine-tuning convolutional neural networks (CNNs) targeted for embedded applications. Contestants are expected to leverage the NCSDK's mvNCProfile tool to analyze the bandwidth, execution time, and complexity of their network at each layer, and tune it to get the best accuracy and execution time.
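
For reference, a profiling run might look like the minimal sketch below. The flags (-w for the weights file, -s for the number of SHAVE cores) follow the NCSDK v1 command-line tools; the file names are assumptions that match the submission format described later in this document.

import subprocess

# Minimal sketch: profile a Caffe model with mvNCProfile (NCSDK v1).
subprocess.run(
    ["mvNCProfile", "network.prototxt",
     "-w", "weights.caffemodel",  # Caffe weights; omit for TensorFlow
     "-s", "12"],                 # number of SHAVE cores to use
    check=True,
)
# mvNCProfile prints a per-layer breakdown of bandwidth, execution time,
# and complexity, and writes a report to the working directory.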

Objective

Your task is to design and fine-tune a convolutional neural network (CNN) which can classify images among a predefined set of image labels. The neural network should be deployed on the Intel Movidius Neural Compute Stick (NCS) for inference. The NCSDK provides the tools to convert your pre-trained CNN to a binary file which can be deployed to the NCS.

Refer to this technical blog for instructions on how to deploy a pre-trained CNN onto the NCS for inference purposes.
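
As a rough illustration of the inference flow, here is a minimal sketch using the NCSDK v1 Python API (the same API whose graph.GetResult() call is referenced below). The compiled.graph file name, the 224x224 input size, and the random stand-in image are assumptions.

import numpy
from mvnc import mvncapi as mvnc

# Minimal sketch: run one inference on the NCS (NCSDK v1 Python API).
devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

with open("compiled.graph", "rb") as f:
    graph = device.AllocateGraph(f.read())

image = numpy.random.rand(224, 224, 3).astype(numpy.float16)  # stand-in input
graph.LoadTensor(image, "user object")
output, userobj = graph.GetResult()  # e.g. a vector of 200 class probabilities

graph.DeallocateGraph()
device.CloseDevice()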

Input Data Files

All of the image files used in this contest are JPEGs obtained from ImageNet. Each of the 200 labels corresponds to a single ImageNet synset, and all images with a given label were obtained from the corresponding synset. The images for each label were selected randomly from all JPEG images in the synset. Images deemed low quality were removed and replaced by another random image from the same synset. All of the images from a given synset (and, correspondingly, label) were randomly partitioned among the training, provisional, and system datasets.

The training dataset, training.tar, consists of 80,000 JPEG images: 400 images for each of the 200 labels. You are not limited to these provided images when training your network; any additional images or resources may be used as long as they are freely available.

The ground truth image summary CSV file, training_ground_truth.csv, includes a header and a single row for each image in the training dataset, with the following columns:

  • IMAGE_NAME - Image file name, including extension
  • LABEL_INDEX - Positive integer corresponding to this image's label

The label description file, categories.txt, contains 200 rows, each containing the description of a single label, ordered by LABEL_INDEX: the first line corresponds to LABEL_INDEX = 1, and the last line corresponds to LABEL_INDEX = 200.
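
A minimal sketch of loading these two files, using only the Python standard library (the file and column names are exactly those specified above):

import csv

# IMAGE_NAME -> LABEL_INDEX (1-based, per the spec above)
with open("training_ground_truth.csv") as f:
    labels = {row["IMAGE_NAME"]: int(row["LABEL_INDEX"])
              for row in csv.DictReader(f)}

# descriptions[0] corresponds to LABEL_INDEX = 1
with open("categories.txt") as f:
    descriptions = [line.strip() for line in f]

name = next(iter(labels))
print(name, labels[name], descriptions[labels[name] - 1])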

The provisional dataset, provisional.tar, which is used for the leaderboard scoring during the contest, has the same format as the training dataset with 2,000 JPEG images, 10 per label.

The review dataset, which is not provided to the contestants, is used for the mid-contest review prizes, and it consists of 2,000 JPEG images, 10 per label.

The system dataset, which is not provided to the contestants, is used for the final scoring and prizes, and it consists of 16,000 JPEG images, 80 per label.

Output Files

During the contest, each of your submissions must be a zip file containing the following three (for TensorFlow) or four (for Caffe) files, plus a supporting folder:

  • inferences.csv - Output inferences CSV
  • network.prototxt or network.meta - Network file (Caffe or TensorFlow)
  • weights.caffemodel - Weights file (for Caffe only)
  • compiled.graph - Compiled Intel Movidius internal Graphfile
  • supporting/ - folder containing any supporting scripts/code which were used to pre-process test images along with a README (folder must be included even if empty)

The included inferences file and the compiled graph file must both be created from the included network and weights files. If it is found that either of these files does not correspond to the included model files, your submission will be invalidated.

During the contest, only the inferences.csv file will be scored by the automated test harness which determines scores on the provisional leaderboard. Before any prize money is awarded, the inferences file will be validated against the network, weights, and compiled graph files.

You may use the following template project in order to create your submission zip files: https://github.com/movidius/ncappzoo/tree/master/apps/topcoder_example

1. Inferences File

The inferences file must be a CSV named "inferences.csv". It must contain your model's label inferences for each of the images in the testing dataset: a header and one row per image, listing the top five inferred labels and their associated probabilities. The sum of all probabilities in a single row must be less than or equal to 1. You may not perform any post-processing of the inferences; they should directly correspond to the output of "graph.GetResult()". Each row should contain the following columns:

  • IMAGE_NAME - Image filename, including extension
  • LABEL_INDEX #1 - The 1st inferred label index
  • PROBABILITY #1 - Probability of this image having the 1st inferred label
  • LABEL_INDEX #2 - The 2nd inferred label index
  • PROBABILITY #2 - Probability of this image having the 2nd inferred label
  • LABEL_INDEX #3 - The 3rd inferred label index
  • PROBABILITY #3 - Probability of this image having the 3rd inferred label
  • LABEL_INDEX #4 - The 4th inferred label index
  • PROBABILITY #4 - Probability of this image having the 4th inferred label
  • LABEL_INDEX #5 - The 5th inferred label index
  • PROBABILITY #5 - Probability of this image having the 5th inferred label
  • INFERENCE_TIME - Time taken by NCS to make inferences for this image

For a given image, all labels not specified in the output file are assumed to have a prediction of zero probability.
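
As an illustration, one row of this file could be produced as in the minimal sketch below. It assumes the output of graph.GetResult() is a 200-element probability vector whose position 0 corresponds to LABEL_INDEX 1, and that per-layer inference times are available through the TIME_TAKEN graph option (as in the ncappzoo examples; treat both as assumptions).

import numpy
from mvnc import mvncapi as mvnc

def write_row(writer, image_name, output, graph):
    # Minimal sketch: append one inferences.csv row for a single image.
    # "writer" is expected to be a csv.writer over the open inferences.csv.
    top5 = numpy.argsort(output)[::-1][:5]  # indices of the top 5 labels
    times = graph.GetGraphOption(mvnc.GraphOption.TIME_TAKEN)  # per-layer ms
    row = [image_name]
    for idx in top5:
        row += [int(idx) + 1, float(output[idx])]  # LABEL_INDEX is 1-based
    row.append(float(numpy.sum(times)))            # INFERENCE_TIME
    writer.writerow(row)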

2. Network File

The network file should be your model's network file in either Caffe or TensorFlow™ format. If you are using Caffe, it must be named "network.prototxt". If you are using TensorFlow™, it must be named "network.meta".

3. Weights File

If your network file is in Caffe format, you must also include the weights file named "weights.caffemodel". If you are using TensorFlow™, you should not include this file.

4. Compiled Graph File

The compiled graph file must be named "compiled.graph". You can obtain this file from the mvNCCompile tool.
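
For example, a TensorFlow model could be compiled as in the minimal sketch below. The -in/-on node names are placeholders for your own graph's input and output nodes, and the flags follow the NCSDK v1 command-line tools.

import subprocess

# Minimal sketch: compile a TensorFlow model to compiled.graph (NCSDK v1).
subprocess.run(
    ["mvNCCompile", "network.meta",
     "-in", "input", "-on", "output",  # placeholder input/output node names
     "-s", "12",                       # number of SHAVE cores
     "-o", "compiled.graph"],          # output graph file name
    check=True,
)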

5. Supporting Files

All supporting files, including image pre-processing scripts and a README file, must be included in a folder named "supporting/". This folder must be included even if it is empty.

The following types of image pre-processing steps are allowed (a sketch of a typical pipeline follows the list):

  1. Flipping/Mirroring
  2. Cropping/Padding
  3. Applying mean & standard deviation
  4. Changing colorspaces
  5. Geometric transformations
  6. Smoothing
  7. Morphological transformations
  8. Image pyramids
  9. Histogram operations
  10. Frequency domain filtering
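
A minimal pipeline combining two of the allowed steps (cropping and mean/standard-deviation normalization) might look like this; the 224x224 input size and the mean/std values are assumptions, not requirements.

import numpy
import cv2

def preprocess(path, size=224, mean=127.5, std=127.5):
    # Minimal sketch: center crop, resize, and mean/std normalization.
    img = cv2.imread(path)                   # BGR, uint8
    h, w = img.shape[:2]
    s = min(h, w)
    y, x = (h - s) // 2, (w - s) // 2
    img = img[y:y + s, x:x + s]              # center crop to a square
    img = cv2.resize(img, (size, size))
    img = (img.astype(numpy.float32) - mean) / std
    return img.astype(numpy.float16)         # the NCS takes fp16 input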

Functions

In order for your solution to be evaluated by Topcoder's marathon match system, you must implement a class named IntelMovidius, which implements a single function: getAnswerURL(). Your function must return a String corresponding to the URL of your submission zip file (containing the previously described files). You may upload your zip file to a cloud hosting service such as Dropbox or Google Drive, which can provide a direct link to the file.

To create a direct sharing link in Dropbox, right-click on the uploaded file and select "Share". You should be able to copy a link to this specific file which ends with the tag "?dl=0". This URL will point directly to your file if you change this tag to "?dl=1". You can then use this link in your getAnswerURL() function.

If you use Google Drive to share the link, then please use the following format: "https://drive.google.com/uc?export=download&id=XYZ" where XYZ is your file id.

Note that Google Drive has a 25MB file size limit for direct links and can't provide them for larger files. (For larger files the link opens a warning page saying that automatic virus checking of the file was not done.)

You can use any other way to share your submission file, but make sure the link you provide opens the filestream directly, and is available for anyone with the link (not only the file owner), to allow the automated test harness to download and evaluate it.

An example of the code you have to submit, using Java:

public class IntelMovidius
{
   public String getAnswerURL()
   {
      // Replace the returned String with your submission file's URL
      return "https://drive.google.com/uc?export=download&id=XYZ";
   }
}

Scoring

Prior to scoring, the top 5 inferred probabilities for each image are normalized to sum to 1. For example, if the top 5 inferred probabilities are (0.2, 0.1, 0.1, 0.05, 0.05), they will be normalized to (0.4, 0.2, 0.2, 0.1, 0.1). Scoring is based on the logarithmic loss metric. The logloss penalty is calculated as

logloss = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} y_{ij} \log(p_{ij})

where N is the number of images in the testing set, M is the number of classes (200), log is the natural logarithm, y_ij is 1 if image i belongs to class j (and 0 otherwise), and p_ij is the predicted probability that image i belongs to class j. If any predicted probability p_ij is less than 1e-15, including probabilities which are not specified (not in the top 5), p_ij is set to 1e-15. If any predicted probability p_ij is greater than 1 - 1e-15, p_ij is set to 1 - 1e-15.
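
A minimal sketch of this penalty as code; the normalization, the clipping, and the loss itself all follow the rules stated above, while the data structures are arbitrary.

import math

EPS = 1e-15

def logloss(predictions, truth):
    # predictions: {image: [(label, prob), ... five pairs]}
    # truth: {image: true label}
    total = 0.0
    for image, top5 in predictions.items():
        s = sum(p for _, p in top5)
        probs = {label: p / s for label, p in top5}  # normalize top 5 to sum 1
        p = probs.get(truth[image], 0.0)             # unlisted labels count as 0
        p = min(max(p, EPS), 1.0 - EPS)              # clip to [1e-15, 1 - 1e-15]
        total += math.log(p)
    return -total / len(predictions)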

The score is computed from the logloss penalty and the inference time. In the scoring formula, t is the time in milliseconds taken to make 100 inferences, log is the natural logarithm, logloss_max is the maximum allowed logloss, which is 15.0, and t_max is the maximum allowed value of t (the maximum allowed average inference time per image, in milliseconds, multiplied by 100), which is 100,000. The inference time t is calculated by taking the total time to infer all N images and dividing it by N/100.

The maximum acceptable value for logloss is 15.0, and the maximum acceptable average inference time for a single image is 1000 ms. If your submission exceeds either of these two bounds, your score will be zero.
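
The two zero-score bounds can be checked as in the minimal sketch below.

LOGLOSS_MAX = 15.0
T_MAX = 100000.0  # maximum allowed time, in ms, per 100 inferences

def is_scoreable(penalty, total_time_ms, n_images):
    # Minimal sketch: a submission scores zero if it exceeds either bound.
    t = total_time_ms / (n_images / 100.0)  # time per 100 inferences
    return penalty <= LOGLOSS_MAX and t <= T_MAX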

Final Scoring

After the submission phase of the contest has ended, all submissions will be scored against a secret system dataset. The system dataset contains 80 images per class from the same 200 classes as the images from the training and provisional datasets (16,000 images total). Final scores, ranking, and prizes are solely based on the system dataset testing.

General Notes

  • This match is rated.
  • Teaming is not allowed.
  • Provisional submissions are allowed once every 4 hours, while example submissions are allowed once every 15 minutes. You may use example submissions to test the format of your submission. Your last submission will be used for final scoring, so please ensure that it is properly formatted. You will not be allowed to submit after the contest submission deadline, so plan your provisional submissions accordingly.
  • The usage of external models and resources is allowed as long as they are freely available. You must disclose all external models and resources used in your final report.
  • Use the match forum to ask general questions or report problems, but please do not post comments and questions that reveal information about the problem itself or possible solution techniques.

Requirements to Win a Prize

In order to receive a prize, you must do all the following:

  1. Achieve a non-zero score in the top 5, according to the final system test results. See the "Final Scoring" section above for details.
  2. Within five days from the time the winners are announced, you must provide a detailed final report describing your model. Please use this template to create your final report.
  3. Provide details of a procedure that can replicate your trained model. If a PRNG is employed, you must use a fixed random seed, for example as in the sketch below. This procedure must be able to reproduce your final submitted model and predictions.
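
A minimal sketch of fixing the seeds of common PRNGs; seed only the libraries you actually use, and the value 42 is arbitrary.

import random
import numpy

SEED = 42
random.seed(SEED)
numpy.random.seed(SEED)
# If training with TensorFlow 1.x, additionally:
#   tf.set_random_seed(SEED)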
 

Definition

Class: IntelMovidius
Method: getAnswerURL
Parameters: (none)
Returns: String
Method signature: String getAnswerURL()
(be sure your method is public)
 

Examples

0)
"1"
Returns: "Seed: 1"

1)
"2"
Returns: "Seed: 2"

This problem statement is the exclusive and proprietary property of TopCoder, Inc. Any unauthorized use or reproduction of this information without the prior written consent of TopCoder, Inc. is strictly prohibited. (c)2020, TopCoder, Inc. All rights reserved.