Konica-Minolta: Detecting Abnormality for Automated Quality Assurance - Overall Track
Overall Track Prize Distribution
1st place - $10,000
2nd place - $7,000
3rd place - $5,000
4th place - $3,000
5th place - $1,000
Bonus - $1,000 per track winner not among the top-5 in the Overall Track.
This contest aims to create new image recognition technology that can detect abnormalities in a product for visual inspection purposes. Since there are three levels of background images from which abnormalities need to be detected, there are 4 tracks in total: the level-1 track, level-2 track, level-3 track, and overall track. Please submit to the Overall Track contest. Scores for the other 3 tracks will be calculated automatically when you submit to this track.
Requirements to Win a Prize
In order to receive a prize, you must do all the following:
If you place in the top 5 or win a subtrack but fail to do any of the above, you will not receive a prize, and it will be awarded to the contestant with the next best performance who did all of the above.
In the manufacturing industry, both quality assurance and production efficiency are required. To ensure quality, it is necessary to inspect each item for scratches, defects, and foreign substances: for example, scratches on the surface of resin parts, printing errors in labels, or dirt on fabrics. As mass production becomes increasingly automated, the quality assurance process cannot be handled by the human eye alone, and automated technology is needed to detect these problems. One approach is to photograph each finished product and detect defects by image recognition. However, detecting problems or abnormalities from images is not simple. It is hard to keep the photographic environment perfectly consistent (illumination differences, blur of the object, etc.), and the defects to be detected can be indefinite in form, or fine and minute.
In this contest, you will be given a portion of these shots, each of which may have a certain type of abnormality (dirt) on it, along with a reference image that has no abnormality (dirt) on it.
The client wishes the Topcoder community to come up with a good deep learning model or algorithm that can detect these abnormalities (dirt) automatically, which could lead to further improvements in the quality assurance process. Good luck!
All real images and dirt masks are 390 * 390, while all reference images are 500 * 500. Real images and dirt masks correspond to the evenly centered crop of the reference images. Please note that reference images and real images were shot separately, so the images are not pixel-to-pixel identical. 4,119 annotated images (each including a real image, a reference image, and a dirt mask) are provided as the training data for your development and validation.
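Since the reference images are 500 * 500 while the real images and masks are 390 * 390, aligning the pair requires a center crop with offset (500 - 390) / 2 = 55. A minimal NumPy sketch, assuming the evenly centered cropping described above:

```python
import numpy as np

def center_crop(ref: np.ndarray, size: int = 390) -> np.ndarray:
    """Return the centered size x size window of a square image array.
    For a 500 x 500 reference, the crop starts at row/col (500 - 390) // 2 = 55."""
    h, w = ref.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return ref[top:top + size, left:left + size]

reference = np.zeros((500, 500, 3), dtype=np.uint8)  # dummy reference image
crop = center_crop(reference)
print(crop.shape)  # (390, 390, 3)
```

Because the two shots are not pixel-to-pixel identical, this geometric alignment is only approximate; any finer registration is up to your solution.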
You are asked to build a model which takes a pair of a 390 * 390 real image and a 500 * 500 reference image as input and outputs the corresponding dirt mask. There are 3 levels of background images to detect dirt from: level-1 has a solid background, level-2 has characters in the background, and level-3 has natural photos as the background. We believe level 1 is the easiest and level 3 is the hardest.
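To make the required input/output interface concrete, here is a naive difference-and-threshold baseline. This is only an illustrative sketch, not a competitive solution: because the pair is not pixel-aligned, simple differencing will produce many false positives, which is why a learned model is expected. The threshold value is an arbitrary illustrative choice.

```python
import numpy as np

def predict_mask(real: np.ndarray, reference: np.ndarray,
                 threshold: int = 40) -> np.ndarray:
    """Toy baseline: center-crop the 500 x 500 reference to 390 x 390,
    take the per-pixel absolute difference against the real image, and
    threshold the maximum channel difference into a binary dirt mask."""
    off = (reference.shape[0] - real.shape[0]) // 2  # (500 - 390) // 2 = 55
    ref = reference[off:off + real.shape[0], off:off + real.shape[1]]
    diff = np.abs(real.astype(np.int16) - ref.astype(np.int16)).max(axis=2)
    return (diff > threshold).astype(np.uint8)

real = np.zeros((390, 390, 3), dtype=np.uint8)
ref = np.zeros((500, 500, 3), dtype=np.uint8)
mask = predict_mask(real, ref)
print(mask.shape, int(mask.sum()))  # (390, 390) 0
```

Whatever approach you take, the contract is the same: two images in, one 390 * 390 binary mask out.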
Data Description and Scoring
There are 3 sets of images: train, dev, and test. The train and dev data can be downloaded through Google Drive links. The test data will be available in the last 72 hours (i.e., after Feb 21, 11 PM EST) through this Google Drive link.
Please note that some images have no dirt at all. It is equally important to the client that your model or algorithm does not detect false dirt in images that have none.
For undisclosable reasons, each image was processed with the same Gaussian filter, applied only to the B channel of the RGB image. It is up to each contestant whether or not to use this B channel in the solution.
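If you decide the blurred B channel is unhelpful, one simple preprocessing option is to discard it. This is only a sketch of that choice; whether dropping the channel actually helps is for each contestant to evaluate.

```python
import numpy as np

def drop_blue_channel(img: np.ndarray) -> np.ndarray:
    """Return only the R and G channels of an H x W x 3 RGB image,
    discarding the B channel that was passed through the Gaussian filter."""
    return img[..., :2]

img = np.zeros((390, 390, 3), dtype=np.uint8)
print(drop_blue_channel(img).shape)  # (390, 390, 2)
```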
Also, the dirt on the images is not manually drawn "false" dirt. Images with dirt were created by combining actual dirt patterns with background images. This follows the client's previous research, which found that manually drawn dirt does not produce a good algorithm or model. Ground truth mask images were created from these actual dirt patterns.
The number of images for each dirt pattern is almost evenly distributed. The train, dev, and test sets contain 60%, 20%, and 20% of the dirt patterns, respectively, with no overlap between the sets. The images without any dirt are also partitioned in this 60%-20%-20% way.
For this reason, your solution should not depend on these specific dirt patterns and should work comparably well on any new dirt. Moreover, the solution must not be tweaked specifically for the test data set that we will provide in the last 72 hours. Your solution created on AWS will be reviewed, and if it does not meet this requirement, you will not receive a prize, and it will be awarded to the contestant with the next best performance.
The statistics of these sets are summarized in the following table. In total, there are 6,882 images.
+-------+---------+---------+---------+-------+
|       | level-1 | level-2 | level-3 | total |
+-------+---------+---------+---------+-------+
| train |     751 |   1,495 |   1,873 | 4,119 |
+-------+---------+---------+---------+-------+
| dev   |     245 |     510 |     650 | 1,405 |
+-------+---------+---------+---------+-------+
| test  |     264 |     499 |     595 | 1,358 |
+-------+---------+---------+---------+-------+
The images are numbered from 1 to 6882. In each set, images are put into the corresponding folders: level-1, level-2, and level-3. For each image named $ID (e.g., 1), we will provide the following data:
Please note that the reference pictures of the following IDs are wrong. We have excluded them from evaluation, but please keep their _mask.txt files in your submission as placeholders.
[340, 491, 817, 937, 1125, 1751, 1858, 2091, 2378, 2385, 3030, 3050, 3326, 3333, 3435, 3761, 3905, 4505, 4620, 4661, 4750, 4839, 5020, 5160, 5425, 5481, 5597, 5607, 6100, 6163, 6349, 6460, 6555, 6578, 6683, 6760, 6877]
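For these excluded IDs, an all-zero placeholder mask is sufficient. The sketch below assumes the mask .txt format is one row per line of space-separated 0/1 values; verify this against the provided ground-truth masks before submitting.

```python
import numpy as np

def write_placeholder_mask(path: str, size: int = 390) -> None:
    """Write an all-zero size x size binary mask as a text file.
    Assumed format: one row per line, space-separated 0/1 values."""
    np.savetxt(path, np.zeros((size, size), dtype=np.uint8), fmt="%d")

# e.g., for two of the excluded IDs listed above
for image_id in (340, 491):
    write_placeholder_mask(f"{image_id}_mask.txt")
```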
Your example submissions will be evaluated against the train set. Your full submissions will be evaluated against the dev and test sets. The provisional test scores are based on the dev set, while the system test scores are based on the test set.
We are using macro-F1 as the evaluation metric. The final score is computed as the following formula.
Final Score = 1000000.0 * macro-F1
The macro-F1 score is the average of the per-image F1 scores.
For each image, the F1 score is defined based on pixels. For each pixel in an image, there are 4 cases:
Let TP, FP, FN, and TN denote the total numbers of True Positives, False Positives, False Negatives, and True Negatives, respectively.
The precision (P) is defined as TP / (TP + FP) and the recall (R) is defined as TP / (TP + FN). The F1 score is computed as 2 * P * R / (P + R). As a special case, when TP = 0, we define F1 as 1 if FN = 0 and FP = 0 (i.e., the image has no dirt and none was predicted); otherwise, F1 is 0.
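The metric above can be reproduced in a few lines for local validation. This is our own re-implementation of the stated definition, not the official scorer:

```python
import numpy as np

def image_f1(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pixel-wise F1 for a single image; both arrays are binary 0/1."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = int(np.sum(pred & gt))
    fp = int(np.sum(pred & ~gt))
    fn = int(np.sum(~pred & gt))
    if tp == 0:
        # Special case: a clean image predicted clean scores 1;
        # any false detection or miss scores 0.
        return 1.0 if fp == 0 and fn == 0 else 0.0
    p, r = tp / (tp + fp), tp / (tp + fn)
    return 2 * p * r / (p + r)

def final_score(preds, gts) -> float:
    """macro-F1 (mean of per-image F1) scaled by 1,000,000."""
    return 1_000_000.0 * float(np.mean([image_f1(p, g)
                                        for p, g in zip(preds, gts)]))

gt = np.zeros((390, 390), dtype=np.uint8)
print(final_score([gt], [gt]))  # 1000000.0 (clean image, clean prediction)
```

Note the asymmetry of the special case: on a clean image, a single false-positive pixel drops that image's F1 from 1 to 0, which is why avoiding false detections matters as much as finding dirt.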
During the contest, only your results will be submitted. You will submit code which implements only one function, getURL(). Your function will return a String corresponding to the URL of your answer (.zip).
This .zip file should include results (i.e., $ID_mask.txt) for all training, development, and testing data (6,882 in total). Each result is a .txt file containing a 390 * 390 binary mask matrix as described before. If any result files are missing, your submission will receive a score of -1. Note that although you do not know the testing set at the beginning of the contest, you do know the test image IDs, and the test images are 390 * 390.
You may use a different name for the .zip file, and any of the following structures should work. When evaluating your submission, we will iterate through the zip file, find every file ending with "_mask.txt", and extract the number preceding it. Please make sure there are no duplicated filenames if you have multiple folders in the .zip file.
submission.zip
|-- 1_mask.txt
|-- 2_mask.txt
|-- ...
|-- 6882_mask.txt

submission.zip
|-- train/level-1/1014_mask.txt
|-- train/level-1/1016_mask.txt
|-- ...
|-- test/level-3/99_mask.txt

submission.zip
|-- level-1/1033_mask.txt
|-- level-1/1070_mask.txt
|-- ...
|-- level-3/999_mask.txt
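You can mimic the evaluator's filename matching to sanity-check your archive before uploading. The sketch below assumes only the "_mask.txt" convention described above:

```python
import re
import zipfile

def collect_mask_ids(zip_path: str) -> dict:
    """Map image ID -> entry name for every *_mask.txt in the archive,
    mirroring how the evaluator scans the .zip. Raises on duplicate IDs."""
    pattern = re.compile(r"(\d+)_mask\.txt$")
    ids = {}
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            m = pattern.search(name)
            if not m:
                continue
            image_id = int(m.group(1))
            if image_id in ids:
                raise ValueError(f"duplicate mask for ID {image_id}: {name}")
            ids[image_id] = name
    return ids

# Build a tiny example archive and scan it.
with zipfile.ZipFile("example_submission.zip", "w") as zf:
    zf.writestr("level-1/1_mask.txt", "0 0\n0 0\n")
    zf.writestr("level-3/6882_mask.txt", "0 0\n0 0\n")
print(sorted(collect_mask_ids("example_submission.zip")))  # [1, 6882]
```

Checking that the collected IDs cover exactly 1 through 6882 before submitting guards against the missing-file score of -1.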
You may upload your .zip file to a cloud hosting service such as Dropbox, which can provide a direct link to your .zip file. To create a direct sharing link in Dropbox, right-click on the uploaded file and select Share. You should be able to copy a link to this specific file which ends with the tag "?dl=0". This URL will point directly to your file if you change the tag to "?dl=1". You can then use this link in your getURL() function. You may share your result file any other way, but make sure the link you provide opens the file stream directly.
Your complete code that generates these results will be tested at the end of the contest.
The report should be a .doc/.docx or .pdf file which describes the details of your algorithms and methods. It should include, but is not limited to, the following points.
Terms and NDA
This challenge will follow the standard Topcoder Terms and NDA below:
Standard Terms for TopCoder Competitions v2.1 - https://www.topcoder.com/challenge-details/terms/detail/21193/
Appirio NDA 2.0 - https://www.topcoder.com/challenge-details/terms/detail/21153/
This problem statement is the exclusive and proprietary property of TopCoder, Inc. Any unauthorized use or reproduction of this information without the prior written consent of TopCoder, Inc. is strictly prohibited. (c)2020, TopCoder, Inc. All rights reserved.