The Purple Kernel Eater Monster Ideation Challenge

Key Information

The challenge is finished.

Prizes


1st: $1,700

2nd: $1,000

3rd: $300

4th: $200

5th: $100

Challenge Overview

By performing image analysis, can we detect and identify kernels (i.e., seeds) that are normal versus kernels that will become purple kernel eater monsters? You will receive two images taken on consecutive days (i.e., one day apart). All kernels seem to start out normal, but can you detect something consistent about a kernel that will clue you in on whether it is normal or will turn into a purple monster?

Task Detail

In this challenge, you are asked to propose algorithms which, given a Day 0 image (with no discernible purple in it), identify the kernels that will become purple one day later (i.e., those that are purple in the Day 1 image).


In addition, we would like your thoughts on what changes to our imaging setup (background, lighting, sharpness, etc.) could help you develop a better algorithm: essentially, data observations and suggestions for additional data gathering.


Please check the following two images as an example.


Day 0 image:



Day 1 image:



As you can see, there may be some differences in imaging style between the two images (e.g., background, lighting, sharpness). Your algorithm should therefore be robust to these differences.
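One simple way to reduce sensitivity to lighting differences between shots is a gray-world white balance: scale each color channel so its mean matches the overall mean, removing a global color cast. The sketch below is a minimal illustration, assuming the images are standard 8-bit RGB NumPy arrays; it is not a prescribed preprocessing step for this challenge, just one option to consider.

```python
import numpy as np

def gray_world_balance(img: np.ndarray) -> np.ndarray:
    """Reduce a global lighting color cast under the gray-world assumption.

    Each channel is rescaled so its mean equals the image's overall mean,
    which helps make color-based kernel features comparable across images
    taken under different lighting.
    """
    img_f = img.astype(np.float64)
    channel_means = img_f.reshape(-1, 3).mean(axis=0)   # per-channel means
    global_mean = channel_means.mean()                  # target mean
    balanced = img_f * (global_mean / channel_means)    # rescale channels
    return np.clip(balanced, 0, 255).astype(np.uint8)
```

More elaborate options (e.g., histogram matching between the Day 0 and Day 1 images, or background estimation and subtraction) follow the same idea: normalize the imaging conditions before extracting kernel features.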

Full dataset: 


Some clues to help you identify the future purple monsters while they still look normal:

  1. The purple monsters are hungry and eat a lot, so they are likely to look slightly different (e.g., larger, plumper, or differently shaped).

  2. The purple monsters may have some unusual colors even in the initial image, though we don't know much about this.

  3. The purple monsters may have other distinguishing features, e.g., a different surface texture or a different aspect ratio. There may also be features we have not found yet (you are welcome to discover and use them).
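The shape, color, and texture clues above can all be turned into simple per-kernel measurements. As a sketch, assuming the kernels sit on a dark, roughly uniform background (the actual threshold and segmentation approach would need tuning to the real images), one could segment the kernels and compute area, bounding-box aspect ratio, mean color, and an intensity-variation texture proxy for each:

```python
import numpy as np
from scipy import ndimage

def kernel_features(img: np.ndarray, bg_thresh: float = 40.0) -> list:
    """Segment kernels from a dark background and compute simple features.

    For each connected bright region (assumed to be one kernel), returns:
      area          - pixel count (size/obesity clue)
      aspect_ratio  - bounding-box width / height (shape clue)
      mean_rgb      - average color over the kernel (color clue)
      texture_std   - std. dev. of gray intensity (crude texture clue)
    """
    gray = img.mean(axis=2)
    mask = gray > bg_thresh            # assumes dark, uniform background
    labels, n = ndimage.label(mask)    # connected-component labeling
    feats = []
    for i in range(1, n + 1):
        m = labels == i
        ys, xs = np.nonzero(m)
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        feats.append({
            "area": int(m.sum()),
            "aspect_ratio": w / h,
            "mean_rgb": img[m].mean(axis=0),
            "texture_std": float(gray[m].std()),
        })
    return feats
```

Feature vectors like these could then feed an outlier detector or a small classifier trained on the Day 0 images, using the Day 1 images as labels for which kernels turned purple.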

Final Submission Guidelines


Your submission should include a text, .doc, PPT or PDF document that includes the following sections and descriptions:

  • Overview: describe your approach in “layman's terms”

  • Methods: describe what you did to come up with this approach, e.g., literature search, experimental testing, etc.

  • Materials: did your approach use a specific technology?  Any libraries? List all tools and libraries you used

  • Discussion: Explain what you attempted, considered, or reviewed, both what worked and, especially, what didn't work or was rejected. For anything that didn't work or was rejected, briefly explain why (e.g., such-and-such needs more data than we have). If you are pointing to somebody else's work (e.g., citing a well-known implementation or the literature), describe in detail how that work relates to this one and what would have to be modified.

  • Data: What other data should one consider? Is it in the public domain? Is it derived? Is it necessary to achieve the aims? Also, is the data described/provided enough?

  • Assumptions and Risks: what are the main risks of this approach, and what assumptions are you (or the model) making? What are the pitfalls of the data set and approach?

  • Results: Did you implement your approach?  How’d it perform? If you’re not providing an implementation, use this section to explain the EXPECTED results.

  • Other: Discuss any other issues or attributes that don’t fit neatly above that you’d also like to include

Judging Criteria

You will be judged on the quality of your ideas, the quality of your description of those ideas, and how much benefit they can provide to the client. The winner will be chosen based on the most logical and convincing reasoning as to how and why the presented idea will meet the objective. Note that this contest will be judged subjectively by the client and Topcoder; however, the criteria below will largely form the basis for the judgment.

  1. Effectiveness (40%)

    1. Is your algorithm effective, at least on the provided example images?

    2. Are there any interesting insights from the data analysis?

    3. A proof of concept (PoC) is not required, but it would be great to see some example results.

  2. Feasibility (40%)

    1. Is your algorithm efficient and scalable to a large volume of data?

    2. Is the additional data you propose feasible to obtain?

    3. Is your algorithm easy to implement? Is there an existing toolkit we can use?

  3. Clarity (20%)

    1. Please make sure your report is easy to read.

    2. Figures, charts, and tables are welcome.

Submission Guideline

You may submit at most TWO solutions, but we encourage you to include your best solution, with as much detail as possible, in a single submission.

Reliability Rating and Bonus

For challenges that have a reliability bonus, the bonus depends on the reliability rating at the moment of registration for that project. A participant with no previous projects is considered to have no reliability rating, and therefore gets no bonus. Reliability bonus does not apply to Digital Run winnings. Since reliability rating is based on the past 15 projects, it can only have 15 discrete values.


Topcoder Open 2019


Final Review: Community Review Board

User Sign-Off

Review Scorecard