
Challenge Overview

Topcoder is working with a group of researchers organized by the University of Chicago who are competing to understand a series of simulated environments. We are looking for simulation models that can predict the future states of our Conflict World. You will have the data of the Conflict World for the first 710 days. We are interested in how that very same society will behave on days 711 through 730 and how it continues to unfold through day 930. Actor identifiers refer to the very same actors that they did for the first 710 days.

Task Details

In this challenge, there are 7 requests in total: 6 predictive requests, plus one (Request 4) specifying how confidence intervals should be reported.

Request 1 - Event Participation

We are interested in levels of event participation for the following types of events: {E8040, E6146, E1939}. For these predictions, the level of participation in an event type at a time is the number of actors who are participating in the event type at that time. 
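For concreteness, here is a minimal sketch of how this participation measure could be computed from an event log; the file name and the column names ("day", "event_type", "actor_id") are assumptions and should be matched to the actual TSV schema.

```python
import pandas as pd

# Minimal sketch: daily participation levels per event type.
# Assumed schema: one row per actor-event-day, with columns
# "day", "event_type", and "actor_id" (adjust to the real TSV).
events = pd.read_csv("events.tsv", sep="\t")

TARGET_EVENTS = ["E8040", "E6146", "E1939"]

# Participation level = number of distinct actors participating
# in an event type on a given day.
participation = (
    events[events["event_type"].isin(TARGET_EVENTS)]
    .groupby(["day", "event_type"])["actor_id"]
    .nunique()
    .unstack(fill_value=0)
)
print(participation.head())
```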

Request 2 - Satisfaction Changes

We suspect that our informants are not homogeneous, and that the level of satisfaction they feel over time may not be the same across actors. We are interested in how their satisfaction changes over time. In particular, we would like predictions of actor satisfaction for the individual actor A3317, for actors similar to each of A3711, A7347, and A5939, and for the average satisfaction across the entire population.

Request 3 - Aggregate Satisfaction 

In addition, we would be interested in how the satisfaction level of actors similar to A7347 would change if there were twice as many actors of this sort present in the world, all other populations remaining the same. This prediction should be based on an alternate world that is like the provided Conflict World in all aspects except for the population change.

We seek a single number: the aggregate satisfaction of this class of actors for the first 500 days of such a world. By aggregate satisfaction, we mean that if you were to draw a curve of the satisfaction for a typical actor of this type over 500 days, we are looking for the area under the curve. (A smart-aleck 2nd Lt in S6 mumbled something about “integral” and “trapezoidal rule,” but the rest of us boxed him pretty roundly around the ears for showing off.)
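In the lieutenant's defense, here is a minimal sketch of the requested aggregate computed with the trapezoidal rule; the daily satisfaction values below are placeholders, not predictions.

```python
import numpy as np

# Minimal sketch: area under a typical actor's satisfaction curve
# over the first 500 days. "satisfaction" is assumed to be a
# length-500 array of predicted daily values for a typical actor
# of the A7347 type; the values here are placeholders only.
days = np.arange(500)
satisfaction = np.random.default_rng(0).uniform(0.4, 0.6, size=500)

# Trapezoidal rule: the requested aggregate satisfaction.
aggregate = np.trapz(satisfaction, days)
print(f"Aggregate satisfaction over 500 days: {aggregate:.2f}")
```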

Request 4 - Confidence Intervals

The aggregate satisfaction prediction (Request 3) does not vary through time, but please provide not only your best point prediction but also pairs of predictions defining 50%, 90%, and 95% confidence intervals. For the other predictions, which vary through time, please provide predictions as a daily time series. Again, we would be grateful for your best point prediction and pairs of predictions defining 50%, 90%, and 95% confidence intervals.
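One hedged way to produce such interval pairs is to generate an ensemble of trajectories (e.g., repeated stochastic model runs or bootstrap samples) and take per-day quantiles; the sketch below uses placeholder trajectories in place of real model output.

```python
import numpy as np
import pandas as pd

# Minimal sketch: point prediction plus 50/90/95% intervals from an
# ensemble of simulated trajectories. The 1000 x 20 array below is a
# placeholder standing in for 1000 model runs over days 711-730.
rng = np.random.default_rng(1)
runs = rng.normal(loc=0.5, scale=0.05, size=(1000, 20))

days = np.arange(711, 731)
report = pd.DataFrame({
    "day": days,
    "point": runs.mean(axis=0),
    "lo_50": np.quantile(runs, 0.25, axis=0),
    "hi_50": np.quantile(runs, 0.75, axis=0),
    "lo_90": np.quantile(runs, 0.05, axis=0),
    "hi_90": np.quantile(runs, 0.95, axis=0),
    "lo_95": np.quantile(runs, 0.025, axis=0),
    "hi_95": np.quantile(runs, 0.975, axis=0),
})
print(report.head())
```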

 

Request 5 - Event Participation between Day 711 and Day 730

We are interested in levels of event participation for the following types of events: {E8368, E7488, E5629}. For these predictions, the level of participation in an event type at a time is the number of actors who are participating in the event type at that time.

 

Request 6 - Common Events between Day 711 and Day 730

What are the 5 most popular events during the time window listed above? For these predictions, the level of participation in an event type at a time is the number of actors who are participating in the event type at that time. Please estimate the number of participants for each of the leading event types.
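A minimal sketch of ranking predicted participation over this window is below; the file name and column names are assumptions, standing in for whatever format your model's predictions take.

```python
import pandas as pd

# Minimal sketch: rank event types by predicted participation in the
# day 711-730 window. "predicted_events.tsv" is a hypothetical model
# output with assumed columns "day", "event_type", and "actor_id".
predicted = pd.read_csv("predicted_events.tsv", sep="\t")

window = predicted[predicted["day"].between(711, 730)]

# Top 5 event types by number of distinct participating actors.
top5 = (
    window.groupby("event_type")["actor_id"]
    .nunique()
    .nlargest(5)
)
print(top5)
```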

 

Request 7 - Satisfaction Levels between Day 711 and Day 730

We suspect that our informants are not homogeneous, and that the level of satisfaction they feel over time may not be the same across actors. What are the average satisfaction levels of the following actors in the time period listed above: {A5205, A7580, A81, A8614, A7649, A2061, A8386, A7894, A1157}?
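A minimal sketch of averaging predicted satisfaction for these actors over the window is below; again, the file and column names are assumptions about your model's output format.

```python
import pandas as pd

# Minimal sketch: per-actor average satisfaction over days 711-730.
# "predicted_satisfaction.tsv" is a hypothetical model output with
# assumed columns "day", "actor_id", and "satisfaction".
satisfaction = pd.read_csv("predicted_satisfaction.tsv", sep="\t")

ACTORS = ["A5205", "A7580", "A81", "A8614", "A7649",
          "A2061", "A8386", "A7894", "A1157"]

window = satisfaction[satisfaction["day"].between(711, 730)]
averages = (
    window[window["actor_id"].isin(ACTORS)]
    .groupby("actor_id")["satisfaction"]
    .mean()
    .reindex(ACTORS)  # keep the requested actor order
)
print(averages)
```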

 

Goal of This Challenge: 

You are asked to build models that make predictions for the corresponding questions. Your solution will be judged on its novelty as well as its performance on the given data.

A few of the given questions will be objectively scored against answers that are known to be correct, and others will be scored subjectively.

For all predictions, please clearly document the model training and usage steps used to create them. The reviewers should be able to easily extend your code to use a new or expanded dataset. Do not leave anything to be assumed here, no matter how trivial. This will be part of the review at the end of the challenge, so the more information you provide, and the better your documentation is, the better your chances of winning will be.

The dataset can be downloaded here, or you can find the dataset link in the forum.

Here are links to the research request data and the research request document itself.

Important Note:

Each University of Chicago team has the ability to request additional information from the virtual world simulation teams beyond what is initially provided, through a “Research Request” process. Data files or folders denoted with an “RR” are the output of this process. In the Code Document forum you’ll find a link to a Research Request document containing the original requests submitted by the University of Chicago researchers, which can provide some context. The requests have to include a plausible collection methodology (e.g., surveys or instruments that can collect data). Additional data may be provided over the course of the challenge submission period, and you are encouraged to incorporate it into your analysis.


 


Final Submission Guidelines

Submission

The final submission must include the following items.

  • A Jupyter notebook detailing:

    • How the data is prepared and cleaned from the TSV files

    • How each model is created, trained, and validated

    • How individual predictions are created

    • How new data can be plugged into the model for validation purposes (see the interface sketch after this list).

      • This will be part of the review, so please ensure the reviewer can easily put in the held-back data for scoring purposes.

    • Answers to and exploration of the counterfactual prediction questions detailed above. This should be well documented and clearly described.
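As a concrete illustration of the plug-in interface the reviewers are asking for, here is a minimal sketch; the function and column names are hypothetical, and clean_and_featurize stands in for the notebook's own documented cleaning step.

```python
import pandas as pd

def clean_and_featurize(raw: pd.DataFrame) -> pd.DataFrame:
    # Placeholder: the real notebook would apply its documented
    # cleaning and feature-engineering steps here.
    return raw.dropna()

def predict_from_tsv(data_path: str, model) -> pd.DataFrame:
    """Load a raw TSV, apply the same cleaning as training, and predict."""
    raw = pd.read_csv(data_path, sep="\t")
    features = clean_and_featurize(raw)
    predictions = model.predict(features.drop(columns=["day"]))
    return pd.DataFrame({"day": features["day"], "prediction": predictions})

# A reviewer would then only need to change the path:
# results = predict_from_tsv("held_back_data.tsv", trained_model)
```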

 

Judging Criteria

Winners will be determined based on the following aspects:

  • Model Effectiveness (40%)

    • The research teams will be comparing your work with theirs in answering the “Unscored” Questions above. Your submission will receive a subjective evaluation from these teams.

  • Model Accuracy (30%)

    • Are your predictions on the held-back review data (“Scored” Questions) accurate?

  • Model Feasibility (20%)

    • How easy is it to deploy your model?

    • Is your model’s training time-consuming?

    • How well can your model/approach be applied to other problems?

  • Clarity of the Report (10%)

    • Do you explain your proposed method clearly?




 

REVIEW STYLE:

Final Review: Community Review Board

Approval: User Sign-Off

ID: 30104416