May 18, 2018

How IBM Discovery Can Be Used to Build Prediction Models for Movie Reviews

Last week, I participated in two interesting challenges from the Cognitive community, where I had the opportunity to use the IBM Discovery service to build prediction models that predict a review's star rating and author by analyzing its content.

The two movie challenges were a great learning experience in using IBM Discovery features to build prediction models. Over the course of the challenges, I learned a lot about how to use the natural language understanding (NLU) enrichments in Discovery to extract relevant information from a review, such as keywords, entities, and sentiment/emotion scores, which can then be used to find the matching star rating or critic.
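To make that setup concrete, here is a minimal sketch of how such enrichments might be read back from a Discovery collection. It assumes the watson_developer_cloud Python SDK of that era (which returned plain dicts) and Discovery's default enriched_text layout; the credentials and IDs are placeholders, not real values.

```python
from watson_developer_cloud import DiscoveryV1

# Placeholder credentials/IDs; substitute your own service instance.
discovery = DiscoveryV1(
    version='2018-03-05',
    username='YOUR_USERNAME',
    password='YOUR_PASSWORD')

# Pull back a batch of enriched review documents from the collection.
response = discovery.query(
    environment_id='YOUR_ENVIRONMENT_ID',
    collection_id='YOUR_COLLECTION_ID',
    count=50)

for doc in response['results']:
    enriched = doc.get('enriched_text', {})
    doc_sentiment = enriched.get('sentiment', {}).get('document', {}).get('score')
    keywords = [kw['text'] for kw in enriched.get('keywords', [])]
    entities = [(e['type'], e['text']) for e in enriched.get('entities', [])]
    print(doc_sentiment, keywords[:5], entities[:3])
```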

In general, the two challenges were quite similar: both required participants to use IBM Discovery to analyze a set of training reviews from http://rogerebert.com/ and build a prediction model that infers hidden information about input test reviews, namely the star rating (first challenge) and the author (second challenge). In terms of implementation, however, my approaches varied significantly between the two, partly because I learned from other participants' winning submissions in the first challenge and improved my model for the next one.

In the first challenge, my prediction model relied mainly on the aggregated sentiment score of the document and of entities of certain types (Person, Organization, Company, etc.) to predict the star rating of a review. From what I observed in my implementation, analyzing entities works pretty well for predicting star ratings, since the sentiment toward people and organizations related to the movie, such as actors, directors, or entertainment companies, correlates relatively well with the rating. Back then, I wasn't aware that entities can also carry emotion scores (joy, anger, sadness, disgust, fear) if the enrichment is configured correctly. Judging from the other winning submissions, adding emotion scores could have improved my score considerably.
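As a rough sketch of that idea (not my exact submission): combine the document-level sentiment with the average sentiment of Person/Organization/Company entities, then bucket the result into a star rating. The field names follow Discovery's default enriched_text layout; the weights and cut points below are illustrative placeholders, not the values I actually tuned.

```python
RATED_TYPES = {'Person', 'Organization', 'Company'}

def predict_stars(enriched_text):
    """Map aggregated sentiment to a star rating (sketch; thresholds are illustrative)."""
    doc_score = enriched_text['sentiment']['document']['score']  # roughly -1..1

    # Average the sentiment of entity types that tend to relate to the movie itself.
    entity_scores = [
        e['sentiment']['score']
        for e in enriched_text.get('entities', [])
        if e.get('type') in RATED_TYPES and 'sentiment' in e
    ]
    entity_score = sum(entity_scores) / len(entity_scores) if entity_scores else 0.0

    combined = 0.6 * doc_score + 0.4 * entity_score  # illustrative weights

    # Bucket the combined score into stars (illustrative cut points).
    if combined < -0.5:
        return 1.0
    elif combined < 0.0:
        return 2.0
    elif combined < 0.4:
        return 3.0
    return 4.0
```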

In the second challenge, however, my prediction model focused mainly on matching keywords and their sentiment/emotion scores in order to predict the critic, since I reasoned that critics often have distinctive writing styles that favor certain words or idioms, which can easily be found in the keywords extracted by IBM Discovery. The accuracy scores from the hidden test (55% correct on my local test, 65% correct on the reviewer's test) appear to have borne out this reasoning. I also planned to match entities from the input review against the entities used by each critic, similar to what I did with keywords. However, I noticed that entities mostly capture proper names, such as actors' names, directors' names, and locations, which vary with the movie being reviewed rather than with the critic. As a result, the matching score dropped significantly (around 5-7 percent in my local test), so I decided to leave entities out of my prediction model.
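A much-simplified version of that keyword-matching approach might look like the following, where each critic's profile is just a frequency count of the keywords Discovery extracted from their training reviews, and a test review goes to the critic whose profile it overlaps most. The structure (including the assumed 'author' field on training documents) is my own simplification for illustration, not the exact model I submitted.

```python
from collections import Counter, defaultdict

def build_profiles(training_docs):
    """Build critic -> Counter of keyword frequencies across that critic's reviews."""
    profiles = defaultdict(Counter)
    for doc in training_docs:
        critic = doc['author']  # assumes the author was stored with each training review
        for kw in doc['enriched_text'].get('keywords', []):
            profiles[critic][kw['text'].lower()] += 1
    return profiles

def predict_critic(enriched_text, profiles):
    """Pick the critic whose keyword profile best overlaps the test review's keywords."""
    review_keywords = {kw['text'].lower() for kw in enriched_text.get('keywords', [])}
    best_critic, best_score = None, -1.0
    for critic, counts in profiles.items():
        total = sum(counts.values())
        overlap = sum(counts[k] for k in review_keywords) / total if total else 0.0
        if overlap > best_score:
            best_critic, best_score = critic, overlap
    return best_critic
```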

Thanks to the thoughts and approaches posted by other winning participants in the first challenge, I learned a lot about how to improve my prediction model. My biggest mistake in the first challenge was probably ignoring the emotion scores. As Dheeraj2202 and atik97 pointed out, emotion scores seemed to improve the accuracy of their implementations significantly. atik97 also mentioned that using the sentiment score appeared to reduce accuracy; in my case, however, aggregating document sentiment scores produced pretty good results. I suspect it depends on how much weight the sentiment score carries in the model relative to other factors such as keyword and entity scores. Another interesting point, raised by obog, was that it can be difficult to tell whether an entity is related to the movie being reviewed or merely serves as a comparison to other actors or movies. I considered this issue and thought of using the entities' relevance score to check whether an entity is strongly related to the movie rather than a passing comparison in my implementation for the next challenge. However, since entities in general do not correlate well with a particular critic, for the reason mentioned above, I decided to wait for a future challenge to test that idea.
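For the relevance idea specifically, what I had in mind was a simple filter on Discovery's per-entity relevance score before aggregating entity sentiment, along these lines (the 0.5 threshold is an untested placeholder):

```python
def relevant_entities(enriched_text, min_relevance=0.5):
    """Keep only entities strongly related to the movie, dropping passing comparisons."""
    return [
        e for e in enriched_text.get('entities', [])
        if e.get('relevance', 0.0) >= min_relevance
    ]
```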

Overall, the two movie challenges were a great way for me to get comfortable with IBM Discovery by playing around with its features and learning from other developers. The sentiment and emotion analysis of documents gives me a lot of cool ideas for side projects to work on in my free time, such as a program that predicts winning sports teams or one that matches roommates.


VietHTran

Guest Blogger


