Game Design Grad and Staff Collaborate on Metacritic Study
Published on Apr 18, 2013 by James Gregory
Metacritic is a go-to resource for millions of consumers, offering a convenient way to scan collected reviews of media releases. It can be a valuable tool to help you decide whether to put your money down on a new video game, or what to see during a night out at the movies - but how exactly are its scores aggregated?
That was the question on the minds of Full Sail Game Design graduate Scott Poorman and instructors Adams Greenwood-Ericksen and Roy Papp.
What started out as Poorman's thesis project while attending our Master's program turned into a buzz-worthy study that set out to determine the method the popular website uses to tally a final "Metascore." The three collaborators even presented their study at the 2013 Game Developers Conference (GDC), and we recently caught up with Adams Greenwood-Ericksen to learn more about their research.
Full Sail: What was the basis of your Metacritic study?
Adams Greenwood-Ericksen: Scott did a lot of research on his own, then together we broke it down into three points. The first was called the "validity assessment," where we said, "This is the process that they use, and these are potential areas where things can go wrong in that process."
Then we did a correlational analysis, where we looked at the relationship between sales and scores. As the game’s score went up, did its sales go up?
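A correlation of the kind described above can be sketched in a few lines. This is purely illustrative - the scores and sales figures below are invented, not data from the study - but it shows the basic question: do higher scores move together with higher sales?

```python
# Pearson correlation between review scores and sales (hypothetical data).
def pearson_r(xs, ys):
    """Return the Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented Metascores and unit sales (millions) for five games.
scores = [68, 75, 82, 90, 95]
sales = [0.4, 0.9, 1.5, 3.2, 5.1]
print(round(pearson_r(scores, sales), 2))
```

A coefficient near +1 would suggest scores and sales rise together; near 0, no linear relationship. Correlation alone can't say whether good scores cause sales or vice versa.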
The third part was to see if we could figure out what their weights might be – try to model this formula, plug some numbers in, and get it to work. We were reasonably comfortable with what we got, but it’s a model, so people should take it with a grain of salt.
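The kind of weighted-average formula the study tried to reverse-engineer can be sketched as follows. The outlet names and weights here are entirely made up - the study's whole point was that the real weights are not public - but this is the general shape of the model being fitted.

```python
# Sketch of a weighted-average score aggregation. The outlets and weights
# are hypothetical; they are NOT the weights Metacritic actually uses.
reviews = {"Outlet A": 90, "Outlet B": 80, "Outlet C": 70}
weights = {"Outlet A": 1.5, "Outlet B": 1.0, "Outlet C": 0.5}

def weighted_metascore(reviews, weights):
    """Weighted mean of review scores, rounded to a whole number."""
    total = sum(weights[outlet] * score for outlet, score in reviews.items())
    return round(total / sum(weights.values()))

print(weighted_metascore(reviews, weights))
```

Fitting such a model means adjusting the weights until the computed averages match the published Metascores across many games - which is why, as Greenwood-Ericksen notes, the result is an estimate to be taken with a grain of salt.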
FS: What kind of response did you get at GDC?
AG: I’ve never seen a session get that packed before; all the seats were filled a half hour before the presentation. The Q&A we did afterwards was great because a lot of really good questions were asked. All kinds of places have picked it up too, and I’m getting emails from studios asking for the data.
Then there was also Metacritic’s response to us, which contained so much new information. Even though they said we were off, what they gave us was really useful. We found out a lot more than had been available before, and we’re actually digging back into our model to find ways to improve it.
FS: What are you looking to do with the study from here?
AG: We’re going to refine it because we learned so much from seeing people’s response – especially Metacritic’s. We originally did it because it seemed like it would be kind of fun, but now we’re looking a lot closer at it because it’s so visible.