Monday, June 09, 2008

A Defense of Game Review Metrics

The subject of video game reviews turned out to be a hot topic last week. In addition to my post Reviewing and Scoring Video Games, there was a column on gamesetwatch by Simon Parkin and an interesting article at PopMatters by L.B. Jeffries. Jeffries makes an excellent point in distinguishing the majority of game reviews today from the type of real criticism that the industry could use a lot more of -- reviews are targeted towards consumers making purchasing decisions, while criticism is targeted towards the game makers themselves. According to Jeffries, most reviews today don't go beyond "this game is/isn't fun" to explore the whys that could help game developers make better games. Parkin, for his part, focuses on consumer-oriented reviews: he contrasts video game reviews with consumer electronics reviews, noting that an objective measure of quality isn't possible for games in the way that it is for consumer electronics. Instead, Parkin says, game review scores are really a measure of how well a game lives up to its pre-release hype, although consumers still view them as an objective measure of quality.

In reading the comments to these articles and others, I've gathered that many people agree with Parkin's point that attempts to objectively rate games are fundamentally flawed. While I agree that pure objectivity is impossible, there are a number of reasons why quantitative metrics are still worthwhile:

1) Quantitative metrics allow for searching and sorting. If a reader finds a critic he often agrees with, he can quickly find all games that the critic rated highly without having to skim the text of every review.

2) Quantitative metrics allow for algorithmic processing and analysis. Even though "wisdom of crowds" aggregation sites such as Metacritic are often flawed, one shouldn't condemn the entire concept -- most of Metacritic's problems are matters of implementation, not of principle. A critic's review scores could even be used to rate the critic himself. The potential applications are endless.

3) Multidimensional metrics provide a framework for the reviewer, hopefully improving consistency when assigning scores. Flaws in games are naturally more apparent when the game is judged from different perspectives, reducing the likelihood of a reviewer reflexively handing out a perfect score to a flawed game merely because it does some things better than any game which came before it.
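To make these three points concrete, here is a minimal sketch in Python. The critics, games, score dimensions, and the plain-average composite are all invented for illustration -- this is not how Metacritic or any real site computes its numbers.

```python
from statistics import mean

# Hypothetical reviews: each critic scores a game on a few dimensions (0-10).
reviews = [
    {"critic": "Alice", "game": "Starfall",   "design": 9, "story": 8, "polish": 7},
    {"critic": "Alice", "game": "Dune Racer", "design": 6, "story": 4, "polish": 8},
    {"critic": "Bob",   "game": "Starfall",   "design": 7, "story": 9, "polish": 6},
]

DIMENSIONS = ("design", "story", "polish")

def overall(review):
    """Composite score: a plain average of the dimension scores."""
    return mean(review[d] for d in DIMENSIONS)

# Point 1: searching and sorting -- everything Alice rated highly, best first.
alices_favorites = sorted(
    (r for r in reviews if r["critic"] == "Alice" and overall(r) >= 8),
    key=overall, reverse=True,
)

# Point 2: algorithmic processing -- a toy per-game aggregate across critics.
aggregate = {
    game: mean(overall(r) for r in reviews if r["game"] == game)
    for game in {r["game"] for r in reviews}
}

# Point 3: a multidimensional breakdown exposes weaknesses that a single
# composite number would hide.
for r in reviews:
    weak = [d for d in DIMENSIONS if r[d] <= 5]
    if weak and overall(r) >= 6:
        print(f'{r["critic"]} on {r["game"]}: decent overall, but weak in {weak}')
```

The same table of per-dimension scores could just as easily be turned back on the critics themselves, for example by measuring how far each one tends to sit from the per-game aggregate.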

Furthermore, enough people like review scores to prevent them from going away anytime soon, so we might as well spend a little time thinking about creating better metrics.

Out of all the critics of game review metrics, the group I most respect are those, like Jeffries, calling for more insightful criticism and fewer consumer-oriented reviews. I also consider this to be a very real problem, but it doesn't entirely preclude the use of metrics. Certainly there are many focused pieces of criticism that wouldn't gain anything from a numerical rating, but more macroscopic pieces that analyze the entire work could still benefit from quantitative metrics. I, for one, plan on writing reviews that utilize both metrics and, hopefully, insight.


2 Comments:

At 3:28 AM, Anonymous Anonymous said...

You're right, of course, that a scale of verdicts is useful for establishing an understanding of a reviewer's character and for sorting games at a basic level. I just don't believe that reviews need what is effectively a 100-point scale -- all that does is provide ammunition for silly forum fights over whether this platform-exclusive hit really deserved 0.1 better than that one. I think a progression of broader categories, perhaps "masterpiece," "good," "fair," and "poor," would serve much better -- with a few special categories like "flawed" or "disappointment" thrown in for special cases. In my opinion, the bigger the scorer's scale is, the more detail it needs to back it up. Once differences of 0.1 on a 10-point scale matter, it becomes important to rate graphics, controls, writing, acting, pacing, challenge, music, etc. individually so that if vastly different games are to be compared they can be compared in a meaningful way.
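A quick sketch of that banding idea, with purely illustrative labels and cutoffs:

```python
def verdict(score_out_of_10: float) -> str:
    """Collapse a fine-grained score into a broad verdict bucket.

    The labels and cutoffs are only an example of the banding idea above.
    """
    if score_out_of_10 >= 9.0:
        return "masterpiece"
    if score_out_of_10 >= 7.0:
        return "good"
    if score_out_of_10 >= 5.0:
        return "fair"
    return "poor"

# A 0.1 difference no longer gives anyone ammunition: both land in one bucket.
print(verdict(9.3), verdict(9.2))  # masterpiece masterpiece
```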

 
At 4:52 AM, Anonymous Anonymous said...

I want to first say that I too believe that games (and narratives) can be scored and that metrics are not altogether meaningless for this genre. After reading your post on an "L + N scoring" system, I thought it should be pointed out that this system has been tried many times in the past.

The ludus of a video game has typically been scored as its Gameplay / Replay value. The narrative portions more often fall under a "story" score -- with that score typically referring to the background story provided, rather than the story generated as a result of gameplay and player input. And for years you could look at the bottom of a review to see these, mixed in with Graphics / Sound / Controls and, of course, the final score that tries to average them all up into some kind of meaningful metric for the consumer.

Attempting to separate the ludus from the narrative of a video game is easy to do for shmups and hand-eye puzzle games, and impractical for sandbox / role-playing games, or even FPS games that offer more than a linear playthrough. Even in the minimalist example of a linear RPG, the video game is played like a 'choose-your-own-adventure' book. The player's actions determine the flow of the game, be it slow and explorative, random encounter farming, talking to everything that moves, etc. In any choose-your-own-adventure, the narrative is the sum total of the background story provided for the sandbox and the combined inputs of the player, as viewed both by an outsider not playing the game and by the player.

 
