A Defense of Game Review Metrics
The subject of video game reviews turned out to be a hot topic last week. In addition to my post Reviewing and Scoring Video Games, there was a column on GameSetWatch by Simon Parkin and an interesting article at PopMatters by L.B. Jeffries. Jeffries makes an excellent point in distinguishing the majority of game reviews today from the type of real criticism the industry could use a lot more of: reviews are targeted at consumers making purchasing decisions, while criticism is targeted at the game makers themselves. According to Jeffries, most reviews today don't go beyond "this game is/isn't fun" to explore the whys that could help game developers make better games.
Parkin also has a few points to make concerning consumer-oriented reviews. He contrasts video game reviews with consumer electronics reviews, noting that an objective measure of quality isn't possible for games in the way that it is for consumer electronics. Instead, Parkin says, game review scores are really a measure of how well a game lives up to its pre-release hype, although consumers still view them as an objective measure of quality.
In reading the comments to these articles and others, I've gathered that many people agree with Parkin's point that attempts to objectively rate games are fundamentally flawed. While I agree that pure objectivity is impossible, there are a number of reasons why quantitative metrics are still worthwhile:
1) Quantitative metrics allow for searching and sorting. If a reader finds a critic he often agrees with, he can quickly find all games that the critic rated highly without having to skim the text of every review.
2) Quantitative metrics allow for algorithmic processing and analysis. Even though "wisdom of crowds" aggregation sites such as Metacritic are often flawed, one shouldn't condemn the entire concept: most of Metacritic's problems stem from the site's implementation rather than from aggregation itself. A critic's review scores could even be turned around and used to rate the critic himself. The potential applications are endless.
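To make that last idea concrete, here is a minimal sketch of one way scores could rate the critic. Everything here is invented for illustration: the game names, the 0-100 scale, the aggregate numbers, and the choice of metric (mean absolute deviation from the aggregate) are all assumptions, just one of many possible approaches:

```python
# Hypothetical aggregate scores (0-100) for a handful of games, e.g. the
# mean across all critics. All names and numbers are invented.
aggregate = {"Game X": 85, "Game Y": 62, "Game Z": 91}

def critic_deviation(critic_scores, aggregate):
    """Mean absolute deviation of one critic's scores from the aggregate.

    A low value means the critic tracks the consensus; a high value means
    the critic is an outlier -- either of which might be exactly what a
    particular reader is looking for.
    """
    diffs = [abs(score - aggregate[game])
             for game, score in critic_scores.items()
             if game in aggregate]
    return sum(diffs) / len(diffs)

print(critic_deviation({"Game X": 90, "Game Y": 60, "Game Z": 70}, aggregate))
# -> roughly 9.33: this critic diverges from the consensus by about nine points
```

None of this is possible with prose-only reviews; it's the numbers that make the critic himself measurable.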
3) Multidimensional metrics provide a framework for the reviewer, hopefully improving consistency when assigning scores. Flaws in games are naturally more apparent when the game is judged from different perspectives, reducing the likelihood of a reviewer reflexively handing out a perfect score to a flawed game merely because it does some things better than any game which came before it.
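As a sketch of what such a framework might look like in practice (the dimensions, weights, and scores below are invented for illustration, not a proposal for any particular rubric):

```python
# Hypothetical rubric: four dimensions, each scored 0-10, with weights
# summing to 1. Both the dimensions and the weights are assumptions.
WEIGHTS = {"mechanics": 0.3, "presentation": 0.2, "narrative": 0.2, "polish": 0.3}

def overall_score(scores, weights=WEIGHTS):
    """Weighted average over a fixed rubric.

    Forcing a judgment on every dimension means a flaw on any one axis
    drags the total down, so a game can't reach a perfect score just by
    excelling somewhere else.
    """
    missing = set(weights) - set(scores)
    if missing:
        raise ValueError(f"unjudged dimensions: {missing}")
    return sum(weights[d] * scores[d] for d in weights)

# Innovative but unpolished: strong mechanics alone don't yield a 10.
print(overall_score({"mechanics": 10, "presentation": 9,
                     "narrative": 9, "polish": 6}))  # about 8.4 out of 10
```

The design choice worth noting is the error on missing dimensions: the rubric only improves consistency if the reviewer is obliged to confront every axis, including the ones where the game is weak.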
Furthermore, enough people like review scores to prevent them from going away anytime soon, so we might as well spend a little time thinking about creating better metrics.
Out of all the critics of game review metrics, the group I most respect is the one, like Jeffries, calling for more insightful criticism and fewer consumer-oriented reviews. I consider this a very real problem too, but it doesn't entirely preclude the use of metrics. Certainly there are many focused pieces of criticism that would gain nothing from a numerical rating, but more macroscopic pieces that analyze the entire work could still benefit a great deal from quantitative metrics. I, for one, plan on writing reviews that utilize both metrics and, hopefully, insight.