As promised, here is my own personal viewpoint on why the new review system is not the amazing product Gamespot wanted it to be. I will approach my argument from two perspectives: first from the angle of a quick-reading gamer who doesn't want a full-blown essay review, and second from that of a gamer who has been with Gamespot for several years (both of which apply to myself). I intend to primarily address those aspects of the previous review system which have been dropped for the new Emblem, 0.5-increment system. For those keeping track, these are all the components still available for use in user reviews (component scoring, difficulty rating, learning curve, etc.).
When unveiling this new system, Gamespot said the reasoning for such a makeover was that many games of today's era cannot simply be reviewed by a weighted averaging system ("You don't need us to tell you that graphics in Guitar Hero aren't all that important"). This argument is indeed valid for a handful of games, most of which possess a special controller of some sort (guitar, gun, DS and Wii games in general), or which rely heavily on addictive gameplay alone (the WarioWare games immediately leap to mind). However, there is a plethora of games reviewed under the old system that still received a proper score...why? Two reasons: the reviewer was usually less critical of the other components that didn't really matter, and the existence of the Tilt value.
You could just as easily argue that most if not all RPGs didn't really fit the averaged-score scheme, since there was no category for "Story". But that isn't true. Value and Tilt always took an RPG's story, a rhythm game's playability, or a Wario game's quirkiness into consideration. Tilt also factored in how much fun the particular reviewer had playing the game, while at the same time controlling how greatly that reviewer's own opinion affected the score. Gamespot will deny this, but really look at the trends over the past few years for each reviewer. Jeff Gerstmann loves racing games and was always prone to giving them a stronger nudge. Alex Navarro, the self-proclaimed bubble-gum gamer, wasn't always hyper-critical of a game's flaws (such as those in the Smackdown series) because despite those flaws the games were just plain fun. The recently departed Greg Kasavin always boosted a game's rating with the Tilt value if the game tried to do something inventive (Killer 7 comes immediately to mind). He also had a thing for RPGs.
Sure, Tilt skewed the score, but knowing by how much was invaluable to a person looking for a good game. The same holds true for the scores for Graphics, Gameplay, Sound, and Value. Some gamers won't play a shooter unless it's uber-pretty. Some RPG fans won't play one that has a so-so soundtrack. Some action/adventure gamers won't buy a game that doesn't last past 8 hours. Still others don't care about these things. Having individual scores for each judgeable category allowed gamers to immediately see what they were looking for.
The addition of the Good/Bad sections enhanced this further, as gamers could see exactly what boosted or lowered any given statistic. Say the voicework was amazing but the soundtrack was not so great, and the Sound score came out an 8. If I prefer voicework, I know I've found a winner. If I'm a soundtrack nut: "Eh, I think I'll pass, because it looks like that 8 is ballooned by something that's nice but not all that essential." Or perhaps: "Hey, why is the Value/Tilt so low for an RPG with such great gameplay? Oh, the story blows! It's Grandia all over again. No thanks." Or lastly: "What good are great controls and sounds if I can't see the head of what I'm shooting?"
Under that scheme, every single game received an excellent buffer treatment which readers could access within 10 seconds of viewing the review page, no matter what genre the game belonged to. Sure, the overall score mattered, but not nearly as much as how that overall score was reached and what it implied. I was comfortable accepting that score because I saw where it came from, and more importantly knew that the reviewer's opinion was being held in check. And again, I could see all of this rather quickly. If I had any concerns or issues, I'd scroll down and read the review.
Cue the new review system. Suddenly there is one overall score, followed by a bunch of emblems which are cryptic in appearance and more often than not parrot the Good/Bad sections. The emblems themselves aren't necessarily all that bad. They've grown on me from what little experience I've had with them, and they help address issues which can bother or impress a gamer (in-game adverts, artistic graphics, great story, etc.). The problem is that these emblems only work for games that receive a review score between 8-10 or 0-6. Those games accumulate a TON of badges, as the evidence has shown. Unfortunately, games that land between 6-8 on the review scale barely receive any badges. With the average being around two badges for these games, there isn't really any upfront information for them.
Yes, these games all fall into that "mediocre-to-good" range, but some gamers will still be willing to play them if they meet their own gaming needs. I myself usually wouldn't look at a shooter that received lower than a 9 in Graphics (which, under the old system, could greatly affect the overall review score). With RPGs, on the other hand, the most important thing to me was that the game was playable enough for me to enjoy a good story. Thus, RPGs that landed a 7.0 were still on my radar if they had a decent story: not necessarily a badge-worthy one, but a decent one. FF1 is one such game. Gamespot gave it zero badges, and the only "useful" upfront info for the game is that it's showing its age and that it's a classic. ?!?!....
More importantly, the emblems have absolutely zero context to operate within. "So great, the voice acting is awesome but the music is repetitive. The game got an overall score of 8. How is the sound reflected in that? Wait, the graphics are artistic, the gameplay is so-so. How'd this game get an 8? I thought gameplay was a must for a platformer...they don't say anything about the story or game length...I'm confused."
To further my point, take a game like EVE Online. It probably would've received three badges: one for a huge world, one for excellence in graphics, and one for extremely slow pacing. Now remember, there are no component scores here, so EVE would've received either a 6.5 or a 7.0 under the new system. Combine that with three badges and probably the following under the Good/Bad segments: Good - best-looking online RPG to date, with an enormous world; Bad - the game's size fails to suck you in early, convoluted controls and slow pacing, lacks the customization wanted in an MMO. Great...how does this help me?
Now throw a game like City of Heroes in and compare the two reviews under the new scheme. CoH is a smallish world with amazing customization options for your avatar, but an extremely repetitive battle system which gets boring after a month's worth of play. Yet CoH gets an 8.0 or 8.5, and probably more badges, both good and bad. Which is better? EVE has an insane online community. CoH's is far more laid back. Both have repetitive battle schemes, but EVE can have battles with over 200 ships in a given area. One is sci-fi, the other superheroes in a more advanced world. Remove that one number from their review pages, and CoH and EVE look like identical games; your own preferences would have to decide which to play. Unfortunately, all you have is a handful of badges to help you make that decision. There is no upfront assistance.
And that is the biggest problem I have with this new system. There is no upfront assistance that is useful. Take away the overall score on any two games in the same genre whose scores land between 6.0-8.0, and they look like the same game. That is ambiguity, not quick and succinct information. Also, I no longer know how certain things affected that overall score. Is the reviewer really being objective? I'm not accusing Gamespot of just slapping scores on games based on how much they like them, but all reviews are to a great extent subjective in nature. At least under the old system, I could see how subjective with the Tilt value. More importantly, I could see how the score was reached.
Other sites, like Gametrailers and IGN, do not average the scores they place on games. But they do at least provide component scores. IGN covers categories similar to the ones Gamespot used to, while Gametrailers picks the three categories which affected their rating the most and shows their individual scores. I find I trust their reviews more because of this, as I can see that it's not just a subjective take; they are being open about what turned them on to a game and what scared them away. Sure, the scores aren't averaged, but at least I can make my own decision on how important each score is. Neither of them is as good as Gamespot's old system, but hey, that's just my opinion.
In short, I, like many others, am all for the emblems being used in combination with the component scores for Sound, Graphics, Gameplay, Value, and Tilt. I do want the averaged score back, but I could settle for the current approach of 0.5 increments and unaveraged overall scores if I had those individual scores to go with it. I don't think I'm the only one asking for this, as many less articulate and more offensive posts across Gamespot can attest. Yes, we are the posting minority. But as Nappan and Shrek have pointed out, Gamespot typically sees a 50/50 split of opinion among its posting users over updates and changes. That has not been the case with these updates, especially if you remove all those posts that were pro-change before it even occurred.
I'm done ranting and raving. I'll review games the way I like to, using the system I trust (and I hope it doesn't leave as well). For those GS admins who read this, thanks for paying attention this long. You won't hear from me again about all this.
As always, the peanut gallery has spoken.