
Dead Pixels: Why game graphics don't matter any more

Minecraft is the best selling game of all time - despite looking like it was made in the 1990s

The quality of graphics has been central to games companies' sales pitches since the beginning of the video game industry.

If you go back to the 1980s, both Nintendo and Sega were offering 8-bit consoles (the NES and the Master System).

8-bit referred to the amount of data the processor could handle at any one time. And that limited the number of colours that could be shown on screen, the amount of detail and movement that could happen in the game, and the number of tones and notes that could be played in the music.

And that’s why this generation of games is really pixelated and basic-looking, it’s why the characters in the game tend to be very limited in what they can do – it’s almost like they’re on rails sometimes – and it’s why you had that really basic, bleepy chip music in games like Super Mario Bros.

But then, in the late 80s, Sega put the cat among the pigeons by launching its 16-bit console – the Mega Drive. Those eight extra bits meant its processor could handle a far bigger palette of colours, and the complexity of a game – and its music and sound effects – could be much greater than before.
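The jump from 8 to 16 bits is bigger than it sounds, because each extra bit doubles the number of distinct values a single word can hold. A quick illustrative sketch – a simplification, since real consoles split those bits across graphics, sound and game logic in different ways:

```python
# Each extra bit doubles the number of distinct values a word can
# represent: an n-bit word holds 2**n possible values.
for bits in (8, 16, 32, 64):
    print(f"{bits}-bit word: {2 ** bits:,} distinct values")
```

So a 16-bit word can take 65,536 values against an 8-bit word's 256 – a 256-fold jump, not a doubling.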

That prompted Nintendo to respond with its SNES – the Super Nintendo Entertainment System – which was also 16-bit. And suddenly the games people could play looked and sounded sharper than before.

A few years later Sega again upped the ante with the Sega Saturn, which was 32-bit. Again, this dramatically increased the amount of detail a game could contain - and here is where we start to see real 3D games become possible (although they’re very blocky at this stage).

In response, Nintendo hit back with the N64 – which, as the name suggests, is 64-bit – so capable of even more detail than before.

And this kind of tit-for-tat arms race continued in the game industry for years.

When Sega, Nintendo – and then Sony and Microsoft – had more or less maxed out on bits, they started talking about how many pixels their machines could fit on screen instead. Or how fast the graphics card was, or how many cores the processor had, or how much RAM was built in, or ray tracing, or fidelity.

And all of this was in service of how detailed the images in the game could get – with game and console unveilings becoming increasingly focused on things like realistic skin textures and in-game physics, in an attempt to get things looking as real as possible.

So what’s happened?


Well, for a start, the return on investment has dipped considerably.

When you look back at the jump in quality between an NES game and a SNES game, or a PlayStation game and a PlayStation 2 game, you can see (and hear) a really significant difference.

But now, if you compare what the PlayStation 4 is capable of with what the PlayStation 5 can do, the difference is not so apparent. There is one, but it’s not nearly as impressive as it once was. One might be capable of very realistic images, but the other is capable of quite realistic images.

And when it gets to the point where you’re homing in on the textures of the clothing a character is wearing, or how the beams of light shine through the trees, you’re probably working at close to the top of what’s required to make a game enjoyable.

A recent ad for an enhanced version of a Spider-Man game for the PlayStation 5 Pro highlights this. In it, developers talk about how, as you swing through New York, you can now see very clear, realistic reflections of Spider-Man in the windows of the skyscrapers. Nice, sure, but is it really going to make or break the game experience?

Sure enough, it turns out that the vast majority of people just don’t really care about that – not enough to change their mind about whether they buy a game or not... and certainly not enough to convince them to spend hundreds of euro on a new console.

And that’s a big part of the problem – these high-end, Triple-A games are not cheap – and neither are the machines you need to play them.

They’re not cheap for the games companies either.

That step from PS4 graphics to PS5 graphics might not have seemed revolutionary, but the development of the console cost Sony hundreds of millions – or maybe even billions - of dollars. The same goes for Microsoft and its Xbox console. And both companies are believed to have initially sold their consoles at a loss in order to gain market share.

Meanwhile the developers who were making the actual games would have spent a fortune doing so too, because big games now have Hollywood-style budgets. That Spider-Man 2 game, which was made by a Sony subsidiary, is said to have cost around $300m to develop, for example.

And questions are now being asked about whether it’s worth that level of investment for such marginal gains. After all, games development is hugely risky. Sony’s Concord cost $200m to make and was a massive flop, resulting in the company pulling it within days of launch.

Warner Bros is also said to have taken a $200m bath on a game based on the Suicide Squad from the DC Comics universe.

Are consumers looking for something different, too?

Definitely – because while there was traditionally a push from gamers for bigger and better and more realistic games, that’s not really the case any more.

Even among the more hardcore gamers, there’s a growing realisation that it’s about the experience the game offers rather than the graphics. That’s partially down to the fact that so many of the games that promised amazing graphics actually ended up being disappointing – or just no fun to play.

But it’s also because the gaming audience has grown dramatically in the past decade or so – it's no longer characterised by males in their teens and 20s holed up in their bedrooms... it’s now something that has players of all genders, all ages, and all abilities.

And many of those aren’t looking for the most challenging, technical game possible – something that will take them hours and hours to complete and master. They’re maybe looking for something they can dip in and out of, or that they can play with friends and family – either in the room together or over the internet.

But Sony and Microsoft – and a lot of other developers - were slow to react to this.

They continued to fight the console war, bringing out their own cutting edge, exclusive titles as a way of convincing gamers to buy their console. Lots of developers also kept bringing out games that required the latest console, or an expensive, high-end gaming PC, to enjoy.

But at the same time, there was a separate segment of the industry that experienced massive growth, because it did the opposite – and focused on making itself accessible to as many people as possible.

What do you mean?

Well if you look at some of the most popular games today – and even of all time – they're not the ones with the most impressive graphics.

Minecraft is the best selling game of all time, with more than 300 million sales in its 13-and-a-bit years of existence. That represents billions and billions of euro of revenue – and that’s before you count all of the in-game purchases, merchandise sales and so on.

But if you were judging it by graphics alone, the game never should have been a hit. And it wasn’t action-packed or high-octane – it was a relatively slow, quiet game. Even by the standards of 2011, it looked extremely basic – more like something you’d have seen on the NES or SNES back in the 80s or 90s.

But it turned out that none of that really mattered – if anything, those limitations added to the charm of the game. What mattered was that the game was fun to play.

Just as importantly, it was also accessible to any type of player. You didn’t need to have a specific console or a powerful PC to play it; that opened the door to casual and young users who didn’t have hundreds or thousands of euro to spend on their rig.

The same goes for practically all of the biggest games of today.

Fortnite, PUBG, Roblox – none of them look amazing, but they’re fun to play and they’re accessible to users.

And they’ve largely taken what you might call the Netflix approach to the platforms – so rather than tying themselves as an exclusive to one machine, they’re instead making sure they’re available everywhere. That includes older PCs, last generation consoles and smartphones and tablets.

And that gives them access to a much bigger audience than would have been possible otherwise.

You say Sony and Microsoft were slow to react to this – what about Nintendo?

Nintendo were well ahead of the curve on this shift away from the graphics arms race.

Because after its GameCube lost the battle against Sony’s PlayStation 2, despite technically being a more powerful console, Nintendo decided to take a completely different approach.

And in 2006, while Sony and Microsoft were launching beefed up, new consoles, it came along with the Wii – which was essentially a generation behind in terms of its computing power.

But Nintendo focused instead on the novelty of building in motion controls – reckoning that this feature would draw in a wider audience beyond the hardcore gamer. And the fact that it wasn’t using the latest and greatest hardware meant Nintendo could sell the console for much less than its rivals charged – and at a profit, too.

And it was a good bet – because while the Xbox 360 and PS3 were pitching at the hardcore gamer, the Wii became a family console. And it was so cheap, relatively speaking, and had enough good games on it, that it also became a second console for even the hardcore.

And in the end the Wii out-sold the PS3 and Xbox 360 by around 14 million units over its lifetime. The motion controls were initially dismissed as a gimmick, but both Microsoft and Sony then rushed to add similar features to their own consoles.

Now, in fairness, it should be said that Nintendo then followed up the Wii with the Wii U, which was a total flop – but it then rebounded with the Switch in 2017.

And again, it followed the formula of being less powerful, but cheaper than rivals – with the focus on making it easy and fun to play – and it’s worked.

The Switch is now the third best selling console of all time, only slightly behind Nintendo’s own DS handheld and the PlayStation 2 – the sales of which benefitted from the fact that it could also double as a DVD player.

The recently-announced Switch 2 is also hotly anticipated – even though it looks as though it won’t compete power-wise with the consoles Sony and Microsoft have had on the market for five years now.

I should say, though, it’s not all about being well priced and the unique controls. It’s also benefitted from Nintendo knowing how to make great games.

Its Super Mario and Mario Kart games are always great fun, and always sell well – Animal Crossing has been a huge hit, and an example of the kind of gentle gaming that more and more people are being drawn to.

And the likes of Legend of Zelda: Breath of the Wild have been critically acclaimed and huge sellers – and they look amazing as well, which kind of undermines whatever argument there is for having the most powerful console possible.

So neither the Wii nor the Switch would have been a success without Nintendo backing them up with good games – but that only underlines the realisation many gamers are having: graphics are not everything.