By now, nearly everyone with an internet connection has heard something about the Xbox One reveal. Most of my next-gen predictions were proven correct as I read several round-ups on the matter: the usual banter about hyper-speed processing and superior graphics? Check. Profile-specific purchases and the supposed downfall of the used games market? Check. A baffling denial of an always-online console despite the need for a persistent internet connection? Check. Every console will ship with a Kinect and there will be a push for more games/features to become Kinect-enabled… wait, really?
From my first glance at the screenshots and press releases, I had assumed there would be some sort of Kinect accessory, but I was rather surprised to hear that Microsoft decided to make Kinect ownership mandatory for the next console generation. Given the excellent sales and consumer reaction to the device, maybe such a development is not so surprising. Much of my initial shock was based on my own experiences with the Kinect.
Last year, Laura and I borrowed the motion-sensing accessory from a friend of ours, mainly to try a variety of demos and see just how different Skyrim was with actual shouting. For the most part, I found the Kinect to be an interesting but ultimately gimmicky toy. Navigating through virtual environments using my own gestures and movement was engaging at times, but thanks to regular sensor re-adjustment and frequent delays, I never felt fully immersed in the experience. These problems were compounded by the space restrictions of GIMMGP Headquarters, which provided roughly ten feet between the television stand and the couch. Plus, the lack of a controller in hand just felt wrong to my doddering old gamer self. Slashing at enemies with my arms, running in place, and jumping to avoid obstacles couldn't replace the comfortable heft of a traditional controller; my anchor to the video game world had vanished.
Long before developers were including force-feedback technology and pushing motion controls for every console, all of the gamers I knew were already having their sense of touch and spatial recognition engaged by video games. The most common example could be seen whenever we would gather to play Mario Kart 64 (or any racing game, really). As each of us steered through a hard turn, we would twist the controllers and lean our bodies into the curve. These motions did nothing to affect our in-game performance, but that didn’t matter; a connection had been made. Controller and screen melted away, and the weight of our bodies in the kart would register with our characters’ movements.
These sorts of spatial connections would also occur whenever I played a game with pushing or pulling. The crate puzzles in Ocarina of Time task the player with moving boxes around, normally under a time limit. Whenever Link struggled against a block, the resistance he met became palpable. I would lean forward and put extra pressure on the analog stick, straining against the drag of a massive crate. Watching Laura play Katamari Damacy, I saw her performing similar actions: leaning her body, pushing the controller toward the screen, knowing in her heart that this helped the massive ball of junk lurch forward. With the Kinect, I felt none of that immersion. Even when pantomiming a push or pull, or leaning into a curve, I was met with no resistance, nothing tactile to provide a response. Without the anchor of a controller to translate my movements, I felt like I was flailing about in an open space with no bearing on the in-game world.
From everything I have read about the new Kinect sensor, it seems that Microsoft is trying to enhance the gaming experience through an improved sensor and biometric readings. The upgraded Kinect will be able to better detect player movement and facial features, along with estimating heart rate through a variety of factors. These readings could be used to change game difficulty on the fly, or alter in-game achievements over time to better suit different play styles. While these science fiction flourishes may be impressive to some, the thought of a persistent camera monitoring my every move does not scream immersion to me; it screams Big Brother and paranoia. Unless the improved sensor can recreate the moments of spatial connection and total engagement that I found with a traditional console, I will just let the controller live up to its namesake and stick with the classics.
I’ve been playing Dragon Age II (very slowly) for a couple of weeks now and I think, finally, we’ve clicked. How do I know? Because the gameplay and my characters from the game pop into my thoughts when I’m not playing. And when that happens, distracting as it may be, I start thinking about where I’m going to go and what I’m going to do next in the game.
And then I start thinking about just playing the game – being in my house, controller in hand, calm and comfortable, ready to explore the unknown. [happy sigh]
How do you know when you’ve hit your stride with a game? Is it love at first play, or does it take a while to build up a relationship?
It is so strange to think that, over time, nearly all innovations become commonplace fixtures. Whenever new technology comes along to shake things up for the better, the whole world marvels at the novelty, eager to own a fresh gadget (or condemn it as witchcraft). But over time, what seemed so brilliant and beneficial becomes yet another part of our everyday routine. For us gamers, the various leaps in visual technology over the last two decades are a perfect example of this sentiment. I often take for granted that the sorts of images today’s consoles can produce would blow my childhood mind to pieces. On an even more basic level, just the thought of a controller that vibrates would have been some sort of voodoo to my twelve-year-old self.
In the early part of 1997, my brother and I received a promotional video from Nintendo Power. Upon this VHS tape was a laughably bad dramatization of a kidnapped Nintendo employee spilling the beans to goons from Sony and Sega about Nintendo’s next top secret project. This “classified information” was none other than Star Fox 64, the soon-to-be summer blockbuster for the Nintendo 64. After seeing plenty of screenshots for Star Fox in previous issues of Nintendo Power, my brother and I were psyched to see the game in action. The gameplay footage on the video sold us on Star Fox 64 before it was even released, but little did we know that the odd peripheral featured on the tape would change our lives far more than some vulpine pilot.
The Rumble Pack debuted alongside Star Fox 64 in this silly promo video. The captive Nintendo rep bragged that the new accessory would let players “feel the game,” and that the Rumble Pack would make Star Fox 64 “the coolest cinematic gaming experience out there.” He explained that if an Arwing took a hit or if you dropped a Nova Bomb on screen, the Rumble Pack would cause the controller to vibrate rapidly in the player’s hands. As we watched the Sony and Sega goons convulse in overdramatic response to the shaking controllers, my brother and I wanted so desperately to experience the Rumble Pack for ourselves. With our enthusiasm running at full steam, my brother made sure that Star Fox 64 was at the top of his birthday list.
While this all seems a bit overhyped compared to the technology of today, keep in mind, this was the first time that a player would receive force feedback through a controller. We didn’t have your fancy built-in rumble technology back in my day! At its release, Nintendo could have further drained customer pockets by selling the Rumble Pack as a separate entity, but this was new ground in the video game market, so the Big N had to tread carefully. To ensure that every player who bought Star Fox 64 would receive such an immersive experience, Nintendo bundled the little gray block in every copy of the game. This was such a smart move, since Nintendo could market the product as a sort of “two-for-one” deal, as well as a new gaming experience.
Needless to say, my brother and I were floored by the Rumble Pack. We each played through the single-player campaign of Star Fox 64, the two of us reacting just as foolishly as the actors from the promo video, shaking and looking at each other with slack jaws and stunned gazes. When it came time for multiplayer battles, we would flip a coin to decide who would get the Rumble Pack to start with, and each subsequent use was determined by the victor. We even set the controller down during vibration-heavy cutscenes, just to watch it rumble across the ground as some enemy battle cruiser disintegrated on-screen, an entire year before Psycho Mantis was ordering players to do so in Metal Gear Solid.
Initially, this idea of shaking controllers seemed like a passing novelty; something that would only work with Star Fox 64, maybe a handful of future Nintendo 64 releases, and then we would all move on to some other hardware fad. Even so, the Rumble Pack never left its controller; the first player slot had a permanent addition in our household. Other companies took notice of this technology, and sure enough, Sony was rolling out the Dual Shock controller only months later in Japan. With the debut of built-in haptic technology on the Sony PlayStation, force feedback controllers were here to stay.
Note: I’m using ‘open world’ in a very broad sense and discuss games that are not really open world at all (like Mass Effect and LA Noire) and are merely non-linear or offer a degree of exploration. The reason for this is that my main focus here is to look at why these elements are becoming so popular in the games industry over strict linearity, so the distinction between true open world games and those with open world elements isn’t particularly important for my purposes.
If you follow my blog at all, you probably know that I love me some of that open world action. In the last decade in particular, the number of open world games has been on the rise, and some of them have been incredible. However, it does seem that more and more these days, developers are turning to an open world or sandbox structure, and though I wouldn’t necessarily say it’s become the norm, it’s certainly getting that way. Consider how many AAA titles in the past few years have been open world games: Red Dead Redemption, any of the GTA games, Skyrim, the new Tomb Raider and Far Cry 3, to name just a few. Old franchises that weren’t previously open world have switched over to this structure. Sequels of franchises that were open world in a more limited sense have been lauded as bigger and better with each new instalment, like Assassin’s Creed. Developers have bragged about the size of their maps as if that somehow means the game is now better. To many, it seems that having increasingly expansive worlds has become linked with quality and innovation in a game. My question is, can the narrative or any other element of a game suffer from, in essence, being too open world and expansive? The short answer is, in my opinion, a resounding yes. To be clear, my point isn’t that developers should stop making open world games or stop trying to push the limits of how expansive a game can be; done well, these games are often innovative, entertaining, immersive and creative, and can enhance both story and gameplay. Done badly, the results can be at best boring and at worst game-breaking. To that end, I think developers need to be a little more cautious in deciding whether a game should be open world at all, since that structure doesn’t automatically make for a better game, and they should be careful to balance that openness against the other elements they think are important.
Firstly, open world does not equal more fun. That seems to be the premise for developers jumping on the open world bandwagon, and I think it’s simply not true. I’m not talking about the fact that massive maps can often be quite daunting (especially for completionists), because that’s something you can get over once you become more absorbed in the world. The truth is, many open world games can be quite boring, especially those filled with travelling and fetch quests. I’m sure most of you have played open world games like Assassin’s Creed where you get to take full advantage of the amazing scenery on your long rides into the next area. The problem is that it gets old pretty quickly. There’s only so much time I can spend watching my character ride around on a horse, no matter how heroically they do it. Of course, many games get around this issue by introducing fast travel, but if everything is so far apart (and many games do this) that you have to constantly fast travel, it begs the question: what’s the point of having all this wonderful, expansive world? It’s not like you’re seeing much of it. There are also games that feature huge maps but have no fast travel, or that still require you to travel excruciatingly long distances with mind-numbingly boring ways of getting from Point A to Point B, which is even worse. It’s true that it’s difficult to make something as repetitive as driving or riding fun, and I can only think of a few games that have done it really well, Far Cry 3 being one. The problem, then, is not simply with making a game open world, but with making games so massive that no real thought goes into how a player will traverse that territory in a way that is fun and still takes advantage of all the areas you’ve created.
Having an unbelievable and, frankly, daunting number of collectibles and loot items is another common feature of open world games that can make the experience more boring than fun. I have nothing against collecting or looting items as such. It can be a fun addition that takes advantage of an expansive world and adds optional content, giving you more fun things to do. Done wrong, however, it can end up feeling like collecting for the sake of collecting: a repetitive exercise that doesn’t add anything significant to the game other than more hours logged. In games like Far Cry 3, it doesn’t bother me too much, because it’s completely optional and you wouldn’t really miss out on anything by not collecting everything, other than a few extra weapons, for instance. Games that manage to work the collectibles into the main story also work well, because collecting becomes less of a pointless, repetitive exercise. What does irritate me is when the collectibles are artificially made an important element of the game, forcing you to traverse the whole map. Sure, you don’t have to collect all the items, but then you would be missing out. That’s how I felt about the voxophones in BioShock Infinite. They didn’t just tell you back story; they told you crucial parts of the main story, or at least information that I doubt anyone would voluntarily choose to miss. It feels like I’m being punished for not exploring, which should surely be up to the player. Adding in as much exploration as possible and padding the game with tons of extra items and loot doesn’t automatically make a game fun. How to implement exploration is just as important as deciding to include it in the first place, and that’s something I feel developers sometimes forget.
Secondly, I don’t think that all genres or stories are inherently suited to being open world. Not all first-person shooters, for instance, would benefit from the open world format. Much as I adore Mass Effect (that’s probably one of the biggest understatements I have ever made right there), I do feel that the first two games suffered from attempting to balance action and exploration, and as a result fell a bit short on both at times. It’s a difficult line to walk, and I’m not suggesting that they should have cut out either (God no!). My point is just that it is difficult to balance exploration with the other elements of a game, and that thought needs to be put into how to do that, or into whether it would enhance the experience at all. There are many games where the open world elements feel completely superfluous, like LA Noire. Driving around and completing little side quests frankly felt like a chore and took away from the important parts of the game. Rather than adding to the experience, those extra elements just feel pointless, and you end up either ignoring them or grinding through them. Not only is there no need to add open world elements to every game; doing so can end up diluting what would otherwise have been a fantastic experience on its own.
Thirdly, games with huge open worlds can sometimes suffer visually as well, with each area having less detail than those of more linear games or games with smaller maps. For some games it doesn’t matter; in sandbox games such as Minecraft or Dwarf Fortress, the point is to have an extremely expansive world that you can control or manipulate, and the graphics are deliberately basic for that reason. However, the bigger the world, the more likely it is to contain horrendous glitches. Red Dead Redemption, for instance, has some absolutely hilarious ones (check them out on YouTube). There is also often less to do than it first appears. In many open world or non-linear games there are big open spaces to travel between or explore, but there’s actually very little in them. It’s mainly an illusion of space, and all the items and quests could have been packed into a much smaller area rather than forcing you to traverse the map just to collect one little thing. Bigger is definitely not better, and if a huge map has very little actually going on in its various parts, then personally I would prefer a game with more detail packed into a smaller map. That doesn’t mean open world games are inherently less detailed. Batman: Arkham Asylum and Far Cry 3 did very well at packing tons of detail into fairly large areas. It’s all about balance, and not simply expanding for the sake of expanding.
Lastly, and most importantly to me, the story can suffer when open world elements are added for no reason, and the result is a less coherent world. Too many side quests can detract from the apparent urgency of the main plot, make it harder to suspend disbelief, and even lead to narrative inconsistencies. I’m sure everyone has come across a point in an open world or non-linear game where a character tells you, ‘Quick! Get to the next area and talk to so-and-so or we’ll all die! We’re counting on you!’ Instead of taking this to heart, your character wanders around for the next three hours collecting things, talking to NPCs and ‘exploring’. It can lead to a feeling of disconnect when you finally do continue with the main mission, only for everyone to act like you weren’t just a complete douchebag for abandoning them in their time of need. Running around talking to tons of characters can also mean that the characters you do meet are less developed. The benefit of more linear games is that it’s easier to follow specific characters around, and more time is dedicated to getting to know them. Here there is a major difference between a non-linear game and a truly open world game: the more open the world, the more these dangers exist. Sometimes having millions of possibilities can feel more like a lack of direction, and that can take away from the main narrative. It’s no surprise that many of the games with the best stories are linear, or at least more linear than a fully open world or sandbox game, although of course not exclusively. As many point out, there’s absolutely nothing wrong with a story being linear. In fact, linear stories are tried and true. When was the last time you read a non-linear book? It’s all about finding that balance and figuring out whether your story would benefit from your game being non-linear. Personally, I don’t think games should be open world unless there’s a real reason for it; in other words, unless it would really further the plot, or the exploration aspect of the game is simply more important.
There seems to be an obsession with open world games, or at least with having some open world elements. Personally, I think developers need to be more cautious, and I don’t like the trend of making games bigger or giving them more options simply for the sake of it. Of course, in the end it comes down to enjoyment, and for some people exploration is more important. If you’re enjoying yourself, that’s what matters, and there are many games that incorporate or focus on exploration, do it very well, and are no less important than games that depend on a tightly told narrative. Those two types of games are also not mutually exclusive. At the same time, I think it will be a while before we see a game that is both truly narratively strong and truly open world. That doesn’t mean we should give up, but there does need to be more awareness that there is a balancing act going on, or at least that a decision has to be made about what to sacrifice. I welcome more open world games, but I also think it’s a pit trap that many a good game could fall into, never to return.
As someone who grew up playing video games, I often take for granted just how many tropes have become commonplace to me. Recently, I was watching my wife’s first play-through of Aladdin on the Super Nintendo. While she was playing, Laura became quite upset at the idea that enemies were respawning the moment she wandered off-screen. “It makes no sense! When I kill them, they should stay dead!”
Initially, I tried to explain to Laura that, traditionally, enemies respawn once out of sight; that’s just how things work. I gave her examples of older games and how often this system of enemy placement occurred. But the longer I tried to justify the idea, the more I realized that my better half was right. It does not make any sort of sense that an obstacle, once removed, would immediately return the moment you look away from it. Respawning enemies may provide the player with a greater challenge, but this dated idea can break the game narrative and frustrate a newcomer into giving up on gaming completely.
For so many years, I had grown to accept that enemies would reappear the moment I moved backward or forward in a game. This is not surprising, since most of my earliest experiences with video games were the side-scrolling titles on the Nintendo Entertainment System. When Mario tottered too far left or right in his flagship adventure, a fresh Goomba or Koopa Troopa would appear where he had previously defeated one. Somewhere in my young mind, I must have rationalized that since Bowser did have an entire horde of minions at his disposal, it only makes sense that new enemies would appear where old foes had fallen.
An interesting side-effect of these respawning enemies was that the player often had the option to avoid combat completely while journeying across the screen. In Mega Man 2, my brother and I would use the temporary invincibility granted after taking damage (another odd gaming trope) to bypass whole swaths of obstacles. Part of learning these older games was figuring out when discretion was the better part of valor. Once you determined that enemies were going to keep reappearing no matter how often you defeated them, the new goal became passing through an area as quickly as possible.
As time passed, and games required more time, staff, and piles of money to produce, the idea that a player would avoid battles to progress became an issue for the industry. Additionally, worlds were being built in glorious three dimensions, so the player had even more space to avoid combat and race to the finish. When so much effort had been put into the character models and combat systems of a game, developers needed a new method to keep players engaged for longer periods of time. Then, an idea struck: what if players had to defeat all enemies in an area to move forward? From this notion, a strong fork appeared in action/adventure games. One path led to titles such as Super Mario 64 and Spyro the Dragon, where the focus was on collecting items in an open world and combat was only obligatory when facing a boss. The other path led to games like Devil May Cry and God of War, where elaborate combat and cinematic battles were kings of design.
The idea was simple: action games would provide players with a story, which would be doled out in-between grand battles that could not be avoided. A hero on a quest would logically encounter obstacles in the form of enemies, but to ensure that a player would take care of these foes, magical or environmental barriers would be placed and only removed once threats were eliminated. When Dante is faced with a gaggle of evil puppet demons, a magic seal is placed over the exits until all of the marionettes are defeated. This system of forced battles and enemy placement became the new norm, and a slew of series adopted the trope. But this seemingly novel approach actually made its debut in the arcades many years earlier.
For a time, before Street Fighter II overtook the arcades and turned them into virtual dojos, the side-scrolling brawler ruled the screens. Games like Double Dragon and Final Fight gated player progress with obligatory battles long before modern action titles. When a street-brawling hero wandered into a new location, the screen would stop, a group of enemies would gang up on the protagonist, and the player could not move forward until the area was cleared. At the time, the point of these gated battles was to drain a player’s health and lives, forcing them to drop more quarters to continue and thus turning a profit. Now that this trope has become mainstream once again, we are left to wonder about the reasoning behind these design choices, particularly since most console games are a one-time purchase.
One common argument revolves around the idea that the value of a game is the length of time needed to complete the main storyline. When a new game costs over 60 dollars, it is understandable that a consumer would want to get their money’s worth in gameplay duration. It is for this reason that many developers will pad a game’s length with forced battles. For example, it took Laura and me roughly a week to complete Bioshock Infinite. Much of our time was spent fighting off wave after wave of enemies in gated battles that barred our progression through the story. If it were not for these battles (or Elizabeth’s sub-par lock-picking skills), Laura and I would have breezed through the game in a much shorter amount of time. Conversely, my playthrough of Dishonored took less than two days to complete, and I spent most of my time Blinking away from combat, focused on the narrative goals in Dunwall. Both of these games cost $59.99 at time of release, but logic dictates that the longer game is of greater value, right?
Actually, I found that being forced into battle over and over, with no option to move forward until every enemy was dealt with, detracted from Bioshock Infinite’s greater narrative. I was often frustrated that my exploration of the gorgeous and detailed land of Columbia was being interrupted by some guy shooting at me. The trope of forced battles did not strengthen the game; it weakened my engagement and enjoyment of the title. Additionally, by forcing the same scripted battles on the player with every single playthrough, there is little reason to replay a game more than once, save for the meager variety in weapon options. On the other hand, giving a player the option to avoid combat allows for a greater variety of play styles, which will (hopefully) lead to more replays. After beating Dishonored on a non-lethal playthrough, I still had the opportunity to play the game with reckless abandon, fighting every citizen who dared to look my way.
I guess Laura was right: it is silly that an enemy would respawn the moment you look away. The notion that a game’s challenge comes from a constant barrage of reappearing baddies has become rather dated. But the supposed alternative is not much better. Gated battles and forced skirmishes do not always add to a game’s worth. There needs to be a middle ground, a place where these tropes do not exist and more choice is put in the player’s hands. After all, some people want to experience endless battles that could never happen in real life, while others want the freedom to become fully immersed in a story that is not their own.