A lot of the earlier articles on Bumbershoot Software were written “from my old notebooks”—old notes or forum posts or conversations that I’d kept around and then worked into full articles. That’s fine for retrocoding stuff, where the state of the art moves fairly slowly, but it’s not useful for fields that are still rapidly innovating. My old notebooks still have a lot of material from my 2004–2014 span of involvement with the interactive fiction community, but a lot of the figures there are now reasonably well-known names in academia and industry, so much of my notes on what we’d now put under the umbrella of “interactive narrative” are extremely outdated. I do sporadically check in on Nick Montfort’s and Emily Short’s blogs to sample the academic and avant-garde commercial viewpoints in the space, but even there I’ve drifted far enough away from both the state of the art and the goals of the craft that there isn’t much I can talk about or grapple with.
However, Emily Short has a new post up on “Choice Poetics” that struck a chord. Back when I’d write nine or ten thousand words each year about the annual Interactive Fiction Competition, I ended up refining some principles and approaches to the entries in the competition that ended up serving me pretty well in subsequent years when approaching computer games more generally. The papers linked in that article—particularly the one formally analyzing Papers, Please—have forced me to re-evaluate some of those old conclusions.
So, let’s talk about complicity in video games. I’m using this term in both a narrow and a broader sense. In the narrow sense, it’s talking about things a game can do to make a player feel bad about atrocities they commit or have their character commit within a game. More broadly, I want to move this beyond the crimes that the word “complicity” suggests and consider situations where the player feels responsible for their actions within or to a game world.
I first attempted to formalize this in 2010 when reviewing Jason McIntosh’s The Warbler’s Nest, because it was brief and extremely pointed, and thus crystallized for me what made it feel like it was laying the responsibility for the events of the game directly at my feet. To wit, if a player is to feel responsible for an atrocity:
- The atrocity must be optional. Otherwise the player is no more responsible for these events than a reader is made responsible for the events in a story by the act of turning a page.
- The atrocity must be the easy path. Otherwise it’s just Bonus Cruelty you have to work harder for.
- The player must know the stakes. Otherwise their actions do not have the necessary import. Lock a player in a room with two unlabeled buttons, and make it clear that the game will do nothing else until one of the two buttons is pressed, and the player will feel no responsibility for the result. Indeed, this is only fair; were such a situation to obtain in real life, the responsibility for the consequences of this action would be firmly on the head of whoever rigged up the buttons, or locked the button-pusher in the room.
That was almost a decade ago. By now, I don’t think any of these hold up as well as they did when I originally devised them. Here are the main problems or complications that, I think, have emerged with the analysis:
- The player’s interaction mode may be too far removed for the analysis to apply. If a game offers a predefined good path and evil path, the default assumption for many players is that they are expected to be completionist. The atrocities on the “evil path” may be optional, but if the player seeks to see the whole of the game’s content—particularly if certain maps, challenges, characters, or other such content is gated behind progress within it—they cease being optional when in pursuit of that goal. More simply, a player who has decided to rain meteors and dinosaur attacks upon their SimCity metropolis will feel no more remorse about this than they would for knocking over a tower of Jenga sticks. A player interacting with a game as an artifact to experiment with or as a library to exhaust has already made a decision that removes them from the analysis.
- People feel responsible for their actions in real life even if they didn’t know the stakes. Blind or actively deceptive choices still fail, but there’s a lot more leeway here than I had originally formulated. Allowing the player character to feed a starving child a peanut butter sandwich and then rewarding them by having the child die in front of them from anaphylactic shock would be a rather mean thing to do, but a player might feel some responsibility for the action, since they acted without some obvious information (the child’s peanut allergy) that they in principle “should” have known. On the other hand, one could imagine a hospital/surgery simulation where the epilogue reveals that the player character was not a surgeon at all, but a deluded lunatic who was just cutting people up in a barn. In such a game the player would be unlikely to feel even fleeting remorse for the actions presented at the time as saving lives.
- If an atrocity is difficult enough to perform, the analysis breaks down completely. This is hard to explain without actually working through examples, so I’ll defer that to my worked examples below.
Below the fold, I’ll work through a number of games and show how the original and the modified analyses interact with each of them. That also means I will, by necessity, be spoiling some of the narrative mechanics in them, so I will list the games here as the spoiler warning. Proceed at your own risk if you have not played Ultima IV, Bioshock, The Warbler’s Nest, Spec Ops: The Line, Fallen London, Papers, Please, and Undertale. I’ll try to be vague about exact details that are not commonly discussed, but it’s still going to require going further than I’d like without spoiler warnings.