Theoretical Background: (Computer) Game theory

I have a long history with computer games, and I approach this research from a gamer’s perspective using ethnographic methods. My motivation for writing about what I do comes from my desire to help people learn to be active participants in their communities. I see social problems all around me, and I think games could be a powerful tool for exploring them. Games are inherently interactive in the sense that they require players to make choices to progress a narrative, and this choice-making process can challenge people to think reflectively about moral, ethical, and social problems. Previous research has approached games from a game theory perspective (Smith, 2005; Zagal, Rick, & Hsi, 2006), in which an examination of game rules leads to predictions about how people will behave, and therefore to claims that designing in certain ways can construct certain types of communities. I argue, however, that looking at rules and constructed models does not adequately explain actual player behavior. My interest in the game theory literature stemmed from an experience I had a few years ago while playing through Star Wars: Knights of the Old Republic (KotOR) (BioWare, 2003) twice (in a galaxy far, far away).

Knights of the Old Republic is a computer role-playing game (CRPG) that lets players make moral choices as a Jedi Knight. I wanted to play through once making all the Light Side choices and once making all the Dark Side choices, so I could see the whole set of outcomes the developers designed into the story. While playing a Dark Jedi, I noticed that some of the choices I made were the same ones I had made as a Light Jedi. For example, the game presented me with the classic game theory model, the Prisoner’s Dilemma (Felkins, 2001), only in KotOR it had Star Wars trappings. I had to choose whether to betray a friend (a Wookiee warrior) for selfish reasons, and he had to make the same decision about whether to betray me. In both play-throughs, I chose to stand by my hairy friend. As a Light Jedi, of course, I’d never betray a friend, because I was being selfless. As a Dark Jedi, however, I reasoned that if I betrayed my friend for immediate benefit, we would not be able to use each other for mutual personal gain in the future, so I ended up standing by him in my second play-through, too.

Wow. Making a selfless choice and making a selfish choice actually led to the same conclusion. Game theory simulates the consideration of future interactions by modeling iterated versions of the Prisoner’s Dilemma (Felkins, 2001). In this model, the strategy that has been shown to do best, commonly called tit-for-tat, is to cooperate in the first round with no betrayal and then, in every subsequent round, reward or penalize the other person by cooperating or betraying depending on what he or she did in the previous round. If both parties were to do this, they would never betray each other. Yet KotOR did not present this scenario as a recurring one. My choices were motivated by how I saw myself playing a particular character rather than by the “rational” thought presented in traditional game theory literature. My point here is that the choices I made while playing KotOR were more complex, and more tied to how I saw myself playing a particular person in a socially situated world, than the reasoned choices I would have made in an abstract construct. This mirrors Gee (2003), who writes about players role-playing what they want their characters to be.
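To make the iterated model concrete, here is a minimal sketch in Python (my own illustration, not drawn from KotOR or the cited sources) of the iterated Prisoner’s Dilemma, pitting the reward-and-penalize strategy described above against an opponent who always betrays. The payoff numbers are the conventional ones from the game theory literature and are otherwise arbitrary.

```python
# Iterated Prisoner's Dilemma: a minimal sketch with conventional payoffs.
# 'C' = cooperate, 'D' = defect.

PAYOFFS = {  # (my move, their move) -> my payoff
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first; afterwards, copy the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """The one-shot 'rational' choice, applied blindly to every round."""
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; return cumulative payoffs for both players."""
    seen_by_a, seen_by_b = [], []  # each player's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation every round
print(play(tit_for_tat, always_defect))  # (9, 14): betrayal pays once, then stalls
```

Running it shows why mutual cooperation is stable between two players who expect to keep meeting; KotOR, by contrast, presented the dilemma as a one-shot choice.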

The Prisoner’s Dilemma is part of a larger set of situations that economists and game theorists call “social dilemmas” (Hardin, 1968; Axelrod, 1985; Felkins, 2001), except that most social dilemmas have many people choosing whether to cooperate or “defect.” Basically, a situation is considered a social dilemma when an individual’s immediate self-serving choice is not the same as the choice he or she would make to benefit the community as a whole. A common feature of many models of social dilemmas is that the whole community benefits once a certain number of people cooperate. In other words, not everyone has to cooperate to benefit everyone, just a critical number of them. This means someone can defect and “free-ride” as long as enough other people are cooperating. It’s relatively easy to show how two people can rationalize cooperating with each other (by not betraying each other, they maximize their benefit over time). It’s much harder to convince someone who belongs to a larger community that cooperating makes sense.
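As a rough illustration of that threshold structure, here is a minimal sketch (all of the numbers are invented for the example) of an N-person dilemma in which the community benefit is produced only if a critical number of people cooperate, so a defector can free-ride as long as that number is still reached.

```python
# Threshold social dilemma: a minimal sketch with invented payoff numbers.

COST = 2        # what each cooperator pays
BENEFIT = 10    # what *everyone* receives if the threshold is met
THRESHOLD = 5   # critical number of cooperators needed

def payoff(cooperates, num_cooperators):
    """Payoff for one individual, given the group's total cooperator count."""
    produced = BENEFIT if num_cooperators >= THRESHOLD else 0
    return produced - (COST if cooperates else 0)

# With six cooperators, the threshold is met either way, so a lone
# defector does strictly better than any cooperator:
print(payoff(cooperates=True, num_cooperators=6))   # 8: pays the cost
print(payoff(cooperates=False, num_cooperators=6))  # 10: free-rides
# But if too many people reason this way, everyone loses the benefit:
print(payoff(cooperates=False, num_cooperators=4))  # 0: threshold missed
```

The sketch also hints at why the two-person argument doesn’t scale: in a large group, any one person’s defection barely moves the cooperator count, so the temptation to free-ride grows with the size of the community.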

The body of literature on social dilemmas in games has mostly focused on how different games support cooperation through various game mechanics and rules: if a team of players is trying to figure out how to most efficiently beat another team or a set scenario, the reasoning goes, they will make particular choices because of certain game rules and how the game works. I found, however, that my experiences with games in general, and with World of Warcraft in particular, showed that the choices made in certain situations, even ones that could clearly map onto social dilemma models, weren’t so “cut and dried” and “rational.” One can argue about game mechanics all one wants, but without studying actual play in a real game context rather than some sort of construct, one never gets a sense of real player behavior. Smith (2005, p. 7) made this same point when he said, “One challenge for video game studies, which has so far been largely neglected, is the examination of the relationship between game design and actual player behavior.” I would take that argument further by saying that real social situations, like the ones I experienced in World of Warcraft, are messy and complex.[2] Rational calculation does not capture emotional self-identification, nor does intention equal action, and even if it did, many decisions are made without full knowledge of their consequences or of how they affect other players.

Using ethnographic methods (Steinkuehler, 2004; Wolcott, 1997; Hayano, 1982) lets me both write about my personal experiences and explore issues of cooperation in a real-world social space. Of particular use is the idea of divisions of labor (Strauss, 1985; Stevens, 2000), where the different tasks associated with a particular project are assumed by different people depending on social factors. In WoW, those factors include game mechanics and relationships of trust.


[2] I strongly believe that some “virtual” worlds, like World of Warcraft, are every bit as “real” as our day-to-day off-screen world, in terms of people behaving in a rich, complex social space.
