I’ll be in three sessions at the Games Learning Society conference next week.
Two of them are fireside chats with others:
- Big Debate: Are Online Games Building or Destroying Community? And How Mangled Is It? (Wednesday, June 15, 2:00–3:00, Capitol View)
- Writing the Games-Based Dissertation (Thursday, June 16, 2:00–3:00, Capitol View)
The third is a Hall of Fail presentation about a research project I was part of 7 years ago! When I first started graduate school, I was learning tons about games and learning, game studies, and games research. One of the best things my group was trying to do was create a model of engagement in games. We came up with a great model, informed by many disciplines, but we got hung up on validating it. So the presentation is basically about the methodological failure my group encountered while attempting to validate the model.
- Modeling but Not Measuring Engagement in Computer Games (Thursday, June 16, 10:30–12:00, Browsing Library)
Full DRAFT paper here. Abstract below the break. Slides below:
Modeling but Not Measuring Engagement in Computer Games
Thu., June 16, 10:30–12:00, Browsing Library
In 2005, the Digital Games Research Group at the University of Washington presented a model of engagement in games (see Figure 1) that was informed by diverse disciplines, including game design theory, presence literature from virtual reality (VR) and simulations research, narrative immersion from literary theory, and motivation literature from psychology and cognitive science. Our theoretical model was comprehensive at the time, and we believe it remains a very useful way to think about measuring engagement with games as a product of user interface, realistic or consistent simulation and systems modeling, and narrative and role-play.
To measure engagement using our model, we created a data collection toolkit for use in a lab setting. The toolkit included a pre- and post-game series of questions based on Witmer and Singer’s presence questionnaire (1998), a mini-survey based on flow theory (Csikszentmihalyi, 1990), detailed forms for researchers to fill out while observing participants playing, and post-game interview questions. To validate the model, we conducted a few initial pilot tests in which participants played a commercial game (The Curse of Monkey Island) that we knew was “good” via its average meta-review score on gamerankings.com. We compared this with an educational game (The Oregon Trail 5th Edition), hypothesizing that the commercial game would score higher than the educational one and that our measurements for Curse would reflect its aggregate gamerankings score.
Unfortunately, the results of our pilot tests failed to give us measures that reflected the metascore for Curse, and, what’s more, The Oregon Trail scored higher for our participants! Possible reasons for this include the fact that many game reviews are not written until the reviewer has finished the game, that many memorable and immersive elements of a game’s story do not occur until hours into a game, and that we did not run enough participants in our initial tests for statistically reliable results. While our testing toolkit was well suited to uncovering issues with usability, it was ill equipped to shed light on the affective measures of engagement with a game’s full experience. We shared our model that year (Chen et al., 2005) but did not move forward with validating it and never produced a final research paper.
This presentation will cover our model and its theoretical underpinnings, which we believe to be extremely timely and important, as evidenced by scholars from around the world continuing to cite our work from 2005. Sharing our model and how we failed to measure it also matters because there seems to be a new push in games-for-learning research on measuring engagement that may be following in our footsteps by not including ecologically valid methods. Thus, this paper presents a case where data collection methods failed to provide a good way to validate a model of engagement. We will also discuss how this failure helped shape our early careers as games scholars (e.g., pushing Chen into ethnography) and our current thoughts on how new research methods could be used to finally validate our model of engagement.
Chen, M., Kolko, B., Cuddihy, E., & Medina, E. (2005, June). Modeling and measuring engagement in computer games. Presentation at the annual conference of the Digital Games Research Association (DiGRA), Vancouver, Canada.
Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: HarperCollins.
Witmer, B., & Singer, M. (1998). Measuring presence in virtual environments: A presence questionnaire. Presence: Teleoperators and Virtual Environments, 7(3), 225–240.