Geoff Engelstein is one of the best people in board gaming at teasing out and explaining the complex factors that go into game design and game experiences. His contributions to the Ludology podcast and his GameTek segments alone have dramatically improved my understanding of games. His encyclopedia of game mechanisms with Isaac Shalev is an amazing resource.
His newest book, Achievement Relocked: Loss Aversion and Game Design, discusses the psychological concept of loss aversion: the idea that losing something feels worse than gaining that same thing feels good. And, especially compared to the rest of Geoff's work, I can't help but feel a bit disappointed. At the same time, I've been thinking about the book all day. Perhaps my disappointment is unwarranted.
Slim
First, the critique. I don't know if there's enough information here to warrant a book. Multiple times throughout my read I got the impression that Geoff was stalling for time, trying to fill space before moving on to the next concept. He positions himself here as a neutral idea deliveryman, and I wish he had gotten a bit more spicy with it. I want to read about how Geoff sees loss aversion impacting the future of games. What aspects of it haven't been fully explored yet? As it is, the slim 100-and-change-page book seems to be struggling to find things to say. At the end of the first chapter, four pages are spent summarizing what you're about to read in the following chapters.
Additionally, I don't know if there was anything in the book I didn't already know from being a casual fan of Engelstein's and someone with some knowledge of economics. If you feel like you know the basics of loss aversion, I don't know how much more you'll get out of reading this book. If you know nothing about it, the book is a solid primer.
Food for Thought
Now the walkback. Even though I do find Achievement Relocked slim, it's gotten my brain working. I'd like to highlight a couple of places in the book that I think could be teased out a bit more.
The most commonly referenced study in the book is one where participants were offered either an 80% chance at $4000 or a 100% chance at $3000. A large majority picked the sure thing even though the expected value of the first option is greater ($3200 vs. $3000). This is presented as a sort of arbitrary psychological error, but I don’t think it should be dismissed so easily. For instance, on page 50 Engelstein says, “The point here isn’t that people are bad at math. Well, in a way it is, but the important factor is that people rely on rules of thumb or gut feelings to make these decisions”.
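For concreteness, here's the arithmetic behind that comparison as a minimal sketch (the payoffs and probabilities are the ones reported in the study):

```python
# Expected value of each option in the study.
p_win, gamble_payout = 0.8, 4000
sure_payout = 3000

ev_gamble = p_win * gamble_payout  # 0.8 * 4000 = 3200
ev_sure = sure_payout              # paid with certainty

print(f"Gamble EV: ${ev_gamble:.0f}, sure thing: ${ev_sure:.0f}")
# Gamble EV: $3200, sure thing: $3000 -- yet most people take the $3000
```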
The assumption, for at least the first half of the book, is that people are making incorrect decisions because they're choosing the option with the lower EV. However, I think there's an element missing from the equation. I believe that people gravitate towards certainty when there is something to gain because they are consciously trying to avoid the stress of randomness. Choosing the 80% option means enduring, for some stretch of time, the uncertainty of not knowing whether you'll get the money. It's not just a choice between an 80% chance of $4k and a 100% chance of $3k. It's also a choice between enduring a measure of stress or not.
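One way to make that intuition concrete is to charge the uncertain option a flat "stress cost." The figure below is invented purely for illustration (nothing like it appears in the book), but it shows how a modest psychic cost flips the ranking even on paper:

```python
# Hypothetical model: subtract a flat psychic cost from any uncertain option.
# The $250 figure is an arbitrary assumption for illustration only.
STRESS_COST = 250

ev_gamble = 0.8 * 4000  # 3200
ev_sure = 3000

adjusted_gamble = ev_gamble - STRESS_COST  # 2950
print(adjusted_gamble < ev_sure)           # True: the "sure thing" now wins on paper
```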
I don't think Geoff would disagree with any of this analysis, though I don't actually know, as he doesn't tap into it much. But what's the line between irrational psychological error and rational psychic preference? What's the difference between a gut feeling and a conscious avoidance of stress?
In terms of game design this might not be so significant, as all that matters is what people actually choose and how that knowledge can inform our designs.
What is Correct
But in the way we talk about games it can matter quite a bit. Games, to some degree or another, reward mathematically correct play. I've discussed before how they can be a great tool for practicing overcoming cognitive biases we may not like, because games will punish poor decision making. While this is a strength, it can also be a weakness, because most games set up a zero-sum victory scenario where one player wins and the others do not. Similar situations are rare in real life. Even in the competition of the marketplace it's exceedingly rare for one firm to completely "win" a sector of the market, and absent government meddling, such situations don't last long. War is the major exception, which is perhaps why so many games are modeled after or inspired by war.
But these zero-sum situations are things we want to avoid; games, by contrast, we pursue. Outside of games, the things we typically find good and beautiful in life are not zero-sum and are not concerned with mathematical precision. The choice between spending time with family and spending time on an individual pursuit is complex and dynamic. If that choice were represented in a game, it would probably be abstracted down to victory points and resources.
So what’s a psychological error and what’s a preference? It depends on what our goals are. In a game the goal is to win, and being mathematically optimal helps with that. In life the goal isn’t set out for you; part of the process of living a decent human life is trying to figure that out.
Of course, even if the ostensible goal of a game is to win, the game is still housed in our lives, and for many the goal might better be described as “winning without disrupting the enjoyment of the group”, or “winning unless that requires a certain amount of cognitive load”, or “winning with this particular strategy I’ve decided to test”.
Ups and Downs
Engelstein gets to some of these complicating factors in chapters 4 and 6. Chapter 4 is about utility theory, the idea that people don't value money in a mathematically linear way but rather value it based on its perceived effect on one's life. This chapter cleans up some of the subtextual problems I had with the earlier chapters and dips into the all-important topics of subjective value and marginalism, two ideas far more important to economics than the homo economicus model alluded to in chapter one. (An economist once joked to me that saying economists "believe" in homo economicus is like saying physicists believe in perfectly level frictionless planes.)
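Utility theory also rehabilitates the "sure thing" choice from that earlier study. Here's a minimal sketch using a square-root utility function (an arbitrary stand-in for diminishing returns, not anything Engelstein specifies):

```python
import math

# A concave utility function: each additional dollar matters a bit less.
# sqrt is an arbitrary choice for illustration.
def utility(dollars: float) -> float:
    return math.sqrt(dollars)

eu_gamble = 0.8 * utility(4000)  # ~50.6 "utils"
eu_sure = utility(3000)          # ~54.8 "utils"

print(eu_sure > eu_gamble)  # True: taking the sure $3000 maximizes expected utility
```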
Chapter 6 deals with regret, which again taps into these psychic costs that one may or may not see as rational or justified. When I seek to avoid regret, even in the kind of game situations outlined in the text, I often do it with a conscious understanding that I am making the suboptimal decision in order to avoid some psychic cost.
Unfortunately, as much as I enjoyed chapter 6, it does contain the most befuddling passage in the book. I could be incorrect here, but the way the experiment is explained does not justify Engelstein's analysis. He discusses a study where people are given straight 50/50 odds versus a game in which a box is filled with an unknown number of balls, any of which could be red or green. The participant predicts which color they will pull and then randomly selects a ball. We then get this explanation: "The chances of winning either game are 50 percent. This is obvious for game 1 but might need some explanation for game 2. How can it be 50 percent with so many unknown variables?[…]As an example, let's say I put five red balls in the box–that's it. *Assuming that you, as the player, have an equal chance of announcing that your target is red or green*, then 50 percent of the time you pick red[…]and 50 percent of the time you pick green" (emphasis mine).
The part I italicized can easily not be true based on what was said about the game. If the person performing the experiment has knowledge about what color people are more likely to choose, that person has an edge on the player and the odds suddenly aren't 50/50 anymore. To my mind the rational choice is to pick the guaranteed 50/50 odds. But this example is meant to show that people value "competence" (i.e., knowing more of the information that is available to know) even though it's irrelevant to the odds.
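A quick simulation makes the objection concrete. The 70/30 call bias and the box fills below are my own assumptions for illustration: an unbiased caller wins 50 percent of the time against any box, but a caller whose bias is known can be exploited.

```python
import random

def play(box: list[str], p_call_red: float) -> bool:
    """One round of game 2: call a color, then draw a random ball."""
    call = "red" if random.random() < p_call_red else "green"
    return random.choice(box) == call

TRIALS = 100_000
all_red = ["red"] * 5  # Engelstein's example fill

# Unbiased callers: ~50% wins no matter what the box holds.
unbiased = sum(play(all_red, 0.5) for _ in range(TRIALS)) / TRIALS

# Callers known to favor red 70/30, against a box filled to exploit that.
all_green = ["green"] * 5
exploited = sum(play(all_green, 0.7) for _ in range(TRIALS)) / TRIALS

print(f"Unbiased callers: ~{unbiased:.2f}")   # ~0.50
print(f"Exploited callers: ~{exploited:.2f}") # ~0.30: no longer a fair game
```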
—
Gripes aside, I hope that by examining cognitive biases such as loss aversion we can start to think of the ways that board games, and our discussions about them, can perhaps break new ground. RPGs and video games have explored new and different ideas of success and decision-making, but they're more suited to do that by their nature. Can board games do the same without becoming something else entirely? I'd love to see people try.
1 thought on “Engelstein’s “Achievement Relocked” And The Nature of Error”
I would also take a 100% chance of walking away with a huge amount of money instead of a 20% chance of not getting anything. A $200 difference in expected value is not enough to offset the inherent risk of the proposition.
To me it's no more an example of "risk aversion" than insurance is. Sure, it's improbable my house is going to burn down, but I'm not going to risk becoming homeless to save 5 bucks. It's not a matter of expected values, it's a matter of severity. Frankly, I've seen many choices like this in games, and the "safe" move is almost always the right one.
Either way, I agree avoiding stress or discomfort is at the heart of the issue.