The human mind is sometimes very strange. While we’re incredibly rational and intelligent compared to other animals, our minds still make all sorts of mistakes that are hard to overcome. As I have spoken about before, board games are a formal, rules-based activity, and they can be examined probabilistically and statistically. That makes them a fantastic training ground for recognizing and eliminating cognitive biases and psychological fallacies.
I find this to be an absolutely fascinating field of study, and rest assured I will be writing more on this topic. For now, here are 5 cognitive biases that appear in board game decision making:
1. Outcome Bias
I am particularly susceptible to this bias, especially when working on my Netrunner decks. Outcome bias is the tendency to judge a decision by its outcome rather than by how good the decision was given the odds at the time it was made. For example, if a particular in-game decision has a 10% chance of success, you will probably overrate it if it happens to succeed the first time you try it.
This also seems related to the psychological bias of anchoring, where we rely too heavily on the first piece of information we are given and use it to contextualize and judge information we receive later.
The outcome bias has been studied in gambling situations, where people would rate a particular gambling decision better or worse than their actual odds based on the result being positive or negative.
This is particularly egregious in sports commentary: if a coach calls what is perceived as a “risky” play, they are vilified if it fails but commended if it succeeds. Commentators will actually say, on air, that it will be a good decision only if it works, which is moronic. The actual odds of success, the only relevant part of the decision, are never discussed.
I find this is particularly bad in (American) football, where games are more high-pressure due to the relatively small number of games played in the regular season. Perhaps in fear of being criticized due to outcome bias in the minds of spectators, analysts, and team owners, coaches frequently make too-conservative decisions.
In board games, this comes up a lot in competitive card games, where you’re likely to judge deck construction decisions on too small a sample. Or you’ll think back to the one time a card won you a game to “prove” that it’s a good include in your deck, when it might not be. I do this all the time when constructing Netrunner decks, and I have to consciously avoid playing too many “tech” cards that are only useful against certain decks, because including them hurts the deck overall.
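The sample-size problem above is easy to see in a quick simulation. This is a sketch with made-up numbers: a hypothetical play with a true 10% success rate, judged from samples of various sizes. A single lucky attempt reads as a sure thing; only a large sample converges on the truth.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def estimate_success_rate(p, trials):
    """Estimate a play's success rate from a limited number of attempts."""
    wins = sum(random.random() < p for _ in range(trials))
    return wins / trials

# A hypothetical play with a true 10% chance of success.
p = 0.10

# A tiny sample can wildly mislead: one lucky attempt reads as "100%".
# Only a large sample gets close to the true 10%.
for n in (1, 10, 100, 100000):
    print(n, estimate_success_rate(p, n))
```

The outcome-biased judgment is essentially the `n = 1` row: one result, treated as if it were the odds.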
2. Endowment Effect and Loss Aversion
This can be described very simply: we overvalue that which we already have. My favorite example of this is video game RPGs, where I, and many other people, tend to hoard health potions and never use them, because we value having them too much to actually use them when needed.
I’ve even seen the same thing in board games: reluctance to heal a character while they still had plenty of life points left, even though the heal is worth exactly as much now as it will be when that character is nearly dead.
I’ve also seen people reluctant to make sacrifices to their engines in economic euro games, even though it would result in more points or a better position overall. Look at deckbuilders, where new players invariably shy away from card-trashing effects, even though that is one of the most powerful things you can do in a deckbuilding game.
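The trashing point can be made concrete with a toy example (the card values are made up, not from any particular game): trashing junk shrinks the deck, which feels like a loss, but it raises the expected value of every future draw.

```python
# A toy starting deck for a deckbuilder: seven useful cards worth 1 point
# of "deck quality" each, and three junk cards worth 0. (Made-up values.)
deck = [1] * 7 + [0] * 3

def average_draw_value(cards):
    """Expected value of a single random draw from the deck."""
    return sum(cards) / len(cards)

print(average_draw_value(deck))     # 0.7 per draw with the junk in

# Trashing the junk feels like giving something up -- the deck gets
# smaller -- but every future draw is now better.
trimmed = [card for card in deck if card > 0]
print(average_draw_value(trimmed))  # 1.0 per draw after trashing
```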
Loss Aversion is the idea that losing a good is psychologically worse than gaining that same good is pleasurable. This has implications for game design, which Geoff Engelstein goes into in more detail in this excellent talk. To summarize, game designers might want to be careful to reduce situations where players will lose things, or perhaps use loss aversion deliberately to manipulate behavior or emotions in the game.
3. Zero-Risk Bias
This is a fascinating one that I was not aware of before I started researching. Zero-Risk Bias is the tendency to prefer options that eliminate all risk in one area over options that eliminate a greater amount of risk overall.
The common example of this is Superfund cleanup sites, where tremendous expense goes into cleaning individual sites entirely, when the same money could achieve a larger total cleanup by leaving sites mostly, rather than entirely, clean.
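The cleanup trade-off is just arithmetic once you put (hypothetical) numbers on it. Assume the last traces of contamination at any site are the most expensive to remove, so a fixed budget buys either option below:

```python
# Hypothetical numbers: each of two sites poses 8 "harm units" of risk,
# and the final traces at a site are the most expensive to remove,
# so the same budget buys either option.
one_site_to_zero = 8 - 0            # removes 8 units; site A feels "safe"
two_sites_mostly = (8 - 2) * 2      # removes 12 units; neither feels "safe"
print(one_site_to_zero, two_sites_mostly)
```

Zero-risk bias is preferring the first option because one risk reaches exactly zero, even though the second removes half again as much total risk.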
In board games, I see this a lot in cooperative games, which are often about risk mitigation. In Battlestar Galactica, the primary crisis resolution mechanism has players adding skill cards face down to a common pool, along with two random cards. There are also abilities in the game that let you mitigate or eliminate the risk that a particular skill check will fail. The exact calculations would be very hard to do at the table, but I suspect players overvalue those risk-eliminating abilities.
In Forbidden Desert, another cooperative game, I think that players often overuse water sources to ensure that they have enough water to get through the near future, when the better play is instead using those actions towards the victory condition.
4. Gambler’s Fallacy and Hot-Hand Fallacy
This is a fascinating pair, because the Gambler’s Fallacy and the Hot-Hand Fallacy are exact opposites of each other. Instinctively, it doesn’t quite make sense that both could be common fallacies, but they are. Both come from the same human instinct to impose a self-constructed narrative on random-chance situations.
The Gambler’s Fallacy is the idea that if you have a string of bad luck (say on dice rolls), that you are “due” for a good roll, or vice versa. The Hot-Hand Fallacy is the idea that a person is “hot” after a string of good rolls and will continue to roll well.
From what I can tell, people tend to believe in the Gambler’s Fallacy when they are focusing on the randomizing device itself, such as the die. So you are more likely to think that a given die is due for a better or worse roll rather than a person. Conversely, the Hot-Hand Fallacy is more attributed to the person activating the randomizing device.
The application to board games here is obvious: be careful not to fall into either fallacy in any game with dice rolling. A randomized deck is slightly different, since its composition changes as cards are removed, but I suspect people still miscalculate the odds of a good or bad draw based on one of these fallacies.
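A quick simulation makes the dice/deck distinction concrete. Dice have no memory: after a drought of 6s, the chance of a 6 is still 1/6. A deck is genuinely different, because removing cards really does shift the odds (the 40-card deck with 4 key cards below is a made-up example):

```python
import random

random.seed(7)  # fixed seed for reproducibility

# Dice have no memory: after three rolls without a 6, the chance of a 6
# on the next roll is still 1/6.
rolls = [random.randint(1, 6) for _ in range(300000)]
next_after_drought = [rolls[i] for i in range(3, len(rolls))
                      if 6 not in rolls[i - 3:i]]
print(sum(r == 6 for r in next_after_drought) / len(next_after_drought))

# A deck is different: removing cards genuinely changes the odds.
# With 4 copies of a key card in a 40-card deck, the first draw is
# 4/40 = 10%; after ten draws that all missed, it's 4/30 = 13.3%.
print(4 / 40, 4 / 30)
```

So feeling “due” for a 6 is a fallacy, while feeling “due” for a card you haven’t seen yet is, modestly, correct.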
5. Sunk Cost Fallacy
This final fallacy is a particular favorite of mine from my days of studying economics. The Sunk Cost Fallacy stems from loss aversion: we are reluctant to change tactics or strategies because we feel we’ve already committed to the current course. What we’re actually doing is counting costs that have already been paid and cannot be recovered.
In other words, you are likely to continue with a bad investment in a game because psychologically you’re still factoring in the costs you’ve already spent. You have to realize that past costs are “sunk” and cannot be reclaimed, and make the best decision based on what is possible at the time of the decision. This is important not only in board gaming but in life as well. The Sunk Cost Fallacy is pervasive in business and finance decisions. Any time you have invested time and/or money, you are susceptible to this cognitive bias.
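The rational decision rule fits in a few lines. This is a sketch with hypothetical option names and point values: the key detail is that the amount already spent is deliberately ignored, because it is the same no matter what you do next.

```python
def best_next_move(options, already_spent):
    """Choose by future net value only.

    already_spent is deliberately unused: it's the same whatever you do
    next, so it can't change which option is best. (Hypothetical numbers
    and option names, purely for illustration.)
    """
    return max(options, key=lambda o: o["future_payoff"] - o["future_cost"])

options = [
    {"name": "keep feeding the old engine", "future_cost": 6, "future_payoff": 8},
    {"name": "pivot to a new strategy",     "future_cost": 4, "future_payoff": 9},
]

# Whether you've sunk 2 points or 20 into the old engine, the pivot
# nets 5 future points against 2, so it's still the right call.
print(best_next_move(options, already_spent=20)["name"])
```

The fallacy is, in effect, adding `already_spent` back into the comparison, which can flip the choice toward the worse future option.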
As I said before, I love thinking about these kinds of psychological faults. Understanding them and being aware of them can help you make better decisions, not only in board games but in life. I don’t go around cataloguing specific biases, but I frequently ask myself, “Is this decision rational based on impartial, objective calculations?” Simply being aware of the possibility that you might be making a cognitive error can help reduce those errors.
Are there any of your favorites that I’ve neglected to mention? Comment below!