
The art of decision-making, part 2: How to be more rational

This article is part of a three-part series written by Marko Kovic.


You are irrational. And so am I. And so is everybody else. Human reasoning and decision-making are systematically flawed, as I have outlined in part 1 of this series. That’s a huge problem: The hidden flaws in our thinking and decision-making (our “Unknown Unknowns”) cause serious damage, often without us noticing at all.

You are probably thinking that we should do something about these irrational flaws of ours – and you are absolutely correct! Rationality, our ability to meaningfully pursue and achieve goals, is the single most important skill or resource we have. Being more rational is *always* a good thing, because the more rational your thinking and your decision-making are, the better you will achieve your goals.

Unfortunately, there is no magic pill that will make you perfectly rational overnight. But there are strategies you can adopt to improve your decision-making, as well as that of your team or organization. This article outlines three such strategies: nudging and commitment devices; debiasing; and applying the rules of rationality.

Strategy 1: Nudging and commitment devices

If you can’t beat them, join them. This is a truism in many contexts, but what could this mean when it comes to rationality and decision-making? Sometimes, the easiest way to deal with irrational flaws such as cognitive biases might simply be to exploit them and change behavior by doing so.

One way of exploiting cognitive heuristics and biases is known as “nudging”. Nudging means making people behave the way you want them to behave, but without coercing or forcing them to do so, and without prohibiting or removing options. So a nudge is a kind of soft manipulation, because the people who are being nudged can make exactly the same choices they could before, and they can freely choose whatever it is they want to do. The decision-making context, however, is tweaked, so that people are more likely to make the choices you want them to make.

There are tons of specific nudges that have been tried in practice. A few famous examples:


  • Plate size: If food is served on smaller plates, people think that they have more food on their plate – and they tend to eat less as a result.
  • Decoy effect: Let’s say you can buy two sizes of a drink, Small for 5 Francs and Large for 10 Francs. Then you introduce a third size: Medium for 8 Francs. Many people will shift their preference to Large for 10 Francs because it suddenly looks like a “good deal”.
  • Social norms: When people receive a letter stating, for example, that “97% of people don’t cheat on their taxes”, they are less likely to cheat themselves.
  • Mere exposure: Merely seeing things makes us develop a positive attitude towards those things. For example, schools have successfully made students eat more fruit and vegetables by carefully arranging fruit and veggies to be more prominently visible in cafeterias.
  • Sticky defaults: People tend to stick with default options (status quo bias). For example, more people are organ donors if the default is opt-out (by default, everybody is an organ donor) rather than opt-in (by default, nobody is an organ donor).


A concept related to nudging is the “commitment device”. A commitment device is, generally speaking, a voluntary decision today to restrict your decision-making options in the future. Why would you want to do that? If you know that you are susceptible to less-than-great decision-making in some contexts (especially when it comes to planning for the future), commitment devices can be very helpful.

For example, I have a bit of a sweet tooth. So in order not to eat too much sweet stuff, I have implemented a simple commitment device: I don’t keep any sweets at home. Commitment devices come in many flavors. A few examples:

  • Public pledges: A public pledge can be a commitment device, because it’s more difficult to weasel your way out of something when you committed publicly to it.
  • Automatic financial deductions: We are bad at long-term financial planning, and automated deductions, such as for a pension plan, can help.
  • Black-and-white smartphone screen: If you turn off the color on your smartphone and make it black and white, you will make using the smartphone less attractive (and therefore spend less time on distractions).
  • Gym membership: People don’t exercise as much as they should. Buying a gym membership means that you have committed to a sunk cost; the money is already spent, so you might as well go to the gym.

So how can nudging and commitment devices help improve decisions in professional contexts? The general approach you should take involves three steps:

  1. Identify decision-making contexts in which errors occur (or, more generally, in which behavior should be changed).
  2. Change the decision-making context so that the irrational flaws that cause the problems are taken into account.
  3. Measure the impact (a minimal sketch of such a measurement follows this list). If needed, rinse and repeat.
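
To make step 3 concrete, here is a minimal sketch in Python of what “measuring the impact” can look like, using entirely made-up numbers for a hypothetical cafeteria nudge (the share of trays people clean up before and after the intervention). The helper function and the figures are illustrative assumptions, not a prescribed method; the point is simply that the measurement can be as modest as comparing two proportions and checking that the change is larger than the noise.

```python
import math

def proportion_change(successes_before, n_before, successes_after, n_after):
    """Compare a behavior rate before and after an intervention.

    Returns both rates, their difference, and a rough standard error of the
    difference (normal approximation) as a sanity check that the observed
    change is larger than random noise.
    """
    p_before = successes_before / n_before
    p_after = successes_after / n_after
    diff = p_after - p_before
    se = math.sqrt(p_before * (1 - p_before) / n_before +
                   p_after * (1 - p_after) / n_after)
    return p_before, p_after, diff, se

# Hypothetical cafeteria nudge: trays cleaned up before vs. after the change.
before, after, diff, se = proportion_change(120, 400, 180, 400)
print(f"before: {before:.0%}, after: {after:.0%}, change: {diff:+.0%} (±{2 * se:.0%})")
```

If the observed change is small relative to roughly two standard errors, it is probably not worth celebrating yet – which is exactly the “rinse and repeat” part of step 3.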

The big challenge with this approach is that you need to identify the decision-making contexts that need fixing. Sometimes, that might be easy and obvious (for example, if your company cafeteria is a mess, you can implement nudges to make people clean up after themselves). But sometimes, you might not even know that some decision-making context is a problem – after all, our cognitive biases and other limitations are Unknown Unknowns. You might need external assistance to figure it out. Or you might try another approach: debiasing.

Strategy 2: Debiasing

If cognitive biases cause bad decisions, then it’s best to get at the root of the problem and remove the biases. That’s the basic idea behind “debiasing”: efforts aimed at reducing individual susceptibility to cognitive biases.

Debiasing comes in two main varieties: context-specific debiasing and more universal cognitive forcing. Context-specific debiasing consists of measures that address biases in particular decision-making situations. Perhaps the most famous example of this type of debiasing is the simple checklist. Checklists are used in many contexts, and their effects are stunning. For example, after checklists were introduced in aviation, errors and accidents plummeted. As passengers we might not notice it, but pilots go through a checklist before every flight in order not to succumb to problems such as confirmation bias or limited working memory.

Context-specific debiasing, however, has two drawbacks. First, in order to develop and deploy such debiasing interventions, you have to identify where and how irrational flaws lead to bad decisions. That is, as mentioned above in the section on nudging and commitment devices, not at all trivial to do. Second, context-specific debiasing has, by its very nature, a limited impact. By developing and deploying context-specific debiasing interventions, you can solve very specific (and big!) problems, but you are not making yourself and other people more rational in general. Enter cognitive forcing.

Cognitive forcing means forcing yourself to engage in metacognition – to think about your thinking. The way to achieve this kind of higher-order thinking is to learn how, why, and when our brains might trick us. When we readily engage in thinking about our thinking, we slow down and consider whether the conclusion we have reached or the decision we are about to make might be influenced by some bias or other. In essence, this type of debiasing turns Unknown Unknowns into Known Unknowns – we enter a decision-making situation with the knowledge that our judgement might be clouded in specific ways. Ultimately, cognitive forcing amounts to a form of intellectual humility, whereby we question our own judgement more readily.

How can this type of debiasing be achieved? The debiasing literature, unfortunately, paints a picture we might not like: We need to train our metaphorical metacognitive muscle. This means that we need to learn about cognitive biases, and that we need to practice entering that coveted state of metacognition. (Fortunately, by reading this article, you have already taken the first step towards this kind of debiasing!)

Debiasing, as well as nudging and commitment devices, aims to do something about our cognitive biases. That is immensely important and can significantly improve decisions! However, rationality, as described in part 1 of this series, is not simply the absence of biases. Being rational means actively forming beliefs in a certain way, and actively pursuing goals in a certain way. So even if you were perfectly bias-free, you could still fail to be terribly rational. How weird is that!

Thankfully, the “rules of rationality” are fairly easy to understand and implement in practice.

Strategy 3: Applying the rules of rationality

A rational person is a person who is both epistemically and instrumentally rational. Epistemic rationality is concerned with how we form beliefs, and instrumental rationality with how we pursue goals. So what does one have to do in order to be rational in both ways?

There are two important criteria for epistemic rationality. Let’s say, for the sake of argument, that you believe that you will be able to close the deal with a prospective client. That belief of yours is rational when you, first, have good reasons for holding it. In other words: Your belief is rational if it is sufficiently justified. Second, your belief is rational if it is precise. The precision of a belief is best expressed as a probability. So you believe that you’ll be able to close the deal? How certain are you? 75%? 90%? 99%? Why?
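
To illustrate, here is a small sketch in Python, with invented numbers, of what it means to treat that belief as a probability and to revise it when new evidence arrives, using Bayes’ rule. The 75% prior, the “client asks for a second meeting” evidence, and the likelihoods are all hypothetical assumptions for the sake of the example.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the updated probability of a belief after observing some evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical: you start out 75% confident that you will close the deal.
prior = 0.75
# The client asks for a second meeting. Suppose deals you eventually close
# involve a second meeting 80% of the time, and deals you lose only 30%.
posterior = bayes_update(prior, p_evidence_if_true=0.80, p_evidence_if_false=0.30)
print(f"Belief after the evidence: {posterior:.0%}")  # roughly 89%
```

Writing the belief down like this forces the two questions epistemic rationality cares about: what exactly justifies the 75%, and how exactly should new evidence move it?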

Instrumental rationality can be summarized with a single criterion: utility maximization. You are instrumentally rational when you pursue your goals in the way that achieves them to the greatest extent possible. In a way, that may sound trivial. But when was the last time you actually tried to explicitly define your goals, and to prioritize different ways of achieving them depending on the effectiveness and viability of the different options? Chances are, you never did that explicitly, because in practice, we tend to do things differently. We mostly do what we are used to doing and what feels convenient; we don’t really look at our choices and actions through the analytical lens of utility maximization.
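
Here is a minimal sketch in Python of what looking at choices “through the analytical lens of utility maximization” can look like. The options, probabilities, values, and costs are entirely hypothetical; the point is that once you write them down, the comparison becomes explicit instead of habitual.

```python
# Hypothetical options for winning a new client, each with an estimated
# probability of success, the value of success, and the cost of the attempt.
options = {
    "cold outreach campaign": {"p_success": 0.10, "value": 50_000, "cost": 2_000},
    "industry conference booth": {"p_success": 0.25, "value": 50_000, "cost": 15_000},
    "referral program": {"p_success": 0.20, "value": 50_000, "cost": 5_000},
}

def expected_utility(option):
    """Expected payoff of an option: chance of success times value, minus cost."""
    return option["p_success"] * option["value"] - option["cost"]

for name, option in options.items():
    print(f"{name}: expected utility {expected_utility(option):+,.0f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print(f"Best option under these assumptions: {best}")
```

Under these made-up numbers, the referral program wins even though the conference booth has the highest chance of success – exactly the kind of result that habit and convenience alone would never surface.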

Getting a handle on how you form beliefs and on how systematically you pursue goals is mostly a matter of actually doing it. You can think of the rules of rationality as useful tools that can help with your or your organization’s day-to-day decision-making. Those tools work best when they are formalized. For example, writing down the probabilities associated with core assumptions and beliefs in a project is a great way of stimulating critical thought for yourself and your team.
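
As a sketch of what such formalization might look like in practice (with hypothetical assumptions and probabilities), a simple “assumption register” in Python makes the point: individually plausible assumptions can leave the project as a whole far less certain than it feels.

```python
from math import prod

# Hypothetical core assumptions behind a project, each with the probability
# the team assigns to it after discussion.
assumptions = {
    "the client signs by Q3": 0.80,
    "the key engineer stays on the project": 0.90,
    "the supplier delivers on time": 0.85,
    "the budget is not cut": 0.85,
}

for name, p in assumptions.items():
    print(f"{p:.0%}  {name}")

# Naive joint probability, assuming the assumptions are independent of each other.
print(f"All assumptions hold: ~{prod(assumptions.values()):.0%}")
```

Four assumptions that each feel “quite likely” put the joint probability at roughly 50% – a useful, slightly uncomfortable prompt for discussion in the team.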

Being less irrational takes work. Is there a shortcut?

Actively doing something about our irrational flaws is possible, and there are multiple strategies for going about that. However, there is one thing all of the strategies outlined above have in common: They take work in order to work.

Every organization should be willing to do that work, of course, because what is at stake is, ultimately, the success of the organization. But realistically speaking, not every organization or team or person will always have the resources required for tackling irrational flaws in such a broad manner. Sometimes, the goal has to be more modest: not changing the way a person makes decisions in general, but uncovering the Unknown Unknowns in a single decision, in a project, or in a strategy – the damage our irrational flaws might do or have already done. Is there a way to do that?

Yes, there is. It is called “Red Teaming”: Systematically challenging a decision, project, or strategy in order to uncover blind spots. More on Red Teaming in part 3 of this series. Coming soon!