Decision-making illusions
Behaviors that might seem like good ways to make decisions but that often aren't
Outcomes are probabilistic, so even high-quality decisions can result in bad outcomes and low-quality decisions can result in good ones. However, the quality of a decision does affect the likelihood of a good outcome, and decision quality is something we can control. Put differently, you don’t get to choose the outcomes of your decisions, but you do get to choose how you make them.
Below is a list of 11 decision-making illusions[1] — things that might appear to reflect high-quality decision-making, but that often don’t.
Decision hot potato
When it isn’t clear who should make a decision, groups sometimes play “decision hot potato,” spending a lot of time going around in circles. In addition to being a poor use of time, this can turn an individual-led decision into a consensus-led decision, expand the consensus group, or escalate the decision for someone else to handle. Delays, consensus-driven decisions, and escalations can all be helpful, but when they aren’t the right approach they tend to decrease decision quality.
When possible, decide upfront who will make the decision and ensure all participants know their role. A simple heuristic for identifying the decision-maker is to ask, “Who has the most skin in the game for this decision?” If incentives are well-designed, this person (or group of people) is likely to be the most motivated to make a good decision and ensure it succeeds. Balance this criterion with the right incentives, authority, and expertise to identify the best decision-maker.
The HIPPO
Groups and individuals sometimes defer to the Highest Paid Person’s Opinion (the HIPPO), even if they disagree or have ideas that would be worth discussing. They might believe that this person has the best judgment, or they may not feel comfortable or motivated to share and debate ideas.
While this is a legitimate issue, the highest-paid person in the room is often the right decision-maker. In many cases, their experience and judgment are precisely why they are in their position, and their opinion may well be the best one. A healthy debate and quantitative methods can help validate the strength of that opinion.
“I’ll decide once I see the data”
While this might seem like a respectable, data-driven approach, it’s often an illusion. People subconsciously (and sometimes consciously) interpret data in ways that best align with their preferences.
Whenever possible, work with stakeholders to agree on the decision criteria before seeing the data. For example, you might decide that if a certain metric is less than x% you will select option A, otherwise you will select option B[2]. While it may be necessary to adjust decision criteria after the fact, aligning upfront helps you minimize bias and have a more productive debate about how to interpret the data.
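As a minimal sketch of what pre-registering a decision rule can look like (the metric, the 3% threshold, and the option descriptions here are hypothetical), you might write the rule down in a form that leaves no room for post-hoc reinterpretation:

```python
# Hypothetical pre-registered decision rule, agreed on before anyone pulls the data.
# The metric, threshold, and option descriptions are illustrative placeholders.
DECISION_RULE = {
    "metric": "trial_to_paid_conversion_rate",
    "threshold": 0.03,
    "below": "Option A: redesign the onboarding flow",
    "at_or_above": "Option B: invest in new acquisition channels",
}

def decide(observed_rate: float) -> str:
    """Apply the rule exactly as written once the data arrives."""
    if observed_rate < DECISION_RULE["threshold"]:
        return DECISION_RULE["below"]
    return DECISION_RULE["at_or_above"]

print(decide(0.024))  # below the threshold -> Option A
print(decide(0.041))  # at or above the threshold -> Option B
```

Writing the rule down this explicitly is the point: when the data arrives, the debate is about whether the rule was reasonable, not about how to reinterpret the numbers.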
“Strategic value”
Too often, “strategic value” is used as a placeholder for thinking critically about how something will actually benefit the business. What makes this value “strategic”? How exactly will this initiative help us increase revenue or decrease costs? When and by how much? Even if you don’t quantify the impact, make sure it’s clear how the initiative is expected to help the organization achieve its goals.
Measurement inversion
Measurement inversion is a phrase coined by Douglas Hubbard to describe how people often spend most of their effort on easy measurements (e.g. vanity metrics) while neglecting the factors that are more consequential for a decision. For example:
To measure the productivity of knowledge workers, organizations often focus more on activity (e.g. time in office, meeting participation, number of projects) than the impact of their work
To measure the business impact of a CX initiative, organizations tend to spend more energy evaluating changes in customer satisfaction than changes in customer behavior
To measure content effectiveness, organizations often spend more effort on views, clicks, and downloads than on how well the content is actually accomplishing its purpose (e.g. helping acquire new customers)
When evaluating options, spend time thinking about what would happen if the decision is successful (or unsuccessful) and how you might measure it. While the true impact of a decision is often harder to estimate than vanity metrics, you might be surprised at what you can come up with given 20 minutes of focused thinking[3].
Subjective scoring methods
When dealing with hard-to-measure metrics (e.g. estimated project impact), it’s common to turn to subjective scoring methods[4]: participants rate each option’s attributes (impact, risk, cost, etc.) on a scale such as 1-5 or high/medium/low, and the scores are then averaged or plotted on a 2x2.
Subjective scoring methods tend to increase our confidence in a decision without an associated increase in decision quality, widening the gap between perception and reality[5]. A better approach is to identify real quantitative measures that reflect decision success. For example, by “impact,” you might really mean “impact on average deal size” or “reduction in support calls.” Ask your participants to provide estimated ranges for each variable. Even imperfect quantitative estimates are an improvement over scores. At the very least, rank options by estimated impact.
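As a minimal sketch (the option names, ranges, and deal volume below are made-up assumptions), comparing options by simulating over estimated ranges might look like this:

```python
import random

# Hypothetical (low, high) estimates; all names and numbers are illustrative.
options = {
    "Option A": {"deal_size_lift": (50, 400), "cost": (20_000, 60_000)},
    "Option B": {"deal_size_lift": (150, 250), "cost": (80_000, 120_000)},
}
DEALS_PER_YEAR = 1_000  # assumed deal volume

def expected_net_impact(option: dict, n: int = 10_000) -> float:
    """Average net impact across simple uniform draws from each estimated range."""
    total = 0.0
    for _ in range(n):
        lift = random.uniform(*option["deal_size_lift"])  # $ lift per deal
        cost = random.uniform(*option["cost"])            # one-time cost
        total += lift * DEALS_PER_YEAR - cost
    return total / n

results = {name: expected_net_impact(opt) for name, opt in options.items()}
for name, value in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: ~${value:,.0f} expected net impact")
```

Even a crude simulation like this forces the conversation onto real quantities (deal size, cost, volume) rather than onto what a 4/5 means.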
Over-engineering the decision
It’s common to believe that more time, money, effort, data, etc. always result in a higher-quality decision. Not so. The effort spent on a decision should be justified by the risk associated with the decision[6]. Consider whether you have ever been frustrated by how long a decision is taking or thought that there were too many cooks in the kitchen. These are clues that the decision is being over-engineered. At a certain point, there will be diminishing returns on effort spent. That is when it’s time to decide.
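As a rough, hedged illustration of matching effort to risk (the probability, loss figure, and 10% ceiling below are assumptions for the example, not a prescribed formula), you can bound further analysis by the expected cost of getting the decision wrong:

```python
# Back-of-the-envelope bound on how much a decision is worth analyzing.
# Every number here is an illustrative assumption.
p_wrong_if_decided_now = 0.30   # chance of picking the worse option with current info
loss_if_wrong = 200_000         # magnitude of the negative outcome, in dollars

expected_loss = p_wrong_if_decided_now * loss_if_wrong  # = 60,000

# Spending more than the expected loss on analysis cannot pay for itself;
# in practice a small fraction of it is a sensible ceiling.
analysis_ceiling = 0.10 * expected_loss

print(f"Expected loss from deciding now: ${expected_loss:,.0f}")
print(f"Rough ceiling on further analysis: ${analysis_ceiling:,.0f}")
```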

A related issue is overcomplicating the decision. Break the decision into components, narrow the list of options, eliminate non-useful decision criteria, and clarify decision-making roles to simplify wherever possible. This can be especially helpful when you don’t have enough time or resources to collect all the data you would like. Make the best decision possible given your constraints.
Doing statistics without logic
Sometimes people insist on “seeing some evidence,” even if there is already a solid logical argument. Or they believe that statistically significant data is sufficient for a good decision. Nassim Taleb calls this “naive empiricism.” The problem is that while you can do logic without statistics, you can’t do statistics without logic. There is no shortcut (that I’m aware of) for critical thinking.
Here’s how I value different forms of evidence:
Logical arguments: Logic is built through study, practice, and critical thinking. It can incorporate cause-and-effect relationships, including second-order effects and beyond, unlike data alone.
1st party data: Data from your own organization can answer questions like, “Have we seen something like this before? What happened?” While it can be difficult to get access to relevant, clean data, this is a powerful tool for informing decisions.
3rd party data: Data from other organizations, like benchmarks or case studies, can help reduce your uncertainty about a decision. It can answer questions like, “What have others experienced in similar situations?” It can be easier to get than 1st party data, but is often not as strong.
For each form of evidence, quality matters. A faulty logical argument, biased 1st party data, irrelevant 3rd party data, etc. can all lower decision quality. Question assumptions, inspect the relevance of evidence, check for bias, and treat your evidence accordingly.
Assuming causation from correlation
Incorrectly assuming causation from correlation or association is a common mistake in decision-making[7].

Finding an association between variables can be encouraging, but the vast majority of associations we find in life and business are not causal. This is because of:
Coincidental associations: With so many variables being measured today, it is natural to find many associations that occur purely by chance, like the famous spurious correlation between Nicolas Cage film releases and swimming pool drownings. The short simulation after this list makes this concrete.
Shared common cause: Two variables can be associated because a third variable drives both. A classic example is the association between the frequency of shark attacks and the volume of ice cream sales: both are driven by a shared common cause (time of year).
Differences between compliers and non-compliers: Compliance with a treatment often reflects fundamental differences between groups. For instance, someone who signs up for a new workout class is likely prioritizing their health more than the average person. Comparing their health outcomes to those of people who didn’t take the class is not an apples-to-apples comparison[8].
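As promised above, here is a minimal simulation of coincidental associations: every series below is pure random noise with no real relationships, yet screening enough pairs still turns up “strong” correlations by chance.

```python
import random
import statistics

def correlation(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
N_SERIES, N_POINTS = 200, 12  # e.g. 200 metrics tracked monthly for a year
series = [[random.gauss(0, 1) for _ in range(N_POINTS)] for _ in range(N_SERIES)]

# Count pairs of unrelated series that nonetheless look "strongly related".
strong_pairs = sum(
    1
    for i in range(N_SERIES)
    for j in range(i + 1, N_SERIES)
    if abs(correlation(series[i], series[j])) > 0.7
)
print(f"Pairs with |r| > 0.7 among unrelated random series: {strong_pairs}")
```

With roughly 20,000 pairs and only 12 observations each, many pairs typically clear the bar even though every series is noise.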
If the stakes are low or the decision can be easily undone, it may be appropriate to make a decision without establishing whether an association is causal. The effort you put into reducing uncertainty should be justified by the risk inherent in the decision[9].
Action for the sake of action
People often feel compelled to intervene. This might come from overconfidence in their abilities, a desire to leave their mark, neglect of second- and third-order effects, fear of appearing lazy, and other factors. Yet even well-intended interventions can make things worse, especially in complex systems. When making a decision, treat the status quo as a valid option. Sometimes no action is the best action.
Focusing solely on outcomes, neglecting decision quality
I call this illusion “the reverse fortune teller” because we assume that a bad outcome means the decision was low-quality. This is problematic because:
Outcomes are probabilistic: High-quality decisions can lead to bad outcomes, and low-quality decisions can lead to good ones (the short simulation after this list makes this concrete).
Bad incentives: Judging only the outcome of a decision, without considering its quality, may encourage people to highlight the favorable aspects of an outcome and hide the negative ones (window dressing). It can also discourage risk-taking.
Delayed feedback: The outcome of a decision is often unknown for some time, but decision quality can be assessed immediately.
Locus of control: We don’t get to choose our outcomes but we do get to choose how we make decisions. Control the controllable.
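To make the probabilistic nature of outcomes concrete, here is a minimal simulation (the probabilities and payoffs are made up) in which the higher-quality choice still ends badly a substantial share of the time:

```python
import random

random.seed(7)

def play(win_prob: float, win: float, loss: float) -> float:
    """One realized outcome of a bet-like decision."""
    return win if random.random() < win_prob else loss

# Illustrative numbers only: the "good" decision has the higher expected value,
# the "bad" decision has the lower one.
good = {"win_prob": 0.70, "win": 100.0, "loss": -100.0}  # EV = +40
bad  = {"win_prob": 0.40, "win": 100.0, "loss": -100.0}  # EV = -20

N = 100_000
good_losses = sum(play(**good) < 0 for _ in range(N)) / N
bad_wins    = sum(play(**bad) > 0 for _ in range(N)) / N

print(f"High-quality decision still ends badly ~{good_losses:.0%} of the time")
print(f"Low-quality decision still ends well  ~{bad_wins:.0%} of the time")
```

Judged outcome by outcome, the worse decision often looks fine; judged by process, it never does.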
A focus on decision quality requires critical thinking not just to evaluate options, but also to assess the decision-making process itself.
Footnotes
[1] Honorable mentions include mistaking two-way doors for one-way doors, misinterpreting p-values, using point estimates instead of ranges, and assuming all things are normally distributed.
[2] Thresholds like this and checklists are great for binary choice decisions (yes/no). For discrete choice decisions (choosing between 3+ options), rank your options based on estimated payoff. For continuous choice decisions (e.g. allocating resources across options), simulate or imagine total payoff under various scenarios.
[3] Proxies can be extremely useful in estimating the variable of interest. But be wary of sacrificing reliability for ease. Apply skepticism liberally.
[4] Subjective scoring methods often have participants rate various attributes of options (impact, risk, cost, etc.) on a scale (e.g. 1-5, high/medium/low, etc.). These scores are typically used to get an average score for each option or to plot them on a 2x2.
[5] Subjective methods are not inherently bad. The issue here is the scores. Ordinal scales have arbitrary thresholds (e.g. where is the line between medium and high impact?) and are interpreted inconsistently across people and time (4/5 for me is not the same as it is for someone else). While scale points can be calibrated to real quantitative measures, this is rarely done in practice.
[6] Specifically, the likelihood and magnitude of a negative outcome. If you want to get a little more precise about how much effort is justified for a given decision, I cover this topic here with the very smart Tom Vladeck.
[7] If the topics of CX and correlation vs. causation interest you, I cover them in more depth in Why it's not worth trying to equate CX scores to customer behavior and Connecting CX and business value: Run more experiments.
[8] The research question here is not whether those who took the class had better outcomes than those who didn't. It's whether those who took the class had better outcomes than if those same people hadn't taken the class (the counterfactual).
[9] Honorable mention: Associations also don't tell us the direction of causation. You might want to say that customer satisfaction causes purchase frequency, but how do you know that customers aren't more satisfied because they are purchasing more often?