The Three Layers of Good Decisions Under Uncertainty

Most decision failures are failures of structure, not information.

Build prediction systems, deploy recommendation engines, and review enough analytics pipelines that organizations use to make consequential calls, and a consistent pattern emerges: the quality of a decision rarely depends on how much data you have. It depends on whether you’ve thought clearly about what you don’t know and how much that uncertainty actually matters.

There are three layers to this. Most people operate on only one.

Layer One: Probability Is Already Happening in Your Head

Every decision you make carries an embedded probability estimate. You just don’t write it down.

When you choose a vendor, you’re implicitly betting on their likelihood of delivery. When you hire someone, you’re estimating their probability of success in the role. When you delay a project, you’re calculating, consciously or not, the odds that conditions will improve. The estimate is always there. The question is whether it’s examined.

This is where most decision-making goes wrong before it even starts. Unexamined probability estimates tend to skew toward whatever outcome we already prefer. We inflate the likelihood of success when we’re excited about something. We underweight downside risk when we’re under pressure to act. We anchor to recent events rather than base rates. These are predictable cognitive patterns. But they’re correctable.

The discipline that helps most is simple: surface the estimate and interrogate it. What probability are you assigning to this working? Is that number based on evidence, analogy, or intuition? What would change your estimate? Have you consulted anyone whose prior experience differs from yours?

There is a parallel in how scientists handle measurement uncertainty. A physicist reporting a result always includes the error bars. The measurement itself is useful, but the confidence interval around it determines whether you can act on it. The same logic applies to business decisions. The estimate matters. The awareness of how wrong that estimate might be matters more.

Working in data science trains you to think this way because models force you to. A predictive model cannot have an unexamined assumption. The numbers require you to commit. That discipline is transferable. You don’t need a model to ask yourself: what am I actually assuming here, and how confident am I?

Improving decision quality starts at this layer, with more honest awareness of what you already believe and why.

Layer Two: Structure Your Options Before You Choose

Once you’ve surfaced your probability thinking, the next layer is organizing it so you can compare options on consistent terms.

Expected Monetary Value (EMV) analysis is the formal version of this. The mechanics are simple: for each option, you multiply each potential outcome by its probability, sum those products, and subtract your cost. The result is a single number representing the probability-weighted value of each path.

Consider a VP of Product weighing two paths.

  • Path A: build a new product. $300K in development cost, a 75% chance of generating $800K in revenue, a 25% chance of a $100K loss.
  • Path B: improve an existing product. $100K cost, a 90% chance of a $300K return, a 10% chance of a $50K loss.

Running the math, Path A yields an expected value of $275K. Path B yields $165K. Path A has a clear edge. But the number is the beginning of a conversation, not the end of one.
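The calculation above is small enough to sketch directly. This is a minimal illustration using the article's figures; the helper function name is my own.

```python
def emv(cost, outcomes):
    """Probability-weighted value of a path, minus its upfront cost.

    outcomes: list of (probability, payoff) pairs whose probabilities sum to 1.
    """
    return sum(p * payoff for p, payoff in outcomes) - cost

# Path A: $300K cost, 75% chance of $800K revenue, 25% chance of a $100K loss.
path_a = emv(300_000, [(0.75, 800_000), (0.25, -100_000)])
# Path B: $100K cost, 90% chance of $300K, 10% chance of a $50K loss.
path_b = emv(100_000, [(0.90, 300_000), (0.10, -50_000)])

print(path_a)  # 275000.0
print(path_b)  # 165000.0
```

The value of writing it out is less the answer than the forced explicitness: every probability and payoff has to appear as a committed number.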

EMV analysis does something more valuable than produce an output: it forces you to translate vague confidence into explicit probability. You cannot fill in a decision model with “I think it’ll probably work.” You have to commit to 70%, or 85%, or 55%. That act of committing, of putting a number on your belief, is itself a form of thinking. It reveals how clearly you actually understand the situation.

The format matters too. Rendered as a decision tree, these models become readable by others. You can show someone exactly where your reasoning leads and invite them to challenge specific inputs rather than argue in abstractions. That’s a significant upgrade over competing hunches.

One important caveat: EMV assumes your objective is to maximize expected value. Risk tolerance matters. A 25% chance of a $100K loss may be tolerable for one organization and catastrophic for another. The framework tells you the probability-weighted outcome. How much variance you can absorb is something you have to know separately.
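One way to make that separate question concrete is to look at the spread of net outcomes, not just their weighted average. A sketch, again using the example's figures (the risk-profile framing here is mine, not part of standard EMV output):

```python
def risk_profile(cost, outcomes):
    """Net payoff for each outcome, the chance of a net loss, and the worst case."""
    nets = [(p, payoff - cost) for p, payoff in outcomes]
    p_loss = sum(p for p, net in nets if net < 0)
    worst = min(net for _, net in nets)
    return nets, p_loss, worst

# Path A: a higher EMV, but a 25% chance of being down $400K all-in.
_, p_loss_a, worst_a = risk_profile(300_000, [(0.75, 800_000), (0.25, -100_000)])
# Path B: a lower EMV, but the worst case is a $150K hit with 10% probability.
_, p_loss_b, worst_b = risk_profile(100_000, [(0.90, 300_000), (0.10, -50_000)])

print(p_loss_a, worst_a)  # 0.25 -400000
print(p_loss_b, worst_b)  # 0.1 -150000
```

Two organizations can agree on every input here and still rationally choose differently, because the number that matters to each is different.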

Layer Three: Test Your Assumptions Before They Test You

The third layer is where hard-won experience diverges most sharply from textbook thinking.

Every analysis rests on assumptions. The probabilities you assigned in Layer One are assumptions. The revenue projections in your EMV model are assumptions. The cost estimates are assumptions. In practice, most of these will be at least partially wrong. The question is: which errors are tolerable, and which are fatal?

Sensitivity analysis answers that question. Instead of asking “what happens if everything goes as expected,” it asks “what happens if this input is wrong by 20%? By 40%? What if the probability drops from 75% to 50%?” You vary each input systematically and observe how the output responds. Some variables will move the outcome dramatically. Others will barely register.

This distinction is enormously useful. It tells you where to spend your information-gathering effort. If the entire outcome hinges on one probability estimate, your priority should be reducing uncertainty on that estimate before you commit. If the outcome is stable across a wide range of assumptions, you can move with more confidence even without certainty.
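A one-way sweep over a single input is the simplest version of this. The sketch below varies Path A's success probability from the earlier example and checks where its EMV falls below Path B's; the sweep values are arbitrary.

```python
def emv_path_a(p_success):
    """EMV of Path A as a function of its success probability."""
    return p_success * 800_000 + (1 - p_success) * -100_000 - 300_000

path_b_emv = 165_000  # from the earlier calculation

for p in [0.75, 0.65, 0.55, 0.52, 0.50]:
    value = emv_path_a(p)
    print(f"p={p:.2f}  EMV={value:>10,.0f}  beats B: {value > path_b_emv}")
```

In this toy case the edge flips somewhere between 65% and 55% (algebraically, near p ≈ 0.63). That break-even point is the useful output: if your honest uncertainty about the 75% estimate spans that threshold, reducing uncertainty on that one input is worth more than refining anything else in the model.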

Sensitivity analysis also reveals model problems. When you start varying inputs and the outputs behave in ways that seem wrong, jumping discontinuously or reversing unexpectedly, the model is often telling you it’s structurally flawed. The math is surfacing an error in your logic. This is one of the most underappreciated uses of the technique.

For straightforward decisions, a spreadsheet with a few input cells and a recalculated output is enough. For complex systems with many interacting variables, you need proper scenario modeling. The complexity of the tool should match the complexity of the decision.

Where This Meets Technology

Machine learning and AI systems make decisions, or more precisely, they generate recommendations that humans act on. Those systems embed probability estimates at every layer. A fraud detection model assigns a probability to each transaction. A clinical decision-support system estimates likelihood of diagnosis. A generative AI agent summarizes complex evidence and recommends a course of action. Each is translating uncertainty into structure, whether through statistical inference or language-based reasoning.

The problem is that the probability estimates inside these systems are often opaque, and the sensitivity of the recommendations to specific inputs is rarely communicated to the people using them. A machine learning model might be highly confident when the input data is clean and closely resembles its training distribution. It might produce the same confident-looking output when the data has degraded or the situation has shifted. A generative AI system might sound authoritative regardless of whether its reasoning is grounded in relevant evidence. Without understanding what assumptions drive the recommendation, the person receiving it has no framework for calibrating their trust.

This is why probabilistic literacy matters more than ever. Not because everyone needs to build models, but because everyone who acts on model-generated or AI-generated recommendations needs to be able to ask three questions. What probability does this system assign to being right? What assumptions is it making? And how much does the recommendation change if those assumptions are wrong?

Those three questions map directly to the three layers. They’re the same questions a good analyst asks before any consequential recommendation.

The Through-Line

Think of uncertainty as water finding its level. It doesn’t disappear because you ignore it. It flows into whatever space you’ve left unexamined, pooling in the assumptions you didn’t surface, the sensitivity you didn’t test, the probability you assigned without thinking. Structure doesn’t eliminate uncertainty. It gives the water somewhere to go where you can see it.

That’s the value of thinking in layers. Surface your probabilities and scrutinize them. Build explicit models so others can challenge your inputs rather than just your conclusions. Test your assumptions against the full range of ways they might be wrong before those assumptions get tested by reality.

Good decisions don’t require perfect information. They require clear thinking about imperfect information, and the discipline to do that work before committing, not after.

The tools for this aren’t complicated. What’s hard is the habit.