This post was inspired by a recent tweet:

https://twitter.com/trengriffin/status/450331161324027904

It reminded me of one of my favorite lessons from grad school at LSE. The lesson is simple: there are several distinct ways to have incomplete information about the future.

A very simple way to think about it is that foresight has two dimensions: knowledge of what the potential outcomes **are**, and knowledge of how **likely** each outcome is.

Pulling a random card from a standard 52-card deck is a great example of perfect knowledge of both what the range of outcomes is and how likely each outcome is. Even though the outcome is uncertain, we have excellent mathematical tools for deciding how to behave before we know what the outcome will be. This is called “risk”.
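To make the “risk” case concrete, here is a minimal sketch with a hypothetical bet on the card draw (the $10/$1 payoffs are my invention, not from the post). Because every outcome and its probability are known, we can compute the expected value exactly, in advance:

```python
# "Risk": all outcomes and their probabilities are known up front,
# so quantities like expected value can be computed exactly.
from fractions import Fraction

# Hypothetical bet: draw one card from a standard 52-card deck;
# win $10 if it's an ace, lose $1 otherwise.
p_ace = Fraction(4, 52)      # 4 aces out of 52 cards
p_other = 1 - p_ace

expected_value = p_ace * 10 + p_other * (-1)
print(expected_value)  # -2/13, i.e. you lose about $0.15 per draw on average
```

The point is that no guessing is required: the whole probability distribution is on the table before the card is drawn.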

But many situations are not so favorable. Asking a lady out, for example, has a few canonical outcomes (she says no, she says yes, she demurs, etc.), but the asker often has no idea what the likelihoods are. This is called “uncertainty” - where you know what the outcomes are, but not the odds of each.

So now that we have the two axes - outcomes and likelihoods - we can define two more “pure” corners of this two-dimensional space of incertitude.

“Ambiguity” is where you know what the likelihoods are, but not what the outcomes would be. As a contrived example, consider that you were dealt a hand that you weren’t allowed to look at. You would be dealt one more card that you’d be allowed to see. You’d have perfect knowledge about the likelihood of each card coming up, but, crucially, not how it would affect your hand. This type of incertitude is common in very complex situations where you can’t parse out the impact of any given outcome.
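The contrived card example above can be sketched in a few lines. We can state the likelihood of every possible next card precisely, but because the hand is face-down there is no way to write a function that scores how a given draw affects it (the 5-card hand size is my assumption):

```python
# "Ambiguity": the likelihoods are known, but not what the outcomes mean.
import random

deck = [(rank, suit) for rank in range(1, 14) for suit in "SHDC"]
hidden_hand = random.sample(deck, 5)  # dealt face-down: we can't look at it

# From our perspective, any card we haven't seen could be the next draw,
# and each of the 47 possibilities is equally likely - perfect knowledge
# of the likelihoods.
remaining = [c for c in deck if c not in hidden_hand]
p_next = 1 / len(remaining)

# But we CANNOT evaluate how any particular draw would affect the hand:
# scoring requires seeing hidden_hand, so the outcome of each draw is
# opaque even though its probability is exact.
```

(Strictly, the unseen cards include our own hidden hand too; the sketch just illustrates the asymmetry between knowing the odds and knowing the consequences.)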

Finally, “ignorance” is where you have knowledge of neither what the outcomes are nor what their likelihoods are. These are the “unknown unknowns” that Donald Rumsfeld made famous.

Visually, the layout of different situations looks like this:

|                      | Likelihoods known | Likelihoods unknown |
| -------------------- | ----------------- | ------------------- |
| **Outcomes known**   | Risk              | Uncertainty         |
| **Outcomes unknown** | Ambiguity         | Ignorance           |

(In the original graphic, different methods for handling each form of incertitude were listed in italics.)

The most important thing, though, is to know where you are in this space. If you think you’re in “risk” space but you’re actually in “uncertainty”, you’re setting yourself up for a bad day, because the tools you’re using may assume a probability distribution over outcomes that you simply don’t have.

In my view, the bottom half of this matrix (as presented in the graphic) is significantly more common than the top. It’s very rare that we ever have a real distribution to work with; at least not one that we haven’t simply contrived or guessed at so that we could use friendlier, more comforting mathematical models.

In any case, “risk” is actually a very special corner of this space, but most of the models you see for decision-making (e.g., the ubiquitous Excel spreadsheet) are built around the “risk” case.