Thomas Pendergast Vladeck

The importance of model thinking

The other day, I got into a weird argument with my cousin (who is among the smartest people I know). We were discussing Sam Wang, the Princeton professor who runs the Princeton Election Consortium (PEC). Of the well-known election prognosticators, Dr. Wang was the most wrong, with his site putting the probability of HRC winning at over 99%.

My cousin was arguing that the results eliminated Dr. Wang’s credibility in this field and that we basically shouldn’t be listening to him any longer. Because he had been so spectacularly wrong, why should he be trusted again?

But this is wrong, and why it’s wrong matters for the discourse of ideas. First, Dr. Wang wasn’t reporting that he himself was estimating these odds for HRC; he was reporting that his model was outputting these estimates. This is an important distinction. He may have been convinced by the statistical model and believed its estimates, but what matters here is that he was reporting the results of an independent model, not simply stating his own belief.

Certainly, the model the PEC had been using has lost its credibility. We now know that it didn’t properly account for correlated error in state-level outcomes (e.g. a polling miss in PA making misses in MI and WI more likely), and it underestimated how large the overall polling bias could be. We shouldn’t use it again.
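To see why that correlation matters so much, here is a minimal simulation sketch. This is my own illustration, not the PEC's actual model; the state count, margins, and error sizes are made up purely to show the effect. If every state-level polling error is treated as independent, a candidate with a modest lead everywhere looks nearly certain to win; add a single shared national error that shifts all states at once, and that probability drops considerably.

```python
# Illustrative sketch only (not the PEC's model): hypothetical margins,
# error sizes, and state count chosen just to show the effect of
# correlated polling error on a forecast's confidence.
import numpy as np

rng = np.random.default_rng(0)

n_sims = 100_000
n_states = 10                # hypothetical battleground states
poll_margin = 0.02           # candidate leads by 2 points in each state
state_noise_sd = 0.015       # independent, state-specific polling error
national_bias_sd = 0.025     # shared error that hits every state at once

def win_probability(correlated: bool) -> float:
    """P(candidate carries a majority of the battleground states)."""
    state_error = rng.normal(0, state_noise_sd, size=(n_sims, n_states))
    if correlated:
        # One national-bias draw per simulated election, applied to all states.
        national_bias = rng.normal(0, national_bias_sd, size=(n_sims, 1))
        margins = poll_margin + state_error + national_bias
    else:
        margins = poll_margin + state_error
    wins_majority = (margins > 0).sum(axis=1) > n_states / 2
    return wins_majority.mean()

print(f"independent errors only: {win_probability(False):.3f}")
print(f"with correlated error:   {win_probability(True):.3f}")
```

The exact numbers aren't the point. The point is that the same polls, run through a model with and without that shared error term, produce very different levels of confidence, which is exactly the kind of specification choice you can only inspect when the model is explicit.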

But what if Dr. Wang creates a new model that corrects these mistakes? How should we then apply my cousin’s advice to disregard Dr. Wang? Do we not even bother with the new model, since its source is tainted by the previous election? Or do we inspect the new model on its own?

We can see here that my cousin’s advice doesn’t make much sense once you treat the model and its author as separate things. Clearly the new model must be judged on its own merits.

But this gets at a deeper question. What is a recommendation, a forecast, an estimate, an analysis, etc., without a model? The answer is that there is always a model, because there is always some kind of computation that leads to the end result, even if that computation takes place entirely within the neural circuitry of the analyst. In those cases, when people simply come to their own conclusions, the author and the model are one and the same. There are no equations, parameters, or logical relations that observers can evaluate to see whether the specification makes sense.

If Dr. Wang had not made his model explicit and had simply been reporting his own estimates, then perhaps my cousin would have been right. In that world, the logic would go something like this: Dr. Wang’s model turned out to be bad, and his model was him, so disregarding his model and disregarding him are one and the same.

But this of course was not the case, and this is why it is so important to think in terms of explicit models. If you don’t have a model in mind when facing something in the real world, it’s not even clear to me how you update your knowledge, aside from adding another memory to your bank of heuristics. When you understand how a model functions - the relationships among its parts - you can adapt and improve it in the face of real-world experience.