A Pocket-Size Checklist of Thinking Errors
All the ways your framings and models can lead you astray
Marco Giancotti
Cover image: Photo by Mario Gogh, Unsplash
A recurring theme on this blog is the idea that we use "framings" and "(mental) models" to make sense of the world. Neither of these is an original concept of mine, but I have spent a considerable amount of time and keystrokes refining, redefining, and clarifying what they mean and how to understand them. It occurs to me that some readers might be wondering why.
My chief goal is to decide, act, and live better—both for myself and for whoever wants to listen—and I believe that thinking better is one of the best ways to do that. Framings and models are meta-thinking tools for this very practical purpose, and in this post I will try to prove that.
Talk of framings and, especially, mental models is common nowadays, but they seem to be relegated to the business self-help shelves of bookstores and to clickbait "tips and tricks to be more logical" blog posts. As I wrote previously, I think this is a disservice to these ideas.
They are not just useful add-ons for productivity buffs and those who want to be cleverer: they describe how every single thought you ever think works. We always frame things and construct models whether we realize it or not. Understanding them even a little better is probably a good investment of anyone's time.
One first, easy way to leverage them is in the negative. How does one typically misuse framings and models? What thinking mistakes do they highlight?
I tried that exercise, and the result was the 15-item checklist below. These should encompass virtually every way to use framings and models wrong—and, by extension, every possible thinking mistake. Not bad, for just 15 short items!
The idea is for you to return to this list whenever you're puzzled, confused, stumped, contradicted, flummoxed, bewildered, perplexed, nonplussed, dumbfounded, frustrated, bamboozled, discombobulated, balked, mystified, flabbergasted, foiled, or foxed. It might just point you in the direction you need.
But first, a quick recap of what "framing" and "model" mean in AeMug-speech.
Definitions
The premise to all that follows is that virtually everything about human thought consists in building predictive models of the world. Our brains have embedded prophecy devices that work around the clock to predict the future, reconstruct past events, and understand our place in all that. Thinking is simulating the world. The question is how—not from a biological point of view (we don't know much about that), but from a systemic one.
In this context, a framing is the set of things that you consider to exist when thinking about something, including what those things do. You can't keep the whole universe in mind, so you have to pick a minimal number of "moving parts" that suffices to build your next model.
If you're trying to predict the outcome of a tennis match, you probably focus on what's going on between two specific humans, two rackets, one ball, one court, and perhaps a few other elements, but you probably don't include the International Space Station, your cousin's masseuse, or the Cambodian tapioca industry. The latter three are not part of your "tennis match prediction framing"—they're irrelevant, dismissible noise for your current purpose.
Readers who also follow Plankton Valhalla might remember that Boundaries Are in the Eye of the Beholder. A framing, then, is a subjective, arbitrary, but deliberate choice of boundaries. It is also the understanding about how those moving parts behave, their "properties".
(Those familiar with information science or philosophy may use the term "ontology" interchangeably with "framing".)
I call the "moving parts" comprising a framing black boxes to emphasize that you don't need to know or care what goes on inside them, only how they look and behave from the outside. In that last sentence I used "look" metaphorically, because even abstract concepts are black boxes.

So the tennis ball in the "tennis match prediction framing" is just an object with a certain shape, texture, and color, which bounces in a certain way. For all you know, it might contain a thousand multiverses full of monsters and better versions of you on the inside, but you don't need to think about that: as far as this framing is concerned, the ball is just a black box, and it only has to keep doing what it usually does on the outside in order to be useful for your goals.
In language, every word is a boundary around a set of concepts and behaviors, and the inside of the boundary is a black box.
Finally, a model is a specific mental arrangement—or, as I prefer to say, an alignment—of the black boxes comprising a framing. The humans, the tennis rackets and balls, and so on could be aligned in a lot of different ways: for example, you could imagine two rackets on one side of the court, and two players on the other side; or a court completely flooded with a million tennis balls and no humans. Every such arrangement is a model, but usually you want to imagine the model with the alignment that best approximates the actual match in the real world. That will allow you to make the best possible simulation of how the real-world game might unfold, based on your knowledge.
And here's the ELI5 version: the framing is the group of Lego bricks you take out of the bag; each brick on the floor is a black box; the model is whatever you build with them today. But it all happens in your head!
You can read more about these things here.
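For the programmers in the audience, here is the same idea as a minimal code sketch in Python. Everything in it (the class names, the attributes, the tennis details) is an illustrative choice of mine, not a formal specification:

```python
from dataclasses import dataclass, field

@dataclass
class BlackBox:
    """A 'moving part': known only by its outside look and behavior."""
    name: str
    behaviors: set[str] = field(default_factory=set)

@dataclass
class Framing:
    """A subjective, purpose-driven choice of which black boxes exist."""
    purpose: str
    boxes: dict[str, BlackBox] = field(default_factory=dict)

    def include(self, box: BlackBox) -> None:
        # Drawing a boundary: everything not included stays background noise.
        self.boxes[box.name] = box

@dataclass
class Model:
    """One specific arrangement (alignment) of a framing's black boxes."""
    framing: Framing
    alignment: dict[str, str]  # box name -> its imagined state right now

# The tennis example, framed for the purpose of predicting the match:
framing = Framing(purpose="predict the outcome of the match")
framing.include(BlackBox("Player Carlos", {"serves", "returns", "tires"}))
framing.include(BlackBox("Player Nick", {"serves", "returns"}))
framing.include(BlackBox("ball", {"bounces", "spins"}))
# The ISS and the Cambodian tapioca industry are simply never included.

model = Model(framing, alignment={
    "Player Carlos": "baseline, serving",
    "Player Nick": "opposite baseline, receiving",
    "ball": "in Carlos's hand",
})
```

Nothing in this sketch computes anything; its only point is the shape of the relationships: a framing is a purposeful selection of black boxes, and a model is one particular alignment of them.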
The Checklist
First, the list. Explanations and examples follow. We'll start with the models, because they're the most superficial part, and go deeper as we proceed to framings.
Model errors
- Construction errors:
  - When aligning the black boxes, did I really replicate the real-world situation I care about?
  - Did I forget to include all the relevant black boxes?
- Naivety errors:
  - Am I implicitly assuming that my model is 100% reliable?
  - Am I confused about the purpose of the model in the first place?
  - Am I assuming that other people share the same purpose when using what seems like the same model?
- Existential errors:
  - Am I forgetting that every thought I think has a framing and a model behind it, however implicit?
  - Am I forgetting that what I have might actually be a framing problem, rather than a model problem?

Framing errors
- Behavior prediction errors:
  - Am I misunderstanding how each black box should behave?
  - Am I forgetting some important but rare behaviors?
- Boundary errors:
  - Am I drawing the boundaries too wide?
  - Am I drawing the boundaries too narrow?
  - Am I relegating too much or too little to "environment" or "background noise"?
- Premise errors:
  - Am I assuming that the same-sounding word used by someone else is necessarily part of the same framing as mine?
  - Am I assuming that black boxes I've never seen interacting would interact in predictable ways?
  - Am I forgetting that framings are arbitrary choices and can be changed as and when needed?
Model Errors
Construction Errors
When aligning the black boxes, did I really replicate the real-world situation I care about?
The most obvious error when making predictions, and probably the only one that most people worry about: did I get it right? Am I modeling something that doesn't exist?
Did I forget to include all the relevant black boxes?
I might have succeeded in aligning some moving parts realistically, but am I downplaying some important known factors? For example, maybe I'm aware that weather is a factor in a tennis match—weather is in my framing—but in this case I might have forgotten about it, and so failed to predict that the extreme heat on the day of the match might affect the two players differently.

Naivety Errors
Am I implicitly assuming that my model is 100% reliable?
Ah, the map is not the territory! All models are wrong, even though some are useful. In other words, there is always something I can't predict or account for. That's okay, as long as I remember that. Forget it, and I'm in for some rude awakenings.
Am I confused about the purpose of the model in the first place?
Since boundaries and framings are arbitrary, how I choose them depends on what I'm trying to achieve. If my goal is match-betting, the humans-rackets-ball-court framing is good; if my goal is designing a zero-gravity experiment, I probably want to pick the ISS and its contents as black boxes, and treat any ball game on the ground as useless noise.
That's an extreme example and hard to get wrong. But very often we deal with much subtler doubts about purpose. For example, if I'm the tennis team's physician, I might be more interested in predicting whether our player can finish the match without injuries, rather than the final score—a difference that needs to be reflected in my choice of framing and model.
Am I assuming that other people share the same purpose when using what seems like the same model?
Different people have different goals. Thus, they will naturally choose different framings and models even for the same real-world phenomena. A fan and a tennis physician will be very confused in conversation unless they understand that they're thinking about the match with different goals and perspectives.
The same goes for two fans who enjoy the sport differently—one for the strategic awe of a long and complex rally, for instance, and the other for the sense of being part of a crowd of passionate fans. These slight differences in goals will impact how they think and talk about the game.
All this is normal and to be expected. I should always be asking myself: is my interlocutor trying to do the same thing with his model?
Existential Errors
Am I forgetting that every thought I think has a framing and a model behind it, however implicit?
This is one of the toughest. We don't often meta-think, as in "am I thinking about this right?" We just... think, and may the Flying Spaghetti Monster help us.
The thing is, it is much easier to think wrong—make one of the errors in this checklist—than right. Taking a step back, at least for the trickier problems, is a powerful way to do better.
Here's the solution: I'll remember to return to this list over and over until I've memorized it!
Am I forgetting that what I have might actually be a framing problem, rather than a model problem?
A-ha. Maybe I'm doing everything right with my black boxes, I'm following the instruction manual to the letter, and yet things don't check out. Maybe it's time to go deeper, down into the very foundations of my thinking about this problem—question the instructions themselves. I need to check the Framing Errors section.
Framing Errors
Behavior Prediction Errors
Am I misunderstanding how each black box should behave?
Perhaps I misunderstood how those things work. There are two "player" black boxes in the tennis match, alright, but maybe Player Carlos has more cards up his sleeve than I thought, or his forehand return dip is not as flawless as I initially expected. This affects the quality of my predictions.
Am I forgetting some important but rare behaviors?
Maybe I have the basics right, and my model provides good predictions in most common cases, but what if one of the black boxes behaves in peculiar ways under certain less-common conditions?
Maybe I expertly included "the crowd of spectators" as another black box in my sophisticated model of the tennis match, because sometimes too much noise can distract the players. But did I consider that Player Nick becomes stressed when the crowd is too supportive?
Boundary Errors
Am I drawing the boundaries too wide?
I usually choose my black boxes instinctively—I draw their boundaries in the way that feels most obvious. But sometimes that is not enough for the level of quality I need in my mental simulations of the world.
Perhaps I need to open one of them up and look inside: instead of "Player Carlos", I might have to begin thinking about "Player Carlos' left knee (which was injured months ago)" and "Player Carlos' mental state": new black boxes added to my framing to make sense of what is going on out there.
Am I drawing the boundaries too narrow?
On the other hand, I might be considering too many details. I don't need to think about the condition of the court's grass or the size of Player Coco's tank top. True, everything is connected and even those factors might influence the match a tiny bit, but those black boxes would probably complicate my mental model more than they would enhance it.

Am I relegating too much or too little to "environment" or "background noise"?
This is a deeper and broader version of the "did I forget to include all the relevant black boxes?" question in the Model Errors section. In that case, the problem was simply leaving a black box that was available in the framing out of the model without a good reason. This framing version, however, is about blind spots and false positives—much harder to debug.
There might be important factors I'm completely oblivious to that heavily impact the quality of my predictions. This is caused by ignorance, and it is not always my fault. What if, for example, Player Carlos just broke up with his partner yesterday, and is playing in a devastated state of mind? I would do well to carve a new boundary out of what I considered mere background noise, adding a new "Carlos's ex" black box to my framing. Only then might it occur to me to monitor both sweethearts' social feeds on the day before the match, and to ask sources in the know, in order to update my model to include the latest scoop.
(In case you're wondering, I'm making all this up. I don't really think girlfriends and boyfriends are worthwhile black boxes for most tennis matches. Hey, I'm just trying to keep things simple by sticking to the same example.)
Conversely, I might be including irrelevant black boxes that don't significantly affect the outcome of the match. This problem is similar to the "boundaries too narrow" case, so refer back to that.
Premise Errors
Am I assuming that the same-sounding word used by someone else is necessarily part of the same framing as mine?
The Naivety Errors section reminds me that framings and models are relative to the goals of their users. This causes a lot of problems when two people use the same word to refer to different real-world phenomena. They framed things differently from each other, but the fact is hidden by language.
When a male fan of Female Player A mentions "A's legs", he might mean something entirely different from when A's physician pronounces the same words.
The solution, again, is to first uncover each other's goals, then understand the discrepancies between each other's framings, and finally attempt to communicate about the models.
Am I assuming that black boxes I've never seen interacting would interact in predictable ways?
The number of possible interactions between the black boxes in a framing increases quickly as the framing grows: five black boxes can be combined in ten pairs, ten black boxes can pair up in 45 ways, and so on. These numbers shoot up even faster if I consider three-way interactions and even more complex ones.
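To make that growth concrete, here is the quick arithmetic behind those numbers, sketched in Python (the pair counts are binomial coefficients; counting every grouping of two or more boxes is my own extrapolation of the point):

```python
from math import comb

# Pairwise interactions among n black boxes: n choose 2 = n*(n-1)/2.
# Groupings of two or more boxes, of any size: 2**n - n - 1.
for n in (5, 10, 20):
    print(f"{n} boxes: {comb(n, 2)} pairs, {2**n - n - 1} groupings")

# Output:
# 5 boxes: 10 pairs, 26 groupings
# 10 boxes: 45 pairs, 1013 groupings
# 20 boxes: 190 pairs, 1048555 groupings
```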
So, have I really considered all the possible (and feasible) ways the elements can clash and synergize with one another? Have I seen how the black boxes behave in all cases? Or am I just expecting them to behave the same even in novel situations?
Player Coco has just bought the newest high-tech racket model. Can I be confident about how her game will be affected?
Am I forgetting that framings are arbitrary choices and can be changed as and when needed?
This is the last, but possibly greatest, question. I wonder why we are so good at not asking this.
Boundaries are in the eye of the beholder. My framings are all up to me. I am not forced to use someone else's framings, nor to stick to the same framings forever. I can fix them, evolve them, integrate them with new ones at any moment. I can even throw them away and start from scratch.
Given that all conscious thought is based on the predictions and simulations of mental models, and given that models are always built on top of framings, the skill of reframing is arguably one of the most important for any human being to acquire.
It happens all too often: when something just doesn't make sense, when no amount of mental manipulation and consideration helps, it's probably time to revise the framing. ●