1-Page Summary

Each day, we face difficult decisions. At the same time, the world moves so fast that we often lack a clear view of the problems we face. In The Great Mental Models Volume 1, Shane Parrish and Rhiannon Beaubien argue that, in the face of this complexity and uncertainty, we can navigate tough decisions using mental models—simple mental representations of how things work—that help us cut through complexity and understand the world.

The Great Mental Models Volume 1 presents a foundational set of nine mental models. According to the authors, these models are timeless thinking tools from a variety of fields. In learning them, you’ll equip yourself to make better decisions, avoid consequential mistakes, and succeed in life and work.

The next two volumes of The Great Mental Models build on the first, which introduces the premises and purpose of the series. Volume 2 continues with mental models from physics, chemistry, and biology, and Volume 3 describes models from systems thinking and mathematics.

Parrish and Beaubien, both former Canadian intelligence agents, are the founder and content strategist (respectively) of the Farnam Street blog, where they write about mental models, decision-making, and the pursuit of wisdom. In this guide, we’ll explain their view of mental models—what they are, how they work, and why they matter—and we’ll discuss the nine models they present.

In commentary, we’ll expand on the authors’ explanation of mental models, and we’ll offer real-world examples to show how the authors’ choice of models can be applied in daily life.

What Mental Models Are and How They Work

To begin, we’ll explain the authors’ approach to mental models: what a mental model is, why you need them, and the type of models the authors chose for Volume 1 of The Great Mental Models.

Mental Models Represent How Things Work

Put simply, a mental model represents how something works. According to the authors, mental models are time-tested ways of thinking that help us break down and solve complex problems.

The authors’ premise is that the world operates according to a finite set of rules and patterns and that by recognizing and understanding them, you can see through a problem’s complexity and make better decisions. Think of a mental model as one such rule, or a discrete “chunk” of understanding. Multiple mental models give you multiple chunks that, taken together, help you grasp how things work.

For instance, having a variety of mental models can help you make a delicious plate of pasta—texture and flavor, two common culinary concepts, help you ascertain whether your dinner is well-cooked. Add in others, such as timing and heat, and you gain an even clearer picture of what it takes to cook well.

Multiple Kinds of Mental Models

Mental models come in various forms. Some sources, such as the authors and writer Julian Shapiro, call them simplified representations of complex ideas. Others describe them as thinking processes rather than discrete, unchanging chunks of understanding. Still others treat them as rules of thumb.

Since not all mental models are the same sort of tool, try to notice which of the authors’ models fall into each category. Additionally, notice that the notion of “mental models” is itself a mental model—a way of describing how we conceptualize, break down, and make sense of the world. Just by gaining the model of mental models, you’ve already started to level up your thinking.

According to the authors, mental models make you a better decision-maker in life and work because they enable you to understand why things happen the way they do and therefore predict what will happen next. Returning to the pasta example, learning mental models for timing, heat, and flavor helps you decide what your dinner needs—perhaps a bit more sugar in the tomato sauce or a bit less time on the boil.

You’re likely to face more complex scenarios in your life than how to cook a great plate of pasta, which is where the authors’ mental models will really help you. In Volume 1, the authors focus on broad thinking tools that don’t fall into any specific field, opting instead to begin with a well-rounded repertoire of mental models that capture humanity’s best, time-tested wisdom.

Understanding the fundamentals will enable you to see the world with “x-ray vision” and solve any problem by mixing and matching your mental models. The authors call this ability to see clearly and devise creative solutions wisdom.

What Is Wisdom?

In the book’s introduction, the authors describe wisdom as the ability to find the right solution to various kinds of problems. Mental models help you do this by allowing you to turn the problem over, look at it from various angles, and break it down.

There are numerous other perspectives on wisdom, an age-old topic in philosophy, literature, religion, and psychology. Traditionally, human cultures relied on “the wisdom of the elders”—common-sense knowledge about life passed down from generation to generation. More recently, scientists have tried to define wisdom as a specific set of capacities and virtues, including emotional sensitivity, communication skills, and learnedness.

There’s no consensus on what exactly wisdom is, nor whether people can learn it. However, you can learn many skills that some scientists argue contribute to wisdom, such as the ability to evaluate options and learn from your mistakes—which mental models can help you do.

With that, let’s dive into the authors’ nine foundational mental models.

Group #1: Elemental Models

To begin, the authors introduce three elemental models that apply to many situations. These first models expand upon and demonstrate the authors’ notions of what mental models are and how they work.

Altogether, these broad “meta-models” will help you think more clearly about each successive model.

Model #1: Know That Maps and Models Have Limitations

First, the authors explain how to think about mental models as maps. Note that the authors treat “maps” and “models” as synonymous terms. Given this, the authors’ argument is that the limitations of real, physical maps also apply to mental models.

Parrish and Beaubien argue that maps are invaluable tools—they help us navigate the world by giving simplified representations of the terrain. But to use them effectively, remember that maps aren’t literal pictures of the territory they describe, nor do they provide every piece of information available about that territory. Rather, they give only the information you need for your specific purpose. For example, if you’re driving to Yellowstone National Park, you’d want a road map that includes landmarks (such as Old Faithful); you’d have no use for a relief map of the same region, or for one that shows where public hunting land is.

All Models Are Wrong

In 1976, the British statistician George Box observed that “all models are wrong, but some are useful.” In essence, this captures the reality that no model or map can ever perfectly represent the real world.

As the authors suggest, this is because the world is infinitely large and complex, while models and maps attempt to simplify things into finite representations. Much of the scientific establishment dedicates its work to refining scientific models in increasingly nuanced and precise ways—however, in most cases, useful is a better criterion than perfect.

This invokes a popular principle from Stoicism: Focus on what you can control rather than what you can’t. Your maps will never be perfect or complete, no matter what you do—for instance, numerous beliefs and theories that once guided the world have been overturned and replaced with newer ideas. This highlights that living effectively comes not from perfect, flawless knowledge, but from the ability to act decisively despite imperfect knowledge.

Because maps exclude most aspects of the territory in order to be useful, they have certain limitations. The authors stress that if you fail to recognize these limitations, you risk making poor decisions. Keep these two limitations in mind:

Limitation #1: Maps are time-bound. We make maps at certain points in time, but maps don’t automatically change as the world does. This means that maps go out of date. Outdated maps can lead to bad decisions and mistaken beliefs. Similarly, when using a model, consider whether it’s still useful or if the thinking needs to be updated. (Shortform note: A Shortform guide operates in this way; our commentary often expands or updates ideas from older books (“maps”), helping offset the time-bound nature of the information.)

Limitation #2: Maps are reductions. Remember that the territory and its map are different—many things have been excluded from the map that you’ll run into in real life. If you forget this, the authors say that you risk dogmatically forcing your maps to “work,” even when they no longer fit reality. For example, a pothole is not going to be on the map, but you would drive around one. You should have the same flexibility in your thinking.

(Shortform note: On a similar note, Donella Meadows says in Thinking in Systems that we need to consistently update our models of complex systems. Much of the time, something you think you understand will appear to change or behave differently than you expected. When this happens, you need to update your model, revising it to reflect the new information you’ve learned.)

Shortform advice for applying this model: Model #1 is a set of principles to remember rather than a technique to implement. Use these principles by looking for “maps” in your everyday life. When you recognize one, remember that it only contains limited information, it might be outdated, and it may or may not be useful for your specific purpose. If it’s not, then find a new map.

Model #2: Know Where to Start

When examining any problem, start with the basic facts or bedrock assumptions of the discipline or field—its first principles. For instance, the first principles of conversation involve active listening and respect for your conversational partner. By starting from first principles, you ensure that anything you do is built upon a solid, proven foundation.

(Shortform note: One notable attempt to derive first principles is the linguistic theory of a “Universal Grammar,” most associated with Noam Chomsky. This theory sought to find the basic, bedrock grammatical rules that, in theory, are common to all languages. While it found early success, UG has come under criticism in recent years, with scholars noting the ambiguity and confusion surrounding what the proposed basic principles are and how they work. This suggests that searching for universal principles is difficult, and perhaps impossible—so in practice, focus on finding what works in your given situation.)

According to the authors, starting from the fundamentals enables you to think for yourself. Typically, we see the world through a layer of inherited ideas and assumptions that influences how we think. But if you start from the bedrock facts, you can devise solutions that are independent of cultural conventions. This is the difference between knowing how to follow established rules and knowing enough to improvise your own solutions—the difference between a cook who can only follow recipes and a chef who understands how to create delicious food without directions.

First Principles in Chess

In The Art of Learning, Josh Waitzkin explains how he started his journey to chess mastery by studying the basic elements of the skill—in other words, the first principles. In chess, you learn first principles by studying simple situations that demonstrate the core capabilities of each piece. From each situation, you learn what each piece can and can’t do, as well as how they relate to one another.

For instance, Waitzkin first studied King-Pawn versus King. In this situation, the objective is to understand how you can trap the lone king with your king and pawn. By practicing the situation diligently, you learn the strengths and limitations of each piece, and you discern the principles that govern the interaction. Once you understand those principles, you can move on to a different elemental situation.

Further, Waitzkin explains that understanding these underlying principles gives you a deeper understanding of chess strategy. While a player who simply memorizes moves will fail when his rote strategy falls apart, a player who understands principles can recognize them at play and adapt his strategy on the fly. Fundamentals enable you to see what’s going on beneath the surface and adapt.

Applying this model: While there’s not always an easy way to find first principles, the authors recommend that you start by questioning what you believe to be fundamental. Often, you’ll find that supposedly bedrock principles are just conventions or unexamined beliefs. Keep questioning these, and you’ll eventually reach a handful of concepts that you can’t reduce any further. For instance, the fundamentals of cooking come down to four basic elements: salt, fat, acid, and heat. Knowing these, you can experiment to figure out how they relate and to derive secondary and tertiary principles.

(Shortform note: Some fields and skills, such as physics, chess, or classical music, have established fundamentals. For instance, chess fundamentals include “control the center” and “tie up two opposing pieces with one.” None of these are obvious at first glance, so don’t stress about finding first principles right away—it can take many people and many years to establish truly bedrock, reliable fundamentals.)

Model #3: Know Your Strengths

Now that you know what to watch out for and where to start, the authors explain a simple model for understanding where your strengths and weaknesses lie. Knowing whether you’re in a region of mastery or if you should seek support allows you to avoid blunders and make better decisions. In turn, this helps you learn, grow, and succeed more in life and work.

To visualize this, imagine a small, circular region around a skill you’ve mastered. That area is one of your regions of mastery. Inside a region of mastery, you have a deep understanding of the territory (good maps) and the skills to navigate it. You intuitively grasp the possibilities and limitations of that skill set, and you can effectively overcome challenges and avoid common pitfalls. For instance, an experienced photographer can use her camera, direct photoshoots, and capture her subject in ways the beginner isn’t even aware of.

Outside that region, you lack clear understanding and the skills to match. According to the authors, that’s where your weaknesses are—areas where you’re more likely to make mistakes.

What Does Mastery Look Like?

The authors explain that someone who’s mastered a domain—what they call a “lifer”—has deep, nuanced knowledge of that skill set. However, they don’t dive deep into what this looks like in practice.

In Mastery, Robert Greene argues that a master develops a specific kind of mind that grows to encompass the entirety of her field. Through long-term practice (Greene suggests 20,000 hours) and intense, committed effort, a master transforms her mind.

Greene asserts that when you combine spontaneous, childlike intuition with rigorous adult follow-through, you can achieve feats of mastery. With time, your mind will gain a deep, thorough, intuitive sense of your field. This allows you to see what Greene calls “the Dynamic,” or the whole landscape and how it all flows. For instance, a master conductor has deep knowledge of instruments, musicianship, composition, and the landscape of music both past and present—and she sees how it all fits together as one dynamic, flowing whole.

To judge whether you’re in a region of mastery, honestly assess the depth of your knowledge and skills in that domain.

When you’re outside your regions of mastery, remember that you know less than you might think. The authors recommend that you set your ego aside and use your foundational mental models to grasp the basics. Then, converse with experts to better understand that domain.

(Shortform note: Another way to quickly get up to speed with a region is the “learn enough to be dangerous” approach. The “Learn Enough” organization teaches the minimum viable skills for web development using curated lessons that focus on the practical essentials, rather than every bit of information a beginner could need. Extrapolating the organization’s philosophy, imagine an approach to skill development that does the same for any skill—essentially, you’d apply the Pareto Principle to focus on the parts of the skill that get you the highest ROI.)

Applying this model: To develop a region of mastery, practice the corresponding skills until you become proficient. The authors recommend three key practice activities.

The Core Skill of “Meta-Learning”

While the authors don’t go deeply into learning practices, other authors have explained in depth what it takes to develop a region of mastery. Across several books—including Peak, The Art of Learning, The Talent Code, and Mastery—experts agree that deliberate, committed practice is the key to learning.

Deliberate practice, sometimes called deep practice, is a form of practice wherein you learn meticulously from every mistake you make. Each time you make a mistake, you reflect on what went wrong and correct your form. In addition to this core feedback loop, you should practice with intense, undivided attention, aim for a clear goal, and continuously measure your progress.

In addition, expert-level skill requires long-term commitment. In Outliers, Malcolm Gladwell claims that 10,000 hours is the magic number for expertise. However, time-to-expertise varies—K. Anders Ericsson, responding to Gladwell, explained that 10,000 hours was the average practice time for professional violinists, not a universal figure for any skill, and that different skills require different lengths of commitment. Regardless of the specific number, practice is the way: In Mastery, George Leonard argues that mastery is about walking the path rather than reaching the goal—in other words, the journey itself is the point.

Group #2: Models for Basic Systems Thinking

Moving beyond the introductory models, the authors’ next three models concern basic systems thinking. Parrish and Beaubien stress that we live in an interconnected world, and these models offer ways to think through, predict, and evaluate options in light of this interconnectedness. By understanding how your choices have ripple effects, you can avoid negative downstream consequences and ensure better outcomes for each decision you make.

We’ll discuss why to consider the consequences of any decision, how estimating probabilities can narrow those decisions, and how you can use imagination to explore the possible outcomes of a choice. Put together, these models will help you navigate difficult decisions in a world where every choice you make has immediate and downstream effects.

Model #4: Consider the Consequences

The authors explain that because we live and act within large, interconnected systems—like our workplaces or professional communities—our actions have consequences that ripple outward. Since anything you do has downstream effects, consider the immediate and secondary effects of any choice. According to the authors, many decisions that have immediate positive results have negative consequences down the road. By thinking ahead, you can predict and prevent unwanted outcomes. For example, a struggling business might take on investors to help keep the company running in the short term, but down the road, those same investors might make demands that sacrifice the company’s authentic mission.

(Shortform note: For a real-world example, consider Prohibition, the 1920s attempt to ban alcohol in the United States. Lawmakers didn’t consider the second-order consequences of the ban and, as a result, speakeasies and mob rackets flourished. Al Capone and others made their fortunes smuggling alcohol in from Canada and underground distilleries, and many lost their lives as a result of the ban. In hindsight, it seems obvious that people weren’t going to stop drinking—but hindsight is 20/20, and the lawmakers evidently lacked this mental model.)

The authors describe two lessons you can learn by considering consequences:

Lesson #1: Short-term benefits often have long-term consequences. Immediately gratifying actions often have cascading consequences that lead to poor outcomes. The authors contend that seeing this helps you make better decisions now.

(Shortform note: While the authors assert that being able to predict long-term effects helps you make better decisions, this may not hold true. In practice, we often choose based on emotion rather than reason. In Money: Master the Game, Tony Robbins explains that to master something, you need to understand it in theory, commit to it emotionally, and then take action to grasp it in practice. Just knowing something isn’t enough.)

Lesson #2: Explaining ripple effects makes you more persuasive. By showing that you’ve considered the extended consequences of a choice, you can become more persuasive in business and personal relationships.

(Shortform note: Ancient Greek philosophers used three aspects of persuasion: ethos, pathos, and logos. Above, the authors imply that a deeper understanding of systems and ripple effects strengthens the logos, or logic and reasoning, of your argument. While this matters, negotiation expert Chris Voss argues in Never Split the Difference that the emotional aspect of persuasion—pathos—is more important. He suggests using calculated empathy, or understanding someone’s emotional needs in order to get what you want from them. If demonstrating that you’re a clear thinker helps, it might be because it helps the other person feel more confident in your decision-making abilities.)

Model #5: Consider the Chances

If considering the consequences shows you possibilities, considering the chances helps you determine which possibilities are most likely to happen. That is, the authors recommend using probability techniques to explore the most likely consequences of any decision. By determining the probabilities of various outcomes, you can avoid choices that are likely to turn out poorly.

According to the authors, we don’t naturally think in terms of probabilities. To equip you with this mental model, they explain three key forms of probabilistic thinking:

#1: Bayesian thinking: Put simply, Bayesian thinking involves evaluating new information in light of what you already know and being willing to update your beliefs when new and compelling information comes along. When practiced consistently, updating what you know keeps you from holding onto outdated ideas—which are more likely to be wrong. (Shortform note: While the authors present a plain language distillation of Bayesian thinking, Bayes’ theorem can be expressed as a mathematical formula. With uses in probability and prediction, Bayesian thinking is valuable to fields such as machine learning and stock trading.)

#2: Fat-tailed curves: Some domains, such as running speed, fit a bell curve (a normal distribution). Others, like finance, fit fat-tailed curves—that is, they’re more prone to extreme outlier events like financial crashes or geopolitical clashes. Know when you’re operating in a fat-tailed domain so you can plan for extreme events. (Shortform note: Fat-tailed curves are the underlying focus of Nassim Nicholas Taleb’s work in Fooled By Randomness, The Black Swan, and Antifragile. Antifragility, for example, is his proposed response to the fact that massively impactful events are impossible to predict: Since they happen, but we can’t know when, you should prepare yourself to adapt and benefit from disruptions of all types.)

#3: Asymmetries: The authors say that we often overestimate how accurate our probability estimates are. To avoid this, remember that you’re biased to think highly of yourself, and regularly scrutinize your thinking. (Shortform note: In finance, asymmetries have a different meaning. For instance, Tony Robbins recommends in Money: Master the Game that you look for asymmetric risk/reward in investing—opportunities to earn big while risking little.)
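The first two forms above lend themselves to quick arithmetic. Here's a minimal Python sketch—the base rate, test accuracy, and distribution parameters are illustrative numbers we've assumed for clarity, not figures from the book:

```python
import math

# --- Bayesian updating (form #1) ---
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
def bayes_update(prior, true_positive_rate, false_positive_rate):
    """Posterior probability of a hypothesis after positive evidence."""
    p_evidence = (true_positive_rate * prior
                  + false_positive_rate * (1 - prior))
    return true_positive_rate * prior / p_evidence

# A condition with a 1% base rate, tested with 90% sensitivity and a 5%
# false-positive rate: one positive result raises the probability from
# 1% to only about 15%.
print(round(bayes_update(0.01, 0.90, 0.05), 3))  # 0.154

# --- Thin vs. fat tails (form #2) ---
def normal_tail(k):
    """Probability of exceeding k standard deviations under a bell curve."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(k, alpha=2.0):
    """Tail probability for a simple fat-tailed (Pareto) distribution."""
    return k ** -alpha

# A "6-sigma" event is essentially impossible under a bell curve
# (~1 in a billion) but routine under this fat tail (~1 in 36).
print(normal_tail(6))
print(pareto_tail(6))
```

The Bayesian example shows why a single positive signal about a rare event still warrants skepticism, and the tail comparison shows why bell-curve intuitions fail badly in fat-tailed domains like finance.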

Model #6: Consider the Possibilities

Considering the chances narrows which choices are worth pursuing, but some problems call for a way of exploring what could happen without acting in the real world. The authors say that for problems you can’t test directly, thought experiments allow you to consider all of the possibilities.

A thought experiment is an imagined scenario or hypothetical situation that you test with your mind’s eye. In other words, you use imagination to rigorously explore what’s possible. This allows you to test things that would pose a problem in the real world, such as ethical questions involving significant harm or things that aren’t even possible to test.

For instance, you might ask yourself, “What would I do if I were given a button that would destroy half of all life on Earth, but would also permanently end suffering?” By exploring possible answers to this question, you could reach insights about ethics, morality, and how to handle difficult decisions.

To run a thought experiment, the authors suggest that you start by formulating the question you want to explore and performing basic research to establish what’s already known. Then, formulate a hypothesis to “test” and run the thought experiment. Finally, reflect on what you found and whether to revise, change, or confirm your hypothesis.

The Trolley Dilemma

While the above example riffs on the Marvel Cinematic Universe’s use of the supervillain Thanos as a sort of thought experiment, real-world thought experiments have led to major scientific and philosophical advances. Thinkers as far back as ancient Greece appeared to use thought experiments in debate, and they continue to be used in various capacities today.

One famous thought experiment is “The Trolley Dilemma.” Originally developed in 1967 by philosopher Philippa Foot, it goes like this: Imagine you see a runaway trolley barrelling toward five unsuspecting workers. There is no time to warn the workers, no way to stop the trolley, and the collision will surely result in their deaths. You do, however, have a lever next to you that will switch the trolley onto another track—on which there is one unsuspecting worker. What do you do? Do you stand by and allow fate to kill five innocent people? Or do you step in and become the direct cause of one person’s death?

This experiment exemplifies the principles the authors discuss: We can use our imaginations to conceive of a scenario that would be dangerous or difficult to test as a way to engage in meaningful (and risk-free) decision making; in this case, concerning morality and ethics.

Shortform Extension: Applying Models #4, #5, and #6

To apply the models from Group #2, try making a checklist that walks you through each of the above considerations. Then, consult this checklist when faced with any consequential decision, and think it through using each model that applies.

For example, imagine you’ve been offered a job at a major company in your field, and you need to decide whether or not to leave your current position. To decide, go through your considerations checklist:

Step #1: Consider the consequences—Thinking through what could happen, you notice the following: You’ll get a pay increase, but you’ll also have to work longer hours. Further, the new role gives you fewer opportunities to develop additional skills, while your current company gives you many. However, there’s a chance you could meet some influential players at the new company’s events, and you might get promoted down the road. Given all this, taking the new job has both immediate pros and cons, as well as downstream pros and cons.

Step #2: Consider the chances—To narrow down what’s worth your attention, calculate some probabilities. For instance, you estimate that while you might make some powerful connections in the new role, the chances are fairly slim. In this case, you can give that factor less weight when making your decision.

Step #3: Consider the possibilities—Last, you decide to imagine what else is possible. You ask whether this new role is really such a great opportunity, and you imagine what else could happen if you don’t take it. Could you find other, better opportunities? Could your current role improve or expand?

Put together, these models clarify the factors and broaden your view of the problem. In the end, you decide that the probable immediate and downstream effects of the new job—longer hours, thus higher stress, plus fewer opportunities for growth—aren’t worth the less probable payoffs.

Group #3: Problem-Solving Rules of Thumb

In this final section, we’ll pivot away from systems-thinking models to present the authors’ final three models—Occam’s Razor, Hanlon’s Razor, and inversion. Each of these models is a self-contained “rule of thumb” for solving tough problems. When applied correctly, they slice through complications, eliminate mental clutter, and help you home in on clear answers.

Model #7: Occam’s Razor

Simply put, Occam’s Razor states that the simplest explanation is most often the best explanation. Given various solutions to a problem, all of which solve it equally well, you should favor the simplest answer.

As the authors explain, simpler answers are mathematically more likely to be correct. A complicated explanation requires many factors to hold at once, while a simple explanation requires just a few. Given a 50/50 chance that each factor holds, a three-factor theory is far more likely to pan out than a nine-factor theory.
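This arithmetic is easy to verify. A short sketch, assuming (as the text does) that each factor is an independent 50/50 chance:

```python
# If each independent factor has a 50/50 chance of holding, the odds that
# an entire explanation holds shrink by half with every factor added.
def chance_all_hold(num_factors, p_each=0.5):
    return p_each ** num_factors

print(chance_all_hold(3))  # 0.125 — a three-factor theory: 1 in 8
print(chance_all_hold(9))  # ~0.002 — a nine-factor theory: 1 in 512
```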

Favoring the simpler answer saves you time and effort—you simply don’t bother to explore complicated paths when a simple path works. According to the authors, less time spent testing complicated possibilities means more time to move on to the next important task. However, remember that Occam’s Razor is a general rule rather than an absolute. Avoid using it to simplify genuinely complex problems, such as systemic societal issues, that require holistic, multi-factor solutions.

Philosophical Razors

Aside from Occam’s Razor, philosophers and scientists have developed a wide range of philosophical “razors” for use in argumentation. Put simply, a philosophical razor is a rule of thumb that helps you quickly eliminate unlikely solutions to a problem. Occam’s Razor is the most familiar example, and it captures this general principle—avoid overly complex solutions when a simpler solution could suffice. Others include falsifiability (an explanation that can’t be proven false can’t be usefully tested), Hitchens’s razor (what’s asserted without evidence can be dismissed without evidence), and the Sagan standard (extraordinary claims require extraordinary evidence).

Note that no razor is correct in every case. So instead of treating them as ironclad rules, use them as guiding principles that help you avoid wasting time on probably wrong solutions.

Each can be misused, too. For instance, you could mistake Occam’s Razor as suggesting that “God created and causes everything,” since that’s a very simple explanation for our universe. However, this neglects another principle—falsifiability, as above—that suggests that unfalsifiable explanations are of no use. Given this, avoid using Occam’s Razor as an all-purpose cutting tool; use other mental models to complement it and find your way to good solutions.

Applying this model: When solving a problem or considering a situation, pause and ask, “All else equal, what’s the simplest solution to this?” This will help you find the most probable solution. For instance, you might decide among three solutions to a software engineering problem by picking the least complicated code that does the job.

Model #8: Hanlon’s Razor

Much like Occam’s Razor, Hanlon’s Razor quickly cuts through complicated explanations. In short, it states that the most likely explanation involves the least ill intent. According to the authors, bad things usually result from ignorance or a lack of thought, rather than malice.

This is because people tend to take the path of least resistance. Making a plan and acting on it takes much more effort than acting unthinkingly, so there are far fewer premeditated evils than simple, ill-considered mistakes. For instance, most insults come from heat-of-the-moment emotions, not master plans to tear down your self-esteem.

(Shortform note: There’s a scientific basis for this razor: People do take the path of least resistance, and it’s because we’ve evolved to conserve energy. This is evident in the way the brain automates habits—the more you repeat a behavior, the more automatic it becomes. There’s also evidence that some people are averse to effort, choosing easier tasks even when they’re less enjoyable. Hence, it’s not only easier to take the path of least resistance, but it also often happens without our knowing it.)

The authors recommend using Hanlon’s Razor to realize that the world contains more chaos than evil. They suggest that we drop the reactive, emotional assumptions we make when bad things happen, and instead remember that bad things, such as house fires, occur due to chaos rather than malicious actors.

Used this way, Hanlon’s Razor helps you become less self-absorbed. Instead of assuming that people are out to get you—a self-absorbed perspective—realize that bad things just happen by chance and, most likely, nobody has a plan to hurt you on purpose.

(Shortform note: Insofar as Hanlon’s Razor can help you decenter your ego from every situation, it’s a close cousin of the third agreement from The Four Agreements. The third agreement is another rule of thumb for clear thinking: Make no assumptions. Don Miguel Ruiz argues that when you stop making assumptions, you prevent yourself from overanalyzing situations and acting on mistaken beliefs.)

Shortform advice for applying this model: Use Hanlon’s Razor in interpersonal situations. When something bad happens to you as a result of another person’s actions, stop and ask yourself, “Did they really mean to harm me? What other, less intentional explanation could there be?” You might realize that, for example, your partner fought with you because she was stressed about work, not because she was out to get you.

Model #9: Inversion

As a thinking tool, to invert is to approach a problem from the end you typically don’t consider. The authors explain that we tend to approach challenges directly: We look at the problem first and then search for a solution. However, the direct approach sometimes fails, especially when the problem demands a creative or counterintuitive solution. When it does, try examining the problem from the opposite end.

The authors recommend using inversion as a complement to the direct approach, not as a replacement for it. So when tackling a problem, think from both ends to see it in more detail and act more effectively.

(Shortform note: Another way to think about inversion is as negative space. When drawing, we often focus on the positive space—the lines, shapes, and colors. To invert would be to focus on the negative space, and to look at what isn’t there in order to work out what is. For instance, you might apply this to business negotiations by paying attention to what someone isn’t saying in addition to what he does say. What he doesn’t say suggests a range of information that wouldn’t otherwise be obvious.)

The authors outline two ways to use inversion:

#1: Assume success, then analyze what else would have to be the case. For instance, if you’d like to become more fit, you might begin with the “assumption” that you’ve built your ideal body, and then think about what lifestyle factors you changed to get there. Use these answers to guide you.

#2: Practice avoidance instead of pursuit. For example, you could minimize the risk of business failure by studying companies that failed, figuring out what caused their failures, and eliminating those factors from your business—in other words, minimize risk instead of pursuing advantages.

(Shortform note: In Money: Master the Game, Tony Robbins writes about both of these tactics as ways to figure out your financial future. Using method #1, he suggests envisioning your ideal financial future and using that vision to work out the habits you’ll need to get there. Method #2 is a common tactic in investing: Since markets are so unpredictable, it’s often most effective to mitigate your downside by building an all-weather portfolio, rather than trying to predict and beat the market over the long term.)

Applying this model: When you’re struggling to solve a problem, pause and invert. Say you’re trying to grow a blog’s audience, but direct methods—such as SEO—aren’t panning out. Inverting, you ask, “How can I avoid failing?” You might then realize that a successful blog must both attract readers and avoid repelling them. Part of blogging success, then, means removing things that might turn readers away, such as excessive calls to action, annoying pop-ups, or thinly disguised SEO copy. Eliminate the factors that detract from your blog, and you may find that reader retention increases.

Exercise: Start Developing Your Mental Models

The authors emphasize that mental models are tools for making better decisions. Practice using mental models to aid your decisions by applying three to a situation you’re currently experiencing.