Most of us have been taught that to make good decisions we need to put in a lot of time and effort. In Blink (2005), Malcolm Gladwell questions this assumption, asking: How do our snap judgments compare to our rational, well-thought-out decisions? He finds that our snap judgments are often just as good as our deliberate decisions.
A New Yorker staff writer, Gladwell has made his name writing books that make social science research accessible and digestible to the layperson. His books include The Tipping Point (2000), Outliers (2008), David and Goliath (2013), and Talking to Strangers (2019).
Gladwell’s personal experience with racial stereotyping prompted him to research and write Blink. When Gladwell, who is half-Jamaican, let his hair grow, he noticed that police and security guards began to treat him differently. He was issued more speeding tickets and targeted by the police as a potential rapist. This led him to think more carefully about the far-reaching effects of snap judgments.
We usually think of snap judgments as lazy, superficial, and probably wrong. But are they really? Gladwell argues that snap judgments can be just as good as—or even better than—the decisions that we make by analyzing a situation carefully.
According to Gladwell, both logical, conscious decision-making and snap judgments have their time and place. Our brain uses two broad strategies for making decisions:
Strategy #1: Conscious thinking. This thinking is also known as rational decision-making. When we think consciously, we use past experiences and current information to make a decision logically.
Strategy #2: Unconscious thinking. This thinking is also known as the adaptive unconscious, intuition, or making snap judgments. When we think unconsciously, we make decisions without understanding why, or sometimes even without realizing we’ve made them.
Gladwell says we use these two different thinking strategies in different situations.
Kahneman’s Two Systems of Thinking
Another way to think about these two thinking strategies is as two “systems,” as Nobel Laureate Daniel Kahneman does in his bestseller Thinking, Fast and Slow (2011).
System 1
In Kahneman’s conception, unconscious thinking is “System 1” thinking. System 1 operates automatically and quickly, with little or no effort, and no sense of voluntary control.
- Examples: Detect that one object is farther than another; detect sadness in a voice; read words on billboards; understand simple sentences; drive a car on an empty road.
System 2
Like Gladwell’s conception of conscious thinking, “System 2” thinking allocates attention to the effortful mental activities that demand it, including complex computations. It’s often associated with the subjective experience of agency, choice, and concentration.
- Examples: Focus attention on a particular person in a crowd; walk faster than is natural for you; monitor your behavior in a social situation; park in a narrow space; multiply 17 × 24.
According to Gladwell, snap judgments have two main benefits:
1) They’re unconscious. To make snap judgments, our unconscious minds “thin-slice”: They find patterns in situations based on small snapshots of experience.
Snap judgments don’t require a lot of information. When we thin-slice, our unconscious mind picks out the relevant information and leaves the rest. This allows us to ignore distracting, superficial details and get to the heart of a problem or choice.
When we thin-slice, we take small segments of experience and generalize them to make broader judgments. For example, when you meet someone for the first time, you might use their mood, outfit, or voice to make judgments about their personality and likeability.
(Shortform note: Gladwell doesn’t attribute the term “thin-slicing” to anyone in particular, and he’s often credited with coining it. But the idea of “thin slices” of experience first appears in a 1992 paper by Nalini Ambady and Robert Rosenthal that Gladwell draws on in some of his examples.)
2) They’re fast. The unconscious mind processes little bits of information and makes decisions about them all the time without our awareness. This frees up the conscious mind to focus on tasks that only it can complete, like those involving logic.
(Shortform note: In Thinking, Fast and Slow, Kahneman explains that snap judgments—his System 1—are fast because they work associatively. Associations, for example associating the word “lettuce” with salads and the color green, happen at lightning speed in the brain. They’re so fast and automatic that we can’t block them even if we try.)
However, Gladwell notes, we need to beware of the downside of snap judgments:
1) We can get distracted by superficial information. To illustrate this problem, Gladwell gives the example of the 29th president of the United States, Warren Harding. Harding had an undistinguished political career. He wasn’t particularly smart, rarely took a stance on (or interest in) political issues, gave vague speeches, and spent much of his time drinking and womanizing. But still, he became president. How did he get the position in the first place?
As Gladwell notes, Harding looked like a president. His distinguished appearance and deep, commanding voice won voters over. They unconsciously believed that good-looking people make competent leaders. Harding’s looks and presence triggered associations so powerful they overrode voters’ ability to look below the surface, at his qualifications (or lack thereof). When we’re similarly influenced by superficial but irrelevant qualities, Gladwell says we’re making a “Warren Harding error.”
(Shortform note: “Warren Harding errors” are known in psychology as the halo effect. This is our tendency to allow positive impressions of someone (based on their appearance, clothing, or voice) to influence our judgments about other personal characteristics such as intelligence and morality. Psychologists have found that competent-looking faces predict positive election outcomes and that both physical and vocal attractiveness affect candidate ratings. The halo effect applied in reverse—for example, the fact that unattractive people are judged as more likely to commit crimes—is known as the “horn effect.”)
2) We can fall prey to unconscious biases we don’t even know we have. As Gladwell points out, our snap judgments can be both the product and the root of prejudice and discrimination. Our attitudes about race and gender, for instance, operate on two levels: the conscious values we claim to hold, and the unconscious, automatic associations we’ve absorbed without realizing it.
Mismatches between these two levels can lead us to behave in a biased manner even when we think we’re being impartial.
(Shortform note: One example of how our unconscious attitudes influence our thinking is the “name-letter effect.” Research has shown that we favor romantic partners whose names contain similar letters to our own names. Why do we do this? It’s probably because of implicit egotism: We like ourselves, therefore we like the letters in our own names. But this motivation is hidden from our conscious minds.)
As Gladwell points out, when we act automatically, we depend on our implicit attitudes. If you have strong pro-white associations, you’ll act differently around someone who’s Black, probably without even being aware that you’re behaving differently. You might show less positive emotion or make less eye contact. If the Black person starts to mirror your behavior, you might then judge them negatively based on their lack of eye contact. This vicious circle has particularly detrimental effects in high-stakes situations, for example job interviews or police stops.
Implicit Racial Bias in the Criminal Justice System
In Biased, social psychologist Jennifer Eberhardt examines the social and neurological mechanisms involved in implicit racial bias. She shows how our brain’s normal instinct to categorize can lead us to make erroneous associations between race and threat. As an example, Eberhardt describes a 1976 study that found that when people watched a video of a fake argument in which a white person physically pushed a Black person, only 17% of viewers classified the white person as violent. When the Black person pushed the white person, 75% of viewers said the Black person was violent.
Eberhardt’s work shows that these implicit associations carry over to the criminal justice system. For example, Black drivers are far more likely than white drivers to be pulled over by police for minor violations, even though most of these police officers don’t hold explicitly racist attitudes. And Black people convicted of crimes receive harsher penalties than white people convicted of the same crimes, including a higher rate of death penalty sentences.
3) We’re susceptible to priming.
“Priming,” in which brief exposure to a stimulus shapes your responses for a short period afterward, may also influence our snap decisions. Gladwell points to research evidence on priming that finds our unconscious minds to be extremely suggestible.
Most of the classic studies on priming use images presented subliminally. However, there are other ways to trigger this effect: For example, you can use particular words in the lead-up to a task, or ask for particular information about the participants (see below) to influence the result.
Priming Changes Performance
Subtly priming a particular stereotype can affect people’s performance in domains that relate to the stereotype. A classic 1999 study, for example, found significant priming effects on the performance of Asian American women in the United States on a math exam. The participating women were divided into three groups. One group answered questions about their ethnic identity before starting the exam, another group answered questions about their gender identity, and the third group—the control group—answered questions that were unrelated to both ethnicity and gender. Participants whose Asian American identity had been primed through the initial questionnaire performed the best on the exam. This was followed by the control group. The group whose female gender identity had been primed performed worst of all.
These experiments show us that priming can have devastating effects on our unconscious attitudes and, consequently, our lives. But they also show us a path forward. If we can negatively influence our hidden attitudes, we can also positively influence them.
(Shortform note: The effectiveness of priming is debatable, and priming-related findings have been one of the major casualties of the reproducibility crisis in scientific research. Many follow-up studies on priming have failed to replicate the original results, suggesting that the results were either cherry-picked to yield the best results, statistically manipulated, or, at worst, fabricated. At best, priming is a more complex phenomenon than was thought at the time Gladwell wrote Blink.)
A further problem with snap judgments is that we don’t have a good understanding of how they work.
As Gladwell points out, we’re often unable to explain why or how we arrive at a snap judgment, even if that judgment is correct. We know something, but we don’t know how we know it, and that’s frustrating. It’s hard to trust something that you can’t explain.
Because most of us don’t feel comfortable if we don’t know exactly what made us arrive at a particular snap judgment, we tend to rationalize, or invent inaccurate explanations for our actions or thoughts. But instead of helping us to uncover the truth, rationalizing often takes us further away from it. We don’t lie on purpose, though: We actually believe the lies that our conscious minds construct to explain the decisions of the unconscious mind.
Do Rationalizations Come from the Left Hemisphere?
Rationalizations could be a way for us to “cover up” faulty communication between brain hemispheres. Research with split-brain patients may offer some clues about how it works. Split-brain patients are people who have had their corpus callosum—the tract of nerve fibers that connects the left and right hemispheres—severed to treat severe epilepsy.
In a split-brain patient, the two hemispheres are cut off from one another. Michael Gazzaniga and colleagues investigated what happens when there’s different input to the two hemispheres of split-brain patients, for example if the right hemisphere is shown one picture and the left is shown a different picture. In split-brain patients, the hemisphere in which language is located (the left for most people) can’t access information presented to the right hemisphere. These patients can correctly identify both of the pictures they’ve been shown, but they can only explain their choice of the picture shown to the left hemisphere. So they make up a reason for their choice of the other picture that seems logical but is actually incorrect—much as we do when we rationalize.
There are two problems with rationalizing our snap judgments:
Problem #1: Rationalizing leads to inaccurate explanations of our decisions.
Gladwell discusses the Problem of the Two Ropes to demonstrate how far our rational explanations can veer from the truth. In a 1931 study, psychologist Norman Maier hung two ropes from the ceiling in a room that also contained various items of furniture and other tools. The ropes were far enough apart that if you held one rope in your hand, you couldn’t reach the other. He asked volunteers to come up with as many ways to tie the two ropes together as they could. There were three obvious solutions using the furniture and tools provided, which most people figured out fairly easily. There was also a fourth, non-obvious solution: set one of the ropes swinging, go stand next to the other, and grab the swinging rope before tying them together.
If a volunteer was having trouble producing this fourth solution, the psychologist walked across the room and casually bumped into one of the ropes, causing it to swing. The move was so subtle that volunteers’ unconscious minds picked up on the suggestion while their conscious minds didn’t. After that, most people came up with the fourth solution.
These people weren’t lying. They were just automatically producing explanations that their conscious brains found most plausible. They had no idea the psychologist had given them the answer when he bumped the rope.
Does a “Time Out” Improve Our Problem-Solving Ability?
Psychologists call the type of problem that Maier investigated an “insight problem”: a problem for which a solution presents itself seemingly out of nowhere. It’s often recommended that we take some time away from the problem to “incubate” the solution, but is this good advice? Scientists are divided on whether an incubation period is helpful. A 2011 meta-analysis of studies published between 1964 and 2007 on incubation periods and problem-solving found that the picture is complex. Incubation periods do seem to help for problems requiring divergent thinking, but less for problems with pre-set solutions. There’s also evidence that lengthening the incubation period can help in producing high-quality solutions, as can filling the incubation period with cognitively easy tasks rather than resting completely.
Problem #2: Rationalizing leads to worse decision-making and performance.
Gladwell points out that language is the primary tool of the rational mind. Using language (and therefore activating our rational minds) when a task is better completed by the unconscious mind can snuff out insights.
Think about any stranger you saw today, maybe the barista who made your morning coffee. Suppose someone asked you to describe the barista in as much detail as possible, including facial features, hair color, clothing, and jewelry.
If you had to pick this person out of a lineup, you’d do much worse after describing him or her than before. The act of describing erases the image from your mind by pulling it forward from the unconscious to the conscious.
This is verbal overshadowing. Instead of remembering what you saw, you’re remembering your description, which, due to the limits of language, will always be less accurate than your visual memory. When you explain yourself, you override the complex experience that you’re explaining. (Shortform note: Verbal overshadowing doesn’t only apply to faces. It also affects other visual memories, as well as our memories of tastes and sounds.)
The Real-World Implications of Verbal Overshadowing
There haven’t been enough studies on the implications of verbal overshadowing for the field of criminal justice. However, it seems logical that reporting a crime or providing eyewitness testimony could impair victims’ and witnesses’ memories and make their testimony less reliable. Mitigating verbal overshadowing isn’t easy, but there might be one way to diminish its effects. A 2008 study found that verbal overshadowing impacts older adults less than younger ones, in part because older adults have “higher verbal expertise.” This suggests that improving your verbal skills could lessen your susceptibility to verbal overshadowing.
You have two options to stop rationalization from getting in the way of good decisions.
Gladwell’s Option #1: Don’t try to explain your snap decisions. Honor the mysteries of the unconscious mind and admit that you don’t always have the answers, even those pertaining to your own choices. Once you’ve created a story to explain an unconscious decision, that story is hard to shake. We believe the stories we tell ourselves and others.
(Shortform note: Gladwell’s suggestion that you avoid explaining your decisions is easier said than done, and Blink doesn’t offer further advice. However, Daniel Kahneman does offer strategies for countering the “narrative fallacy,” or the tendency to explain random or irrational events with coherent stories. First, apply your explanation to other outcomes. If it can explain more than one distinct outcome, it’s probably flimsy. Second, be wary of highly consistent patterns in your own narratives and those of others. This should alert you to cherry-picking of examples or buried information.)
Gladwell’s Option #2: Attempt to enhance your conscious perception through technology or recording techniques that slow down the flow of information. For example, use slow-motion videos for sports technique analysis. This allows your conscious mind to catch up to your unconscious, giving you a chance to double-check your unconscious judgments.
(Shortform note: This idea underlies the increasing use of slow-motion replays in sports umpiring. In the past, umpires had to make fast calls based only on what they saw in real time. Slow-motion replays allow for a more fine-grained analysis of exactly what happened—and there’s evidence that they can materially change umpires’ decision-making processes, leading them to penalize fouls more harshly.)
How do we determine our own preferences? It turns out that thin-slicing also applies when deciding what we like and don’t like.
Our preferences might seem fairly context-independent. But, as Gladwell notes, thin-slicing can go awry when it comes to knowing what we like. There are three reasons for this: sensation transference, unfamiliarity, and lack of expertise.
In sensation transference, aspects of the environment we’re in influence our perception of a particular object. This phenomenon is commonly applied in marketing.
For example, we have trouble distinguishing between a product and its packaging. Changing things like the color of the food or its packaging, the weight of the packaging, or the location of the product image on the packet can influence our assessments of a particular product. We experience the packaging as part of the product, not independent of it. (Shortform note: The influences of packaging on our expectations about product flavor can be very specific. For example, rounded typefaces can lead us to expect sweet flavors, while sharper typefaces lead us to expect sour flavors.)
As Gladwell notes, sometimes we dislike something for no other reason than that it’s unfamiliar. We taste, hear, or watch something different and the unconscious mind automatically registers it as bad.
Thin-slicing fails when the unconscious mind has no previous experiences with which to compare the new experience.
Familiarity and the Mere Exposure Effect
Our dislike of things that are unfamiliar can be partly explained by what’s known as the “mere exposure effect.” The mere exposure effect occurs when we start to like things just because we’ve been exposed to them before. It’s a surprisingly robust effect that’s been observed across cultures, for both subliminally and consciously processed stimuli, and even prenatally (for example, newborn babies show a preference for voices, stories, and music that they heard frequently while in utero). The flip side of this effect, of course, is that the less exposure we’ve had to something, the less likely we are to respond positively to it.
A third reason Gladwell gives for the failure of thin-slicing judgments is that we lack relevant expertise. Experts aren’t fooled by a product’s packaging and they aren’t put off by unfamiliarity. Experts have the training to know what they like and the vocabulary to explain it.
(Shortform note: Any field of expertise incorporates technical vocabulary that allows for finer distinctions, and therefore supports more precise communication, than lay speech. Sometimes becoming an expert involves “unlearning” word associations that we may have formed in more general contexts. For example, physics teachers can help their students to understand concepts by teaching them the physics-specific meaning of key technical words.)
Most of us think we can’t control our instinctive reactions. This assumption is both wrong and defeatist. Gladwell argues that we can improve our instinctive decision-making through deliberate training and by slowing down.
In addition to gaining expertise, Gladwell proposes two strategies for improving our snap decisions: We can rehearse and we can practice mind reading.
Gladwell suggests that you practice making decisions, especially in environments and circumstances that mimic stressful situations. For example, rehearse your upcoming job interview or presentation in an environment that mirrors the actual event as closely as possible.
(Shortform note: Norman Doidge argues in The Brain That Changes Itself that when you practice something, you’re increasing your brain’s efficiency in executing the task. New tasks are cognition-heavy, recruiting a massive number of neurons across different brain areas. Practice helps our brains determine which networks or neurons are best suited for the task and lock in their responses, freeing up cognitive capacity for more and more challenging versions of the task.)
As Gladwell explains, we read people’s minds by gathering information from their faces. We can get better at understanding others, and consequently make more accurate snap judgments about them, by practicing reading people’s facial expressions.
Humans are highly social animals. Our brains are tuned to cues that can help us navigate the complex social world. An important part of this is being able to make good guesses about what’s going on in other people’s minds. (Shortform note: In psychology, this ability to construct a model of someone else’s mind is called “Theory of Mind.” It includes keeping track of the other person’s knowledge—do they have the same information as I do?—as well as guessing at their emotional state.)
To improve our mind reading, Gladwell recommends that we use microexpressions as clues to what other people are thinking. Microexpressions are expressions we make unconsciously. They’re almost imperceptible, lasting a fraction of a second. You might be good at broadly controlling the expressions your face makes, but you’ll still make involuntary expressions that betray your true thoughts and feelings.
(Shortform note: The effectiveness of microexpression analysis, especially for lie detection, is controversial. First, lies aren’t always associated with microexpressions: One study found them in only around 20% of participants who had been instructed to mask or neutralize their natural expressions. When microexpressions did occur, they were often inconsistent with the emotion being hidden.)
Additional Advice on Fast Decision-Making
Throughout the book, Gladwell suggests several ways we can make better snap decisions. We should limit the amount of information we consider, avoid rationalizing, and rehearse (particularly in stressful situations). We should also be aware of how unconscious biases—for example, the Warren Harding effect, sensation transference, and the mere exposure effect—get in the way of good snap decisions and do our best to counter these biases. How else can You improve your decisions? You can:
Be aware of the effects of transient emotions on decision-making. Daniel Goleman points out in Emotional Intelligence that when you’re in a good mood, you tend to make more optimistic decisions; when you’re feeling down, you make more pessimistic decisions.
Consider the cognitive biases you’re bringing to a decision. In Thinking, Fast and Slow, Daniel Kahneman describes some common biases in human cognition that relate to our thinking about money. First, we tend to judge outcomes based on a fixed cognitive reference point that feels “neutral” (usually our current situation). Second, we evaluate our finances in relative terms rather than absolute ones (for example, a $100 increase feels much better if you start with $100 than if you start with $900). And third, while gains feel good, losses of the same amount feel disproportionately bad. Be careful of snap decisions that come from these biases, as they may lead you astray.
Learn to distinguish when snap decisions are appropriate and when they’re not. Some decisions are more suited to a fast approach, while others benefit from a more considered one (as an extreme example, consider choosing which flavor muffin to buy with your morning coffee vs. deciding whether to ask someone to marry you). For muffin decisions, snap away. For marriage decisions, a more conscious process is usually desirable.
Consider your approach to the decision-making process itself. In The Paradox of Choice, psychologist Barry Schwartz argues that there are two main ways that people approach making decisions: They either try to pick the very best option from a large range of options (“maximizers”), or they carry a set of criteria into a decision and choose the first option that acceptably satisfies the criteria (“satisficers”). Maximizing may seem like the best approach, but it turns out that compared to satisficers, maximizers are less happy, less satisfied with their lives, more depressed, and more prone to regret. If you’re a maximizer, consider test-driving the satisficer method.
Gladwell has garnered praise for both his lucid, enthusiastic presentation of cutting-edge research and the novel connections he makes across a wide range of fields.
Some have critiqued Gladwell’s work for oversimplifying complex concepts, relying too heavily on anecdotes rather than research, and generally lacking academic rigor. Such an “impressionistic take on the scientific method” has earned him the moniker “America’s Best-Paid Fairy-Tale Writer.” But, as Gladwell has said in multiple interviews, he is primarily a journalist and a storyteller, and he’s unabashed about his desire to write at the intersection of good stories and interesting research.
Whether you love him or hate him, you can’t ignore Gladwell: His reach is extraordinary—his books, the first five of which were New York Times best sellers, have sold millions of copies in dozens of countries, and he’s popularized now-well-known concepts such as the “broken windows theory” (in The Tipping Point) and the “10,000-hour rule” (in Outliers). If you’re familiar with the Pareto principle, the “stickiness factor,” or the “talent myth,” chances are good you have Gladwell to thank.
Blink was Gladwell’s second book. It followed his bestseller The Tipping Point (2000), in which he debuted his signature anecdotal style in book-length format. (He had already been producing similar stories for The New Yorker for several years.) The Tipping Point examined the science of epidemics and translated this into a study of social epidemics and “viral” trends, helping to push these concepts into the mainstream.
As we might expect from “the 21st century’s defining zeitgeist surfer,” his choice of topic for Blink was similarly ahead of the curve. Blink was one of the first in a wave of pop psychology books that aimed to explain the workings of the human brain to a general readership. It followed cognitive psychologist Steven Pinker’s How the Mind Works (1997) and The Blank Slate (2002), as well as psychologist David Myers’s Intuition: Its Powers and Perils (2002), which covers much of the same ground in a more rigorous but less absorbing way. Blink was followed by Dan Ariely’s Predictably Irrational (2008), Richard Thaler and Cass Sunstein’s Nudge (2008), and Daniel Kahneman’s Thinking, Fast and Slow (2011).
Among other honors, Blink was one of Fast Company’s best business books of 2005, and it spent over a year on various bestseller lists.
The book’s light, anecdotal style helped it to find a wide audience. For many readers, it provided an accessible entry point into cognitive science, reinvigorating the pop psychology genre and igniting public interest in the inner workings of the mind.
Most reviewers of Blink praised its stylistic elements while expressing some reservations about its rigor and overall structure. Time reviewer Lev Grossman, for example, praised the intellectual clarity that Gladwell brought to Blink and likened him to “an omniscient, many-armed Hindu god of anecdotes,” while at the same time questioning the basis for some of Gladwell’s claims. Similarly, Kirkus Reviews commended the anecdotal style but went on to say that the narratives “don’t add up to anything terribly profound.” Reviewer David Brooks wrote in The New York Times that the snap judgment part of his brain had been engaged and impressed by Blink, while the more analytical part was left unsatisfied.
Blink sparked a number of memorably titled reviews and responses, including Blinkered (in which reviewer Richard Posner described Blink as “written like a book intended for people who do not read books”), Blink and You Miss It: The Meaninglessness of Malcolm Gladwell, and the conservative rejoinder Think! Why Crucial Decisions Can’t Be Made in the Blink of an Eye.
Blink is highly anecdote-driven. The individual narratives have strong internal momentum, but the whole book doesn’t read as much more than the sum of its parts. Gladwell himself, looking back on the book three years later, acknowledged that his general approach at the time had been “This is a lot of cool stuff! Do with it as you will!”
Gladwell tends to present his articles and books as mere “conversation starters,” encouraging interested readers to find and read the original academic work and expand their intellectual horizons from there.
In this guide, we flesh out Gladwell’s arguments by adding scientific discoveries about the phenomena he covers. We also review more recent research, pointing out where it supports Gladwell’s ideas and where it throws them into question.
Gladwell’s style is anecdote-heavy and can be meandering. In this guide, we’ve focused on his main principles, retaining and discussing a few important examples. We’ve also updated the research and provided alternative explanations for some of the scenarios he presents.
System 1
In Kahneman’s conception, unconscious thinking is “System 1” thinking. System 1 operates automatically and quickly, with little or no effort, and no sense of voluntary control.
- Examples: Detect that one object is farther than another; detect sadness in a voice; read words on billboards; understand simple sentences; drive a car on an empty road.
System 2
Like Gladwell’s conception of conscious thinking, “System 2” thinking allocates attention to the effortful mental activities that demand it, including complex computations. It’s often associated with the subjective experience of agency, choice, and concentration.
- Examples: Focus attention on a particular person in a crowd; walk faster than is natural for you; monitor your behavior in a social situation; park in a narrow space; multiply 17 x 24.
In this guide, we’ll begin by talking about the benefits of snap judgments (Part 1). Part 2 talks about the drawbacks of snap judgments and the dangers of relying on them in the wrong circumstances. Part 3 explores the mysteries of unconscious decision-making processes, including why explaining our snap decisions can get in the way of making good ones. Part 4 looks at personal preferences: They may seem straightforward, but sometimes we’re not very good at identifying what we do and don’t like. Part 5 looks at how to decide between making a snap decision and thinking through a problem carefully, including some strategies for making your snap decisions as accurate as possible. Part 6 concludes with two examples of good and bad snap judgments, and Gladwell’s final takeaways.
Gladwell believes there are two primary benefits of snap decisions: they’re fast, and they’re unconscious.
The value of speed is obvious in situations in which there’s no time to think things through. EMTs, firefighters, and police officers make snap decisions all the time. But even though we don’t realize it, we’re all making snap decisions constantly, and we all find ourselves in situations where time is limited and we need to act quickly.
(Shortform note: Though fast decisions and snap decisions aren’t necessarily the same thing, making decisions quickly can give you the upper hand in business. For example, consumer research platform Attest proposes an equation for measuring decision-making effectiveness in which speed acts as a multiplier: Decision effectiveness = Decision Quality × Speed × Yield – Effort.)
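To make the multiplier role of speed concrete, here’s a minimal sketch of Attest’s proposed equation in Python. The function name, the 0-to-1 scales, and the sample values are our own illustrative assumptions; Attest doesn’t prescribe units.

```python
def decision_effectiveness(quality, speed, yield_, effort):
    """Attest's proposed formula: effectiveness = quality * speed * yield - effort.

    All inputs here are illustrative scores on a 0-to-1 scale (our
    assumption; the original equation doesn't specify units).
    """
    return quality * speed * yield_ - effort

# Because speed multiplies the other terms, a fast, decent decision
# can outscore a slow, near-perfect one:
fast = decision_effectiveness(quality=0.7, speed=0.9, yield_=0.8, effort=0.2)
slow = decision_effectiveness(quality=0.95, speed=0.3, yield_=0.8, effort=0.4)
```

With these hypothetical numbers, the fast decision comes out ahead even though its quality score is lower, which is the point of treating speed as a multiplier rather than an afterthought.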
In addition to being speedy, snap decisions are unconscious. The unconscious mind handles all the minor bits of information thrown at us every day. This frees up the conscious mind to focus on problems that need our deliberate attention.
Gladwell explains that scientists can see how much work the unconscious mind does by observing people with damage to their ventromedial prefrontal cortex, which is one area of the brain involved in unconscious decisions.
As Gladwell points out, people with ventromedial damage can’t make snap judgments. Their unconscious minds don’t prioritize information for them, so they give equal weight to minor and major details when making a decision. People with ventromedial damage can spend hours sifting through options before making a trivial decision, such as when to schedule their next appointment.
(Shortform note: Not all researchers working with ventromedial patients have found the same thing. In fact, it seems more common for people with ventromedial damage to make decisions as quickly as controls but to be wildly inconsistent in their preferences. For example, in their choice of preferred foods, they might rate doughnuts as tastier than carrot sticks and carrot sticks as tastier than watermelon, but then say that watermelon is tastier than doughnuts.)
People with ventromedial damage also have trouble turning decisions into actions. They’re highly rational thinkers and don’t involve their emotions when making decisions. This seems like a good thing, as we’re often told to leave our emotions out of the decision-making process. But we need an emotional push to get from decisions to actions. Without emotions, we may intellectually know that something is bad for us, but we’ll keep doing it anyway.
The Role of Emotions in Decision-Making
How does the emotional push in decision-making work? It seems that emotion, which manifests as physiological changes such as sweating or increased heart rate, changes our decisions without us being aware of it. To show this, neuroscientist Antonio Damasio and his team worked with patients who had ventromedial damage. They developed a gambling task (the “Iowa gambling task”) involving four different simulated decks of cards. Participants chose a deck to draw from each time, gaining or losing money when they turned the cards over. Two of the decks were “good” decks; that is, they rewarded the player more often than they punished the player. The other two were “bad” decks—the player was more likely to lose money when turning over cards in these decks.
People without ventromedial damage learned pretty fast (after 40 to 50 trials) to stick to the good decks. Even more impressively, they started showing physiological stress responses before choosing from the bad decks after as few as 10 draws. The people with ventromedial damage, in contrast, never defaulted to the good decks. They were more attracted to the decks that offered big rewards, even if that meant they would lose in the long term.
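To see why the big-reward decks are a trap, here’s a toy simulation of the gambling task. The payoff numbers are hypothetical stand-ins, loosely modeled on the task’s big-reward/bigger-penalty structure, not Damasio’s actual card values.

```python
import random

def draw(deck):
    """Return the net payoff of one card draw.

    Payoffs are illustrative approximations of the Iowa gambling task:
    'bad' decks (A, B) pay 100 per card but carry occasional large
    penalties, for a net expected loss; 'good' decks (C, D) pay 50
    with smaller penalties, for a net expected gain.
    """
    if deck in ("A", "B"):  # "bad" decks: tempting reward, ruinous penalty
        return 100 - (1250 if random.random() < 0.1 else 0)
    else:                   # "good" decks: modest reward, modest penalty
        return 50 - (250 if random.random() < 0.1 else 0)

random.seed(0)
trials = 10_000
bad_total = sum(draw("A") for _ in range(trials))
good_total = sum(draw("C") for _ in range(trials))
# Over many draws, the big-reward decks lose money overall while the
# modest decks come out ahead -- the pattern healthy participants'
# unconscious minds picked up within a few dozen trials.
```

Under these assumed payoffs, each bad-deck draw loses about 25 on average and each good-deck draw gains about 25, even though the bad deck always pays more up front.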
Damasio talks about the implications of this research in his book Descartes’ Error, proposing that the ventromedial prefrontal cortex acts as a kind of junction where emotions, past behavior, and future behavior are processed together. This combined processing allows us to learn from our good or bad past decisions and behave differently in the future.
As Gladwell argues, working with less information means that you’re less likely to be distracted by irrelevant details. If you let your unconscious mind do its job without interfering too much, you get decisions that are clean, fast, and surprisingly accurate.
Gladwell introduces the idea of “thin-slicing” to explain why our unconscious minds are so efficient at making snap judgments. Thin-slicing is using a small segment of information (for example, a 10-second video) to make a quick decision. As evidence that it works, Gladwell describes a study by Nalini Ambady and Robert Rosenthal that looked at student evaluations of teachers. These researchers found no significant difference between the evaluations of students who had taken a whole semester-long class with the teacher and those who had watched a video of less than 30 seconds of the same teacher.
(Shortform note: Gladwell doesn’t attribute the term “thin-slicing” to anyone in particular, and he’s often credited with coining it. But the idea of taking “thin slices” of experience comes from a 1992 paper by Ambady and Rosenthal that reviewed research on how people interpret short segments of non-verbal behavior. They showed that people who made judgments about someone’s honesty and professional competence based on a 30-second video were just as accurate as those who watched a 5-minute video.)
Thin-Slicing Offers Insights into Human Behavior
Research interest in thin-slicing has spread to many fields, including interpersonal relationships, clinical evaluations, parenting, and the legal system. There’s also considerable overlap with studies of body language—for example, researchers have found that we can accurately judge someone’s socioeconomic status based on thin slices of their body language. We do this by looking at the degree to which they show engagement with others: “Engagement cues,” such as nodding and smiling, tend to reflect low socioeconomic status, while people with higher socioeconomic status are more likely to show “disengagement cues” such as doodling. In general, you’ll be judged more positively on first impression if you make eye contact, avoid fidgeting, smile, and make open-handed gestures. A stiff posture, frowning, and looking down will lead people to judge you negatively.
Researchers are even trying to link how we appear in thin-slicing judgments to our genes. For example, one recent study found that people who are genetically predisposed to have more active oxytocin receptors are more likely to express prosocial behavior and positive body language (e.g., nodding their heads and smiling). This in turn makes people judge them more positively when thin-slicing.
Gladwell introduces psychologist John Gottman’s “love lab” as an analogy for what’s happening in our brains when we thin-slice. Since the 1980s, more than 3,000 married couples have taken part in Gottman’s research at the University of Washington. Gottman’s work suggests that if you know how to “thin-slice” a marriage—that is, if you know which information is relevant and which isn’t—you can accurately predict the future of the marriage.
Gottman and his colleagues videotape couples having a “meaningful interaction,” perhaps discussing a point of contention or talking about how they met. They also hook the subjects up to sensors that take physiological measurements.
(Shortform note: Some researchers have argued that Gottman’s sample sizes are too small to be meaningful, and that his studies don’t take into account false positives due to the low base rates of divorce. Others have questioned whether what Gottman does can be called “prediction” at all. He apparently makes his predictions after he already knows the outcomes of participants’ marriages, using technology to match the outcome with the noted patterns to create predictions in hindsight. Gottman has disputed this interpretation of his work on the Gottman Institute website.)
The way Gottman zeroes in on the most important pieces of information is similar to what our unconscious minds do when they thin-slice. A well-trained unconscious mind homes in on the most significant details and disregards the rest. This is why thin-slicing allows for speedy decision-making: There isn’t much information to process.
Do We Thin-Slice Potential Romantic Partners?
Gottman thin-slices long-term relationships to gain insights into their history, but what about the thin-slicing that we engage in when screening potential new partners? Tinder wasn’t around when Gladwell wrote Blink, but it’s a textbook example of making fast decisions based on very little information.
Researchers haven’t investigated thin-slicing behavior on Tinder, but they have looked at what happens in speed dating sessions. Speed dating, which was invented in the late 1990s by Rabbi Yaacov Deyo to help connect Jewish singles in Los Angeles, is an ideal context for thin-slicing studies. For example, researchers found that female speed daters (in a heterosexual speed dating scenario) were faster than the males to notice negative qualities in their dates. In another study, researchers compared the qualities that people said they were looking for before the speed dates with the partners they actually chose to see again, finding that we’re not very good at identifying what we like in people until we meet them—perhaps a reason to give someone a chance if you’re sitting on the fence.
The “love lab” example doesn’t show thin-slicing at work; it serves as an analogy. Thin-slicing takes place in the unconscious mind, whereas Gottman and his researchers arrive at their predictions through a very deliberate, conscious method. They use a lot of data to make their predictions. They’re not guessing.
But Gottman’s love lab research provides a model for what’s happening in our brains when we thin-slice. Our brains do a quick and unconscious version of what Gottman does when he consciously analyzes his data.
According to Gladwell, Gottman’s work provides two important takeaways that help us understand how the unconscious mind works when it thin-slices:
There are patterns, or “signatures,” that the conscious mind doesn’t pick up. The unconscious mind makes decisions based on these patterns.
Predicting divorce involves identifying the patterns of unhealthy relationships. It wouldn’t take Gottman’s team long to identify what your relationship’s pattern is. Maybe you repeatedly ask for credit and your spouse repeatedly withholds it. Maybe whenever you and your partner have a disagreement, no matter how minor, your partner is stubborn and won’t attempt to see your point of view. If these types of interactions show up regularly in a span of 15 minutes, they probably reflect an underlying problem.
Like Gottman’s team, the unconscious mind takes a thin slice of experience and quickly recognizes patterns in it. It then makes an inference about a current situation based on the patterns of similar experiences in the past.
Professional Intuition Selectively Improves Our Snap Decisions
When we use the love lab analogy for thin-slicing, we can immediately see why some snap decisions are better than others. If your unconscious mind has had enough experience in a particular area to build up an accurate, well-organized database, your snap decisions will draw on this database and are likely to be sound. But if you’re a novice in a particular area, your database will probably be small and not very accurate: a recipe for bad decisions.
Psychologist Gary Klein came to a similar conclusion in his work on the role of experience in rapid decision-making. Klein was interested in experienced firefighters, who have to make fast, potentially life-or-death decisions under pressure. He found that over time, these firefighters had built up an unconscious “prototype” of a typical building fire. When the prototype and the situation matched, their decisions were so rehearsed that they didn’t feel like decisions at all. But when the situation didn’t match the prototype, they got a “feeling” that something was wrong, which led to a different decision.
For example, one lead firefighter took his team into a burning house, and they started hosing down the place they thought the fire was coming from. But the water didn’t work. He also noticed that the house was unusually hot and that he couldn’t hear the usual roar of flames. Not knowing what was going on, he decided to evacuate his team from the house. Less than a minute later, the floor they had been standing on collapsed. This fire chief said he’d had a “sixth sense” about the need to evacuate—but Klein says that he made the call because his unconscious mind had noticed a mismatch with the prototype.
This work suggests that the quality of our snap decision-making is domain-specific. We wouldn’t, for example, expect a firefighter who makes exceptional snap decisions about burning buildings to be particularly good at analyzing marriages.
Gladwell argues that the unconscious mind doesn’t need a lot of information to do its job.
To illustrate this, he discusses how Gottman’s researchers gave some of his three-minute clips to marriage counselors, psychology students, and marital researchers. Even though they were experts, they were only able to predict divorces about 54% of the time, which isn’t much better than chance. Why did they have so much trouble?
Even a three-minute clip contains an overwhelming amount of information. When you narrow your focus to just four problematic emotions and attitudes (criticism, stonewalling, defensiveness, and contempt), you can disregard the other data points and still make an accurate prediction about the health of a marriage.
Gottman says that you can further narrow your focus to a single emotion, contempt, and still predict a marriage’s future pretty accurately. If you detect contempt in a three-minute clip, you don’t need to know anything else about a couple’s relationship to know if it’s in trouble. It is.
Why Is Contempt So Poisonous in Relationships?
In Emotional Intelligence, Daniel Goleman goes into more detail on Gottman’s work on contempt. Gottman found that in the short term, expressions of contempt caused stress in the other partner—for example, if one partner showed a facial expression of contempt, the other partner’s heart rate shot up. It’s not surprising, then, that in the long term contempt can be both physically and emotionally destructive. Gottman found that when husbands repeatedly showed contempt, their wives reported increases in a range of physical illnesses (including gastrointestinal infections, colds, and flus). When wives repeatedly showed contempt, there was a good chance the marriage would end within the next four years.
There are also links between contempt and inter-partner physical violence. One explanation for the power of contempt to destroy relationships is that its social function is to exclude the other person from social networks in the long term (in contrast to anger, which can function in the short term to change other people’s behavior). In this view, anger is less damaging than contempt because it carries an expectation of reconciliation, whereas contempt is a more permanent emotional state.
Now that we have a better sense of what our brain does when it thin-slices, Gladwell directs us to a study that shows the accuracy of thin-slicing in judging people’s personalities.
Psychologist Samuel Gosling found that our intuitive or snap judgments are surprisingly accurate when it comes to judging strangers.
Gosling asked 80 college students to fill out the Big Five Inventory, a widely used questionnaire that measures five personality traits: extraversion, agreeableness, conscientiousness, emotional stability, and openness to new experiences. (Shortform note: You can take the Big Five Inventory test here to see how you stack up.)
Then Gosling asked the close friends of those 80 students to rank them on the Big Five. Unsurprisingly, the friends showed that they knew the subjects pretty well. Each friend had a “thick slice” of experience with the subject, having spent a lot of time with him or her, and could judge the subject’s personality accurately.
Finally, Gosling sent someone who had never met the subject into the subject’s dorm room for 15 minutes. Gosling asked the stranger to assess the subject according to the five personality traits just by looking at that person’s bedroom.
Gosling discovered that you can make a surprisingly accurate judgment about someone you’ve never met just by looking around his or her dorm room for 15 minutes. (Shortform note: Gosling went on to write a book titled Snoop: What Your Stuff Says About You. A spin-off BBC3 show called Hot Property has would-be daters inspect the bedrooms of three potential dates and choose one to go on a date with.)
Friends, as expected, were more accurate in rating extraversion and agreeableness. However, strangers were more accurate in rating conscientiousness, emotional stability, and openness to new experiences.
Why? Bedrooms expose patterns. They give a revealing peek into your private life. An unmade bed, posters on the wall, or a fluffy ottoman can tell you just as much about a person as meeting him or her, if not more.
A bedroom without its occupant contains well-selected information: It excludes distracting and irrelevant details, and the stereotypes triggered by a subject’s presence are absent. If you meet a quiet econ major with glasses, you may never guess she’s a hardcore heavy metal fan; judge her by her vinyl and paraphernalia collection, and you get a different picture. The people themselves aren’t there to get in the way. We can’t view ourselves objectively, so we often say confusing or contradictory things about ourselves to other people. By just looking at someone’s bedroom, we avoid this white noise.
To Gladwell, this demonstrates that strangers who observe an extremely thin slice of a person’s life can make accurate judgments about that person’s personality. As this example shows, sometimes it’s better to have a limited amount of information rather than a lot, as long as that information contains a pattern from which to extrapolate to the whole.
The Size Instinct: The Problem With Having Too Little Information
As Gladwell notes, in many circumstances, copious amounts of information are more distracting than helpful.
On the other hand, as we’ll see in later chapters, going with less isn’t always better. If we rely too heavily on thin slices, we’re in danger of succumbing to the size instinct: the mistaken impulse to overestimate the importance of any single data point or incident, without putting it in the proper context. The size instinct can severely distort our worldview and cause us to devote disproportionate attention (and resources) to small details and to the symptoms of problems, rather than to large problems and their deeper root causes.
In Factfulness, international health and development expert Hans Rosling illustrates how basing our decisions on a small amount of easily available information (the size instinct) might get in the way of properly understanding and solving a problem. He points out that according to raw figures, child mortality in Mozambique is extremely high. But if an aid organization starts building an expensive hospital on the basis of that figure alone, they’re missing the point: Most sick children in Mozambique won’t make it to the hospital in the first place.
To avoid getting distracted by the size instinct, Rosling suggests that you break down and compare the numbers you’re presented with. For example, before deciding to build a hospital, an aid organization could seek a more detailed breakdown of the causes of child mortality and determine whether the mortality rate is increasing or decreasing over time.
Look to the past to see if you’ve based decisions on irrelevant information. Can you find ways to identify patterns and disregard the minor details when making decisions in the future?
Think of a time when you had to make a decision and were overwhelmed by all the factors you had to take into account. What was the situation? What were some of the deciding factors?
In hindsight, were all those factors relevant? Which factors may not have mattered as much as you thought at the time? (For instance, when you were choosing a college, did it really matter that College A had better sports teams than College B, when you didn’t play sports? Or was that just a distracting piece of information that looked relevant on your pros/cons list?)
How could you have been more selective in your decision-making process by focusing on the relevant patterns? What one or two factors would have pointed you in the right direction, even ignoring all the other factors? (Just brainstorm for now. You’ll gain more insight here as you explore later chapters.)
As Gladwell notes, when we recognize the power of the unconscious mind’s thin-slicing, we need to acknowledge both its positive and negative sides.
This chapter is about the downside of our snap judgments and what we can do to counter it.
Snap judgments can go wrong for a few reasons: First, we can be overly superficial when making the judgment. Second, we can fail to recognize when our conscious and unconscious attitudes don’t match, which leads us to make decisions that are inconsistent with our conscious beliefs. And third, we’re far more susceptible to the effects of priming than we think. Let’s look at each of these in turn.
Thin-slicing doesn’t always serve us. Sometimes we make superficial snap judgments.
Often thin-slicing helps us get below the surface details of a situation to find deep patterns. But this deep dive can get interrupted, leaving us with a snap judgment that’s based on irrelevant surface details.
Gladwell gives the example of the 29th president of the United States, Warren Harding, to show how easy it is to get distracted by surface details. Harding had an undistinguished political career. He wasn’t particularly smart, rarely took a stance on (or interest in) political issues, gave vague speeches, and spent much of his time drinking and womanizing.
Still, Harding climbed the political ranks and became president. He’s widely regarded as one of the worst presidents in history. (Shortform note: More recently, some historians have argued that Harding’s presidency deserves a more balanced assessment.) How did he get the position in the first place?
As Gladwell notes, Harding looked like a president. His distinguished appearance and deep, commanding voice won voters over. They unconsciously believed that good-looking people make competent leaders. Harding’s looks and presence triggered associations so powerful they overrode voters’ ability to look below the surface, at his qualifications (or lack thereof). When we’re similarly influenced by superficial but irrelevant qualities, Gladwell says we’re making a “Warren Harding error.”
Warren Harding and the Halo Effect
What Gladwell refers to as “Warren Harding errors” are known in psychology as the halo effect: our tendency to allow positive impressions of someone (based on their appearance, clothing, or voice) to influence our judgments about other personal characteristics such as intelligence and morality.
Psychologists have formally studied the halo effect in politics for decades. They’ve found, for example, that competent-looking faces predict positive election outcomes and that both physical and vocal attractiveness affect candidate ratings.
The halo effect applied in reverse (for example, the fact that unattractive people are judged as more likely to commit crimes) is known as the “horn effect.”
As Gladwell points out, our snap judgments can be both the product and the root of prejudice and discrimination. Our attitudes about race and gender, for instance, operate on two levels: the conscious attitudes we state and endorse, and the unconscious associations that surface automatically.
Mismatches between these two levels can lead us to behave in a biased manner even when we think we’re being impartial.
Unconscious Attitudes: The Name-Letter Effect
Do you like some letters of the alphabet more than others, to the point of selecting your romantic partner based on the letters in their name? If you think about it consciously, this idea seems absurd. Of course you select someone based on their personal qualities, not their name. However, research shows that on an unconscious level, we’re often drawn to people whose names share letters (particularly first or last initials) with our own. This is called the “name-letter effect” and occurs because of implicit egotism: We like ourselves; therefore, we like the letters in our own names.
In a 2004 paper titled “How do I love thee? Let me count the Js,” researchers confirmed this effect and found that it also extended to numbers: Participants preferred people who’d been assigned experimental codes that were similar to the numbers in their birthdays. The effect isn’t limited to romantic partners: We also prefer brand names that share our initials.
To show that our unconscious attitudes don’t always match up with our conscious beliefs, Gladwell introduces the Implicit Association Test (IAT). The IAT is a tool that attempts to measure the unconscious associations that people have with certain categories, such as gender or race. The creators of the IAT that measures racial bias argue that it can uncover your implicit bias through the measurement of your reaction time. Are you faster to react when pairing positive words, such as “wonderful,” with white faces or Black faces? How about negative words such as “evil”? (You can find the IAT online at www.implicit.harvard.edu if you’re interested in trying it.)
Regardless of their stated beliefs, more than 80% of IAT-takers have “pro-white associations.” In other words, it takes slightly longer for most people to put words like “glorious” and “wonderful” in the “African American” category than to put words like “hurt” and “evil” in the same category.
This doesn’t just apply to test-takers who aren’t African American—50% of more than 50,000 African Americans tested have pro-white associations.
Controversies Over the IAT
There’s a great deal of controversy surrounding the IAT. According to journalist Jesse Singal, who synthesized a range of views on the IAT for The Cut, there are problems with reliability (including low “test-retest” reliability, in which people taking the same test on two different occasions are likely to get different results) and validity (a score on the IAT isn’t necessarily predictive of someone’s behavior toward people of a certain race). There are also questions about whether the IAT oversimplifies causal links: People may receive higher implicit bias scores for reasons unrelated to bias, such as being very familiar with stereotypes, thinking about oppression and discrimination, or even processing information slowly.
In response to these criticisms, the test’s creators walked back their original claims about what exactly the test measures. They now say that the data isn’t predictive of individual behavior and is only predictive in aggregate—for example, if the same person does the test many times and averages the results, or when measuring a particular group’s overall patterns of bias.
As Gladwell points out, the implications of a mismatch in conscious and unconscious beliefs are unsettling. When we act automatically, we depend on our implicit attitudes. If you have strong pro-white associations, you’ll act differently around someone who’s Black, and you probably won’t even be aware that you’re behaving differently. You might, for example, show less positive emotion or make less eye contact.
Let’s say you’re a Black woman explaining your symptoms to a white male doctor with pro-white associations. Your unconscious mind picks up on the doctor’s subtle distance and lack of eye contact, and you lose confidence and start struggling to articulate your thoughts clearly. The doctor’s unconscious mind picks up on this. He gets the gut feeling that you’re making up or exaggerating your symptoms, perhaps guessing that it’s to get access to a particular medication. He sends you away without a prescription—leaving you suffering and reinforcing his own unconscious pro-white associations.
Our unconscious racial and gender attitudes matter, regardless of the conscious attitudes we articulate.
Implicit Racial Bias in the Criminal Justice System
In Biased, social psychologist Jennifer Eberhardt examines the social and neurological mechanisms involved in implicit racial bias. She shows how our brain’s normal instinct to categorize can lead us to make erroneous associations between race and threat. As an example, Eberhardt describes a 1976 study that found that when people watched a video of a fake argument in which a white person physically pushed a Black person, only 17% of viewers classified the white person as violent. When the Black person pushed the white person, 75% of viewers said the Black person was violent.
Eberhardt’s work shows that these implicit associations carry over to the criminal justice system. For example, Black drivers are far more likely than white drivers to be pulled over by police for minor violations, even though most of these police officers don’t hold explicitly racist attitudes. Black people convicted of crimes receive harsher penalties than white people convicted of the same crimes, including a higher rate of death penalty sentences.
Gladwell points to research evidence on priming that shows our unconscious minds to be extremely suggestible. This makes the unconscious vulnerable to negative suggestions, but it also leaves it open to positive ones. Priming sheds light on one way to influence otherwise inaccessible unconscious processes.
Most of the classic studies on priming use images presented subliminally. However, there are other ways to trigger this effect: For example, you can use particular words in the lead-up to a task or ask for particular information about the participants (see below) to influence the result.
Priming Different Stereotypes Leads to Differing Performance
The subtle priming of a particular stereotype can affect people’s performance in domains that relate to the stereotype. A classic 1999 study, for example, found significant priming effects on the performance of Asian American women in the United States on a math exam. The participants were divided into three groups. One group answered questions about their ethnic identity before starting the exam, another group answered questions about their gender identity, and the third group—the control group—answered questions that were unrelated to both ethnicity and gender. Participants whose Asian American identity had been primed through the initial questionnaire performed the best on the exam. This was followed by the control group. The group whose female gender identity had been primed performed worst of all.
These researchers then carried out the same experiment in Canada, a country in which the ethnic stereotype that Asian people are better at math is less prevalent. In Canada, priming both gender and ethnic stereotypes led to poorer performance.
This study was replicated in 2014 with similar results, but with the added caveat that participants needed to be familiar with the relevant stereotypes for the priming effect to work. Given the ongoing reproducibility crisis in psychology (more on this below), replication studies are important to buttress the credibility of key findings.
These experiments show us that priming can have devastating effects on our unconscious attitudes and, consequently, our lives. But they also show us a path forward. If we can negatively influence our hidden attitudes, we can also positively influence them.
(Shortform note: The effectiveness of priming is debatable, and priming-related findings have been one of the major casualties of the reproducibility crisis in scientific research. Many follow-up studies on priming have failed to replicate the original results, suggesting that the results were either cherry-picked to yield the best results, statistically manipulated, or, at worst, fabricated. At best, priming is a more complex phenomenon than was thought at the time Gladwell wrote Blink.)
How do you fix something that happens below the level of conscious thought? Gladwell argues that you can retrain your implicit assumptions by being aware of them and actively using your conscious mind to counter them. For example, you can:
Gladwell suggests that if you’re about to take the IAT, or do anything in which unconscious racial bias might be activated, you strategically look at images of successful people of color before starting. Doing this, he says, will prompt your unconscious to make positive associations.
(Shortform note: The priming may not need to be so specific. In one study, people who were primed to think about shared human experiences across cultures showed lower anti-Arab bias. In this study, two different forms of priming were effective. The first was looking at pictures of people in different cultures engaged in normal social activities, while the second was thinking about meaningful childhood memories that might be shared across cultures.)
Your unconscious attitudes are based on your environment and the accumulation of prior experiences. To change your unconscious attitudes, Gladwell says you need to change your environment and experiences.
When you become aware of an implicit association that’s discriminatory—perhaps you take the race section of the IAT and you’re one of the 80% of people tested who have a white preference—Gladwell suggests that you alter the association by exposing yourself to people who counter your implicit biases. Read books, watch movies, and become familiar and comfortable with cultures that your unconscious mind discriminates against. You’re training your unconscious mind to change its opinion.
(Shortform note: Many authors who examine racial prejudice would argue that mere exposure isn’t sufficient to change unconscious attitudes. In So You Want to Talk About Race, for example, Ijeoma Oluo argues that conscious engagement, especially through difficult conversations that are specifically about racial issues, is key in changing attitudes.)
Acknowledging your biases is difficult and uncomfortable. It’s also crucial to improving your snap judgments and aligning your unconscious attitudes with your conscious ones. Reflect on your experiences and how they may have influenced your biases.
Try not to think too much about your answers to the following questions. Capture your gut reactions: When you read the word “leader,” who do you picture? What about “lawyer”? “Parent”? “CEO”?
Take a look at your “pictures.” Do they tell you anything about your inherent biases? Do your pictures follow historical gender norms? Do the people in your gut-reaction images share your skin color?
Think about your hometown and the schools you attended. Think about your childhood friendship groups. Did the people you spent the most time with look like you? Did they share your culture? Your religious beliefs? Your values? Jot down a few of the shared beliefs and cultural attitudes of your community, schools, or friendship groups.
How might your community and past experiences have influenced your unconscious beliefs about who can be a leader, lawyer, parent, or CEO?
What conscious steps can you take to counter these biases? (Remember, you can change your experiences to change your unconscious beliefs.)
As Gladwell points out, we’re often unable to explain why or how we arrive at a snap judgment, even if that judgment is correct. We know something, but we don’t know how we know it, and that’s frustrating. It’s hard to trust something that you can’t explain.
Knowledge without a rational explanation is a double-edged sword. This type of knowledge can be the truest, deepest kind, but it can also harbor biases. Because most of us aren’t comfortable not knowing exactly how we arrived at a particular snap judgment, we tend to rationalize. But instead of helping us uncover the truth, rationalizing often takes us further away from it.
Rationalizing is inventing inaccurate explanations for our actions or thoughts. By doing this, we try to make our decisions seem more rational, both to others and to ourselves.
We tend to feel that there’s a rational reason for everything we do. When we don’t know why we’ve made a certain decision, we make something up. We don’t lie on purpose. We actually believe the lies that our conscious minds construct to explain the decisions of the unconscious mind.
Rationalizations and the Left Hemisphere
Rationalizations could be a way for us to “cover up” faulty communication between brain hemispheres. Research with split-brain patients may offer some clues about how this works. Split-brain patients are people who have had their corpus callosum—the tract of nerve fibers that connects the left and right hemispheres—severed to treat severe epilepsy.
In someone with an intact corpus callosum, the right and left hemispheres can communicate directly. This means that the left hemisphere is aware of the perceptions and actions of the right hemisphere (such as moving the left side of the body and seeing things in the left visual field) and vice versa. But in a split-brain patient the two hemispheres are cut off from one another.
Michael Gazzaniga and colleagues investigated what happens when there’s different input to the two hemispheres of split-brain patients. In most people, language is a left-hemisphere function. The team found that when they presented stimuli to the left hemisphere (for example, flashing an image in the right visual field), the patient could easily name and describe the image. If a picture of an object was presented to the right hemisphere, patients verbally denied seeing anything but could select that picture from an array of different pictures using their left hand.
When different images were flashed simultaneously to the two hemispheres, split-brain patients could only report verbally on the image shown in the right visual field. For example, one patient’s left hemisphere was shown a picture of a chicken’s foot and his right hemisphere a picture of a snowy landscape. Afterward, he pointed to a picture of a whole chicken with his right hand and a picture of a shovel with his left. When asked why he’d chosen the shovel (a choice driven by the snow scene, which the verbal left hemisphere had never seen), he said easily, “You need the shovel to clean out the chicken shed.”
Snap decisions require us to assimilate a large quantity of information that’s presented very fast. This makes them a more natural fit for the parallel-processing right hemisphere than for the left. Our own rationalizations may just be a version of the “chicken shed” story.
There are two problems with rationalizing our snap judgments:
Gladwell gives the following example to demonstrate how far our rational explanations can veer from the truth.
In a 1931 study, psychologist Norman Maier hung two ropes from the ceiling, far enough apart that if you held one rope in your hand, you couldn’t reach the other. There were various tools and pieces of furniture in the room as well. He asked volunteers to come up with as many ways to tie the two ropes together as they could.
There were four ways to tie the ropes. Most people discovered the first three pretty easily.
Most people struggled to find the fourth solution: Set one rope swinging like a pendulum, walk over and grab the other rope, then catch the swinging rope as it comes toward you and tie the two ends together.
If a volunteer was having trouble producing this fourth solution, the psychologist gave their unconscious minds a suggestion. He walked across the room and casually bumped into one of the ropes, causing it to swing. The move was so subtle that volunteers’ unconscious minds picked up on the suggestion while their conscious minds didn’t. After that, most people came up with the fourth solution.
Rationalizing Unconscious Decisions
Afterward, when asked how they’d arrived at the pendulum solution, the volunteers credited their own insight—none of them mentioned the psychologist bumping the rope. These people weren’t lying. They were just automatically producing the explanations that their conscious brains found most plausible. They had no idea the psychologist had given them the answer when he bumped the rope.
Does a “Time Out” Improve Our Problem-Solving Ability?
Psychologists call the type of problem that Maier investigated an “insight problem”: a problem for which a solution presents itself seemingly out of nowhere. (People often refer to this experience as an “Aha! moment” or a “Eureka moment.”) But are we more likely to experience Eureka moments straight away or after a period of rest? Is it sound advice to “sleep on it,” or should we just make a Gladwell-style snap judgment now and be done with it?
The period of rest is known as an “incubation period,” and scientists are divided on whether it’s helpful. A 2011 meta-analysis of studies published between 1964 and 2007 on incubation periods and problem-solving found that the picture is complex. Whether people are better at solving problems after an incubation period depends on several things:
The type of problem. Divergent thinking tasks, which are open-ended questions with a large number of possible solutions (for example, “What would happen if everyone suddenly lost the ability to read and write?”), benefit more from incubation periods than insight problems with only one (non-obvious) solution.
The length of the incubation period. Generally, longer incubation periods lead to better solutions.
The nature of the incubation period. During this period you have a few options: You can rest, you can do cognitively easy tasks, or you can do cognitively difficult tasks. The worst performance comes from doing difficult tasks while you wait. But, interestingly, if you spend your waiting time doing easy tasks, you’ll perform better than if you had rested completely.
As Gladwell says, sometimes our rationalizations are just inaccurate. Other times, they actually harm our ability to make smart decisions.
Our rational minds can overpower our unconscious minds, to our detriment. Explaining our processes can make us worse at whatever we’re trying to explain. Attempting to explain the mysterious takes away some of the mystery’s power.
Gladwell points out that language is the primary tool of the rational mind. Using language (and therefore activating our rational minds) when a task is better completed by the unconscious mind can snuff out insights.
For example, when we recognize someone, the recognition comes from our unconscious. We’re not consciously looking at the person’s eyes, nose, or hair, and comparing those features to a mental list of the eyes and noses and hair of people we know. Recognition is usually instant. Either you recognize someone or you don’t.
Let’s see what happens when we try to turn this into a conscious process. Think about any stranger you saw today, maybe the barista who made your morning coffee. Suppose someone asked you to describe the barista in as much detail as possible, including facial features, hair color, clothing, and jewelry.
If you had to pick this person out of a lineup, you’d do much worse after describing him or her than before. The act of describing erases the image from your mind by pulling it forward from the unconscious to the conscious.
This is verbal overshadowing. Instead of remembering what you saw, you’re remembering your description, which, due to the limits of language, will always be less accurate than your visual memory. When you explain yourself, you override the complex experience that you’re explaining. (Shortform note: Verbal overshadowing doesn’t only apply to faces. It also affects other visual memories, as well as our memories of tastes and sounds.)
The Real-World Implications of Verbal Overshadowing
Although there haven’t been enough studies on the implications of verbal overshadowing for the field of criminal justice, it seems logical that reporting a crime or providing eyewitness testimony could impair victims’ and witnesses’ memories and make their testimony less reliable.
How do we mitigate verbal overshadowing? We can overcome many mental biases and distortions simply by being aware of them. But in the study Gladwell references, researchers found that being aware of verbal overshadowing not only makes no difference to people’s performance on facial recognition tasks, it may actually worsen it.
However, there might be one way to diminish the effects of verbal overshadowing. A 2008 study found that verbal overshadowing impacts older adults less than younger ones, in part because older adults have “higher verbal expertise.” This suggests that improving your verbal skills could lessen your susceptibility to verbal overshadowing.
Gladwell draws on studies that show that verbal overshadowing also hurts our ability to solve insight puzzles.
There are two general kinds of puzzles:
When we attempt to solve logic puzzles, explaining our thought processes can actually help us better understand the problem.
But when we attempt to solve insight puzzles, explaining our processes hurts us. One study found that, given a sheet of insight puzzles, those asked to explain their strategies solved 30% fewer problems than those who weren’t asked to explain themselves. (Shortform note: This may be linked to the left hemisphere/right hemisphere distinction mentioned earlier. Perhaps explaining things verbally, which is a left-hemisphere function in most people, overrides or shuts down the delicate holistic work of the right hemisphere.)
You have two options to stop rationalization from getting in the way of good decisions.
Gladwell’s Option #1: Don’t try to explain your snap decisions. Honor the mysteries of the unconscious mind, and admit that you don’t always have the answers, even those pertaining to your own choices. Once you’ve created a story to explain an unconscious decision, that story is hard to shake. We believe the stories we tell ourselves and others.
Kahneman: Antidotes to the “Narrative Fallacy”
Gladwell’s suggestion that you avoid explaining your decisions is easier said than done, and Blink doesn’t offer further advice. However, Daniel Kahneman does offer strategies for countering the “narrative fallacy,” or the tendency to explain random or irrational events with coherent stories.
Kahneman’s Strategies (Thinking, Fast and Slow)
1. Discover the flimsiness of your explanation by applying it to other outcomes.
In some situations, an identical explanation can be applied to both possible outcomes. For example:
- During a day of stock market trading, an event might happen, like the Federal Reserve lowering interest rates. If the stock market goes up, people say that the lower rates emboldened investors. If the stock market goes down, people say that investors had expected the change, so the market had already priced it in. The fact that the same event (a change in interest rates) can explain either outcome means that it’s likely a bad explanation with no predictive power.
2. Be wary of highly consistent patterns in your own narratives and those of others.
- For example, management literature profiles the rise and fall of companies, attributing company growth to key decisions, leadership styles, even the childhood traits of founders. These stories ignore all the things that didn’t happen but could have caused the company to fail (and that did happen to the many failed companies that aren’t profiled—survivorship bias). As a result, the companies’ success is presented as inevitable.
Look out for highly consistent patterns that emerge from comparing more successful and less successful examples. There’s a lot you don’t know—whether the samples were cherry-picked, whether failed results were excluded from the dataset, or whether other experimental tricks were used.
Be wary of people who declare very high confidence around their explanation. This indicates only that they’ve constructed a coherent story, not necessarily that the story is true.
Gladwell’s Option #2: Attempt to enhance your conscious perception through technology or recording techniques that slow down the flow of information. For example, use slow-motion videos for sports technique analysis. This allows your conscious mind to catch up to your unconscious, giving you a chance to double-check your unconscious judgments.
(Shortform note: This is similar to what Gottman and colleagues did in the “love lab” discussed in Chapter 1. What Gladwell seems to be recommending here is that you examine your own decision-making process so carefully that spurious rationalizations become rational explanations.)
Techniques for Slowing Down Input: Transcribing Conversations
Slow-motion video is the obvious choice for analyzing physical movements, but what about conversations? To capture what someone says, you just write down the words. But how do you capture how they say it?
Conversation Analysis is a transcription method that was originally developed in the late 1960s and early 1970s in sociology. When transcribing using this method, you use symbols to note down elements like overlaps (people talking at the same time), interruptions, rising or falling intonation, and audible inhalations or exhalations. These elements may seem trivial, but it turns out that they often have special meaning in conversations. For example, an audible in-breath often signals that someone is trying to “take the floor” (take a turn at speaking).
Researchers have used Conversation Analysis to examine effective—and ineffective—interactions in a wide range of areas, such as doctors’ offices, courts, and classrooms.
How do we determine our own preferences? We’re good at using thin-slicing to make fast judgments about what we do and don’t like. But, surprisingly, these snap judgments about our preferences can sometimes be inaccurate.
Your preferences might seem fairly context-independent. If you prefer a particular type of chocolate, for example, you would still expect to prefer it if the packaging changed, or if you were asked to explain why you like it. But, as Gladwell notes, thin-slicing can go awry when it comes to knowing what we like. There are three reasons for this: sensation transference, unfamiliarity, and lack of expertise.
Thin-slicing fails when our unconscious minds get distracted or misled by irrelevant information. This is what occurs in sensation transference: We unconsciously judge a product, service, or person based on things that co-occur with it but aren’t actually part of it (such as packaging or the environment in which it’s served). Marketing professionals often use sensation transference to get us interested in products.
One of the easiest ways marketers can prompt sensation transference is through packaging. Changing things like the color of the food or its packaging, the weight of the packaging, or the location of the product image on the packet can influence our assessments of a particular product. We experience the packaging as part of the product, not independent of it. (Shortform note: The influences of packaging on our expectations about product flavor can be very specific. For example, rounded typefaces can lead us to expect sweet flavors, while sharper typefaces lead us to expect sour flavors.)
Cross-Modal Sensory Associations Affect Our Preference Judgments
Sensation transference affects us in ways we’re only just starting to understand. While most of us don’t have true synesthesia (a cross-wiring of perceptual systems that might have someone tasting sounds or seeing letters in different colors), our senses are more interlinked than we might think. For example, we link musical pitches to tastes—we perceive higher-pitched notes as sweet or sour and lower-pitched notes as umami or bitter—and we’re susceptible to “sonic seasoning,” in which the music that’s playing while we eat something affects our perception of the taste. People have experimented with sonic seasoning enhancements for toffee, chocolate, coffee, and cheese.
Cross-modal associations don’t only affect our judgments about food. One classic 2008 study found that the temperature of an object we’re holding affects both our perceptions of other people and our behavior toward them. Participants who held hot coffee (as opposed to iced coffee) were more likely to judge someone else as warm and caring, and participants who held a heated pad were more likely to behave in a warm, generous way than participants holding a cool one. Meanwhile, touching something cold or looking at a picture of a snowy landscape appears to improve people’s cognitive control.
Studies like these, which consistently show that irrelevant sensory information influences our unconscious judgments, may be yet another reason to be wary of snap judgments.
As Gladwell notes, sometimes we like things just because they’re familiar and dislike things that are unfamiliar. We don’t like what we don’t know. Sometimes we taste, hear, or watch something different and the unconscious mind automatically registers it as bad.
Gladwell gives the example of television classics All in the Family and The Mary Tyler Moore Show, which almost didn’t make it to the air because hundreds of viewers in market testing hated them. They thought that Family’s Archie Bunker was too abrasive and that career-woman Mary was a “loser.”
These sitcoms became two of the most successful in history. Were the opinions of the initial viewers simply different from those of the general public? Probably not. Viewers thought they hated these shows, but really they were just shocked by them. Once they got used to the shows, they realized that they actually enjoyed them.
Thin-slicing fails when the unconscious mind has no previous experiences against which to compare the new one.
The Mere Exposure Effect: Why We Like Familiar Things
Why do we tend to dislike things that are unfamiliar? This can be partly explained by what’s known as the “mere exposure effect.” The mere exposure effect occurs when we start to like things just because we’ve been exposed to them before. It applies to everything from the people we interact with to that song that keeps playing on the radio.
The mere exposure effect is surprisingly robust—it’s been observed across cultures, for stimuli that are both subliminally presented and consciously processed, and even prenatally (for example, newborn babies show a preference for voices, stories, and music that they heard frequently while in utero). The flip side of this effect, of course, is that the less exposure we’ve had to something, the less likely we are to respond positively to it.
A third reason that Gladwell gives for the failure of thin-slicing judgments in the area of our personal preferences is a lack of relevant expertise. Experts aren’t fooled by a product’s packaging and they aren’t put off by unfamiliarity. Experts have the training to know what they like and the vocabulary to explain it.
Experts learn how to match their unconscious feelings about a food or an object with a formal aspect of that food or object. This has a top-down effect on their first impressions.
Gladwell gives the following example: Professional food tasters spend decades developing the skills to judge foods objectively. They learn to match their sensation of a food being sweet or bitter to specific characteristics of that food. The more they practice, the better they become at identifying, for instance, not only how much citrus flavor a food contains, but how much of that citrus flavor is orange citrus, how much is lemon, and how much is grapefruit.
The more they practice, the better their ability to taste, and the more accurate their thin-slicing when confronted with a new food.
(Shortform note: This process suggests more of an interaction between deliberate processing and snap judgments than Gladwell allows for in the rest of the book. What Gladwell is describing here is a process of automatization—the repetition of a skill so that it requires less and less conscious attention and is eventually executed more or less unconsciously, freeing up the deliberative system for more difficult challenges. We all experience this gradual shifting of the line between effortful and unconscious processing when we try to master something complex, for example a new sport or language.)
Experts also have a unique vocabulary that helps them refine their snap judgments. For example, as Gladwell notes, food tasters learn a specific way to describe what they like and don’t like.
They evaluate mayonnaise according to six dimensions of appearance (including color, lumpiness, and shine), 10 dimensions of texture (including firmness and density), and 14 dimensions of flavor, split into the subgroups of aroma (eggy), basic tastes (sour), and “chemical-feeling factors” (astringent).
Each of these 30 characteristics is rated on a 15-point scale. Food tasters use their rational minds to practice identifying and rating these characteristics. These experiences result in unconscious decisions that are well-aligned with the earlier conscious ones. Therefore, if a professional food taster likes mayonnaise, she can accurately tell you why. Using specialist vocabulary improves the chance that snap decisions will be made based on relevant, rather than irrelevant, details.
(Shortform note: Any field of expertise incorporates technical vocabulary that allows for finer distinctions, and therefore supports more precise communication, than lay speech. Part of becoming an expert in any given field is learning the vocabulary involved. Sometimes this involves “unlearning” word associations that we may have formed in more general contexts. For example, physics teachers can help their students to understand concepts by deliberately teaching them the physics-specific meaning of key technical words.)
Thin-slicing is more likely to fail when we lack expertise in a given area. Gladwell argues that it’s even more likely to fail when we act like experts without the requisite expertise. We pretend to be experts when we attempt to rationalize our snap-judgment preferences. This leads to even more confusion.
When we try to explain the unexplainable, we tune out our intuition in favor of the conscious mind, which (as we saw in Chapter 3) is often bluffing. It doesn’t know why it likes or dislikes something, but it doesn’t know that it doesn’t know, and it will make up anything to satisfy itself that it knows. It wants to be an expert on its preferences.
The problem is that we tend to adjust our judgments and actions to fit our rationalizations. Rationalizing can lead us further from understanding ourselves and what we like.
How Should We Measure the Quality of Our Decisions?
Not all decisions are created equal. A decision about which flavor of potato chips is your favorite, for example, has very different ramifications than a doctor’s decision about a patient’s diagnosis. For trivial decisions that relate to personal preference, such as choosing a poster for your wall, it makes sense to judge the decision based on people’s satisfaction with their choice after a period of time. When decision quality is measured this way, decisions based on unconscious thought come out ahead of those based on conscious deliberation.
However, asking a doctor or patient whether they’re satisfied with a particular diagnostic or treatment decision isn’t the best way to measure decision quality in a medical context. More objective criteria, such as the progression of the patient’s illness, are needed. Getting the balance right between satisfaction and thoroughness can be tricky: As one 2011 study found, patients are less satisfied when a clinician communicates uncertainty in their decision (for example, if she says that there are several options for treatment and no clear best one). Clearly the solution here isn’t that doctors pretend more certainty than they feel. Instead, the authors recommend that doctors involve the patient in the decision, making it clear that some degree of uncertainty is normal.
Gladwell suggests that we can make better snap judgments by providing the unconscious mind with structure. This involves rehearsing our desired spontaneous responses and developing rules that we can fall back on in times of stress.
Gladwell provides the following specific advice for improving instinctive decision-making:
Let’s look at each piece of advice in more detail.
Neither deliberate nor intuitive decision-making is inherently good or bad. Whether these strategies are good, bad, or neutral depends on the situation. If we have time, resources, and a clearly defined task, deliberate decision-making is productive. It can also prime us for “rapid cognition,” or snap judgments.
Part of how we can make better decisions is to understand when the deliberate approach is best, and when the intuitive approach is best. Gladwell suggests that, where possible, we start with deliberate decision-making. This lays the groundwork for future rapid cognition.
Layering Deliberate and Unconscious Thinking in Team Sports
We can take any team sport as an example of the combination of conscious and unconscious thinking that Gladwell suggests. In most of these sports there are three levels of structure. First, we have the rules of the game. These give us an overarching goal and clear parameters, which allow us to easily frame the decisions we make as good or bad, depending on their outcomes. (Did that pass lead to a goal? If so, it was a good decision. If the ball got stolen by the other team, it was a bad decision.) Second, we have our game plan for a particular match. This is created through deliberate thinking by the coach and players. Finally, we have the spontaneous, instinctive decisions that we make during the game. The rules of the game and our overall game plan both provide a structure for the rapid-fire decisions that we make after the game starts.
Gladwell’s second piece of advice is not to overload yourself with detail. Too much information can be a burden. Find the most essential data points and make a decision based on those.
As we’ve seen in Gottman’s “love lab,” you don’t need a lot of information to identify patterns—just the right information. If we want to protect the integrity of our snap judgments, we need to limit the data we consider when making a decision.
(Shortform note: For this advice to be genuinely useful, we need some way of distinguishing crucial information from superfluous information. Gladwell doesn’t provide one, but a good approach is to sort incoming information into “signal” (information that reveals key patterns) and “noise” (information that distracts us from the signal). If we listen carefully for the signal, we’re more likely to feed meaningful information into our unconscious decision-making apparatus. As Nate Silver argues in The Signal and the Noise, highly accurate forecasters tend to use statistical analyses to separate signal from noise. They also avoid overconfidence in their predictions and pay extreme attention to detail.)
As an example of the pitfalls of overloading yourself with information before a decision, Gladwell discusses a shift that occurred at Cook County Hospital in Chicago in the late 1990s, when it changed the way it assessed ER patients with chest pain. Previously, doctors had taken a multitude of complex factors into account when diagnosing heart problems, leading to many inconsistent and potentially incorrect decisions.
A new computer algorithm reduced that large number of factors to four. The algorithm increased the accuracy of negative diagnoses by 70% (reducing the costs to the hospital of admitting patients who weren’t in danger of having a heart attack) and increased the accuracy of positive diagnoses to 95% (saving the lives of people who did go on to have major complications).
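To make the structural point concrete, here is a hypothetical Python sketch of what a drastically simplified, Goldman-style decision rule looks like. The factor names and thresholds are illustrative assumptions, not the actual clinical algorithm:

```python
def triage_chest_pain(ecg_shows_ischemia: bool,
                      unstable_angina: bool,
                      fluid_in_lungs: bool,
                      systolic_bp_below_100: bool) -> str:
    """Illustrative sketch of a few-factor decision rule: a handful of
    yes/no inputs replace the dozens of considerations doctors
    previously weighed. This is NOT the real clinical algorithm."""
    # Count how many of the three urgent risk factors are present.
    urgent_factors = sum([unstable_angina, fluid_in_lungs,
                          systolic_bp_below_100])
    if ecg_shows_ischemia and urgent_factors >= 2:
        return "high risk"
    if ecg_shows_ischemia or urgent_factors >= 1:
        return "intermediate risk"
    return "low risk"
```

The power of such a rule lies precisely in its poverty of inputs: with only four data points, every doctor asks the same four questions and reaches consistent answers.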
Managing Information Overload in Medical Diagnosis
The field of computer-assisted diagnosis has progressed rapidly since the publication of Blink. Increasingly sophisticated algorithms have led to highly accurate digital processes for diagnosing illnesses such as diabetic retinopathy, breast cancer, and early-stage Parkinson’s disease. Computers can also quickly analyze images and extract relevant information that assists clinicians in interpreting chest radiographs and images of the retina.
Gladwell suggests that to make better snap decisions, you need to practice. He recommends two specific types of practice. First, you should practice making fast decisions in stressful situations (more on this in Chapter 6).
How Does Rehearsal Work? Practice, Automatization, and the Brain
It seems obvious that practicing something will help us get better at it. But what physiological mechanism underlies this process? Norman Doidge argues in The Brain That Changes Itself that when you practice something, you’re increasing your brain’s efficiency in executing the task. New tasks (for example learning to play scales on the piano) are cognition-heavy, recruiting a massive number of neurons across different brain areas. Practice helps our brains determine which particular networks or neurons are best suited for the task and lock in their responses, freeing up cognitive capacity for more and more challenging versions of the task. Interestingly, mental rehearsal seems to be almost as effective as rehearsing the whole action.
Neuroimaging studies of experts back up this interpretation—for example, the brains of professional violinists are more efficient and focused in their processing of motor movements than those of amateurs, which frees up capacity for the higher-level processes involved in interpretation and musicality. Physically and/or mentally rehearsing before a job interview or presentation has the same effects, making lower-level processes more automatic and freeing up conscious processing capacity to help you respond thoughtfully to those tricky questions.
In addition to practicing fast decision-making, Gladwell says you should practice reading the social environment. Here he focuses on learning to pick up the fleeting, unconscious “microexpressions” that people make that betray their true thoughts and emotions. Gladwell calls this “mind reading.”
We can get better at understanding others, and consequently make more accurate snap judgments about them, by practicing reading people’s facial expressions. Let’s take a closer look at how we thin-slice people, or make judgments about them by reading (or failing to read) their minds.
Humans are highly social animals. Our brains are tuned to cues that can help us navigate the complex social world. An important part of this is being able to make good guesses about what’s going on in other people’s minds. (Shortform note: In psychology, this ability to construct a model of someone else’s mind is called “Theory of Mind.” It includes keeping track of the other person’s knowledge—do they have the same information as I do?—as well as guessing at their emotional state.)
Part of our fluency in mind-reading comes from our ability to use others’ facial expressions to infer their personality and predict their behavior.
Microexpressions are the hardest expressions to catch. They’re almost imperceptible, lasting a fraction of a second. You might be good at broadly controlling the expressions your face makes, but you can’t control these involuntary expressions.
This is how we thin-slice people—our unconscious minds make note of subtle muscle movements in the face that are too fleeting for our conscious minds to detect.
(Shortform note: The effectiveness of analyzing microexpressions, especially to detect lies, is controversial. First, lies aren’t always associated with microexpressions: One study found them in only around 20% of participants who had been instructed to mask or neutralize their natural expressions. When microexpressions did occur, they were often inconsistent with the emotion being hidden.)
How Much Information Do We Actually Need Before We Can Thin-Slice Accurately?
In microexpression analysis, as with many of Gladwell’s other examples throughout the book (such as the love lab), the answer to improving the quality of our snap judgments seems to lie in training ourselves to be more precise when we make them. However, the training that Gladwell recommends typically relies on categorization systems and databases that have been developed by other people during slow, careful, targeted analysis of a particular domain. The following question therefore arises: Should we use deliberately constructed databases (microexpressions, marriage couple interaction codes) and a large amount of information, or should we thin-slice using deliberately limited data?
Gladwell isn’t very clear on this—his priority is to tell a good story, and he leaves it to the reader to extract the overall principles. Perhaps the solution is to outsource as much as possible of the time-consuming analysis to others. Most of human knowledge and culture is built on this kind of outsourcing: For example, consider the difference between taking trumpet lessons from an expert player and trying to figure out how to play the trumpet by yourself.
As Gladwell notes, our ability to read facial signals and body language is enormously sophisticated. Most of the time, we’re really good at reading people’s minds by subconsciously picking up on subtle signals.
Still, mistakes happen all the time. We get into arguments and hurt others because we misinterpret what they’re feeling or thinking. Gladwell speculates that two things can affect our accuracy when recognizing the facial expressions of others: rushed decisions and underlying stress.
Even split-second decisions take a few moments. Thin-slicing appears to be so fast it’s automatic, but it’s not. In the few seconds it takes to make a snap judgment, we’re gathering information and weighing it. When we don’t have time to properly process information, we fall back on stereotypes. Reactions based on stereotypes are the lowest-quality snap judgments.
Gladwell says that slowing down helps us make better decisions, even quick decisions. Waiting a beat can make all the difference.
The Effects of Stress on Fast Decisions: Cognitive Tunneling
In Smarter Faster Better, author Charles Duhigg also warns of the dangerous effects of stress on decision-making. In particular, “cognitive tunneling” occurs when you hyperfocus on just one problem (or aspect of a problem) and miss other important information. Cognitive tunneling is more likely when your brain has to change quickly from a relaxed state to an alert one, so deliberately pausing for a moment when you feel this happening may help you to avoid it.
How can you avoid making bad decisions in stressful situations? Gladwell suggests that you practice making decisions in environments that mimic stressful situations. For example, rehearse an upcoming job interview or presentation in an environment that mirrors the event as closely as possible. Consider factors like the time of day, the people involved, and what you’ll wear.
These drills allow you the time to figure out the most appropriate response to the stressor. Then, when you’re in a moment of stress, you’ll have practiced the desired response so often that it’s automatic. You can rely on your unconscious mind in the moment rather than having to make a rational decision. This is a way to inoculate yourself against stress.
Gladwell describes a security firm that has bodyguards repeatedly encounter an aggressive dog while tracking their heart rates (an indicator of stress level). The first time they confront the dog, their heart rates hit 175 beats per minute, so high that their vision is impaired. By the fourth encounter, their heart rates are usually around 110, and they’re able to function under pressure.
(Shortform note: This echoes SEAL commanding officer Richard Marcinko’s famous piece of advice: “The more you sweat in training, the less you bleed in combat.” If you’re in a stressful situation and you haven’t had a chance to practice, consider using the STOP mental checklist: Stop what you’re doing; Take a few deep breaths and center yourself; Observe your mind, body, and emotions; Proceed with the action, taking into account what you’ve observed.)
Use this exercise to think about ways you can rehearse making good, quick decisions in stressful situations.
What stressful situation do you find yourself in frequently enough that it gets in the way of your success? (Maybe it’s giving presentations at work or attending job interviews.)
Think of a particular stressful situation that you’ll be facing in the next few months (perhaps a presentation, a job interview, or a performance). How can you give yourself opportunities to rehearse making good decisions under stress? (Can you record yourself, practice with a friend, ask an expert for feedback?)
Gladwell describes two examples of snap decision-making in detail. One of these examples showcases the benefits of making intuitive, on-the-fly decisions; the other example shows how these decisions can lead to disaster.
Which approach works best in war? Surely methodical planning will beat snap decisions every time? Gladwell discusses the Millennium Challenge war simulation to show that split-second decision-making can beat careful planning in a war context.
The Millennium Challenge was a 2002 war game planned by the military’s Joint Forces Command (JFCOM). The military uses these games to test new strategies.
There were two teams in the Millennium Challenge: Blue Team, representing the US military, and Red Team, led by retired Marine Corps Lieutenant General Paul Van Riper, playing a rogue commander hostile to the US.
The two teams had very different philosophies of war.
Blue Team operated on the assumption that with enough information, you can fight a war in a rational and systematic way. They took into account the enemy’s political, economic, cultural, and institutional environments, not just their military system. Blue Team had the most sophisticated resources and systems of any army in history.
Red Team operated on the assumption that war is, by its nature, unpredictable. You can’t fight an unpredictable war with a systematic approach. They believed that war requires spur-of-the-moment decisions that change according to changing situations.
Red Team did their analysis (conscious decision-making) before the battle began so that once it started, they could rely on their instincts (unconscious decision-making). The focus was less on systems and rigid strategies, more on using the wisdom and good sense of the team’s individuals.
The Benefits of Spontaneity in Military Strategy
Military and martial arts strategists have separately argued for the importance of spontaneity and flexibility (snap judgments) in military strategy. Renowned Prussian army general Helmuth von Moltke the Elder (1800-1891), for example, commented that “no plan of operations extends with certainty beyond the first encounter with the enemy’s main strength,” which is widely quoted today as the pithier “no plan survives contact with the enemy.” Boxer Mike Tyson famously expressed the same sentiment before his fight with Evander Holyfield, saying, “Everybody has a plan until they get punched in the mouth.”
Fighter pilot and military strategist John Boyd, who was a strong influence on Van Riper’s thinking in the Millennium Challenge, formulated a decision-making model that combines the stability of long-term thinking with spontaneous decision-making in real time. Boyd’s OODA (Observe-Orient-Decide-Act) loop captures a dynamic process in which decision-makers constantly update the information they have and adapt their actions accordingly. One of Boyd’s key insights was that to succeed in war, you have to carry out this process faster than the enemy (this way, you’re already acting on updated information by the time the enemy has a chance to update theirs). This pressure to make fast decisions favors Gladwell-style snap judgments rather than ponderous logical analysis.
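The loop structure itself is simple enough to sketch in a few lines of Python. The four callables are hypothetical stand-ins for real observation and planning processes; the point is that each cycle acts on freshly observed information:

```python
def ooda_cycle(observe, orient, decide, act, cycles=3):
    """Sketch of Boyd's Observe-Orient-Decide-Act loop.

    Each pass re-observes the world before deciding, so actions are
    always based on the latest information. Cycling faster than an
    opponent means acting on newer data than they have.
    """
    outcomes = []
    for _ in range(cycles):
        raw = observe()               # Observe: gather fresh information
        situation = orient(raw)       # Orient: interpret it in context
        choice = decide(situation)    # Decide: pick an action
        outcomes.append(act(choice))  # Act: change the world, then repeat
    return outcomes
```

Note that nothing is decided up front: the plan is re-derived every cycle, which is exactly the kind of structured spontaneity Red Team relied on.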
In the battle, Red Team struck first in a surprise attack, knocking out half of Blue Team’s ships with cruise missiles. If this had been a real battle, 20,000 lives would have been lost before Blue Team fired its first shot.
Red Team relied on the power of their snap decisions, and these decisions were ultimately much more effective than the well-informed, meticulously planned decisions of Blue Team.
In that first version of the Millennium Challenge, Red Team won. However, the JFCOM staff decided to ignore Red Team’s win and start the game over. They gave Red Team a script and told them to stick to it, leaving no room for spontaneous decisions. Van Riper claims that if JFCOM didn’t like the result of a battle, they would just run the simulation again. (Shortform note: The new rules prohibited Red Team from shooting at approaching aircraft from the ground, from hiding their weapons from Blue Team’s attacks, and from using chemical warfare. Blue Team was also allowed to use experimental technology that still doesn’t exist in reality 20 years later.)
With Red Team constrained by their script, Blue Team won the second round easily. The analysts at JFCOM and the Pentagon celebrated the victory of their detailed plan without learning the lesson of the earlier loss to Red Team’s adaptable approach; that same flawed plan was later implemented in the 2003 US invasion of Iraq.
Inspiration for Van Riper’s Approach
Van Riper told the New York Times in 2008 that his inspiration for the swarm-like attack on Blue Team came from nature. He thought of the way that ant colonies and wolf packs coordinate to take down larger and stronger prey.
Taking inspiration from nature wasn’t quite as successful for the CIA during the Cold War. They spent millions of dollars on “Acoustic Kitty,” a cat fitted with listening devices and trained to spy on the Soviet Embassy. Acoustic Kitty was released near the Embassy but was immediately run over by a passing car, leading the CIA to abandon the project.
Snap judgments can fail catastrophically in stressful situations. To show how this happens, Gladwell discusses a 1999 police encounter with Amadou Diallo, a Guinean immigrant to the US. This encounter ended when four New York City Police officers fired 41 shots at Diallo in the vestibule outside his apartment. The officers mistook him for a rape suspect, and they mistakenly thought he was pulling a gun when they approached.
While many view this tragedy as an instance of racial profiling, Gladwell presents it as a snap judgment failure. The officers thin-sliced in the moments before the first shot was fired, and the process went terribly wrong. Usually, we can easily tell the difference between someone who’s terrified and someone who’s dangerous. Why couldn’t the officers?
Gladwell argues that stress led the officers to make poor snap judgments: They were rushed, leaving no time to properly process what they saw, and their extreme physiological arousal impaired their ability to read Diallo’s terror as terror rather than as a threat.
The officers were charged with second-degree murder, but were found not guilty.
(Shortform note: It’s also likely that racial profiling did play a role in Diallo’s death. In Biased, Jennifer Eberhardt showed that priming with black faces leads to faster identification—or misidentification—of a weapon: When people are shown a blurry picture of a gun that comes into focus slowly, they’re faster to recognize the gun if they’ve been shown a split-second image of a black face than if they’ve been shown a white face or no face at all.)
This tragic story tells us that we should beware of trusting our snap judgments in stressful situations. Even if you feel pressure to make a fast decision, it’s best to slow down for a beat before doing something irreversible.
The Problems With Gladwell’s Notion of “Temporary Autism”
Gladwell labels the stress-induced impairment in recognizing facial expressions “temporary autism,” arguing that people who make bad decisions in moments of stress are functioning as though they’re temporarily autistic. This label oversimplifies and mischaracterizes autism.
First, it’s unclear why Gladwell has singled out autism for the analogy when difficulties reading facial emotion have also been found in temporal lobe epilepsy, ADHD, anxiety disorders, major depression, and Parkinson’s disease. Difficulties in recognizing facial emotion are also more nuanced than he allows. People experiencing major depression, for example, are more likely to perceive neutral expressions as sad. People with anxiety are particularly good at recognizing fear. Researchers have also uncovered specific chemical influences: MDMA, for example, impairs people’s recognition of anger and fear, while oxytocin improves our recognition of happiness.
Gladwell contrasts autistic people with “normal” people, a problematic binary. (A more neutral way to talk about this distinction is “neurotypical” vs. “neurodivergent.”) He also assumes that the reader isn’t autistic, using phrases like “you and I” to describe neurotypical behavior.
Finally, the choice of “temporary autism” as a metaphor is particularly concerning when it’s used in the context of police brutality, as autistic people, on the whole, are the victims of violence far more often than the perpetrators.
Blink’s conclusion tells the story of trombone player Abbie Conant to highlight ways we can prompt ourselves to make better snap decisions.
As Gladwell recounts, the Munich Philharmonic Orchestra invited Conant to audition in 1980, not realizing she was a woman. Conant played behind a screen during the first round of the audition. The director was floored by her talent until he found out she was a woman. The trombone was thought to be a “masculine” instrument, played in military marching bands. The director didn’t believe a woman could play it as well as a man.
The committee reluctantly allowed Conant to join the orchestra, but a year later demoted her to second trombone. In a classic case of sensation transference, the same playing that had astounded them when they listened to her blind suddenly didn’t sound so good when they knew it was coming from a woman.
Since then, many orchestras have instituted safeguards against sensation transference, such as holding blind auditions in which candidates play behind screens.
Since these safeguards have become common, the number of women in major orchestras has increased fivefold.
This and the book’s other examples demonstrate what Gladwell sees as the two primary lessons of Blink:
Lesson #1: We depend on the power of our first impressions, forgetting that in addition to being powerful, they’re fragile. We need to acknowledge both the power and the corruptibility of our intuition, and take both seriously.
Lesson #2: When we realize how fragile our first impressions are, we can take steps to fortify them. We tend to believe that what happens in the blink of an eye is inevitable. However, we have unacknowledged control over our intuition. If we can control the environments in which we rely on snap judgments, we can also control those snap judgments.
Additional Advice on Fast Decision-Making
Throughout the book, Gladwell suggests several ways we can make better snap decisions. We should limit the amount of information we consider, avoid rationalizing, and rehearse (particularly in stressful situations). We should also be aware of how unconscious biases—for example, the Warren Harding effect, sensation transference, and the mere exposure effect—get in the way of good snap decisions and do our best to counter these biases. To improve your decisions, you can also:
Be aware of the effects of transient emotions on decision-making. Daniel Goleman points out in Emotional Intelligence that when you’re in a good mood, you tend to make more optimistic decisions; when you’re feeling down, you make more pessimistic decisions. The effects of these decisions can continue long after the emotion has passed. For example, a classic 1974 study found that we often translate anxiety into sexual attraction: Men who had just crossed a rickety suspension bridge were more likely to call an attractive female interviewer than men who had crossed a sturdy wooden bridge.
Consider the cognitive biases you’re bringing to a decision. In Thinking, Fast and Slow, Daniel Kahneman describes some common biases in human cognition that relate to our thinking about money. First, we tend to judge outcomes based on a fixed cognitive reference point that feels “neutral” (usually our current situation). Second, we evaluate our finances in relative terms rather than absolute ones (for example, a $100 increase feels much better if you start with $100 than if you start with $900). And third, while gains feel good, losses of the same amount feel disproportionately bad. Be careful of snap decisions that come from these biases, as they may lead you astray.
Learn to distinguish when snap decisions are appropriate and when they’re not. It’s often tempting to make a quick decision and be done with it, especially when you’re tired or stressed. But some decisions are more suited to a fast approach, while others benefit from a more considered one (as an extreme example, consider choosing which flavor muffin to buy with your morning coffee vs. deciding whether to ask someone to marry you). For muffin decisions, snap away. For marriage decisions, a more conscious process is usually desirable.
Consider your approach to the decision-making process itself. In The Paradox of Choice, psychologist Barry Schwartz argues that there are two main ways that people approach making decisions: They either try to pick the very best option from a large range of options (“maximizers”), or they carry a set of criteria into a decision and choose the first option that acceptably satisfies the criteria (“satisficers”). Maximizing may seem like the best approach, but it turns out that compared to satisficers, maximizers are less happy and less satisfied with their lives, more depressed, and more prone to regret. If you’re a maximizer, consider test-driving the satisficer method.
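Schwartz’s two strategies map neatly onto two classic search procedures. As a rough Python sketch (the option list and scoring criteria are placeholders):

```python
def maximize(options, score):
    """Maximizer: evaluate every option and return the single best one.
    Cost: you must examine the entire list before deciding."""
    return max(options, key=score)

def satisfice(options, is_acceptable):
    """Satisficer: return the first option that meets your criteria.
    Cost: you may miss a better option further down the list."""
    for option in options:
        if is_acceptable(option):
            return option
    return None  # no option met the bar
```

Note the asymmetry: satisficing can stop after the first acceptable option, while maximizing always pays the full search cost, which mirrors Schwartz’s finding that maximizers invest more effort yet feel worse about the outcome.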