1-Page Summary

Every man-made object, environment, or program in our world is designed. From doorknobs to smartphone apps, design pervades our lives to the point that it often becomes completely invisible. When we struggle with one of these designs, we assume that our difficulties are our own fault, or that we’re just not smart enough to figure it out. But that blame is misplaced. More often than not, the true culprit in cases of “human error” is actually bad design.

In The Design of Everyday Things (originally released in 1988 under the title The Psychology of Everyday Things and revised in 2013), cognitive psychologist and engineer Don Norman explores the ways people understand and interact with the physical environment (this is sometimes referred to as “user experience”). In doing so, he makes all of us smarter consumers and helps designers create products that work with users, rather than against them.

Interacting With Objects

At its core, design is any human influence on the physical world. This applies to everything from ancient architectural marvels to the layout of clothes in your closet.

When we interact with design, we’re guided by the principles of discoverability and understanding. Discoverability refers to whether a user can figure out what an object is and how to use it without considerable effort. Discoverability answers the question, “How do I use this thing?” Understanding, in this context, refers to the user’s ability to make meaning out of the discoverable features of the object. Understanding answers the questions, “What is this, and why do I want to use it in the first place?”

Focusing on these factors is a hallmark of human-centered design, which is a design philosophy that flips the traditional design process on its head by focusing on human needs and behaviors first and designing products to fit those needs, rather than designing a product and hoping that users figure out how to use it.

How Do We Know How to Use an Object?

To design for human needs, we need to understand how people interact with design. There are six design principles that influence how we interact with an object: affordances, signifiers, mapping, feedback, models, and the system image.

Affordances are the finite number of ways in which a user can possibly interact with a given object. They answer the question, “What is this thing for?” For example, chairs typically have a flat surface, which we intuitively recognize as an indicator of support. In other words, the look of a chair suggests that it is for sitting on.

Signifiers are signals that draw the user’s attention to an affordance they may not have intuitively discovered, like a “click here” button on a website or a “push” sign on a door. For designers, signifiers are more important than affordances: The most sophisticated technology is pretty useless if a user can’t find the “on” button.

Mapping uses the position of two objects to communicate the relationship between them. For example, if you see a row of three lights and a panel of three switches, natural mapping would mean the position of the switch corresponds to the position of the light it controls. Mapping is not universal since culture can influence how we think about direction and spatial relationships.

Feedback is a sensory signal that alerts the user that what they’re doing to an object is having some effect. Feedback can tell us when something is working as expected, but more importantly, when it’s not working how we want. In a car, a dashboard alert light or the sound of squeaking brakes are both sources of feedback that let us know something is wrong.

Models (also called conceptual models or mental models) are mental images of an object and how it works based on affordances, signifiers, mapping, and feedback. Mental models stem from the universal instinct to organize information into cohesive stories. But these stories are not always accurate, and false mental models of a design can cause confusion.

The System Image is the sum total of the information we have about an object, including both its physical properties and information from user manuals, product websites, or past experience. The system image is the only way designers can communicate their model of how something works to the user.

Cognition, Emotion, and Behavior

The way we think clearly influences how we interact with objects, but designers often underestimate the role of psychology in user interaction.

The Seven Stages of Action

When we interact with an object, we face two “gulfs” of understanding: the Gulf of Execution (figuring out what an object does and how to use it) and the Gulf of Evaluation (evaluating the results after using the object). To cross these gulfs, we use a seven-stage action cycle. This action cycle happens unconsciously unless we’re interacting with an unfamiliar or confusing object. Each stage answers a particular question.

  1. Goal: What result do I want to achieve?
  2. Plan: What options do I have for achieving my goal?
  3. Specify: Which of these options will I choose?
  4. Perform: How do I execute my plan?
  5. Perceive: What happened when I did that?
  6. Interpret: What does that result mean?
  7. Compare: Did I reach my goal?

Let’s use grocery shopping as an example to see the seven steps in action. In that case, they may look something like this:

  1. Goal: I need to go grocery shopping.
  2. Plan: Should I drive to the store or take the bus?
  3. Specify: I think I’ll drive.
  4. Perform: I’ll follow the usual route to the store instead of a new one.
  5. Perceive: Everything went smoothly and I’ve parked at the store.
  6. Interpret: This means I can now go inside and shop.
  7. Compare: I’ve met my goal of going grocery shopping!
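As a toy illustration (our own sketch, not code from the book), the seven stages can be modeled as a single pass through a function, with each stage a small step. Stages 1 through 4 cross the Gulf of Execution; stages 5 through 7 cross the Gulf of Evaluation. All names here are hypothetical:

```python
# Toy model of the seven-stage action cycle (our illustration).
# Stages 1-4 bridge the Gulf of Execution; stages 5-7 the Gulf of Evaluation.

def seven_stage_cycle(goal, options, choose, perform, perceive, interpret):
    """Run one pass of the cycle and report whether the goal was met."""
    plan = options(goal)            # 2. Plan: what options do I have?
    choice = choose(plan)           # 3. Specify: which option will I choose?
    outcome = perform(choice)       # 4. Perform: execute the plan
    perceived = perceive(outcome)   # 5. Perceive: what happened?
    meaning = interpret(perceived)  # 6. Interpret: what does that mean?
    return meaning == goal          # 7. Compare: did I reach my goal?

# The grocery trip, with each stage as a tiny stand-in function:
reached = seven_stage_cycle(
    goal="at the store",                                   # 1. Goal
    options=lambda g: ["drive", "take the bus"],
    choose=lambda plan: plan[0],                           # "I think I'll drive"
    perform=lambda choice: "parked at the store",
    perceive=lambda outcome: outcome,
    interpret=lambda p: "at the store" if "store" in p else "lost",
)
print(reached)  # True: the goal was met
```

In a real interaction the cycle repeats, feeding the comparison at stage 7 back into a new goal, which is the point developed next.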

This cycle will play out multiple times for any given action because most behaviors have an overall goal (like “go grocery shopping”) composed of several subgoals (like “start the car”). Determining the overall goal is important because it gives designers a better idea of what users really want. To find it, we use root cause analysis: continually asking “why?” about a behavior until there is no further answer. The root cause of a behavior might be internal and goal-driven (studying for a test) or external and event-driven (putting in earplugs in a noisy environment).
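Root cause analysis can be sketched as a loop that follows a chain of why-answers until the chain runs out. This is our own minimal sketch, and the chain of answers below is hypothetical:

```python
# A minimal sketch (our illustration) of root cause analysis:
# keep asking "why?" until there is no further answer.

def root_cause(behavior, why):
    """Follow the chain of why-answers; return the last answer found."""
    cause = behavior
    while why(cause) is not None:
        cause = why(cause)
    return cause

# A hypothetical chain of answers for "studying for a test":
answers = {
    "studying for a test": "I want a good grade",
    "I want a good grade": "I want to pass the course",
    "I want to pass the course": "I want my degree",
}
# dict.get returns None for a missing key, which ends the chain.
print(root_cause("studying for a test", answers.get))  # I want my degree
```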

Conscious vs. Subconscious Processing

People react to technology in their lives through thoughts and emotions, and good design can capitalize on those reactions. To do that well, designers need an accurate working model of how the brain processes information.

There are two types of cognitive processing: conscious and subconscious. Conscious processing is deliberate: It’s where we compare options, predict possible outcomes, and come up with new ideas. Subconscious processing is automatic and works in generalizations. It’s the home of pattern recognition and snap judgments.

We can think of these differences in terms of three levels of processing: visceral (automatic, instinctive reactions), behavioral (learned routines we carry out without conscious attention), and reflective (slow, conscious reasoning and self-evaluation).

Memory

Memory also impacts our interactions with objects. There are two kinds of knowledge: “knowledge in the head” (memory) and “knowledge in the world,” which is anything we don’t have to remember because it’s contained in the environment (like the letters printed on keyboard keys). Putting knowledge into the world frees up space in our memories and makes it easier to use an object.

The knowledge we keep in our heads is only as precise as the environment requires. Most people won’t notice if you change the silhouette on an American penny because we only need to remember the color and size to tell a penny apart from other coins. We’re more likely to notice changes to the portrait on an American dollar bill, because we’re used to relying on that image to help us tell bills apart (since they are identical in size, shape, and color).

Memories can be stored either short- or long-term. Short-term memory is the automatic storage of recent information. We can store about five to seven items in short-term memory at a time, but if we lose focus, those memories quickly disappear. This is important for design: Any design that requires the user to remember something is likely to cause errors.

Long-term memory isn’t limited by time or number of items, but memories are stored subjectively. Meaningful things are easy to remember; arbitrary things are not. To remember arbitrary things, we need to impose our own meaning through mnemonics or approximate mental models. Designers can make this easier for users by making arbitrary information map onto existing mental models (for example, think of the way Apple has kept the location of the power and volume buttons relatively the same with each new version of the iPhone).

The Error of “Human Error”

Industry professionals attribute between 75 and 95 percent of industrial accidents to human error. This number is misleading: What we think of as “human errors” are more likely the outcomes of a system that has been unintentionally designed to create error, rather than prevent it.

Detecting Errors

Errors can be divided into “slips” (errors of doing) and “mistakes” (errors of thinking). Accidentally putting salt instead of sugar in your coffee is a slip—your thinking was correct, but the action went awry. Pressing the wrong button on a new remote control is a mistake—you carried out the action fine, but your thought about the button’s function was wrong.

Most everyday errors are slips, since they happen during the subconscious transition from thinking to doing. Slips happen more frequently to experts than beginners, since beginners are consciously thinking through each step of a task. On the other hand, mistakes are more likely to happen in brand new scenarios where we have no prior experience to pull from, or even familiar scenarios if we misread the situation.

Causes of Error

One major cause of error is that our technology is engineered for “perfect” humans who never lose focus, get tired, forget information, or get interrupted. Unfortunately, these humans don’t exist. Interruptions in particular are a major source of error, especially in high-risk environments like medicine and aviation.

Social and economic pressures also cause error. The larger the system, the more expensive it is to shut down to investigate and fix errors. As a result, people overlook errors and make questionable decisions to save time and money. If conditions line up in a certain way, what starts as a small error can escalate into disastrous consequences.

Preventing Errors

Good design can minimize errors in many ways. One approach is resilience engineering, which focuses on building robust systems where error is expected and prepared for in advance. There are three main tenets of resilience engineering.

  1. Consider all the systems involved in product development (including social systems).
  2. Test under real-life conditions, even if it means shutting down parts of a system.
  3. Test continuously, not as a means to an end, since situations are always changing.

Constraints

Designers can also use constraints, which limit the ways users can interact with an object. There are four main types of constraints: physical, cultural, semantic, and logical.

Physical constraints are physical qualities of an object that limit the ways it can interact with users or other objects. The shape and size of a key is a physical constraint that determines the types of locks the key can fit into. Childproof caps on medicine bottles are physical constraints that limit the type of users who can open the bottle.

Cultural constraints are the “rules” of society that help us understand how to interact with our environment. For example, when we see a traditional doorknob, we expect that whatever surface it’s attached to is a door that can be opened. This isn’t caused by the design of the doorknob, but by the cultural convention that says “knobs open doors.”

When these agreements about how things are done are codified into law or official literature, they become standards. We rely on standards when design alone isn’t enough to make sure everyone knows the “rules” of a situation (for example, the layout of numbers on an analog clock is standardized so that we can read any clock, anywhere in the world).

Although they’re less common, semantic and logical constraints are still important. Semantic constraints dictate whether information is meaningful. This is why we can ignore streetlights while driving, but still notice brake lights—we’ve assigned meaning to brake lights (“stop!”), so we know to pay attention and react.

Logical constraints make use of fundamental logic (like process of elimination) to guide behavior. For example, if you take apart the plumbing beneath a sink drain to fix a leak, then discover an extra part leftover after you’ve reassembled the pipes, you know you’ve done something wrong because, logically, all the parts that came out should have gone back in.

The Design Thinking Process

“Design thinking” is the process of examining a situation to discover the root problem, exploring possible solutions to that problem, testing those solutions, and making improvements based on those tests. This process is iterative, which means it is repeated as many times as necessary, each time with slight improvements based on previous iterations.

Design thinking involves two tasks: finding the right problem and finding the right solution. Designers are often hired to solve symptoms, but good designers dig deeper to find the underlying problem before coming up with solutions. To do this, designers run through four stages: observation, idea generation, prototyping, and testing. This process is repeated as many times as necessary to develop the final product.

The observation phase involves gathering information on the people who will use the new design. This is different from market research: Designers want to know what people need and how they might use certain products, while marketers want to know which groups of people are most likely to buy the product.

After observation comes the idea generation phase, where designers brainstorm solutions to the problem. The goal is to generate as many ideas as possible without censoring “silly” ideas, since they might spark valuable discussion. Designers will then create prototypes of the most promising ideas using things like sketches and cardboard models.

Once a prototype is ready, the testing phase begins: Members of the target user group are asked to try out the prototype and give their feedback. Designers then repeat the entire process based on what they learned from the first round of testing. The iterative design thinking process emphasizes testing in small batches with refinement in between, rather than waiting until the final product and testing it once with a much larger group.
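The observe–ideate–prototype–test loop can be sketched schematically. This is our own toy model (none of these function names come from the book); the loop repeats until testers report no remaining problems or a round limit is hit:

```python
# A toy sketch (our illustration) of the iterative design thinking cycle:
# observe, generate ideas, prototype, test, and repeat.

def design_cycle(observe, ideate, prototype, test, max_rounds=5):
    """Repeat the four stages until testing reports no problems."""
    findings = observe()                 # observation phase
    model = None
    for round_num in range(1, max_rounds + 1):
        ideas = ideate(findings)         # idea generation phase
        model = prototype(ideas)         # prototyping phase
        findings = test(model)           # testing phase: feedback for next round
        if not findings:                 # no remaining problems
            return model, round_num
    return model, max_rounds

# A hypothetical run: one round satisfies the (very easygoing) toy testers.
model, rounds = design_cycle(
    observe=lambda: ["hard to grip", "too heavy"],
    ideate=lambda findings: [f"fix: {p}" for p in findings],
    prototype=lambda ideas: {"applied": ideas},
    test=lambda m: [],
)
print(rounds)  # 1
```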

Design Thinking in the Real World

In reality, the design process often doesn’t live up to the above ideal. Business pressures are the primary culprit here, since a well-designed product will still fail if it’s over budget and past deadlines. Product development team dynamics are also a challenge. The best teams are multidisciplinary, combining unique knowledge from different fields. However, each team member usually thinks their discipline is the most important.

Diversity among users can also impact design. For users with disabilities, designers can turn to a universal design approach. Universal design creates products that are usable by the widest range of people by designing for the highest need, not the average need. Adopting a universal design approach changes how designers choose the types of people and environments to observe as well as the features they focus on most in the prototyping and testing phases.

This approach is “universal” because if a product, environment, or service is designed with disability access in mind, it will typically also be usable for those without disabilities. For example, curb cuts were originally designed for wheelchair users but are also enormously helpful for anyone pushing a stroller or lugging a suitcase.

Technological Innovation

Economic pressures drive innovation. This can take the form of “featuritis,” or the tendency to add more and more features to a product to keep up with competitors. These features ultimately degrade the design quality of the original product. Rather than winning over customers with new features, it’s better to do one thing better than anyone else on the market.

Real quality innovation can be either radical or incremental. Radical innovation involves high-risk, game-changing ideas while incremental innovation makes small improvements to existing products over time. The invention of the automobile was radical—all the small improvements that led to cars as we know them today happened incrementally.

The Future of Technology

Rapid technological innovation raises questions about the future of user experience. The way we interact with objects around us will certainly change in response to new technologies, and cultural conventions will change to reflect that. But human needs will remain the same. For example, the keyboard has evolved from mechanical typewriters to computer keyboards to touchscreen versions, but the need to record written information has stayed the same. In other words, human needs won’t change, but the way they’re satisfied will.

Some people fear that the rise of smart technology is making humans less intelligent because we can delegate even the most basic tasks to machines—and if those machines fail, we are totally helpless. It’s true that some traditional skills are becoming obsolete thanks to new technology, but that process ultimately makes us smarter. The energy once spent on building a fire every time we needed heat or light, or on working through long division for simple calculations, can be channeled into higher-level pursuits. Our intelligence hasn’t changed, only the tasks we apply it to. The key is using technology to do the jobs technology can and should do.

Increased Creation and Consumption

Technological innovation has made it easier than ever for anyone with a computer to create and publish new media. While amateur content creation has gotten easier, creating professional content has gotten harder and more expensive. The accessibility of smart technology levels the playing field, but makes it much harder to find quality, fact-checked content.

For manufacturers, new technologies present a different challenge. The need to entice buyers is a fundamental part of business, because a product that doesn’t sell is a failure, no matter how well designed it is. But while services like healthcare and food distribution are self-sustaining (because there will always be a need for them), durable physical goods are not. If everyone who needs a particular product purchases one, there’s no one left to sell it to. If everyone already owns a smartphone, how do you convince them to buy the new and improved model?

One way manufacturers get around this is through planned obsolescence, the practice of designing products that will break down after a certain amount of time and need to be replaced. This creates a cycle of consumption: buy something, use it until it breaks, throw it away, and buy another. While this cycle is good for business, the waste it generates is horrible for the environment. Thankfully, the combination of new technologies and a growing cultural awareness of sustainability issues is creating a new paradigm. The future of technology involves products designed with both the user and the environment in mind.

Introduction

Every man-made object, environment, or program in our world is designed. From doorknobs to smartphone apps, design pervades our lives to the point that it often becomes completely invisible. When we struggle with one of these designs, we assume that our difficulties are our own fault, or that we’re just not smart enough to figure it out. But that blame is misplaced. More often than not, the true culprit in cases of “human error” is actually bad design.

Traditionally, design is described in terms of form and function—how an object looks, and how it works. But this description totally ignores the question of how users interact with the design, and that oversight is a common reason why products fail. An object can be extremely useful and visually beautiful, but if the average user can’t figure out how to work it, it’s ultimately useless.

The first edition of this book was released in 1988 under the title The Psychology of Everyday Things. It was considered controversial at the time, and engineers and designers resisted the idea that understanding psychology was important for design. This revised edition was released in 2013 and contains updated examples and a few added principles, but the main ideas are the same. The author, Don Norman, is both a cognitive psychologist and an engineer. He was one of the first people to formally point out the overlap between the two fields, and to highlight the importance of considering user experience.

User interaction is a two-way relationship between a person and an object. In the first two chapters, we’ll explore the ways that relationship is impacted both by the design of the object and by the user’s thoughts and emotions. Chapter 3 explores the role of human memory for product design, while Chapters 4 and 5 discuss specific ways that good design can guide user experience and prevent dangerous errors. In the final two chapters, we’ll learn more about the ideal product design process and the way real-world pressures force designers to compromise that ideal. The summary concludes with a look to the future of user interaction in an increasingly digital world.

Chapter 1: How the Design of Physical Objects Shapes Our Lives

This chapter lays the foundation for the rest of the book by illustrating how the design of physical objects has a much bigger impact on our lives than most people assume. Poorly designed objects can cause frustration, time delays, and even injury.

What Does “Good Design” Look Like?

At its core, design is any human influence on the physical world. We tend to think of this in terms of buildings, fashion, or products, but design is much broader than just a few fields. Every object or environment that has been created or modified by humans is designed. This applies to everything from ancient architectural marvels to the layout of clothes in your closet.

As technology evolves, new fields of design pop up to focus on specific problems. User interaction is important in every subfield, but it’s most often talked about in industrial, interaction, and experience design.

Each of these fields focuses on redefining “good” design in terms of user experience. Two important principles guiding this definition are discoverability and understanding. Discoverability refers to whether a user can figure out what an object is and how to use it without considerable effort. Discoverability answers the question, “How do I use this thing?”

Understanding, in this context, refers to the user’s ability to make meaning out of the discoverable features of the object. Understanding answers the questions, “What is this, and why do I want to use it in the first place?” On a normal door, handles and hardware indicate where to push or pull—but they also help us understand what this object is (a door) and what it’s used for (opening and closing). In other words, good design has to consider not only the form and function of a product, but also the experience of interacting with that product.

Why Good Designers Make Bad Products

If interactions are such a crucial part of good design, why do designers so often get it so wrong? There are two primary reasons for this.

  1. Traditionally, the objects and technology we interact with on a daily basis are created by engineers, who are typically logical thinkers who have been trained to focus only on function. Their goal is to create a superior product—and because they understand how to use that product, they often assume others will understand, too. In other words, engineers create products under the false assumption that people perform like machines—they always act logically, aren’t influenced by emotion, and rarely make errors.
  2. Engineers and designers typically don’t have the ultimate say in all decisions about a product. They’re limited by the budget set by the company or client, by the logistical capabilities of the manufacturer, and by the needs of the marketing team. The final product must be not only well-designed, but also possible to produce (at scale and within budget) and easy to sell.

Human-Centered Design

One solution to this problem is human-centered design. Human-centered design is not a subfield like industrial or interaction design. It is a design philosophy that can be applied in any design specialization. The goal of human-centered design is to flip the traditional design process on its head by focusing on human needs and behaviors first, and designing products to fit those needs, rather than designing a product and hoping that users figure out how to use it.

Let’s use a simple fork as an example. The traditional design process would most likely begin with a designer thinking, “I’d like to design a new kind of fork.” She would then brainstorm new versions of the fork, create sketches and prototypes of those ideas, and tweak those prototypes until she was happy with the finished product.

Instead of a concrete product idea, a human-centered design approach would begin with a set of questions: “What tools do people use to eat food? What problems do they run into with those tools? How can they be improved?” The designer would then observe different types of people eating different types of food, conduct interviews with people about their preferred cutlery, and research the history of eating utensils. She would identify the main problems in the current technology and only then begin to brainstorm ideas for how to address those needs.

The Speed of Technological Development

Modern technology evolves faster than ever before. While previous generations may have had one phone per household that was updated only after decades of use, we now have personal cell phones, with new and improved models being released twice a year. In contrast, design practices and principles evolve much more slowly. This means that technology is outpacing our ability to effectively interact with it.

This creates a paradox: Technology simplifies our lives, but the more advanced technology becomes, the more difficult it is to learn and operate, which adds complication. We see this when users buy a new, state-of-the-art television, only to be overwhelmed by the endless remote control buttons and ultimately learn to use only one or two features.

How Do We Know How to Use an Object?

There are six design principles that influence how we interact with an object: affordances, signifiers, mapping, feedback, models, and the system image. Each of these principles may be more or less relevant than the others for a specific object, but they are always present to some degree.

Affordances

The term “affordance” refers to the relationship between an object and a user. Affordances are the finite number of ways in which a user can possibly interact with a given object. They answer the question, “What is this thing for?”

For example, think of a chair. Chairs typically have a flat surface, which we intuitively recognize as an indicator of support, either for a person or an object. In other words, the look of a chair suggests that it is for sitting on, or possibly resting objects on.

Some affordances are obvious from the appearance of the object itself, like the flat surface of the chair. Other affordances may be less intuitive or even completely hidden. For example, if the chair were light enough (and the user strong enough), throwing it across the room may be another affordance of the chair. Similarly, if the chair had a well-hidden secret compartment, hiding objects would be an affordance of the chair, regardless of whether the user ever discovers the compartment. The key is that it is a possible interaction.

Affordances can be deliberately designed—by nature, a chair is designed for sitting, a coat rack is designed to hold coats, and a stair railing is designed to prevent falls. But affordances can also arise completely by accident. The chair could also be used as a step stool, the coat rack as a child’s climbing gym, or the railing as a bookshelf. These aren’t the uses the designer had in mind, but they are equally valid affordances.

However, affordances are not merely properties of the object itself. Instead, they describe the relationship between the object and the user. This means that without a user to interact with, an object has no affordances. It also means that affordances for the same object can vary between different users. Think back to the chair example. If the user is a toddler, crawling under might be another affordance of the chair. But if the user is a large adult, crawling under is not a possible interaction between the user and the chair. The chair remains the same, but the affordance changes based on the different characteristics of the user.

Signifiers

The idea of hidden affordances highlights the need for signifiers. A signifier is a signal that draws the user’s attention to an affordance they may not have intuitively discovered. In a digital context, a “click here” button or flashing icon are possible signifiers. In the example below, the “push” sign is a signifier. It contrasts with the perceived affordance of the handles (pulling), which may cause confusion.

[Image: a door whose handles suggest pulling, marked with a “push” sign]

Affordances and signifiers are easy to mix up, and even seasoned designers sometimes use one word when they really mean the other. The key difference is that affordances describe the possible interactions between object and user, whereas signifiers are a way of advertising those affordances (for example, if you take the “push” sign off a door, the door will still open when pushed. You’ve removed the signifier, but not the affordance). For designers, signifiers are more important than affordances. The most sophisticated technology is pretty useless if a user can’t find the “on” button.

Mapping

For some simple objects or interfaces, signifiers alone will give the user enough information to use the object successfully. However, more complex objects might also require the use of mapping in order to be usable. Mapping uses the position of two objects to communicate the relationship between them. It’s the simplest way to show the user which controls correspond to which affordances.

For example, picture the knobs on a stovetop. How do you know which knob operates which burner? If the stove is well-designed, the arrangement of the knobs will map onto the arrangement of the burners (typically a square). In that case, if you want to turn on the bottom left burner, you intuitively reach for the bottom left knob.

This stovetop example is notorious among designers because effective mapping of knobs to burners is so rare. When most of us picture a stovetop and its controls, we picture the burners arranged in a square, but the knobs arranged in a line. In this setup, the user has to invest far more time and mental energy to figure out which knob controls which burner, and may have to resort to trial and error.
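The difference between the two layouts can be made concrete with a toy sketch (our own illustration, not from the book). With natural mapping, the knob’s position is itself the answer; with a linear row of knobs, the correspondence is arbitrary and must be memorized:

```python
# A toy illustration of natural vs. arbitrary mapping on a stovetop.

# Burners arranged in a square, named by position:
burners = {"front-left", "front-right", "back-left", "back-right"}

# Natural mapping: knobs share the burners' spatial layout, so the
# knob's position *is* the burner's position -- no labels needed.
natural_knobs = {pos: pos for pos in burners}

# Linear mapping: four knobs in a row. Which knob controls which burner
# is arbitrary (this particular assignment is hypothetical), so the user
# must memorize the table or resort to trial and error.
linear_knobs = {1: "back-left", 2: "front-left", 3: "front-right", 4: "back-right"}

print(natural_knobs["front-left"])  # front-left: the position alone tells you
print(linear_knobs[1])              # back-left: nothing about "knob 1" suggests this
```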

Feedback

The next clue for interacting with an object is feedback. If you’ve followed signifiers to an affordance and used mapping to figure out which control you need, how do you know whether you got it right? Feedback is a sensory signal that alerts the user that what they’re doing to an object is having some effect. Information that results from a user’s action is called “feedback”; information that shows a user how to act in the first place is called “feedforward.” Feedforward guides users through the execution phase, while feedback guides them through evaluation.

Our sensory systems automatically provide basic feedback about our environment through all of our senses. We automatically process the look, feel, sound, and scent of objects around us. However, for more complex objects, feedback signals may not be automatic. In that case, designers can deliberately add in sources of feedback (like a small green bulb that lights up when a machine is “on”).

Feedback tells us when something works, but more importantly, feedback tells us when an object is not working how we need it to. In a car, a dashboard alert light and the sound of squeaking brakes are both sources of feedback that let us know something is wrong. Without these signals, we might not recognize a major problem until it’s too late. This is especially important when the object’s function is hidden from view.

But too much feedback can also cause problems. Think of a GPS system that announces every single cross street. By the time you reach the street you’re looking for, you’ve long tuned out the constant updates and are likely to miss it. The same is true of smoke alarms that can’t be easily turned off—once you’re aware of the emergency, the constant, ear-splitting beeping makes it more difficult to react appropriately.

Even the smallest decisions on the design of feedback can have enormous consequences. One notorious example of this is the Three Mile Island incident, in which a nuclear reactor at a plant in Pennsylvania suffered a partial meltdown and very nearly resulted in a disastrous radiation leak. The cause of the incident was determined to be human error, but further investigation traced the root of the problem to a single indicator light. Counterintuitively, the light would turn on when an important coolant valve was closed, and turn off when the valve was open. The confusion caused by this tiny design decision set off a chain reaction that allowed a small issue to escalate into a full-scale nuclear incident.

Clearly, feedback is important, and too much or too little can cause problems. But the type of feedback and the way it’s presented is also crucial. A car’s turn signal flashes on the side of the car that matches the direction the driver intends to turn. If the opposite side flashes, or both at the same time, that gives you no useful information about what the car in front of you is about to do.

Models

So far, we know that affordances tell us what an object is for, signifiers tell us what and where those affordances are, mapping helps us find the right controls to engage with those affordances, and feedback tells us whether everything is working right. All of this information produces a model (also called a mental model or conceptual model). A model is a mental picture of an object and how it works.

Example: Refrigerator Controls

The design of an object itself suggests a certain conceptual model. For example, some refrigerators have two temperature control knobs, one labeled “freezer” and the other “refrigerator.” These are two separate knobs, which implies that the temperature of each section is controlled completely independently.

[Image: refrigerator control panel with separate “freezer” and “refrigerator” knobs]

Seeing this, you’d probably assume that the refrigerator and the freezer each contain an independent temperature sensor that controls an independent cooling mechanism. This mental model would look something like this:

[Image: the implied mental model, with two independent sensors and cooling units]

However, in reality, this fridge contains only one cooling unit. The knobs control a central valve that adjusts how much cold air is blown into each section of the refrigerator. This means that adjusting one knob will change the temperature in both the freezer and the refrigerator. So, if the freezer were too cold, adjusting the freezer knob would warm that section up, but would also funnel more cold air into the refrigerator below, cooling that section even further. This model looks more like this:

[Image: the actual system, with a single cooling unit and a valve splitting the cold air]

In this example, an inaccurate mental model of the cooling system is likely to result in frustration (and possibly some spoiled food). But it’s not the user’s fault that their model is inaccurate—the dual knobs would lead almost anyone to the same faulty conclusion. The design itself is the problem here, not the user’s understanding of it.
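The mismatch between the two mental models can be made concrete with a toy simulation. All of the numbers and the linear cooling model below are invented; the only point is that with one cooling unit and a shared valve, each knob affects both compartments:

```python
# A toy simulation (invented numbers) of the single-cooling-unit fridge.
# One compressor produces a fixed amount of cooling; the "freezer" and
# "refrigerator" knobs together set a valve that splits that supply
# between the two compartments, so turning one knob changes BOTH.

TOTAL_COOLING = 10.0  # arbitrary units of cold air per cycle

def temperatures(freezer_knob, fridge_knob):
    """Return (freezer_temp, fridge_temp) for knob settings 1-5.

    A higher knob setting requests more cooling for that compartment,
    but the valve can only redistribute the fixed supply.
    """
    share = freezer_knob / (freezer_knob + fridge_knob)
    freezer_cooling = TOTAL_COOLING * share
    fridge_cooling = TOTAL_COOLING * (1 - share)
    # Made-up linear model: more cold air -> lower temperature.
    return (10 - 4 * freezer_cooling, 15 - 3 * fridge_cooling)

before = temperatures(freezer_knob=3, fridge_knob=3)
after = temperatures(freezer_knob=2, fridge_knob=3)  # "warm up the freezer"
print(before, after)
```

Turning the freezer knob down does warm the freezer, but it also diverts the surplus cold air into the refrigerator, cooling it further: exactly the coupled behavior that the two-knob design conceals from the user.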

The System Image

Signifiers, mapping, feedback, and any obvious affordances help us understand how to use an object. But we also have access to even more information that can guide our interactions with that object, like user manuals, product websites, and our own past experience with similar objects. Taken together, the sum total of the information we have about an object is the system image. The system image is ultimately what determines how a user interacts with an object. For designers, this means the object must speak for itself: without the designer in the room to explain it, users still need to be able to figure out how to use it.

The system image gives us another way to think about why designers sometimes make confusing or frustrating things. Engineers and designers expect that the user’s mental model of the object will perfectly match their own mental model. But a designer may have spent months working on a particular design, whereas the user has zero previous experience with that object. The system image is the only way designers can communicate their model of how something works to the user.

Exercise: Reexamine Everyday Objects

We typically don’t think about the design of things around us unless they present a problem. This exercise will help you think critically about items you use every day.

Chapter 2.1: Conscious and Subconscious Processing

This chapter gives an overview of the conscious and subconscious mental processes that determine how we perceive, interpret, and respond to objects in our environment. Traditionally, studying human cognition, emotion, and behavior is the domain of psychologists, and many designers underestimate the importance of understanding human behavior. They assume that their experience with their own thoughts and emotions is more than enough to predict how other people will think and feel in a given situation. But because the bulk of human cognitive processing happens on an unconscious level, our own experiences of our conscious thoughts only show part of the picture.

Every designed object or system will ultimately be used by people. A product that is flawlessly engineered but confusing to use is ultimately a failure. In other words, understanding the thoughts and emotions that underlie our interactions with technology has important implications for design.

Evaluating Behavior With The Seven Stages of Action

When we interact with a new object, we have two problems to solve: “How do I use this?” and “Did that work?” Author Don Norman calls these “the gulf of execution” and “the gulf of evaluation.”

The Seven Stages of Action

The gulfs above are important because they represent the two components of an action: execution and evaluation. We can break these down even further for a total of seven stages that take us from the impetus of an action all the way through to successful completion. (If seven distinct steps seems excessive, remember that for most actions in our daily lives, these stages play out completely unconsciously. We only become aware of them for tasks that are unfamiliar or confusing.)

A great example of this is driving a car. Experienced drivers make turns and merge into traffic without much conscious thought. The stages of action have become automatic through repetition, only requiring thought when something novel comes up, like construction blocking a particular road. New drivers, on the other hand, consciously think through every step. Where an experienced driver might think “I need to turn left,” a new driver would think “I need to slow down, check my mirrors, check for oncoming traffic, turn the wheel, hit the gas pedal with just the right amount of force, then turn the wheel back again.”

The seven stages of action are: goal, plan, specify, perform, perceive, interpret, compare. These steps carry the user across the gulfs of both execution and evaluation. The first stage, “goal,” sets the standard that will be used later to determine if the action was successful. The next three stages (plan, specify, perform) bridge the Gulf of Execution, while the final three stages (perceive, interpret, compare) bridge the Gulf of Evaluation. Each of these stages answers a particular question:

  1. Goal: What result do I want to achieve?
  2. Plan: What options do I have for achieving my goal?
  3. Specify: Which of these options will I choose?
  4. Perform: How do I execute my plan?
  5. Perceive: What happened when I did that?
  6. Interpret: What does that result mean?
  7. Compare: Did I reach my goal?

[Image: diagram of the seven stages of action]

Let’s use the driving example again to see the seven steps in action. In that case, they may look something like this:

  1. Goal: I need to go grocery shopping.
  2. Plan: Should I drive to the store or take the bus?
  3. Specify: I think I’ll drive.
  4. Perform: I’ll follow the usual route to the store instead of a new one.
  5. Perceive: Everything went smoothly and I’ve parked at the store.
  6. Interpret: This means I can now go inside and shop.
  7. Compare: I’ve met my goal of going grocery shopping!

In the example above, the action was successful in achieving the goal. However, the goal of going grocery shopping is part of an overall system that includes both larger and smaller goals. For example, if I’m making a particular recipe but don’t have an ingredient I need, going grocery shopping would become a subgoal of my overall goal of making that recipe. Grocery shopping itself would have multiple subgoals: locating each ingredient in the store, loading the groceries back into the car, and so on.
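The grocery-shopping walkthrough can be sketched as a simple control loop. Everything below (the `world` dictionary, the helper functions) is a hypothetical stand-in for a real environment, not anything from the book:

```python
# A minimal sketch of the seven stages of action as a control loop.
# The "world" and the helper functions are invented stand-ins.

world = {"location": "home"}

def options(goal):
    return ["drive to the store", "take the bus"]   # candidate plans

def choose(plan):
    return plan[0]                                   # pick one option

def perform(action):
    world["location"] = "store"                      # act on the world

def perceive():
    return world["location"]                         # observe the result

def interpret(result):
    return "at the store" if result == "store" else "not there yet"

def seven_stages(goal):
    plan = options(goal)           # 2. Plan: what options do I have?
    action = choose(plan)          # 3. Specify: which option will I choose?
    perform(action)                # 4. Perform: execute the plan
    result = perceive()            # 5. Perceive: what happened?
    meaning = interpret(result)    # 6. Interpret: what does that result mean?
    return meaning == goal         # 7. Compare: did I reach my goal?

print(seven_stages("at the store"))  # True: the goal was achieved
```

The final comparison against the original goal is what closes the loop; if it fails, the cycle starts again with a revised plan, which is how subgoals like “find each ingredient” nest inside larger goals.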

Conscious vs. Subconscious Processing

This section gives an overview of the different ways people cognitively process thoughts and emotions. This subject is almost never included in traditional design or engineering training, but it’s important to understand because it is the core of user experience. People react to technology in their lives through thoughts and emotions, and good design can capitalize on those reactions. To do that well, designers need an accurate working model of how the human brain processes information.

Generally speaking, people are only consciously aware of a small portion of their thoughts and emotions. The rest of our opinions, decisions, emotions, and reactions happen without any conscious input. When we learn a new skill, we need conscious focus at first, but once we fully master the skill and make it a frequent habit, performing requires less and less conscious effort until it is fully subconscious. The process of mastering a skill to the point that it can be executed subconsciously is called “overlearning.” Think of the new driver compared to the experienced driver—the new driver is actively concentrating, while the experienced driver can safely carry on a conversation or sing along to the radio.

(Overlearning applies to complex skills like driving, walking, and learning a language, but can also apply to factual information. For example, if you’re filling out a form and are asked for your phone number, you’ll most likely be able to answer without much effort. But if you’re asked for the address of the second house you ever lived in, it will take you much longer to come up with the answer.)

Conscious and subconscious processing each have important strengths. Conscious processing is what sets humans apart from animals. It allows us to compare options, predict possible outcomes, and come up with new ideas. Conscious processing happens when we deliberately choose to learn or consider something new.

Subconscious processing, on the other hand, happens automatically. This is how we make connections between seemingly unrelated events in our lives, or jump to premature conclusions based on our past experiences. Subconscious processing works in generalizations—it automatically predicts that new experiences will follow the same pattern as similar previous experiences.

Both conscious and subconscious processing are tied to emotions. Emotions trigger biochemical reactions that prompt the brain to focus more on one type of processing than the other. Generally speaking, negative emotions like fear shut down conscious processing and divert resources to subconscious survival instincts. On the other hand, positive, calm emotions allow for the use of conscious processes like creativity, since the brain is not responding to a perceived threat. This is why we react to strong feelings of fear with a fight, flight, or freeze response. All three of these possible responses are completely subconscious—we can’t consciously choose one, and we have very little conscious control over them once they appear.

The strength of an emotion also matters: Strong emotions bias the brain toward subconscious processes, while more mild feelings leave room for conscious thought to intervene.

Three Levels of Processing

(Shortform note: This section gives a basic overview of Norman’s way of thinking about the ways humans process thoughts and emotions. For a more detailed look at his thoughts on the subject, see his book Emotional Design.)

The study of human cognition and emotion is an extremely complex area of neuroscience. A simplified model of this process is helpful for understanding the basic ideas and their implications for design. This model divides emotional and cognitive processing into three distinct levels: visceral, behavioral, and reflective. Each of these levels has important implications for design, so understanding them is crucial for designing technology that is both easy to use and enjoyable.

The Visceral Level

The visceral level involves our most primitive reflexes, like startling at a loud noise or flinching when something flies towards us unexpectedly. This happens in the lowest part of our brains, the same area responsible for basic functions like breathing and balancing upright.

The visceral level controls the fight, flight, and freeze responses by signaling the muscles and heart to behave in particular ways. For example, the bodily signature of a flight response is a racing heartbeat and increased muscle tension. This happens in response to a fearful stimulus, but the process can also work in the opposite direction. If your heart is racing and your muscles are tense from exertion or excitement, you might mistakenly perceive that combination of sensations as a flight response, and become fearful as a result. Our emotions and perceptions influence our bodies, and vice versa.

Processing at the visceral level is completely subconscious. It can’t be influenced by learning, except for basic processes like adaptation (for example, if you work in an environment with frequent bursts of loud noise, your automatic startle response to that stimulus may decrease over time as your brain learns that sudden loud noises don’t always mean danger).

This level of processing is especially important for designers to understand. Visceral reactions can have a powerful influence on how users respond to an object. An otherwise well-designed product can fail if it provokes a negative visceral response in the user (like a sudden, blaring alarm or an unpleasant odor).

The Behavioral Level

The behavioral level also primarily deals with subconscious processing. This might seem counterintuitive, since we typically choose our behaviors and can observe them consciously. But the behavioral level of processing is not concerned with why we act the way we do, but how.

For example, if you want to speak, you have to control your lips, tongue, and jaw in very specific ways to produce the right sounds. You might consciously choose what you want to say, but most of us don’t actively will our mouths to make certain shapes. The same applies to wiggling your fingers or opening a drawer—we’re not conscious of the neurological processes involved in those actions. We decide what to do, and our brains subconsciously forward the message to the correct body parts.

(Unlike the visceral level, responses at the behavioral level can be learned and changed. This is where overlearning comes in—when we practice something over and over until it becomes a habit, we’ve moved that skill from a conscious level to the subconscious behavioral level. Now, when the associated trigger pops up, we carry out that action without any conscious thought. Overlearning is an important factor in understanding human error, which is covered more thoroughly in Chapter 5.)

Behavioral processing also has implications for design. By definition, behavioral responses have a specific expectation attached. If you open your laptop and press the power button, you expect it to turn on. When you turn a doorknob and push, you expect the door to open. These expectations are crucial for designers to understand because they have such a huge impact on emotional responses. When our expectations are not met, we typically experience frustration or disappointment; when they are met or exceeded, we experience satisfaction and pleasure. In turn, these emotions strongly influence how we think and feel about the experience of interacting with a given object.

We often make these associations without realizing it. If your laptop reliably powers on each time you expect it to, you learn to associate the laptop with satisfaction and confirmed expectations. If the door frequently doesn’t open when you expect it to (perhaps because it lacks the necessary signifiers), you associate that type of door with frustration and annoyance.

The most important design tool for managing user expectations is feedback. If an experience defies our expectations, we might feel helpless or confused about how to proceed, ultimately influencing how we think and feel about the experience. Feedback mitigates this damage by explaining what went wrong, allowing users to regain a sense of control. Even better, if feedback gives us information about the problem and how to fix it, we’re much less likely to experience feelings of helplessness or confusion.

The Reflective Level

The reflective level is the level of conscious processing. Where visceral and behavioral processing happens instinctively and immediately, reflective processing is deliberate and therefore much slower. The reflective level allows us to brainstorm, consider alternatives, exercise logic and creativity, examine a new idea, and, as the name implies, reflect back on past experiences.

Emotion plays an important role at this level as well. Where the visceral and behavioral levels deal with subconscious, automatic emotional responses, the reflective level provokes emotional responses based on our own interpretation of an experience. For example, while fear is an automatic visceral response, anxiety about possible future events is a reflective response. Anxiety arises from our ability to predict possible futures based on current trends. But this is the same process that underlies feelings like excitement and anticipation. Our own interpretation of our predictions decides which of these emotions we experience.

Another example of this effect is the difference between guilt and pride. In order for us to feel either of these emotions, we have to believe we’re directly responsible for the outcome of a situation. If we judge the outcome of that situation positively, we’re more likely to feel pride. If we judge the outcome negatively, we’re more likely to feel guilt.

This kind of reflective processing has a strong impact on how we think about design. The act of reflecting involves synthesizing information from the other two levels into a cohesive memory of an experience. While visceral and behavioral processing is concerned only with the present, reflective processing looks back at the past and uses that information to make predictions about the future.

In a practical context, this means that memory is often the most important factor in determining how a user feels about the experience of interacting with a given object. If the object provoked pleasant visceral reactions and met our expectations, we’ll remember the experience positively. In cases where visceral and behavioral reactions conflict, reflective processing determines which of these factors we give more weight to and ultimately decides whether we remember the experience as positive or negative.

Optimizing the Three Levels

The most successful designs address issues at all three of these levels, leaving users with a pleasant memory of the experience and a positive view of the design. Traditionally, designers, artists, and architects are trained to focus on the visceral level, creating aesthetically beautiful projects that provoke an immediate positive response. Engineers, on the other hand, focus on the reflective level, creating projects that function based on logic and higher reasoning.

This is one reason common objects are so often confusing or disappointing to use. The ultramodern, solid glass door described in Chapter 1 might provoke a positive visceral response, but the lack of a clear, logical way to interact with it creates confusion at the reflective level. At the other extreme, medical equipment is often designed purely to perform a specific function. These machines might be incredibly technologically sophisticated, but the confusing or sterile aesthetic can trigger a visceral fear response in patients.

The three levels of processing also map onto the seven stages of action. The second and last of the seven stages (“plan” and “compare”) happen at the reflective level, as they involve consciously setting goals and evaluating results. The “specify” and “interpret” stages happen at the behavioral level, involving a mix of conscious and unconscious processing. The “perform” and “perceive” stages happen immediately before and after the action itself, and are processed subconsciously on the visceral level.

Example: Flow States

To understand the importance of this overlap, let’s look at the concept of flow. The term “flow,” coined by psychologist Mihaly Csikszentmihalyi, refers to a cognitive and emotional state in which a person is completely absorbed in an activity. When in a state of flow, people are so engrossed in an activity that they completely shut out the outside environment and often lose track of time.

Flow states are created at the behavioral level of processing. Remember that the behavioral level involves subconscious expectations—we expect a certain outcome to follow a certain action, and when that doesn’t happen, we tend to get frustrated. People are most likely to be in flow when the task they’re working on is just slightly above their skill level.

When an overly easy task satisfies our expectations immediately, with very little effort, the lack of challenge leaves us feeling bored. On the other hand, an overly difficult task is likely to be so overwhelming that our expectations are repeatedly unmet, so we get frustrated and give up. Tasks that trigger flow states strike the perfect balance between ease and frustration (or between met and unmet expectations). The tension between these two emotions powerfully captures our attention and pulls us into a state of flow.

Although flow is a personal experience and no one design can trigger a flow state for every single user, it’s still a powerful force in shaping user experience. Users in a state of flow spend hours at a time engaging with a product and will associate it with a feeling of satisfaction and enjoyment. Those positive experiences are likely to create happy, loyal customers. In other words, helping users get into a state of flow is good for the bottom line.

So, what does this have to do with design? Since the design of technology has a direct impact on how users engage with it, changing certain design factors can maximize the chance of creating a specific internal response for the user, including a state of flow. To do this successfully, we need to understand both the three levels of processing and the seven stages of action.

[Image: diagram relating the three levels of processing to the seven stages of action]

Exercise: Break Down an Ordinary Action

The seven stages of action happen automatically for easy, routine tasks, but thinking through them consciously is a helpful tool for evaluating design. Let’s practice this now.

Chapter 2.2: Making Sense of Our Own Behavior

We know that behavior can be either event-driven or goal-driven, and that it can be broken down into seven stages of action. But what happens when something goes wrong? How do we explain what happened?

For designers, understanding the way users think about their interactions with technology is important for creating a positive user experience. It is not enough to know how something works on a technical level—we need to understand how the user thinks the object works, and how they explain what happened if something goes wrong, since these are important factors in determining how people respond to technology. For designers and non-designers alike, understanding the biases that shape our own stories helps us make sense of our encounters with bad design.

Causes of Behavior

To understand the way people think about their interactions with technology, we need to distinguish between a user’s overarching goal and the smaller subgoals and actions that lead up to it. Norman quotes Harvard Business School professor Theodore Levitt as an example, who said, “People don’t want to buy a quarter-inch drill. They want a quarter-inch hole!” However, it’s unlikely that anyone actually wants a quarter-inch hole in their wall just for fun. Instead, drilling a hole is most likely a subgoal leading up to a larger goal of mounting something on the wall.

Determining the overall goal of a behavior is important because it gives designers a better idea of what users really want. If you’re designing in response to someone buying a drill, you’ll keep making new kinds of drills. If you’re designing in response to someone wanting to hang a shelf, you might come up with a new adhesive that allows the user to mount shelves directly on their wall, without drilling holes. You’ve addressed their real need and simplified the process of meeting it.

Root Cause Analysis

To find the original, overarching goal of a behavior, we use a process called root cause analysis. Essentially, we keep asking “why?” about a behavior until there is no further answer. In the drill example, root cause analysis would start with asking, “Why does this person want to buy a drill?” followed by “Why do they need to put a hole in the wall?” until we reach the conclusion: “They want to hang up a shelf.” But we could push this even further by asking why they want to hang a shelf in the first place. Do they have too many books? Are they running out of floor space? Are the walls empty and boring? This gives designers more intervention points to come up with solutions to meet users’ needs.
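The “keep asking why” process can be sketched as a walk along a chain of goals. The chain below extends the drill example with an invented final answer, purely for illustration:

```python
# A sketch of root cause analysis as repeatedly asking "why?".
# The goal chain is hypothetical, extending the drill example.

why = {
    "buy a quarter-inch drill": "make a quarter-inch hole",
    "make a quarter-inch hole": "hang a shelf",
    "hang a shelf": "store books that are piling up on the floor",
}

def root_cause(behavior):
    """Follow the 'why?' chain until no further answer exists."""
    while behavior in why:
        behavior = why[behavior]
    return behavior

print(root_cause("buy a quarter-inch drill"))
# -> "store books that are piling up on the floor"
```

Each link in the chain is a potential intervention point: a designer could offer a better drill, a drill-free mounting adhesive, or a different way to store books entirely.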

One of the lessons of root cause analysis is that every action has either an external or internal cause. When an internal goal causes a certain action, we call this goal-driven behavior. When an outside event or a condition of our environment causes an action, we call this event-driven behavior. Event-driven behaviors are often opportunistic actions: behavior that arises in response to unexpected events.

Event-driven and goal-driven behavior often intertwine. Imagine a student studying for an exam (a goal-driven behavior) who puts in earplugs when noisy neighbors interrupt: the event-driven behavior (putting in earplugs) only occurs as part of the goal-driven behavior (studying). Designers need to be aware of the differences between event-driven and goal-driven behaviors in order to design for the user’s actual needs. In the studying example, designing better earplugs addresses only the external factors. Redesigning the entire environment to be more conducive to the internal goal of studying would address both the internal and external causes, ultimately creating an even better user experience overall.

The Role of Storytelling in User Experience

Root cause analysis helps us make sense of other people’s behavior. To understand our own behavior, we turn to stories. Humans are born storytellers. When we’re faced with a jumble of information, our natural instinct is to organize it into a story that explains cause and effect. Stories help us make sense of our world and our place in it.

Reorganizing information into a cohesive story is usually a subconscious process. We operate under the assumption that there must be an underlying pattern connecting different pieces of information in a way that makes sense, regardless of whether such a pattern actually exists. In practice, this means that humans are really good at coming up with stories to explain our experiences, but not so good at determining whether those stories are actually true.

When it comes to understanding our physical environment, these stories take the form of conceptual models. We create a mental story to explain how something works, connecting whatever bits of evidence we have about that object into cause and effect patterns that make sense to us (but may be totally false).

Thermostats are a common source of false conceptual models. Because a thermostat offers only a small window into a complex heating and cooling system, it gives us very few clues about how it actually works. All we know is that if we’re too cold, we press a few buttons on the thermostat and the room eventually gets warmer. All the steps between “press buttons” and “feel warmer” are hidden and left to the imagination. This leads people to assume the thermostat controls a valve that opens a certain amount based on the setting, and that setting it higher will warm the room faster. In reality, most thermostats are simple on/off switches, so setting a higher temperature has no effect on how fast the room warms up.
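The on/off behavior is easy to demonstrate with a toy simulation. The heating rate and temperatures below are invented; the point is that because the furnace is either fully on or fully off, a higher setting cannot warm the room any faster:

```python
# A toy simulation (invented numbers) of an on/off thermostat: the
# furnace runs at full power until the setpoint is reached, then
# shuts off. There is no "faster" mode for a higher setting.

HEAT_RATE = 0.5  # degrees gained per minute while the furnace is on (made up)

def minutes_to_reach(target, setpoint, room=15.0):
    """Minutes until the room first reaches `target` temperature."""
    minutes = 0
    while room < target:
        if room < setpoint:   # furnace on: all-or-nothing
            room += HEAT_RATE
        else:                 # setpoint reached: furnace off
            break
        minutes += 1
    return minutes

# Reaching 20 degrees takes the same time whether you set the
# thermostat to 20 or crank it up to 30.
print(minutes_to_reach(20, setpoint=20))  # 10
print(minutes_to_reach(20, setpoint=30))  # 10
```

The only thing the higher setting changes is when the furnace eventually shuts off, which is why cranking the thermostat usually just overshoots the temperature you actually wanted.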

Why Do We Blame Ourselves?

Faulty conceptual models often lead us to blame ourselves when an object doesn’t meet our expectations. If you push a door and it doesn’t open, you push again, but harder. You assume your action was flawed somehow—not that the door itself was poorly designed.

The tendency to blame ourselves when technology fails us is interesting because it is the exact opposite of our default pattern for assigning blame. Normally, when we perform poorly, we blame our environment (perhaps the sun was in our eyes, or the dog ate our homework). But when we perform well, we attribute it to our innate qualities, not the environment.

When we look at other people, this effect is reversed: we assume their successes are products of their environment, but their failures are due to their personal faults. (Shortform note: In psychology texts, this tendency is referred to as the “fundamental attribution error.” To learn more about attribution error and other cognitive biases, read our summary of Thinking, Fast and Slow.)

As new technologies pop up in every corner of our lives, we’re less and less likely to admit to struggling with them, especially when it appears that “everyone else understands this.” In reality, the opposite is true—when it comes to technology, our struggles are more likely due to design, not our own inadequacy. In other words, most people are probably experiencing the same difficulties, whether or not they speak up about it.

The Value of Failure

So, why are we willing to take the blame for failed interactions with technology, but not for our failures in general? One possibility is learned helplessness: the belief that you are doomed to fail in a given situation because you’ve experienced similar failures in the past. A history of repeated failures with a specific experience makes us assume that success is impossible, so we may as well stop trying. So, an encounter with even one or two overly confusing pieces of technology can make us conclude that we’re just not good with technology in general.

We can reframe these experiences using positive psychology. Positive psychology is a subfield of psychology that focuses on people’s strengths and positive emotions instead of their struggles. In this case, positive psychology requires a perspective shift. Instead of seeing repeated failures as evidence that we’re simply not skilled enough, we can actively choose to see failures as learning experiences. For example, if we struggle with a confusing computer program, we might put in the effort to troubleshoot the problem and ultimately end up with a much more thorough understanding of the program than if we’d succeeded on the first try.

Scientists use this practice every day. When an experiment fails, they troubleshoot, find the problem, and try the experiment again. The failure isn’t a bad omen—it provides important information that ultimately leads to results.

Human Errors Are Really System Errors

People make mistakes. This is a universal truth. But the technology around us often requires us to be perfect—to remember information accurately, never be distracted, and react in the same way every time.

In law, the idea of “human error” is accepted as a valid explanation for tragic outcomes. In reality, these errors are rarely “human” at all, but rather faults of the system. If a piece of technology is designed without regard to human behavior and cognition, errors are practically guaranteed. Who is responsible for those errors?

Think back to the Three Mile Island incident from Chapter 1. One person misunderstanding an indicator light caused a massive nuclear incident. But why was the control system of a nuclear reactor set up in a way that made it possible for one small mistake to escalate into tragedy? The system was designed to be perfectly logical, but the design didn’t account for the real humans who would be operating it.

Norman recommends getting rid of the phrase “human error” altogether. Instead, we should think of interactions between person and machine the same way we think of interactions between people. When disagreements pop up, each person can clarify their intentions, propose solutions, and move on. The ideal system allows the user and the object to interact in the same way.

For example, some digital calendars allow you to enter dates with natural language. Instead of requiring dates to be entered in a single format, the user can type “August 3rd,” “8/3,” or “next Tuesday” and the event will be added to the proper date. The machine recognizes that humans sometimes phrase things differently, and is programmed to expect and accommodate that.
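To make the idea concrete, here is a minimal sketch of flexible date entry. It is not any real calendar’s parser—the function name, the handful of supported formats, and the fixed “today” are all invented for illustration:

```python
import re
from datetime import datetime, timedelta

def parse_flexible_date(text, today=None):
    """Toy sketch: accept several human date formats instead of forcing one."""
    today = today or datetime(2024, 8, 1)  # fixed "today" for the example
    clean = re.sub(r"(\d+)(st|nd|rd|th)", r"\1", text)  # "3rd" -> "3"
    # Try a handful of explicit formats the user might choose.
    for fmt in ("%B %d", "%m/%d", "%Y-%m-%d"):
        try:
            parsed = datetime.strptime(clean, fmt)
            if parsed.year == 1900:  # formats without a year default to 1900
                parsed = parsed.replace(year=today.year)
            return parsed.date()
        except ValueError:
            pass
    # Fall back to simple relative phrases like "next Tuesday".
    weekdays = ["monday", "tuesday", "wednesday", "thursday",
                "friday", "saturday", "sunday"]
    words = clean.lower().split()
    if len(words) == 2 and words[0] == "next" and words[1] in weekdays:
        days_ahead = (weekdays.index(words[1]) - today.weekday() - 1) % 7 + 1
        return (today + timedelta(days=days_ahead)).date()
    raise ValueError(f"Unrecognized date: {text!r}")
```

Whatever phrasing the user chooses, the system meets them halfway—“August 3rd” and “8/3” land on the same date, and “next Tuesday” is resolved relative to today.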

(Shortform note: We’ll explore “human error” in much more depth in Chapter 5.)

Designing for Imperfect Humans

If “human errors” are really “system errors,” then designers are ultimately responsible for preventing those errors whenever possible. The goal is not to design a perfect system, but rather a system that anticipates inevitable errors, provides helpful feedback to correct those errors, and has no “slippery slopes” where one small mistake can set off a catastrophic chain reaction. Here are some specific recommendations for designers to put this into practice.

Think back to the seven stages of design, and the questions posed by each stage. As a reminder, here are the seven stages and seven questions:

  1. Goal: What result do I want to achieve?
  2. Plan: What options do I have for achieving my goal?
  3. Specify: Which of these options will I choose?
  4. Perform: How do I execute my plan?
  5. Perceive: What happened when I did that?
  6. Interpret: What does that result mean?
  7. Compare: Did I reach my goal?

Users should be able to answer each of these questions easily. Designers can use the following tools to provide these answers:

Exercise: Do a Root Cause Analysis

Trace a small behavior backwards to find the big picture goal.

Chapter 3.1: The Mechanics of Memory

In this chapter, we’ll explore the interaction between “knowledge in the head” (memory) and “knowledge in the world” (design features). A significant chunk of the chapter is dedicated to an overview of different types of memory and how they function. This section isn’t overly technical, and as we’ll see, having a basic understanding of memory has important implications for design.

“Knowledge in the Head” vs. “Knowledge in the World”

Norman refers to any information stored solely in memory as “knowledge in the head.” This applies to things like passwords on your computer (unless you’ve written them down) as well as knowing how to use a computer in the first place. Knowledge in the head can be either declarative (knowledge of facts, like a password) or procedural (knowledge of how to do things, like typing).

“Knowledge in the world” is anything we know without having to store it in memory. People learning to use a keyboard can look at the letters—they don’t need to memorize the location of every key in advance. Students take notes in lectures instead of attempting to memorize every word they hear. Even something as simple as putting your wallet under your keys so you don’t forget to bring it when you leave the house counts as knowledge in the world since it takes the burden off your memory. The design principles in Chapters 1 and 2 are examples of knowledge in the world. Signifiers, perceived affordances, and feedback all give us clues on how to use something, eliminating the need to memorize it.

The Limits of Knowledge in the Head

Human memory is an incredible tool that stores massive amounts of information from every part of our lives. But that same power imposes certain limitations. To free up space to remember the things that are most important for our survival, our brains naturally offload any information that is readily available in the environment.

The advent of cell phones is a great example of this. Before cell phones, it was common to have the phone numbers of your closest contacts memorized, making them easy to dial on any available phone. Now, our devices store this information for us. We’ve moved that knowledge into the world, and in response, most of us have completely forgotten many of the numbers we once had memorized.

Furthermore, knowledge in the head is typically imprecise. We don’t need an exact understanding of every detail of an object or situation when a rough working model will do. To safely cross the street, you don’t need to be able to identify the make and model of every nearby car—you only need to be able to tell whether or not it’s moving, and at what speed. In other words, the knowledge we keep in our heads is the bare minimum that we need to function in any given environment.

Example: Common Coins

A 1979 study demonstrated this using American coins. The study found that fewer than half of American college students could identify the correct image of a United States penny from a set of similar images when small details were changed (like reversing the direction Abraham Lincoln’s silhouette faces or moving the word “liberty” to a new location).

This task is difficult because our knowledge of coins is stored mostly in the world. Since most of us rarely need to tell the difference between real and counterfeit coins, our brains only store the necessary details to distinguish one type of coin from another (for pennies, this is color, size, and texture). Given a set of images that have all these distinguishing details in common, we struggle to choose the real penny because the knowledge in our head is so imprecise. But in a pile of dimes and quarters, you would pick the penny out easily every time.

Our imprecise knowledge of details is usually all we need for day-to-day life. But if those details change significantly, our approximate models may no longer be enough. Coins provide another great example, this time with real-world consequences. In 1979, the United States released the Susan B. Anthony dollar coin, which was nearly the exact size, shape, color, and weight of the existing quarter (but worth four times as much). Suddenly, correctly counting change required precise knowledge of details, causing mass confusion and frustration.

the-design-of-everyday-things-7.jpg

Why is it that two similar coins can cause so much confusion, while we can easily distinguish a one-dollar bill from a twenty-dollar bill? Cultural and historical context determines how we distinguish one object from another. Because American paper money uses identically sized bills, we are used to relying on the images on the bills themselves to tell them apart. So introducing a new bill of the same size and color would be no problem for Americans, but would cause an uproar in countries that use size to distinguish between different values of paper money.

Constraints and Memory

Constraints are conditions that limit the ways we can interact with an object or system. These conditions can be physical, cultural, semantic, or logical. (The four categories of constraints will be covered in depth in Chapter 4.) Norman uses the ancient practice of traveling performers reciting epic poetry as an example. Reciting even a third of an epic poem like Homer’s Odyssey or Iliad requires memorizing nine thousand lines, roughly equivalent to two and a half hours of recitation. This feat sounds impossible, in part because it is. Studies show that skilled performers are not reciting the poem word for word from memory, but actually recreating it on the spot using the constraints of poetry.

Depending on the form, poems have to follow certain rules of structure and rhyme. These rules act as constraints. If a performer tries to remember the last word of a line of poetry from the full English lexicon, they’re sunk. But if they know the word has to rhyme with the line before it, and it also has to convey a certain meaning, those constraints make it much easier to recall the right word.

Imagine trying to remember a certain word while reciting a poem. If you know from the rhyming structure that the word has to rhyme with “ear,” that narrows the field, but not by much: The word could be “fear,” “dear,” “sneer,” and so on. But if the word also has a constraint on its meaning, and has to mean “a unit of time,” the combined constraints allow for only one answer: “year.” With these constraints, a performer only needs to know the narrative of the story and the form of the poem to be able to produce what looks and sounds like an exact, word-for-word recitation.
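The narrowing effect of combined constraints can be sketched as set intersection. The tiny word sets below are illustrative stand-ins, not a real lexicon:

```python
# Candidates satisfying each constraint from the "year" example above.
RHYMES_WITH_EAR = {"fear", "dear", "sneer", "beer", "clear", "year"}
UNITS_OF_TIME = {"second", "minute", "hour", "day", "week", "year"}

def recall(*constraints):
    """Intersect candidate sets: each added constraint shrinks the field."""
    return set.intersection(*constraints)

# The rhyme alone leaves several candidates; rhyme plus meaning leaves one.
print(len(recall(RHYMES_WITH_EAR)))            # several candidates
print(recall(RHYMES_WITH_EAR, UNITS_OF_TIME))  # only "year" survives
```

Each constraint on its own is weak, but their intersection is precise—which is exactly why performers can reconstruct a line without memorizing it verbatim.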

Another example of this is taking apart an appliance to replace a part. Without memorizing, how do we know which parts go where when it comes time to reassemble? Physical constraints simplify this process: Certain size screws will only attach to certain size nuts, and parts of a particular shape will only slot into spaces that accommodate that shape. The physical nature of the objects themselves limits the ways they could be assembled, reducing the need to memorize the layout.

For designers, understanding constraints is crucial for creating a positive user experience. Designing objects and programs that are constrained by physical, cultural, or semantic rules takes the burden off the user, since the object itself limits the possible ways to interact with it.

Types of Memory

This section dives deeper into the different types of memory. It’s important for designers to have a working understanding of each type of memory, particularly in the digital age, as increasingly complex technology requires users to combine “knowledge in the head” with “knowledge in the world” in more and more sophisticated ways.

Digital passwords are an especially important example of this effect. While new technologies make it much easier to store knowledge in the world, doing so makes the information far less secure, since knowledge in the world is accessible to anyone in that environment. To protect private information, computer systems require passwords. Simple passwords were sufficient at first, but the rise of hacking and the ability to store sensitive information like bank records online quickly required a new approach to security. Now, most programs have complex password requirements that use a combination of numbers, letters, and symbols. Some programs require the password to be changed on a regular basis.

Unfortunately, all this added complexity has created less security, not more. The design of this system does not account for the limits of human memory. Simple passwords that held personal meaning were easy to remember, but long strings of random characters are not. To cope, we write them down. We store sensitive information (like banking information or medical records) in the world, then require knowledge in the head (a password) to access it. But that knowledge is too complex to remember, so we put it back into the world. The system has defeated itself.

The most secure systems combine the two sources of knowledge by requiring both “something you have” and “something you know.” A physical object like a key is knowledge in the world—it doesn’t need to be stored in memory, but it can be used by anyone, not just authorized users. But a security system that requires both a physical object and a password allows for privacy without putting all the burden on human memory.

Short-Term Memory

As the name implies, short-term memory is the automatic storage of recent information. Short-term memory is also called “working memory” because it holds the information we keep in mind in order to complete any given task.

The information held in short-term memory is constantly being replaced as we encounter new stimuli, so holding onto any piece of information for more than a second requires rehearsal, or consistently repeating the information until it’s no longer needed (like when you hear a phone number spoken aloud and repeat it to yourself over and over while you search for a pen to write it down).

The science of memory is complicated, but a simplified conceptual model is all we need for design purposes. Think of short-term memory as five to seven mental “slots” where recent information is stored. If all the slots are filled when a new piece of information comes along, it will knock out an older piece of information and take over that slot. It is possible to keep more information in working memory through the use of mnemonics, or techniques that enhance memory by making meaning out of meaningless data.

Guidelines for Designers

Long-Term Memory

If information is important enough or rehearsed often enough, it moves from short-term memory into long-term memory. Long-term memory is more robust, and memories encoded there do not automatically replace other memories. While we encode and access short-term memory automatically, it takes time to encode long-term memories, and it typically takes time to access them later.

We still don’t know exactly how short-term memories become encoded into long-term memory, but most scientists agree that this process happens during sleep. This is important for anyone designing a product or system that requires users to store information in long-term memory—that process typically is not instant, and it might require several encounters with the information with periods of sleep in between.

Information stored in long-term memory is much more durable, but there is an important caveat here: memories are encoded based on our interpretation of events, not as they really happened. Much like performers reciting epic poetry, we don’t remember every single detail of an event, but rather the main details and our subjective interpretation of them. This also means that each time the memory is recalled, we are recreating it based on that limited information. So each time we access the memory, we inadvertently change small details of it and then re-encode that version of events. This process has powerful implications for law and criminal justice settings, since it demonstrates just how unreliable eyewitness testimony can be.

Guidelines for Designers

Chapter 3.2: Memory and Design

The subjective way memories are encoded affects how we retrieve them later on. Details of an event that were especially meaningful to us might be remembered as much more important to the overall story than they were at the time. In fact, whether information is meaningful is one of the biggest factors influencing our ability to remember it.

Meaningful things are easy to remember. They don’t need to be meaningful to our personal lives, as long as they have a meaningful relationship to each other, or to something else we know. Meaning helps us connect the information to a bigger picture. Arbitrary, unrelated things are much more difficult to remember. This is why rote learning is so difficult to do—the information being learned has no underlying structure to provide meaning. Typically, we cope with this by imposing structure of our own.

For example, the order of letters in the alphabet is arbitrary information. There is no underlying meaning explaining why C comes after B, except that we collectively agree that it does. To make this arbitrary sequence easier to learn, we impose structure in the form of the alphabet song.

Another way we turn arbitrary information into meaningful information is through interpretation. The author gives an example of his friend, Professor Yutaka Sayeki, learning to use the turn signal on his motorcycle. The signal was mounted on the left handlebar, and signaled a left turn by pulling the lever backward and a right turn by pushing it forward. Sayeki made this arbitrary mapping meaningful by relating it to the motion of the handlebars themselves: in a left turn, the left handgrip rotates backward, matching the backward pull of the lever.

Designers can make this process much easier by creating meaningful controls. For example, in a traditional car, the turn signal is pushed up to signal right and down to signal left. This takes advantage of our sense of clockwise and counter-clockwise direction: If we could extend the motion, the turn signal lever would be like the hand of a clock, and pushing up on it would ultimately send it to the right.

Approximate Models

Professor Sayeki’s mental model of the turn signal doesn’t account for all the mechanics of turning a motorcycle (like the fact that executing a left turn often means first steering slightly to the right). But the model works—it imposed meaning on the direction of the signal lever, making it easy to remember and use. For most everyday situations, approximate models are all we need to successfully interact with our environment.

We use approximate models all the time, often without realizing it. Mental math is a great example. If your job pays weekly and an official form asks for your monthly income, the precise answer requires multiplying your weekly income by 52 weeks and dividing by 12 months—math most of us would struggle to do without a calculator. Instead, we use an approximate model and simply multiply the weekly amount by four. The result is slightly off (a month averages about 4.33 weeks), but for most everyday purposes, it’s close enough.
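The gap between the precise calculation and the mental shortcut is small, which is the whole point of an approximate model. A quick check with an invented pay figure:

```python
# Precise vs. approximate monthly income from weekly pay.
# The pay figure is invented for illustration.
weekly_pay = 1000

precise = weekly_pay * 52 / 12   # exact average month: about 4333.33
approximate = weekly_pay * 4     # the mental-math shortcut: 4000

# The shortcut undercounts by under 8%--fine for everyday estimates,
# not for an official form.
error = abs(precise - approximate) / precise
print(round(precise, 2), approximate, round(error, 3))
```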

Even this chapter has relied on approximate models when describing short-term memory. In reality, there are no “slots” in the brain where information sits, waiting to be replaced by new information. For neuroscientists, this model is too simplistic to be useful. But for designers, it gives a close enough explanation of the process to inform design.

“Out of Sight, Out of Mind”

As we’ve seen, human memory is not a particularly reliable tool, and we are much more likely to remember something if we transform it from knowledge in the head to knowledge in the world. In high-stakes settings, this process can have life-or-death consequences, as in the case of airline pilots. When pilots are preparing to take off or land the plane, they are also listening to a steady stream of instructions and important information from air traffic control, most of it delivered rapidly and in technical language.

Because the stakes are so high, pilots are taught to immediately translate this information into knowledge in the world. They program instruments as the correct settings are announced and write down any important or unfamiliar information.

Airplane technology is evolving to make this process even easier by relaying this information digitally, allowing instruments to be set automatically and important information to be displayed visually. The idea is to create a design that takes as much burden as possible off the pilots’ memory, which reduces the risk of dangerous mistakes.

Reminding

Transferring information from short-term memory into external knowledge is relatively simple when the information is immediately relevant. But what about remembering things that haven’t happened yet, like the date and time of a dentist appointment several months from now? That requires prospective memory, or remembering to do something in the future. We typically think of memory as relating only to the past, but prospective memory functions in the same way: to avoid forgetting an item in prospective memory, it must be continually rehearsed or transformed into knowledge in the world.

How do we remember things that haven’t happened yet? To do this, we need reminders—external cues that will trigger our memory for an event at precisely the right time in the future. Effective reminders rely on the concept of memory for the future, which is our ability to predict future states based on our memory of the past.

So, to create a reminder of that future appointment, you need to remember the date and time of the appointment itself (prospective memory) as well as what you’re likely to be doing at that time based on your typical schedule (memory for the future). This tells you where and when to place the reminder so it will trigger your memory.

(Some future events can be easily remembered without reminders. The difference between arbitrary and meaningful information applies here too: While you might need a reminder for that future appointment, you’ll most likely have no trouble remembering the date of your upcoming wedding or important presentation.)

Effective reminders have two components: signal and message. The signal alerts us that there is something to remember; the message tells us what that something is. For a digital example, imagine entering a reminder into the calendar on your phone. If you set the date and time for the reminder to alert you, but don’t enter the name of the event, you’ll be left wondering what on earth you were supposed to remember when the alert goes off. On the other hand, if you enter the event details but don’t set an alert to go off at the correct time, the lack of signal means you’re likely to forget the message exists at all.
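The signal/message pairing can be sketched as a simple data structure. The class and field names below are invented for illustration, not any calendar app’s API:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Reminder:
    signal_at: Optional[datetime]  # the signal: when the alert fires
    message: Optional[str]         # the message: what to remember

    def is_effective(self) -> bool:
        # Missing either component defeats the reminder: without a signal
        # we forget it exists; without a message we wonder what it was for.
        return self.signal_at is not None and bool(self.message)

dentist = Reminder(datetime(2025, 3, 14, 9, 0), "Dentist appointment")
mystery = Reminder(datetime(2025, 3, 14, 9, 0), None)  # signal, no message
someday = Reminder(None, "Call the plumber")           # message, no signal
```

Only `dentist` is effective; `mystery` will interrupt you without telling you why, and `someday` will never interrupt you at all.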

Reminders can take several different forms. Physical reminders like a note taped to the door combine signal (the sight of the note) with message (whatever is written on it). These types of reminders act as knowledge in the world, removing the need to rely on memory.

Knowledge in the head, on the other hand, needs to be cued. The message is there, but without a signal to prompt retrieval, it does us no good. As we move to increasingly digital formats for getting things done, incorporating appropriate signals is an important challenge. Massive amounts of information are available to us through technology, but because it’s not immediately visible on screen at all times, the burden is on our memory to remember that it’s there at all.

Designers can reduce this burden by providing a “roadmap” with meaningful headings that can serve as memory cues, usually in the form of a menu. Let’s use digital photo storage as an example. If you store every photo you’ve ever taken in one folder labeled “Photos,” you’re likely to completely forget what specific photos are in there. But if that folder were broken down into nested subfolders based on certain categories, a quick glance at the headings would act as a memory cue. Imagine the difference between searching for a photo of a specific person in one big “Photos” folder versus a folder with nested subsections that help you narrow the search to “Photos > 2020 > Family > Mom’s birthday party.”
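The roadmap idea amounts to walking a tree one heading at a time instead of scanning everything. A sketch with an invented folder tree and file names:

```python
# Nested subfolders whose headings serve as memory cues.
photos = {
    "2020": {
        "Family": {"Mom's birthday party": ["cake.jpg", "mom_smiling.jpg"]},
        "Travel": {"Road trip": ["desert.jpg"]},
    },
    "2021": {"Family": {"Reunion": ["group.jpg"]}},
}

def browse(tree, *path):
    """Follow heading cues one level at a time, narrowing the search."""
    node = tree
    for heading in path:
        node = node[heading]
    return node

# At each level, a glance at the headings cues the next step.
print(sorted(browse(photos, "2020")))
print(browse(photos, "2020", "Family", "Mom's birthday party"))
```

The user never has to recall the full path from memory—each menu level offers a short list of cues that trigger recognition of the next one.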

Combining Memory With Knowledge in the World

Reminders are not always set up in advance. Sometimes, we can work collaboratively with others in our environment to recover information we’ve forgotten. For example, if you’re with a group of friends and collectively forget the name of a famous actress, you might list a few movies she was in, which reminds one friend of another of her film roles, which in turn reminds a different friend of an award she won. The group circles closer and closer to the answer until finally someone remembers the correct name. This process is called transactive memory: combining knowledge from multiple sources to reach an answer none of those sources could come up with alone.

Transactive memory applies to more than just groups of people. Imagine the same scenario where you’ve forgotten the name of a famous actress, but without a group of people around to help. Instead, you type the name of one of her movies into a search engine. The first results are about the remake of that film, so you refine the search further by adding the year the movie was released. Now you’re able to comb the cast list for the correct film and ultimately discover the name you’ve forgotten.

This type of transactive memory is powerful. The correct answer was in the world, somewhere on the internet. But in order to retrieve it, the search engine needed information stored in the user’s memory (the year the film was released). Even an incredibly powerful machine like the modern computer could not produce the correct answer alone. (The powerful effects of combining human brain power and modern technology are explored further in Chapter 7.)

Natural Mapping

The search engine example is an effective combination of knowledge in the head and knowledge in the world. Not all attempts to combine these sources of information are quite as successful.

Think back to the discussion of mapping in Chapter 1. Mapping describes the relationship between controls and the objects they control. Ineffective mapping requires storing the knowledge of how to operate those controls in long-term memory, which takes time and effort to retrieve. In contrast, natural mapping takes advantage of basic properties of physics and psychology to make the control/object relationship intuitive. In other words, mapping is a tool for putting knowledge into the world and reducing the burden on memory. The closer the controls are to the object they control, the more understandable the relationship.

The Stovetop Problem

Think back to the stovetop example of mapping from Chapter 1. This example pops up often in design textbooks because the problem itself is relatively straightforward, yet the solution is almost never implemented in stove designs. As a result, almost everyone has encountered the issue of trying to figure out which knob controls which burner.

In the case of stove burners, it isn’t possible to use the best or even second-best natural mapping strategies, since mounting controls directly onto the burners or in close proximity to them is a major safety hazard. The third best mapping strategy is relatively intuitive here: If the burners are arranged in a square, the knobs should also be arranged in a square. But this type of configuration is almost never seen. Instead, we see layouts like these:

the-design-of-everyday-things-8.png

All four of these designs can be found on stovetops currently on the market. The layout of the burners doesn’t change, but the layout and location of the knobs change each time. The lack of standardization alone would make things confusing, but on top of that, each design requires mentally mapping a one-dimensional line (the controls) onto a two-dimensional square (the burners). This requires some complicated mental gymnastics, and mistakes can easily lead to serious accidents and injury.

The easiest way to reduce the risk of accidents is through natural mapping. Effective natural mapping puts the knowledge of how to use the stove completely in the world rather than in the head. Compare these layout designs to the ones above:

the-design-of-everyday-things-9.png

A slight change in layout makes a huge difference. If the solution is so easy, why does this problem still exist? One major reason is that the people buying appliances are often not the same people who will be using them. As individuals, we typically have to replace major appliances like stoves only a few times in a lifetime, so we’re more likely to make usability an important factor when choosing a new model. Construction and housing companies, on the other hand, often buy appliances in large quantities to supply newly built homes, and are much more likely to base their decisions on cost and available features. After all, it’s easy to ignore usability when you’ll most likely never use the appliance yourself.

Culture Influences Mapping

Although natural mapping makes user interaction more intuitive, what is considered intuitive in one culture may be completely different in another. This presents a particularly tricky problem: Cultural differences impact how people interact with design, but most of us are so immersed in our own culture that we don’t recognize the places where our viewpoint might not be universal.

For example, the way we represent time is culturally dependent. People in western cultures typically think of time as a road stretching out before them. Each individual person is walking forward on their own road, with the future in front of them and the past behind them. But this view is not universal. For some cultures, it’s not the person who moves, but the “road” of time itself, with future events always moving toward us. Still other cultures represent time as a line running from side to side, with the future either to the left or the right (typically corresponding to whether the local language reads right to left or left to right).

Why is this important when it comes to design? As more and more products come equipped with digital displays, understanding the differences in how people visualize abstract concepts becomes crucial. The author gives an example from his experience of giving a talk in Asia with an accompanying slideshow projected onto a large screen. The remote control for the projector had two buttons arranged vertically. When it came time for the next slide, Norman pushed the top button—but the presentation moved backward, not forward. He and the remote’s designers had opposite mental models: he pictured himself moving forward through the slides, while the designers pictured the slides themselves moving past a fixed point of view.

The question of “Who is moving, the user or the object?” also impacts the way we read text on a screen. As you read this summary, which direction do you scroll to keep reading? On modern touchscreens, swiping up with a finger almost always makes the text scroll down. This is the same action you’d use to move printed material in real life (like reading a newspaper lying flat on a table and pushing the newspaper away from you in order to bring the bottom sections into your view).

But there is another way to visualize this. Before the advent of touchscreens, computer displays used a “moving window” paradigm, where the text was visualized as a static display and the screen as a small window onto that display, showing a certain amount of the text at a time. To read more, the window would move, not the text. In this case, the cursor controlled the window, not the text, so scrolling down meant physically moving the cursor down, not up.

These examples make it clear that the “right way” to do something depends on our mental model of it, and mental models can vary by culture. To design a successful product, designers need to understand how the majority of target users will visualize the concepts needed to use the product. If designers introduce a new paradigm, users will be confused and frustrated. A widely implemented change is not necessarily doomed to fail, but there will be a significant adjustment period.

Exercise: Redesign Reminders

This chapter covers a lot of information, and it can be difficult to connect it to concrete situations in our daily lives. This exercise asks you to redesign an element of your daily life using your new understanding of memory.

Chapter 4: Guiding Behavior With Design

Building on the lessons of the previous chapters, we’ll now explore a new way to guide behavior with design: constraints. Constraints limit the ways users can interact with an object, narrowing the range of possible actions and guiding users toward the affordances that matter. There are four main types of constraints: physical, cultural, semantic, and logical.

Physical Constraints

Physical constraints are physical qualities of an object that limit the ways it can interact with users or other objects. The shape and size of a jar lid act as physical constraints that prevent it from being attached to the wrong jar; different-sized holes on some electrical outlets constrain the way plugs can be connected; the height of a doorknob constrains the type of people who can use the door.

Physical constraints can be deliberately designed to ensure an object is used correctly, but this doesn’t always result in increased usability. Cylindrical batteries are a common example: Although the two ends of a battery are shaped slightly differently, that difference only constrains the electrical circuit, not the placement of the batteries. This is why most of us have experienced accidentally inserting a battery backwards and having to pry it out without damaging the terminal.

Forcing Functions

Forcing functions are physical constraints specifically designed to prevent certain actions from occurring at the wrong times or in the wrong contexts. Three important forcing functions are interlocks, lock-ins, and lock-outs.

Cultural Constraints

Cultural constraints are the “rules” of society that help us understand how to interact with our environment. These rules operate through schemas (also called “scripts” or “frames”): mental models that help us interpret our environment. Schemas help us navigate new situations based on our knowledge of similar environments.

Imagine you wander into an unfamiliar building in a new city. You see tables all around, some of them with groups of people sitting and eating. Other people are carrying trays of food to and from the tables. These people are wearing aprons and name tags. All of these clues combine to activate the “restaurant” schema in your head—now you know not only what type of building you’re in, but how to behave appropriately in the situation.

Conventions

Schemas help us interpret our environment, but conventions and standards tell us how to interact with it. In the restaurant example, a schema helped you interpret the environment by combining the information from your senses and matching that to an existing mental model of “restaurant." Now that you understand the environment, you need to choose how to act appropriately within it. For that, you’re more likely to rely on conventions. Conventions are culturally-dependent agreements about how things are done. They are social rules that constrain our behavior.

Conventions also act as cultural constraints that affect how we interpret signifiers and perceived affordances. Think of a typical doorknob. The knob itself is the same shape and size as a cupped hand, so “grasping” is an intuitive perceived affordance. But nothing about the knob itself tells us it should be twisted, or that its purpose is to open and close doors. Those functions are cultural conventions that are learned from other people in our environment. To illustrate this, think of a doorknob mounted on a regular wall. You probably wouldn’t attempt to twist it, or expect it to open or close the wall, because the knowledge that “knobs open doors, not walls” is a deeply ingrained cultural convention.

Case Study: Destination-Control Elevators

Conventions are valuable, but they also present a challenge to new technologies. One form of this is the legacy problem. The legacy problem is the extreme inertia faced by any design that attempts to revolutionize a fundamental product or service. Drastically changing the design of a common item (like an elevator or a house key) requires a complete logistical overhaul. The social and economic cost of implementing the new design is too high, so the existing design (the legacy) will win out almost every time, regardless of how difficult it is to use.

A classic example of the legacy problem is the use of destination-control elevators in large buildings. These elevators only travel to set floors and have no panel of buttons inside for passengers to select a floor. Instead, a control panel in the lobby directs users to the correct elevator based on their desired floor.

For large buildings with many elevators (like hotels or office buildings), this design is far more efficient than the traditional elevator, where the person going to the highest floor has to wait while the elevator stops at every other passenger’s desired floor in between. In spite of this, destination-control elevators are still rare. Cultural convention is to blame: adopting the new system would require users to unlearn deeply ingrained expectations about how elevators work. For most designers and developers, the increased efficiency hasn’t been worth the cost of violating convention, and the legacy of less-efficient elevators has won out.

The lesson of destination-control elevators is that consistency is important, because there will always be cultural resistance to change. Typically, it’s not worth fighting ingrained conventions for small changes. But if there is a revolutionary new way of doing things that is objectively better, that change needs to be implemented universally to avoid confusion.

Standards

Conventions are informal rules governing behavior. Standards are more official and can be applied cross-culturally. When a convention is codified into law or written into the official literature of an industry, it becomes a standard. Standards are typically very slow to develop, since they require so many people to officially agree to one set of overarching rules. For example, when cars were first adopted into popular use, there were no official traffic laws. Informal conventions arose quickly (for example, what side of the street to drive on), but these were not enough to reduce the enormous accident rate. Eventually, these conventions were written into official standards.

The purpose of standardization is to alleviate confusion. The modern environment is complex by nature, with so much information available at all times and so many different tasks needing attention. Complexity is a fact of life, but confusion is a design problem (these definitions are addressed in more depth in the author’s book Living with Complexity). Easily discoverable and understandable designs can alleviate confusion by providing a clear conceptual model.

When design itself is not enough to avoid confusion, standardization is an important tool. Norman calls standardization “the principle of desperation." For design questions where there is no way to put the knowledge of how to use something into the world, standardization at least ensures that everyone only needs to learn how to use the device once, since that knowledge will transfer to every other encounter with that type of device.

For example, telling time on analog clocks is not intuitive. We have to be taught the meanings of each hand. But imagine how much harder this process would be if there were no standard design for clock faces. Try it out on the nonstandard clock below.

the-design-of-everyday-things-10.jpg

Establishing standards is not an easy task. Different groups will have different ideas about the proper standard, and compromise is not always possible when those ideas are fundamentally different. Beyond that, companies who are already manufacturing whatever design is chosen as the new standard will have an edge (since they won’t need to spend time or resources changing the production process), so corporate representatives often lobby hard for whatever model they already produce.

Semantic and Logical Constraints

(Shortform note: Norman pays much less attention to these types of constraints in this chapter, so they have been collapsed into one section.)

Semantic constraints dictate whether information is meaningful. They help us filter the information in our environment and decide what is important. If you’re driving on a crowded city street, you might see colorful lights all around you. But you automatically pay more attention to red lights on the back of other cars (brake lights), or tri-colored lights that hang over an intersection (traffic lights). In this situation, the combination of color and location conveys semantic information so that you know which lights to focus on and which to ignore.

Meanings can change over time, or in response to new technologies. If self-driving cars become widespread, we may no longer need to pay much attention to brake lights on other cars, and future generations who grow up with that technology may not associate any particular meaning with red lights at all.

Logical constraints make use of fundamental logic to guide behavior. For example, if you take apart the plumbing beneath a sink drain to fix a leak, then discover an extra part leftover after you’ve reassembled the pipes, you know you’ve done something wrong because, logically, all the parts that came out should have gone back in.

The natural mapping demonstrated by the stovetop example earlier in Chapter 3 is made possible by logical constraints. If controls are arranged in the same shape as the objects they control, it makes logical sense that each control will correspond to the object in the matching location.
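To make the stovetop idea concrete, here's a toy sketch of ours (the positions and layouts are assumptions, not from the book). With natural mapping, the control-to-burner relation is simply the identity on spatial positions; with a linear row of controls, the pairing is arbitrary and must be memorized or labeled:

```python
POSITIONS = [("front", "left"), ("front", "right"),
             ("back", "left"), ("back", "right")]

# Natural mapping: each control occupies the same relative position as its
# burner, so the pairing is the identity -- no memorization needed.
NATURAL_MAPPING = {pos: pos for pos in POSITIONS}

# Linear row of controls: the control-to-burner pairing carries no spatial
# information, so the user must learn (or be told) which knob does what.
LINEAR_MAPPING = {
    0: ("back", "left"),
    1: ("front", "left"),
    2: ("front", "right"),
    3: ("back", "right"),
}
```

The logical constraint does the work in the first case: matching shapes imply matching locations, so only one interpretation makes sense.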

Discoverability

Physical, cultural, semantic, and logical constraints aid in discoverability. This section applies these concepts to common objects like doors, switches, and faucets.

Norman Doors

Doors are infamous examples of bad design that we encounter in everyday life. In the first edition of this book, Norman laid out the concept of “discoverability” and showed how a huge portion of modern doors violated this principle. This section of the book was revolutionary in the design community, and was the first exposure many designers had to the idea of discoverability and the importance of user experience. The door example became so popular that confusing, poorly-designed doors are now often referred to as “Norman doors."

In contrast to Norman doors, modern emergency doors are examples of discoverability done right. In the past, emergency fire exit doors often opened inward. People rushing to get out of the building instinctively pushed against the doors and often became trapped. Now, the law requires emergency doors to open outward, and to be equipped with a panic bar. Unlike a regular door handle or a smaller metal plate, a panic bar spans almost the entire width of the door. It’s immediately visible and doesn’t require dexterity or precise movements to open. In other words, panic bars are highly discoverable.

the-design-of-everyday-things-11.jpg

Switches

Switches are another notorious offender when it comes to discoverability. The design and placement of switches need to provide two types of information: what type of object they control, and which specific object in the room corresponds to that specific switch. Traditionally, switches for all the lights in a room are grouped into one panel. This makes it easy to control everything from one spot but gets confusing quickly if the switches aren’t arranged logically. For public spaces with many different users, this may require adding homemade labels as signifiers.

the-design-of-everyday-things-12.jpg

Is there a better way? The author recommends two solutions: a natural mapping approach (as discussed in Chapter 3) or an activity-centered approach. In the stovetop example, natural mapping is fairly straightforward, since all four controls and all four burners are in view simultaneously. For a lighting system where not all the lights are in view from any one spot, an overhead diagram of the room or entire floor makes mapping much easier.

the-design-of-everyday-things-13.jpg

Another option to streamline multiple controls is an activity-centered approach, where controls are grouped by activity rather than by type. Lecture halls are a great candidate for this approach. Typical lecture halls have all the light controls grouped together on one panel, all the projector controls on another, and any other system controls randomly placed elsewhere. An activity-centered approach would instead group these controls based on activities, with options like “lecture mode” or “video mode." For example, in “lecture mode," the projector screen would lower into place, the projector itself would turn on and allow for presentation controls, and the lights would calibrate to illuminate both the screen and the speaker.

Faucets

You’ve probably encountered a faucet that didn’t immediately work the way you expected it to. Maybe the hot and cold taps were switched, or the drain mechanism had no visible controls, or the knobs looked like they should be twisted when in reality they needed to be pushed. Nearly every residential and commercial building has at least one faucet, so why haven’t we found a universal standard that gets it right?

The most basic purpose of a faucet is to allow the user to control both temperature and flow rate of the water. Controlling flow rate is relatively simple, but temperature control involves two separate pipes (one for hot water, one for cold). So the user has just two needs, but the design of the plumbing system requires three separate operations to control flow rate, hot water, and cold water. Designing a control that makes it easy and intuitive to meet two separate goals through three separate requirements is more difficult than it seems.
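The two-goals-versus-three-operations tension shows up in the underlying arithmetic: the mixed temperature is the flow-weighted average of the two supplies, so every valve adjustment changes both temperature and flow. A small sketch (the supply temperatures are assumed values of ours):

```python
def valve_settings(target_temp, target_flow, hot_temp=60.0, cold_temp=15.0):
    """Hot/cold flow rates whose mixture hits the desired temperature and flow.

    The mixed temperature is the flow-weighted average of the supplies:
        target_temp = (f_hot * hot_temp + f_cold * cold_temp) / target_flow
    """
    if not cold_temp <= target_temp <= hot_temp:
        raise ValueError("target temperature outside supply range")
    frac_hot = (target_temp - cold_temp) / (hot_temp - cold_temp)
    f_hot = target_flow * frac_hot
    return f_hot, target_flow - f_hot

# A 37.5 degC shower at 2 L/min needs equal parts hot (60) and cold (15):
valve_settings(37.5, 2.0)  # -> (1.0, 1.0)
```

The user wants to set two numbers directly, but the plumbing forces them to solve this little system by feel, which is exactly the burden good faucet design tries to remove.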

There are many ways to address this problem, which is why we encounter so many different faucet designs on a regular basis. Almost all of these designs will have either one or two controls.

The fact that there are fewer controls than necessary functions puts the burden on the user to figure out which control does what. For example, for designs with one control for hot water and one for cold, how do you know which is which, and how do you know how to turn each control on or off?

Cultural conventions can be helpful here—in most of the world, the left control is for hot water, the right is for cold. In the United States and the United Kingdom, though, this rule is considered more of a loose guideline at best.

The design of the handle also presents a problem. For knobs that need to be twisted, we have another convention to help: Any mechanism with a screw thread is tightened by turning clockwise and loosened by turning counterclockwise. We intuitively think of knobs as screwed-on caps blocking water from flowing out of the pipe, so we understand that they need to be turned counterclockwise to open (unlike temperature controls, this is a universal convention).

But what about lever-style controls? Do we push away from us, since that is technically turning counterclockwise? Or do we pull toward us, envisioning the tap as a push/pull mechanism instead of a screw mechanism? If the levers work the same as knobs, pushing the right one away from us would increase flow rate, but doing the same on the left would decrease flow rate by turning the screw mechanism the opposite way.

Solutions where the left and right controls do different things when used the same way assume that we always use two hands to operate the controls. But what if you’re holding something in one arm, or otherwise have use of only one hand? In that situation, you now have to remember which control gets turned in which direction. The potential for error is high.

Designs with one control present similar problems. When one control is used for two separate functions, how do we know which does which? Does turning change the temperature, or the flow rate? If it’s temperature, does turning clockwise make it hotter or colder?

the-design-of-everyday-things-14.jpg

The fact that there are so many possible configurations for faucet controls means most of us resort to trial and error. While not ideal, this usually allows us to figure things out pretty quickly and go about our day. But when feedback is not immediate—when moving the control doesn’t create an immediate change in temperature or flow rate—we have no confirmation that the system registered our input, so we repeat it. This is a particular problem with shower controls, as the distance between the control and the faucet causes a slight delay in feedback (which is how we end up cranking the heat up in a freezing cold shower, only to be scalded moments later).
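The scalding-shower scenario is a classic delayed-feedback loop: the user reacts to water that reflects the setting from several moments ago. A toy simulation of ours (all parameters are invented) shows how stale feedback produces overshoot:

```python
def adjust_shower(target=40.0, delay=3, steps=12, step_size=5.0):
    """Simulate a user adjusting water temperature based on delayed feedback.

    At each step the user feels the temperature that was set `delay` steps
    ago and nudges the control toward the target. Returns the control
    setting over time.
    """
    settings = [20.0]
    for _ in range(steps):
        felt = settings[max(0, len(settings) - 1 - delay)]  # stale reading
        settings.append(settings[-1] + (step_size if felt < target else -step_size))
    return settings
```

Running this, the setting climbs well past the 40-degree target before the hotter water ever reaches the user, then oscillates back down, which is the "crank the heat, get scalded" pattern the chapter describes.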

Using Sounds as Signifiers

As we’ve seen, signifiers play a huge role in discoverability. But signifiers are not always visual. Sound signifiers are especially useful in situations like driving because they don’t interfere with visual attention. On the other hand, sound signifiers can be distracting if not properly designed. It’s also difficult to direct sound to just one person without using headphones, which presents a privacy concern in some settings.

Using sounds as signifiers is a powerful tool, but tricky to get right. To be effective, sounds have to be inherently meaningful, not arbitrary. Users should be able to understand them without knowledge of the device. For example, the crunching sound of car tires on a gravel road combined with the rumble of an engine instantly tells us not only that something is coming toward us, but what that something is (a car), how it’s traveling (on gravel), and the direction it’s coming from. If the car were able to float above the gravel and instead emit a consistent beeping, we’d have no idea how to interpret that sound without previous knowledge of that type of car.

Case Study: Electric Vehicles

We may not need to worry about floating cars just yet, but adding artificial sounds to vehicles is already an important safety concern in the case of electric cars, which are almost silent at lower speeds. While this silence can be a perk for the driver, it poses a major threat to pedestrians. Blind people in particular are at risk, since they use sound as the primary signal to determine whether it’s safe to cross the street. Silent vehicles also pose a danger to sighted pedestrians if they are distracted (looking at a phone, for example). To address this design issue, electric car manufacturers add artificially produced sound.

But deciding which sounds to add is more difficult than it seems. Some companies initially tried to artificially mimic the sound of gasoline engines. This is called skeuomorphic design, or adding features of older designs to new technology for aesthetic (not functional) reasons (for example, digital “save” icons in the shape of floppy disks or manila file folders). Skeuomorphic design can ease the transition to new technology and provide helpful signifiers, although some designers argue it hinders creativity.

On the other hand, some companies designed unique artificial sounds to use as brand identifiers. But this presents a new challenge: how would blind pedestrians know the sound they’re hearing is from an approaching car without memorizing every brand’s unique sound?

Ultimately, various government bodies created sets of research-based standards that allow each manufacturer to create its own sounds, so long as those sounds meet certain safety requirements. Specific standards vary by country, but they all include three main requirements:

Exercise: Identify Physical Constraints

Physical constraints are so common in daily life that they can become invisible. This exercise will help you practice identifying physical constraints in everyday items.

Chapter 5: Human Error

In this chapter, we’ll break down different types of errors that can happen when humans interact with technology. These errors can take the form of either “slips” or “mistakes," each of which can be broken down further into different categories. The first edition of this book, published in 1988, included many more categories of slips and mistakes, but here they have been pared down to only those most relevant to design. The chapter ends with recommendations for turning knowledge of human error into specific design guidelines.

The Error of “Human Error”

Industry professionals estimate that between 75 and 95 percent of industrial accidents are attributed to human error. But if we think of “error” as something that goes wrong in a particular system, how can the vast majority of accidents in that system be chalked up to “error”? Error, by definition, should be the exception rather than the rule. In other words, what we think of as human errors are more likely the outcomes of a system that has been unintentionally designed to create error, rather than prevent it.

If there’s an underlying cause of these accidents, why do we write them off as human error? One reason is that people analyzing the incident tend to stop as soon as they find someone to blame. This offers a quick and convenient “solution”: identify who made the mistake, punish them for it, and move on. But when errors stem from underlying design issues, punishment doesn’t prevent the same thing from happening again.

How Do We Define Error?

What, specifically, counts as an error? The term “error” applies to any action that differs from the general understanding of appropriate behavior. In this context, “error” is not the same as “accident”: an error is an incorrect behavior that may or may not lead to an accident (an event that causes harm). Errors can be classified as either “slips” or “mistakes."

The defining difference between slips and mistakes is that slips happen subconsciously while mistakes involve conscious choices. Slips and mistakes can be further broken down into subtypes. Mistakes can be broken down into knowledge-based, rule-based, and memory-lapse mistakes. Slips can be classified as either memory-lapse or action-based. Action-based slips can then be broken down further into three types. Each of these subcategories will be defined in the following sections.

the-design-of-everyday-things-15.png
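The classification can be summarized as a small data structure (an illustrative sketch of ours, not code from the book):

```python
# Norman's error taxonomy: slips are subconscious failures of execution;
# mistakes are conscious choices built on faulty knowledge, rules, or memory.
ERROR_TAXONOMY = {
    "slip": {
        "memory-lapse": [],
        "action-based": ["capture", "description-similarity", "mode error"],
    },
    "mistake": {
        "knowledge-based": [],
        "rule-based": [],
        "memory-lapse": [],
    },
}

def is_valid(category, subtype):
    """Check whether a (category, subtype) pairing exists in the taxonomy."""
    return subtype in ERROR_TAXONOMY.get(category, {})
```

Note that "memory-lapse" appears under both branches: the same lapse counts as a slip when it derails execution subconsciously, and as a mistake when it corrupts a conscious decision.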

Slips

Most everyday errors are slips, not mistakes. Remember, the execution phase of action is mostly subconscious. We decide what action we want to take, and we see the results, but the actual process between thinking and doing is below our level of awareness. Slips happen during that subconscious transition from thinking to doing.

The fact that slips happen subconsciously and look different for different people leaves a lot open to interpretation. Sigmund Freud proposed that slips were brief glimpses into someone’s subconscious mind. (Shortform note: This is where we get the phrase “Freudian slip.") Most modern psychologists agree that slips are more formulaic and unlikely to reveal hidden secrets.

Counterintuitively, slips happen more frequently to experts than beginners. Beginners use conscious thought to work through every step of a task, so the chance for subconscious slips is low. But experts have overlearned the same task after years of practice, so the process no longer requires conscious attention and is executed subconsciously.

There are two classes of slips: memory-lapse and action-based.

Action-Based Slips

Action-based slips can be broken down even further into subtypes, including capture slips, description-similarity slips, and mode errors.

Mistakes

Mistakes are the result of conscious choices based on faulty information, misinterpretation, or simple forgetting. Mistakes can be classified as knowledge-based, rule-based, or memory-lapse mistakes (not to be confused with memory-lapse slips).

What Causes Errors?

The next step in addressing system errors is to determine the underlying cause. Broadly speaking, there are two main causes of error: system designs that fail to account for basic human traits, and the social environment of the users in those systems.

Systems Built for Perfect Humans

Every person is different, but we all have nearly identical underlying needs and tendencies. We don’t focus well when we’re tired, hungry, bored, or upset; we can only remember a small amount of new information at a time without a chance to absorb it; we struggle to focus intently for long stretches of time. These are normal human traits, but when something goes wrong, they’re quickly reclassified as “errors." In order to truly prevent dangerous accidents, we need machines and systems that not only function properly, but are designed in a way that accounts for human nature.

All of these qualities come into play in cases of interruption. Interruption is a major cause of error. As a rule, humans struggle to efficiently pivot between different types of tasks. Most of us have experienced being interrupted while working intently, only to turn back to the task and think “what was I just doing?” Important information can be lost this way, especially when systems are designed for continuous focus (automatic session timeouts on certain website pages are an example of this). Multitasking is also a form of interruption, since we are asking the brain to quickly switch focus between multiple tasks.

Like all causes of error, interruptions and multitasking are especially dangerous in high-risk environments like medicine and aviation. To prevent interruption, the Federal Aviation Administration (FAA) enforces a “sterile cockpit” rule during take-off and landing, meaning pilots are prohibited from discussing anything other than the task at hand.

Even systems that are designed for humans and machines to work in tandem can have problems. Norman calls this “the paradox of automation," where technology can easily take over tasks that are simple for humans but fails on complex tasks when humans need it most. We learn to rely on technology to the point that we no longer monitor the situation, so when the system fails, it does so with no warning and often with major consequences.

The Social Context of Error

The social environment plays a massive role in understanding error. In corporate environments, economic pressure is often the culprit. The larger the system, the more expensive it is to shut down even temporarily to investigate and fix errors. The pressure to keep things running as usual is called time stress, which is a major cause of accidents.

Social and economic pressures played a critical role in the Tenerife airport disaster, a 1977 crash in the Canary Islands that remains the deadliest accident in aviation history. The accident involved two planes: one taking off before receiving clearance, the other taxiing down the runway at the wrong time due to miscommunication with air traffic control. The first plane had been rerouted and delayed, and the captain decided to take off early to get ahead of a heavy fog rolling in, ignoring the objections of the first officer. The crew of the second plane questioned the order from air traffic controllers to taxi on the runway, but obeyed anyway. Social hierarchy and economic pressure to keep things moving led both crews to make critical mistakes, ultimately costing 583 lives.

Why should designers care about social context? Although it may not seem obvious, social pressures are a design issue. They affect the way we think, feel, and behave, ultimately influencing how we interact with the environment. Beyond that, social systems themselves are often the product of design, since they are shaped by institutional rules, hiring practices, traditions, and choices.

Finding the Root Cause

When accidents happen in industrial settings, a root cause analysis is usually performed. If the cause is technology-related, there is a full-scale investigation into the root problem, and engineers continue testing until the problem has been designed out. But if human error is discovered, the investigation usually stops immediately. Someone is held accountable and the system moves on unchanged.

Aviation history provides an example here too, this time of a 2010 crash involving a US Air Force fighter jet pilot. The particular type of plane the pilot was flying at the time had a history of malfunction that caused oxygen deprivation for the pilot. The official Air Force investigation into the crash concluded that the cause was human error—the pilot had failed to recognize the problem and correct the dive. Years later, the Department of Defense reanalyzed the case and concluded that, while the Air Force report was technically correct, it stopped before asking the crucial question: why didn’t the pilot notice or correct the problem? Given the nature of the crash and the history of the plane, it’s likely the pilot was already unconscious due to lack of oxygen. Is that human error?

One tool for more effective root cause analyses is the “Five Whys,” developed by Sakichi Toyoda, the founder of Toyota Industries. As the name suggests, Toyoda’s idea was to ask “why?” five times in order to find the true root of a problem. In practice, the number can vary, but the core idea is to push past the first answer to a problem to find underlying causes. This technique is still in use at Toyota today.
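The Five Whys procedure can be sketched as a simple loop. This is our own illustrative rendering (the causal chain below is a classic textbook-style example, not a real incident report):

```python
def five_whys(problem, explain, max_depth=5):
    """Follow the chain of "why?" questions from a symptom toward a root cause.

    `explain` maps an observation to its cause, or None when no deeper
    cause is known. Returns the full causal chain, symptom first.
    """
    chain = [problem]
    for _ in range(max_depth):
        cause = explain(chain[-1])
        if cause is None:
            break
        chain.append(cause)
    return chain

# Hypothetical causal chain for a factory stoppage:
causes = {
    "machine stopped": "fuse blew",
    "fuse blew": "bearing seized",
    "bearing seized": "no lubrication",
    "no lubrication": "pump filter clogged",
}
five_whys("machine stopped", causes.get)[-1]  # -> "pump filter clogged"
```

The point of the exercise is in the last line: the actionable fix (clean or replace the filter) only surfaces several "whys" below the symptom, where a blame-oriented analysis would have stopped at "the operator let the fuse blow."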

Even when the true root cause of an accident is determined, there is always pushback against changing the system, since change is difficult and often costly. There is also a cultural element: we are so used to the idea of human error that people deemed responsible for accidents usually agree that they were at fault, even if others have made the same mistake before them. This makes it hard to reframe individual error as system error.

Deliberate Error

Sometimes, we choose to deliberately ignore a certain rule, or do things we know we shouldn’t do. These infractions tend to be seen as minor until tragedy happens. Most drivers have driven over the speed limit at some point—this is regarded as normal behavior. But when a crash happens due to excessive speed, the driver is blamed for the same behavior.

Why do we ignore rules, even when it’s dangerous? In corporate environments, there are often two sets of rules: the formal rules that are written into employee contracts and adhere to the required laws, and the informal social rules that determine what is actually expected of each employee. These rules often conflict to the degree that doing a job well (satisfying informal expectations) requires violating some of the formal rules. The more employees break the rules, the better they appear to perform, which reinforces the rule-breaking behavior. Social expectations can make us more likely to make mistakes.

Social Pressure and Error Reporting

When errors inevitably do occur, social pressures affect whether we report them properly. Fear of judgment or repercussions stops us from reporting both our own errors and errors made by others. But identifying errors as they occur is crucial for preventing them from escalating into major problems. How can companies combat the stigma against reporting errors?

Once again, Toyota’s manufacturing division offers a helpful example. Part of their error-reduction strategy is Jidoka, or “automation with a human touch." In this philosophy, error is an expected part of the manufacturing process and should be addressed immediately, even if it means shutting down an entire production line. Employees are expected to report errors immediately and can face repercussions for not reporting them, which combats social pressure to remain quiet.

Toyota engineers are also responsible for the idea of poka-yoke, or “error proofing," typically through constraints. Covering emergency switches so they can’t be accidentally activated, attaching physical guides to machines to ensure parts are aligned correctly, and designing parts with asymmetrical attachments are all examples of poka-yoke.
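The same principle translates naturally to software: reject invalid states at the boundary so an error fails loudly instead of propagating. A hypothetical sketch of a "keyed" part (the names and orientation values are ours):

```python
# Hypothetical poka-yoke sketch: an asymmetrical ("keyed") part fits only
# one way, so a wrong orientation fails immediately at assembly time
# instead of producing a defect that surfaces downstream.
VALID_ORIENTATION = "notch-up"

def mount_part(orientation):
    """Attach a part to its socket; the keyed socket rejects anything else."""
    if orientation != VALID_ORIENTATION:
        raise ValueError(f"part does not fit oriented {orientation!r}; "
                         f"rotate until the notch faces up")
    return "mounted"
```

Like a physical constraint, the check doesn't rely on the worker's attention or memory: the wrong action is simply impossible to complete.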

Another example of effectively combating social barriers to error reporting is NASA’s Aviation Safety Reporting System. The system is voluntary, and reports are stored without identifying information, so pilots don’t have to worry about repercussions from their employer or the FAA. Beyond that, NASA employees analyze the submissions and provide reports on common sources of error to the FAA and individual airlines, ultimately improving aviation safety across the entire industry.

How Can Designers Minimize Errors?

Although it’s not possible to eliminate errors completely, thoughtful design can reduce the frequency or severity of those errors. The first step in that process is detecting when and where errors occur.

Detecting Error

A truly error-proof design makes it easy to detect errors before they become dangerous. Doing this requires understanding how we notice errors in the first place, and more importantly, why we sometimes fail to notice them.

In general, slips are easier to detect than mistakes. Detecting simple errors like action-based slips is typically easy—if you accidentally put your keys in the freezer, you’re likely to realize it pretty quickly. Memory-lapse slips are harder to detect until something cues retrieval of the memory (for example, not realizing you left your wallet at home until you need to pay for gas).

Mistakes are difficult to detect because they are conscious choices. By definition, we usually don’t recognize mistakes right away because we genuinely believe we’re making the right choice as we’re making it. Mistakes only become apparent later, when something goes wrong and the cause is traced back to the original mistake.

One reason we don’t catch mistakes earlier is the natural human tendency to explain away minor deviations from the norm. The author tells a story of driving with his family to a ski resort in California and passing several billboards for Las Vegas hotels. The family agreed that advertising on billboards located hours away from Las Vegas must be an odd marketing strategy and carried on with their journey, not realizing until two hours later that they’d missed a turn and were mistakenly headed straight toward Las Vegas. We’re much more likely to notice novel information in our environment, but once we have an explanation, it’s no longer novel. This explains why the author’s family was able to ignore all the other Las Vegas advertisements they passed before finally realizing their mistake.

In the aftermath of an accident, the chain of events leading up to it often seems obvious. This is the power of hindsight bias, or the tendency to overestimate our ability to have predicted a certain outcome before it happened. We wonder how anyone could have missed the signs of an important error when they seem so obvious—in reality, without the benefit of hindsight, we’d most likely have missed them too.

One way to improve error detection is with checklists. Checklists are helpful tools, but they need to be designed with social influences in mind. Having multiple people run through checklists helps with error-proofing, but this should always take the form of two people working simultaneously, not sequentially. Having one person run through a checklist now and another person double-check things later can actually lead to more errors, since there is a tendency to let things slide, knowing someone else is likely to catch the mistake later. But when everyone takes this attitude, errors quickly add up. (Shortform note: To learn how to correctly use checklists, read our summary of The Checklist Manifesto.)

Understanding How Accidents Happen

There is rarely only one cause of an accident. More frequently, accidents are the result of a number of conditions lining up in a particular way. James Reason, an accident researcher, calls this “the Swiss cheese model.” Think of each slice of cheese as a condition affecting a certain task (for example, weather). The holes in each slice are all the possible configurations of that condition (in the weather example, this would mean a hole for rain, a hole for snow, a hole for bright sun, and so on).

For accidents and errors to occur, the holes in several slices have to line up perfectly. If the hole in any one slice doesn’t line up, the event can’t happen.

the-design-of-everyday-things-16.png

In a car accident, for example, the four slices above might represent weather, alertness of the driver, condition of the brakes, and speed. If the holes line up perfectly—if it’s raining, the driver is sleep deprived, the brakes are worn out, and the driver is speeding—an accident is likely. But if any of those holes didn’t line up (like if the driver were more alert or the brakes were new), the accident could likely have been avoided.
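The multiplicative logic of the Swiss cheese model can be made concrete with a short simulation. In this hedged sketch, each layer’s “hole” probability is invented purely for illustration; the point is that an accident requires every hole to line up at once, so fixing any single layer collapses the risk:

```python
import random

# Monte Carlo sketch of the Swiss cheese model. The probabilities below
# are made up for illustration: an accident occurs only when the "hole"
# in every protective layer lines up in the same trial.

def accident_probability(hole_probs, trials=100_000, seed=0):
    """Estimate how often all layers fail simultaneously."""
    rng = random.Random(seed)
    accidents = sum(
        all(rng.random() < p for p in hole_probs)  # every hole lines up
        for _ in range(trials)
    )
    return accidents / trials

# Four layers: bad weather, tired driver, worn brakes, speeding.
layers = [0.3, 0.2, 0.1, 0.2]
print(accident_probability(layers))  # roughly 0.3 * 0.2 * 0.1 * 0.2 = 0.0012
# Fixing any one layer (e.g., new brakes, so that hole never opens) removes
# the alignment entirely:
print(accident_probability([0.3, 0.2, 0.0, 0.2]))  # 0.0
```

With these example numbers, the four layers align in roughly 0.12% of trials, and zeroing any single layer drops the accident rate to zero, which is exactly why accident prevention can target any one slice rather than all of them.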

To prevent accidents, we need to prevent the holes lining up. There are three main ways to do this:

Specific Guidelines for Designers

How can we create products and services that minimize errors, especially dangerous ones? Norman suggests some concrete ways:

Is Design Always the Answer?

Unfortunately, design only goes so far, and there is still such a thing as “real” human error. This is especially true in cases where an activity can have catastrophic results if something goes wrong, but the likelihood of something going wrong in any one particular case is relatively low. When we know there’s a one in a million chance of something terrible happening, we assume we’re safe. The problem, of course, is that someone has to be that one in a million. We see this sense of invincibility at play when people deliberately make risky choices, like ignoring safety measures in order to get a job done faster, or driving after drinking.

However, even in cases that appear to be pure human error, there is often a design element at play—specifically, the design of systems. For example, we know that a sleep-deprived doctor is far more likely to make critical errors than a well-rested doctor. Yet hospital procedures still frequently have doctors working dangerously long shifts with little to no sleep. If a doctor in such a hospital makes an error, is it her fault? Or is it the fault of the complex system that required such long working hours in the first place?

Thankfully, some industries have adjusted the design of their systems to ensure employees are in top form before performing potentially dangerous operations. In aviation, pilots are only permitted to fly a certain number of hours without rest, but they must complete a minimum number of flying hours to keep their license active. This ensures that pilots only fly when they’re able to do so safely, and also that they remain in good practice. This kind of preventative strategy is called resilience engineering.

Resilience Engineering

Preventative approaches to safety are especially important in industries where errors could lead to particularly disastrous effects, like medicine, transportation, and electrical power systems. Resilience engineering focuses on building robust systems that can withstand any complications they might face, whether from human error, system breakdown, or external forces like natural disasters.

The resilience engineering approach assumes that errors are inevitable, that people perform differently under extreme stress, and that the points where systems are most vulnerable to error shift constantly with a changing environment. These assumptions give rise to three main tenets of resilience engineering.

Exercise: Identify System Errors

This chapter discussed the ways that systems are often designed without human needs in mind. Let’s connect this to your own experience.

Chapter 6: Design Thinking

“Design thinking” is the process of examining a situation to discover the root problem, exploring possible solutions to that problem, testing those solutions, and making improvements based on those tests. This process is iterative, which means it is repeated as many times as necessary, each time with slight improvements based on previous iterations.

Design thinking is an important part of the philosophy of human-centered design. This chapter describes two models for thinking about the process of design thinking and compares these approaches to the traditional design process. But these approaches represent the ideal, and they’re not always feasible in practice. The chapter ends with a discussion of the practical constraints that prevent designers from working through each step of the design thinking process in full.

(Shortform note: Chapters 6 and 7 are new to the 2013 edition.)

The Design Thinking Process

Design thinking has two phases: finding the right problem and finding the right solution. Each of the two phases has two steps. The entire process is often described using “the four D’s”: discover, define, develop, deliver.

The first step in the design process is discovering the problem. This seems like it would be easy, since designers are typically hired to solve a particular problem. But the problems designers are asked to solve are often downstream effects of the “real” problem. In other words, designers are hired to address symptoms, but good designers will go beyond that and dig into the root cause (tools like root cause analysis and “The Five Whys” are especially useful at this stage).

The next step is defining the problem. During the “discover” phase, design teams will have brainstormed all the factors that could be influencing the problem; now they investigate each of those factors to see which are most relevant. This investigation helps the team define the actual problem they will attempt to solve.

Once the real problem is discovered and defined, designers can begin developing solutions. This is the part of the process that most people picture when they imagine designers at work—a flurry of new ideas, rapid sketches, and rough prototypes. Instead of committing to a solution right away, good designers brainstorm as many solutions as possible, including ideas that are obviously not feasible. Even a seemingly ridiculous solution might have an underlying principle that ends up guiding the real design.

Lastly, designers will choose the most promising solution and refine it into final form by continuously testing, making changes, and retesting. This process culminates in delivering the final design to the client.

The Double Diamond Model

An iterative design process can be represented visually with the “double diamond diverge-converge model.” The first step of each phase of the process is divergent, which means designers focus on expanding the problem by generating a wide range of questions and ideas. The second step of each phase is convergent, which involves selecting one of those questions or ideas and refining it until everyone converges on a single outcome.

the-design-of-everyday-things-17.png

The Cycle of Iteration

The double diamond model is a helpful conceptual overview of the process of design thinking, but it doesn’t give us much practical guidance for how to go about that process. In practice, there are four main tasks of design thinking: observation, idea generation, prototyping, and testing. This process is iterative, so it continuously repeats itself until the final product is developed. Human needs are complicated, so it’s hard to get any design right on the first try. Even defining the problem correctly can be difficult, since people’s observed behavior is often very different from their self-reports of the same behavior.

An iterative design process conflicts with traditional product development mindsets that avoid failure at all costs. Iterative design instead deliberately courts failure as often as possible, since each failure gives designers valuable feedback about what needs fixing. A popular mantra at IDEO, one of the most influential design firms in the world, is “fail frequently, fail fast.”

Observation

The first step in addressing any design problem is observation. The most useful observation for design teams takes place in the real world, not in a controlled setting like a lab. One way to do this is through applied ethnography, which involves observing users in their usual environments, carrying out everyday activities, for as long as possible. This gives the designer the most comprehensive picture of users’ needs and expectations. Applied ethnography is based on techniques of academic anthropology but has been adapted to be much faster and have a more specific aim.

It’s important that the people being observed are part of the intended audience for the final product. The nature of the product determines the type of approach designers take to choosing people to observe. Activity-based approaches are useful for products that are used in more or less the same way, regardless of cultural differences (like cars, computers, and phones). At first, this may seem like a contrast to human-centered design, since the focus is no longer on individual users. But activity-based design is ultimately a tool of human-centered design, since it focuses on helping the user create a working conceptual model.

It’s important to distinguish activities from tasks. With a car, for example, “driving” is the activity, while “steering” and “checking mirrors” are tasks that serve that activity. This distinction relates to the goal of each action: Activities relate to “be-goals,” or goals tied to our choices and self-image, while tasks are “do-goals,” or simple steps that only matter because they are part of a higher-level activity.

When the product is culturally specific or meant to indicate status within a particular group, a culture-based approach to observation can be useful. This is especially true for products like eating utensils and clothing. This type of observation is best done in person, with native members of the particular culture in their local environment. This is the only way to get a true sense of how and where the product will be used.

Design Research Is Not the Same as Market Research

The process of observing people to figure out how they might use a certain product might remind you of market research. The two processes are similar, but have fundamentally different aims. Designers want to know what people need and how they might use certain products, while marketers want to know which groups of people are most likely to buy the product.

Another difference between the two disciplines is breadth versus depth. Market researchers typically survey as many people as possible. They are interested in averages, not specifics, and typically use quantitative data collection methods. In contrast, design researchers typically study far fewer people in much greater depth, often observing one person for hours or even days. Their methods are qualitative and are useful for uncovering specific needs and problems rather than averages.

A successful product is both well-designed and well-marketed. Without a solid marketing strategy, even the best product will fail, since users can’t appreciate the design if they’re never enticed to buy the product in the first place. Without quality design, even the best marketing strategy only goes so far, since users are unlikely to become repeat customers if the product design is clunky and confusing.

Idea Generation

Observation provides the necessary background knowledge to both discover and define the problem. The first step to exploring solutions is idea generation. There are three rules for successfully generating ideas.

Prototyping

The next step in the design process is to explore the most promising ideas in more detail. This is done through rapid prototyping, which focuses on creating very rough models of several ideas instead of a more accurate model of one specific idea. Rapid prototyping can happen through sketches, cardboard models, arrangements of sticky notes, spreadsheets, or even skits. More detailed prototypes can be tested once the list has been narrowed down to one or two ideas.

The “Wizard of Oz technique” can be helpful for testing early prototypes. Just like the wizard in the classic story uses smoke and mirrors to make himself appear larger and more powerful, designers can create a facade that mimics the experience of the final design (for example, by having a research assistant play the part of a future computer program and supply answers in an “automated” chat with users).

Testing

Once the team has narrowed the list of possible solutions to one idea and developed that idea into a more sophisticated prototype, it’s time for the testing phase. This begins with bringing in members from the target user group (usually five is enough) and having them use the product how they normally would.

If the product is meant to be used by just one person, it’s useful to put them in pairs, with one using the prototype directly and the other offering suggestions, commentary, and questions. This requires users to talk through their thought processes out loud, which is helpful for designers observing the testing session.

Testing only five people might seem small, but keep in mind that testing is part of an iterative process. After the first testing session, designers will use the feedback from the first five users to tweak the design of the prototype. Then, a new set of five potential users will be tested with this iteration. This method allows for continuous feedback on increasingly successful iterations, rather than testing a large group just once and hoping the changes made as a result will be successful.
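The logic of small, repeated test groups can be quantified with a common rule of thumb from usability research (the Nielsen-Landauer model, an outside assumption rather than something from this book): if any single tester has roughly a 31% chance of surfacing a given usability problem, a handful of testers finds most problems, and additional testers in the same round add little:

```python
# Rule-of-thumb model of usability testing yield (Nielsen-Landauer; an
# outside assumption, not from this book): each tester independently has
# about a 31% chance of surfacing any given problem.

def problems_found(n_testers: int, hit_rate: float = 0.31) -> float:
    """Expected share of usability problems found by n testers."""
    return 1 - (1 - hit_rate) ** n_testers

for n in (1, 5, 15):
    print(n, round(problems_found(n), 2))  # 1 -> 0.31, 5 -> 0.84, 15 -> 1.0
```

Under this model, a single round of five testers surfaces roughly 84% of problems, and tripling the group barely helps. Iterating with fresh small groups pays off more, because each redesign creates a new set of problems to find.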

Is an Iterative Process Always the Best Option?

Unlike the cyclical process of iteration, the traditional design process is linear. It often uses a “waterfall” method, where decisions are made sequentially without pausing to test and reexamine. Linear design can also use “gated” methods, where periodic management reviews serve as checkpoints before proceeding to the next stage of the design process.

Linear methods are traditional for a reason. They make logical sense and are usually far more cost-effective up front, since they typically proceed faster and spend less time in the observation and testing stages. The downside is that the lack of iterative testing means some design flaws may not be discovered until the product is already on the market, which can be a costly mistake.

In general, linear methods are useful for large-scale, high-budget projects, where the benefits of iteration would be outweighed by the high cost of failure. On the other hand, iterative methods are best suited for smaller projects where prototyping and testing can be done without enormous financial investment.

There are hybrid methods that balance these two approaches to maximize the benefits of each of them. An adapted gated method has tightly scheduled management reviews, but allows for free iteration between those reviews. This keeps the process moving along while still allowing designers to home in on the true problem and best solution. It’s also possible to use iterative design for the problem definition phase of a project, then switch to a linear method for the solution development phase.

Design Thinking in the Real World

Human-centered design has become a buzzword, and even well-intentioned companies often fall short of the ideals. Pushing a product through a lengthy iterative design process that involves weeks of field observations is a great idea in theory, but in practice, budget and time constraints play a much bigger role. This is why the author proposes his own eponymous “Law of Product Development”: that all product development projects are “behind schedule and above budget” from the moment they start.

What Makes the Design Process Difficult?

Beyond time and budget, the makeup of the product development team can present difficulties. The best teams are multidisciplinary, combining unique knowledge from different specialties and from every phase of the product development process. The downside of this arrangement is that every team member typically thinks their own discipline is the most important. To manage this, it’s important for teams to reach a mutual understanding of each other’s strengths and needs, and why those needs are important for the overall success of the product.

Strong management is crucial to prevent individual departments from making independent changes to the design, but getting everyone on the same page takes time. To streamline the process, Norman recommends having the design research and market research teams working in the field consistently, even before there is a specific product to be tested. This gives the entire team a head start with a solid foundation of user needs and motivations that can be honed through the rest of product development.

Designing for Special Populations

Another hurdle in the design process is conflicting product requirements. Designing a single product that satisfies all members of the product development team and meets the needs of a diverse group of users is a tall order.

One tool for designing to suit as many users as possible, despite the wide range of human needs and traits, is physical anthropometry. This field studies measurements of the human body, both at rest and while performing specific tasks (like range of motion for reaching behind you). These standardized measurements allow designers to work within percentiles, ensuring that the final product is likely to work for as many users as possible. But no one design works for everyone, and even the most inclusive designs can be unusable for tens of millions of users.
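As a hedged sketch of how percentile-based design works (the reach measurements below are invented sample data, not real anthropometric figures), a designer might compute the range that accommodates the 5th through 95th percentile of sampled users:

```python
import statistics

# Hypothetical percentile-based design sketch: given sample reach
# measurements (made-up data, in cm), find the range that covers the
# 5th through 95th percentile of users.

reaches = [62, 65, 67, 68, 70, 71, 72, 73, 74, 75,
           76, 77, 78, 79, 80, 81, 83, 85, 88, 92]

cuts = statistics.quantiles(reaches, n=20)  # 19 cut points: 5%, 10%, ..., 95%
low, high = cuts[0], cuts[-1]               # 5th and 95th percentiles
print(f"design for reaches between {low:.1f} and {high:.1f} cm")
```

Designing to this band covers 90% of the sampled population, which makes concrete the point above: even an inclusive percentile range deliberately leaves out the tails, and at the scale of a mass-market product those tails are tens of millions of people.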

Not all products are meant to be used by every type of person. The author gives the example of clothing—we don’t expect every piece of clothing to fit every person. Instead, we design and produce different sizes for different people. Designing for users with disabilities is an important area that requires designers to focus on specific needs rather than averages.

Traditionally, the design of assistive devices like mobility aids has focused exclusively on function. This is why when we think of a walker or wheelchair, most of us picture a clunky contraption that looks more at home in a hospital than anywhere else. The institutional look of these designs contributes to the social stigma already faced by people with disabilities. After all, there is little functional difference between a wheelchair and a bicycle, but the two objects provoke very different social reactions.

Universal Design

A universal design approach offers a different perspective on designing for disability. Universal design creates products that are usable by the widest range of people, not by designing for the “average” person, but by designing for the highest need. If a product, environment, or service is designed with disability access in mind, it will also be usable for those without disabilities.

A great example of this is OXO, a company that designs and manufactures kitchen utensils. The founder of OXO started the company by designing a vegetable peeler that wouldn’t put stress on his wife’s arthritic hands. The resulting product replaced the thin metal handle of traditional peelers with a thick, ergonomic rubber grip. It was then marketed not as a medical or adaptive product, but as simply a better vegetable peeler, and launched the company into a household name. The lesson here is that designing for disability typically ends up benefiting everyone. (Shortform note: This principle is known as the “curb cut effect.”)

One of the keys to universal design is flexibility. Flexible designs that allow users to customize the environment to their own needs can serve the greatest number of people with a single product. We often see this kind of flexibility in products like ergonomic office chairs, where users can adjust the height of the seat, headrests, and armrests to fit their specific needs.

When Is “Bad” Design a Good Idea?

Until now, this book has focused on identifying and working toward more user-friendly designs. But there are also cases where deliberately designing something to be difficult to use is the best design choice. For example, think back to Norman doors. A door with hardware mounted in a hidden recess at the very top of the door would normally be a nuisance. But if the door is part of a school and is only meant to be operated by adults, not children, mounting the hardware where only adults can see and reach is good design.

“Bad” or intentionally confusing design has many applications. For example:

However, there is a difference between sloppy design and intentionally designing confusing products. Intentionally confusing designs still need to be easy to use and operate for some users, just not all users (for example, a child-proof cap on a medication bottle is not an example of good design if an adult can’t open it either). In other words, you need to know the rules in order to break them well. This can take many forms, such as:

Exercise: Observe the Right Users in the Right Settings

We know that observation is the first step of the design thinking process, but choosing the right users and settings can be difficult. Let’s practice this now.

Chapter 7: Design in the Real World: Competition, Innovation, and Ethics

In an ideal world, every company would implement a human-centered design approach using a research-intensive, iterative design process. In the real world, this is far easier said than done. Producing well-designed products requires keeping the company afloat, which often translates to making concessions in the design process.

The Pressures of Business

To keep profit margins high, manufacturers typically focus on price, features, and quality (in that order), so a lengthy and expensive design process that drives up the ultimate price of the product isn't practical. Even with the perfect combination of those three factors, products (and companies) can still fail purely due to timing. Successful products capitalize on the zeitgeist (German for “spirit of the time”), hitting the market at just the right moment in the cultural and economic climate. On top of all that, businesses also need to identify and market to the “real” customers—not the end user, but the distributors who decide which products to sell in their stores. Each of these pressures has an important impact on the design of the final product.

“Featuritis”

Competitive pressures can create unexpected consequences. For example, “featuritis” is the “disease” affecting product development, characterized by what Norman calls “creeping featurism,” or the temptation to add more and more features to an already well-designed product. There are several possible sources of creeping featurism, including:

The problem with creeping featurism is that it often degrades the overall quality of a product. Instead, Norman recommends companies focus on their strengths and develop them even further. Rather than trying to win over customers with new features, it’s best to do one thing better than anyone else on the market.

Turning Ideas Into Successful Products

Technological change happens quickly, and new product designs emerge fast enough to keep up. But turning an idea into a successful product happens much more slowly, if it happens at all. Early models of new technologies are often prohibitively expensive, as was the case with digital cameras. Apple’s 1994 QuickTake digital camera was one of the first on the market, but it failed quickly, as consumers found the new technology confusing, expensive, and unnecessary.

Another cause of delay is the risk-averse attitudes of large corporations. Radical innovation has a high failure rate, and most big companies would rather stick to a proven product. Smaller companies are more willing to take these risks, but often don’t have the resources to withstand initial struggles. This is why most start-up companies fail, regardless of the quality of their ideas.

Because most of the product development process happens out of the public eye, many of us don’t realize just how long it can take to turn a product idea into reality. Looking deeper into the development of video calling technology and the QWERTY keyboard can help illustrate this point.

Case Study: Videophone

The idea of communicating via two-way video was first proposed in 1879, just a few years after the invention of the telephone. The first working videophone was created in the 1920s but did not become commercially available in the United States until the 1960s, when it quickly failed. It wasn’t until the 2010s that technology finally caught up to the vision, by then well over a century old, and video calling exploded in popularity.

In the 1879 cartoon that first publicly imagined videophones of the future, the invention was credited to Thomas Edison. But Edison did not invent the videophone. This is an example of Stigler’s Law, where famous names are attached to products purely by reputation. We associate videophone technology with the names of companies who finally successfully popularized it, not with any of the actual inventors or original distributors. The lesson here is that being first is not always an advantage if the timing is not right.

Case Study: QWERTY Keyboards

Keyboards are another example of the long and winding road of product development. The modern QWERTY keyboard was first developed in the 1870s for mechanical typewriters. On these machines, each key was connected to the interior typebar by a metal lever. Using keyboard designs with more logical layouts (alphabetical, for example) often resulted in these levers crossing and jamming the machine. The QWERTY arrangement was created for the specific purpose of spacing out these levers to make them far less likely to jam.

We no longer need to worry about jamming metal levers, and many updated keyboard designs exist that make typing easier and faster. So why do we still use a keyboard layout designed to accommodate outdated technology? This is a classic example of the legacy problem. The QWERTY layout was adopted by Remington, the first company to successfully produce and market mechanical typewriters to a wide audience. An entire generation learned to type on Remington typewriters, and thus on the QWERTY layout—a powerful legacy.

By the time modern computer keyboards were invented, the existing keyboard layout was so universally ingrained that introducing a new design was nearly impossible. It doesn’t matter that alternate keyboards are faster and more efficient, or that extensive use of the QWERTY layout can contribute to symptoms of carpal tunnel syndrome; the cost of updating technology and retraining entire generations is too high.

Radical vs. Incremental Innovation

Technological innovation takes two forms: radical and incremental. Radical innovation refers to the revolutionary, paradigm-shifting ideas that change the fundamental nature of a product. Incremental innovation refers to the slower, iterative process of refining those ideas to make them better and more user-friendly. For example, the invention of the automobile was a radical innovation, but incremental innovation is responsible for the steady stream of small changes that turned the first automobile into the modern car.

Incremental innovation happens through repeated iterative design cycles of observation, idea generation, prototyping, and testing. This happens not just within a single design project, but across the industry and over years or even decades. This process is sometimes called “hill climbing,” based on the metaphor of climbing a hill blindfolded. Each step is a test: if you move your foot in one direction and find higher ground, step forward—if not, pivot and try again.
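The hill-climbing metaphor maps directly onto a simple algorithm. This illustrative sketch (the scoring function is made up) keeps a small random change only when it improves on the current design, just as the blindfolded climber keeps a step that finds higher ground:

```python
import random

# Minimal sketch of blindfolded hill climbing: each step is a small
# experiment, kept only if it scores better than the current design.

def hill_climb(score, start, step=0.1, iterations=200, seed=0):
    rng = random.Random(seed)
    x = start
    for _ in range(iterations):
        candidate = x + rng.choice([-step, step])  # try a small change
        if score(candidate) > score(x):            # keep it only if better
            x = candidate
    return x

# A made-up "quality" landscape that peaks at x = 2.
peak = hill_climb(lambda x: -(x - 2) ** 2, start=0.0)
print(round(peak, 1))  # converges near 2.0
```

The blindfold also hints at the limits of incremental innovation: a climber who only ever accepts uphill steps can get stuck on a low hill, never discovering a taller one elsewhere, which is the gap radical innovation fills.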

Radical innovation happens through the invention of new technologies as well as the combination of existing technologies. For example, music, television, and book publishing were once entirely separate industries. Now, internet streaming services have collapsed these industries into one platform, completely changing the way media is made, distributed, and consumed.

Radical innovation can be especially important for vital services (like housing, transportation, education, and medicine) where incremental change hasn’t addressed underlying fundamental issues. The complicated infrastructure of these fields makes change especially difficult, but the potential benefits of radical innovation are also much greater.

The Future of Technology

All this innovation raises questions about the future. With technology evolving at such a rapid pace, how will our relationship to it change? On one hand, humans and technology are more linked than ever. Artificial intelligence and advanced medical technology like pacemakers and bionic prosthetics are blurring the line between human and machine in a brand new way. Our cultures have also changed to reflect this new relationship.

However, while technology continues to evolve, our needs remain largely the same. We still need food, water, sleep, shelter, and social interaction, among other things. The changes in our technology affect how we meet those needs, but not the needs themselves. Phones have become smaller and now contain cameras, but the need to talk to one another from a distance and to record history in a visual way has not changed. Keyboards have evolved and may eventually be replaced entirely, but there will always be a need to record information in written form. In other words, human needs won’t change, but the way they’re satisfied will.

Do We Rely Too Much on Technology?

With technology becoming so integrated into our daily lives, are we becoming too dependent on it? Do our devices make us more capable, or less? These questions are not new—even Socrates argued that the invention of books would decimate the human capacity for memory, discussion, and independent thought. But books have arguably made us more intelligent, not less. Not having to memorize every story or essay we encounter has made it possible for us to engage with more sources than ever before.

In the same way, technological innovation has made it possible to automate tasks that once took up huge amounts of time and energy—resources that can now be applied to more than just the necessary activities for survival. Our intelligence hasn’t changed, only the tasks we apply it to.

The key is in using technology to do the jobs technology can and should do. This frees up time and effort for humans to do even bigger and better things than before. Relying on human skills for part of a task and technology for another part is called distributed cognition, and the combination typically outperforms what either people or machines could do alone.

Pitting human chess masters against computer opponents is a classic example of this effect. In a one-on-one competition, modern computer programs almost always win. But when a human player and the computer work as a team, they beat both human and computer opponents. This doesn’t require the human team member to be a grandmaster (or the computer to be running the world’s most advanced chess software) so long as they work effectively as a team.

The Ethics of Good Design

Design impacts people on a cultural and societal level, not just individuals. It can be a powerful force for social change, which means it must be used responsibly. What does that mean for designers?

The Future of Content Creation

The almost universal availability of new technologies is changing the way we create and engage with information. Tools like blogging sites, integrated cameras, and free editing software make it possible for anyone to publish new media on any subject and reach a wider audience than ever before. Norman calls this “the rise of the small.”

Easy access to technology is a game changer for parts of the world with less developed infrastructure. Access to information and low-cost technology has made it possible for people to innovate in brand new ways and develop solutions for their needs based on the resources they have available (like bicycle-powered water pumps and solar chimneys). These designs can then be shared online and adopted wherever they might be useful, even across the world.

While technological advances make amateur content creation easier and more accessible, the opposite is true for professional content creators. The programs and devices required to make professional-quality media are more sophisticated (and more expensive) than ever before. This leveling of the playing field is a double-edged sword: while accessing information is easier than ever, not all that information is true, and creating media that has been fact-checked by experts requires expensive professional resources.

Consumerism and Sustainability

Like content creation, the design of durable goods is also changing as technology evolves. The ability to compare items from different companies online before deciding which to purchase has made competition between manufacturers fiercer than ever. In this environment, more emphasis is placed on aesthetics and features that are likely to entice buyers and give companies an edge. A product can be perfectly designed for its function, but will still lose out to more attractive versions, even if they’re far less functional.

The need to attract buyers creates another hurdle. While services like healthcare and food distribution are self-sustaining (because there will always be a need for them), durable physical goods are not: If everyone who needs a particular product purchases one, there’s no one left to sell it to. One way manufacturers get around this is through planned obsolescence, the practice of designing products that will break down after a certain amount of time and need to be replaced.

Trends are another tool companies use to entice repeat buyers—if there is a newer, more fashionable version of a product, buyers are more likely to upgrade. This creates a cycle of consumption: buy something, use it until it breaks or goes out of style, throw it away, and buy another. While this cycle is good for business, the waste it generates is horrible for the environment.

Thankfully, the combination of new technologies and a growing cultural awareness of sustainability issues is creating a new paradigm. It is easier than ever to design sustainable versions of products (like streaming services rather than physical copies of movies), and environmental friendliness is now a selling point.

Exercise: Does Technology Make Us Smart?

The debate over whether technology makes us more or less intelligent rages on. This exercise will help you examine how technology affects your own life.

Exercise: Consider Consumer Values

Think about the power you have as a consumer to influence design.

Exercise: Reflect on The Design of Everyday Things

Now that you’ve finished the summary, let’s figure out how to apply the lessons of the book to your own life.