Thinking in Systems is an introduction to systems analysis. Much of the world operates as complex systems rather than simple cause-and-effect relationships, and many of the world’s problems arise from defects in how those systems work. Understanding how systems work, and how to intervene in them, is key to producing the changes you seek.
A system is composed of three things: elements, interconnections, and a function or purpose.
To define it more cohesively, a system is a set of elements that is interconnected in a way that achieves its function.
Many things in the world operate as systems.
Stocks and flows are the foundation of every system.
A stock represents the elements in a system that you can see, count or measure. It can be commonly thought of as an inventory, a store, or a backlog.
Flows are the means by which the stocks change over time. Inflows increase the level of stock, while outflows decrease the level of stock.
Let’s take a simple system: a bathtub.
This can be drawn on a stock-and-flow diagram, as here:

Many systems are analogous to the bathtub:
Stocks take time to change. In a bathtub, the flows can change almost instantly, but the stock cannot: it takes just a second to turn on the faucet, but minutes to fill the tub.
Why do stocks change so gradually? Because it takes time for the flows to accumulate. Stocks therefore act as buffers, delays, and lags; they are the system’s shock absorbers.
From a human point of view, this has both benefits and drawbacks. On one hand, stocks represent stability. They let inflows and outflows go out of balance for a period of time.
On the other hand, a slowly-changing stock means things can’t change overnight.
As humans, when we look at systems, we tend to focus more on stocks than on flows. Furthermore, we tend to focus more on inflows than on outflows.
This is just one example of how we, as simplicity-seeking humans, tend to ignore the complexity of systems and thus develop incomplete understandings of how to intervene.
Systems often produce behaviors that are persistent over time. In one type of case, the system seems self-correcting—stocks stay around a certain level. In another case, the system seems to spiral out of control—it either rockets up exponentially, or it shrinks very quickly.
When a behavior is persistent like this, it’s likely governed by a feedback loop. Loops form when changes in a stock affect the flows of the stock.
Balancing feedback loops are also known as negative feedback loops or self-regulation.
In balancing feedback loops, there is an acceptable setpoint of stock. If the stock changes relative to this acceptable level, the flows change to push it back to the acceptable level.
An intuitive example is keeping a bathtub water level steady.
Reinforcing feedback loops are also known as positive feedback loops, vicious cycles, virtuous cycles, flywheel effects, snowballing, compound growth, or exponential growth.
Reinforcing feedback loops have the opposite effect of balancing feedback loops—they amplify the change in stock and cause it to grow more quickly or shrink more quickly.
Here are examples of runaway loops in the positive direction:
Here are examples of runaway loops in the negative direction:
From these basic building blocks, you can build up to more complicated systems that model the real world. In this 1-page summary, we’ll cover only one simple system and show how systems analysis can lead to an understanding of its behavior.
We’ll look at a system with one stock and two loops that compete against each other—one reinforcing, and one balancing.
The concrete example we’ll use is the world population.
The stock and flow diagram looks like this:

How does the population behave in this scenario? It depends on the relative strength of the two competing loops.
Different circumstances can drive the relative strength of the birth or death loop:
Read the full summary to learn how:
Systems are capable of accomplishing their purposes remarkably well. They can persist for long periods without any particular oversight, and they can survive changes in the environment. Why is that?
Strong systems have three properties: resilience, self-organization, and hierarchy.
Creating systems that ignore these three properties leads to brittleness, causing systems to fail under changing circumstances.
Think of resilience as the range of conditions in which a system can perform normally. The wider the range of conditions, the more resilient the system. For example, the human body fends off disease from foreign agents, repairs itself after injury, and survives across a wide range of temperatures and food conditions.
The stability of resilience comes from feedback loops that can exist at different layers of abstraction:
At times, we design systems for goals other than resilience. Commonly, we optimize for productivity or efficiency and eliminate feedback loops that seem unnecessary or costly. This can make the system very brittle—it narrows the range of conditions in which the system can operate normally. Minor perturbations can knock the system out of balance.
Self-organization means that the system is able to make itself more complex. This is useful because the system can diversify, adapt, and improve itself.
Our world’s biology is a self-organizing system. Billions of years ago, a soup of chemicals in water formed single-celled organisms, which then gave rise to multicellular organisms, and eventually to thinking, talking humans.
Some organizations quash self-organization, possibly because they optimize toward performance and seek homogeneity, or because they’re afraid of threats to stability. This can explain why some companies reduce their workforces to machines that follow basic instructions and suppress disagreement.
Suppressing self-organization can weaken the resilience of a system and prevent it from adapting to new situations.
In a hierarchy, subsystems are grouped under a larger system. For example:
In an efficient hierarchy, the subsystems work well more or less independently, while serving the needs of the larger system. The larger system’s role is to coordinate between the subsystems and help the subsystems perform better.
The arrangement of a complex system into a hierarchy improves efficiency. Each subsystem can take care of itself internally, without needing heavy coordination with other subsystems or the larger system.
Problems can result at either the subsystem level or the larger-system level:
We try to understand systems to predict their behavior and know how best to change them. However, we’re often surprised by how differently a system behaves than we expected.
At the core of this confusion is our limited comprehension. Our brains prefer simplicity and can handle only so much complexity. We also tend to think in simple cause-and-effect terms and on short timelines, which prevents us from seeing the full ramifications of our interventions.
These limitations prevent us from seeing things as they really are. They prevent us from designing systems that function robustly, and from intervening in systems in productive ways.
Systems with similar structures tend to have similar archetypes of problems. We’ll explore two examples of these; the full summary includes more.
Escalation is also known as keeping up with the Joneses or an arms race.
Two or more competitors have individual stocks. Each competitor wants the biggest stock of all. If a competitor falls behind, they try hard to catch up and be the new winner.
This is a reinforcing loop—the higher one stock gets, the higher all the other stocks aim to get, and so on. It can continue at great cost to all competitors until one or more parties bows out or collapses.
A historical example is the Cold War, in which the Soviet Union and the United States monitored each other’s arsenals and pushed to amass the larger one, at a cost of trillions of dollars. A more pedestrian example is advertising between competitors, which becomes increasingly prevalent and obnoxious as each side tries to capture more attention.
The solution is to dampen the feedback loop in which competitors respond to each other’s behavior.
One approach is to negotiate a mutual stop between competitors. Even though the parties might not be happy about it or may distrust each other’s intentions, a successful agreement can limit the escalation and bring back balancing feedback loops that prevent runaway behavior.
If a negotiation isn’t possible, then the solution is to stop playing the escalation game. The other actors are responding to your behavior. If you deliberately keep a lower stock than the other competitors, they will be content and will stop escalating. This does require you to be able to weather the stock advantage they have over you.
Addiction is also known as dependence, or shifting the burden to the intervenor.
An actor in a system has a problem. In isolation, the actor would need to solve the problem herself. However, a well-meaning intervenor gives the actor a helping hand, alleviating the problem with an intervention.
This in itself isn’t bad, but in addiction, the intervenor helps in such a way that it weakens the ability of the actor to solve the problem herself. Maybe the intervention stifles the development of the actor’s abilities, or it solves a surface-level symptom rather than the root problem.
The problem might appear fixed temporarily, but soon enough, the problem appears again, and in an even more serious form, since the actor is now less capable of solving the problem. The intervenor has to step in and help again to a greater degree. Thus the reinforcing feedback loop is set up—more intervention is required, which in turn further weakens the actor’s ability to solve it, which in turn requires more intervention. Over time, the actor becomes totally dependent on—addicted to—the intervention.
An example is elder care in Western societies: families used to take care of their parents, until nursing homes and social security came along to relieve the burden. In response, people became dependent on these resources and became unable to care for their parents—they bought smaller homes and lost the skills and desire to care.
When you intervene in a system:
Read the full summary to learn more common system problems:
Learning to think in systems is a lifelong process. The world is so endlessly complex that there is always something new to learn. Once you think you have a good handle on a system, it behaves in ways that surprise you and require you to revise your model.
And even if you understand a system well and believe you know what should be changed, actually implementing the change is a whole other challenge.
Here’s guidance on how to become a better systems thinker:
What is a system? A system is 1) a group of things that 2) interact to 3) produce a pattern of behavior.
Many things in the world operate as systems.
Systems may look different on the surface, but if they have the same underlying structure, they tend to behave similarly.
In reality, if you look deeper, the world population is a much more complex system—it’s subject to the economy, health science, and politics, each of which is its own complicated system. But from a certain simplified vantage point, the bathtub and the world population behave similarly.
Understanding the underlying system and how it behaves may be the best way to change the system.
When we try to explain events in the world, we tend to look for simple cause and effect relationships.
This simplicity is reassuring in some ways. Turn this knob, and you solve the problem—easy. In turn, it becomes easy to blame people who do not turn the knob the way you want it to be turned.
However, reality tends to be more complex than simple cause-and-effect relationships, because events are the product of complicated systems. Systems consist of a large set of components and relationships; a system’s behavior is not the result of a single outside force, but rather the result of how the system is set up.
When viewed from this system lens, causing change is much more difficult than applying a single cause and expecting an effect. It requires an upheaval of the system itself, with all its components and connections.
Without understanding the system that produces a problem, you cannot design effective solutions that solve the problem. Indeed, many “solutions” have worsened the problem because they ignored how the system was set up.
This book will teach you to be a systems thinker.
As a systems thinker, you’ll begin to see systems everywhere, and you’ll look at the world in a new way. In so doing, you’ll be more effective at restructuring systems to achieve the outcome you want.
A system is composed of three things: elements, interconnections, and a function or purpose.
To define it more cohesively, a system is a set of elements that is interconnected in a way that achieves its function.
Many things in ordinary life are systems. Let’s define how a professional football team is a system:
As you look around the world, you’ll see systems everywhere. So what is not a system? A set of elements without meaningful interconnections or an overall function is not a system. For example, a pile of gravel that happens to be on a road is not a system: its pieces aren’t meaningfully interconnected, and the pile doesn’t serve a discernible purpose.
In this chapter, we’ll dive deeper into understanding the three attributes of a system. We’ll then understand how systems behave over time, and how the interconnections can drive system behavior.
(Shortform note: in this chapter we’ll develop an extended example of a football team beyond what’s contained in the book, but in a way that’s consistent with its ideas. You should actively apply the ideas to develop your own examples of systems, such as a corporation, a university, a tree, or the government.)
Let’s go deeper into understanding the three attributes of a system.
Elements are usually the most noticeable. They are the things that make up the system. The football team consists of players, coaches, and a ball. If you zoom out a bit further and define a larger system, the elements can also include the football fans, the city the team resides in, the staff that supports the team, and so on.
Elements don’t need to be tangible things. The system of a football team also consists of intangibles like the pride that fans have for their team, the reputation of players in the league, or the motivation to practice.
You can define an endless number of elements in any system. Before you go too deep down this rabbit hole, start looking for the interconnections between elements.
Interconnections are how the elements relate to each other. These interconnections can be physical in nature. Take the football team again:
Interconnections can also be intangible, often through the flow of information.
The information interconnections may be harder to see at first, but they’re often vital to how the system operates.
The purpose or function is what the system achieves.
Of the three components, the purpose is often the hardest to discern. It’s often not made explicit; a tree, for example, doesn’t announce its purpose.
The best way to figure out a system’s purpose is to see what it actually does. The way a system behaves and its result is a reflection of the system’s purpose.
A vital purpose of most systems is to perpetuate themselves.
Which of the three aspects of a system is most important? The elements, interconnections, and purposes are all necessary, but they have different relative importances.
To think about this, consider changing each item in a system one by one.
Changing elements: You can change the individual players and coaches, but if you keep the same interconnections and purpose, it’s still clearly a football team. Likewise, you can change the members of the US Senate or a corporation, but they still maintain their identities. Thus, the specific elements are usually the least important in the system. (Shortform note: This is related to the Ship of Theseus thought experiment.)
Changing interconnections: Changing the interconnections can dramatically change the system. Imagine changing the rules of the game from football to rugby, while keeping the same players. Or imagine a system where the players paid money to watch the fans. These are certainly not recognizably football teams, even if all the elements remain the same.
Changing purpose: Changing the purpose can also be drastic. What if the goal of the team were to lose as many games as possible? What if the purpose were to make news headlines? The system would be very different, even if the elements and interconnections stayed the same.
Therefore, the purpose is typically the most important determinant of a system, followed by the interconnections, then the elements.
However, sometimes changing an important element can also change the interconnections or the purpose. Changing China’s leadership from Mao to Deng Xiaoping dramatically changed the nation’s direction; even though the one billion people and millions of organizations remained the same, their interconnections and their purpose became very different.
Now that you see systems everywhere you look, you may have noticed that systems can be composed of other systems, and so on in a fractal-like pattern.
For example:
The systems can thus get quite complex, as we’ll explore throughout the summary.
The individual subsystems may have purposes that conflict with the overall system’s purpose, which can degrade the system’s function. For example, if the individual players on a football team care more about their personal reputation than the success of the team, the overall team system will perform poorly. For a system to function effectively, its subsystems must work in harmony with the overall system.
Pick a system that you want to understand better. If you can’t think of one, here are suggestions: your employer; your favorite store; your political party; an organism.
What are the major elements of the system?
What are the key interconnections between these elements? Think about physical interconnections, as well as intangible ones based on transferring information.
What are the major purposes of the system? Remember, look at what the system actually does, not what its stated purpose is.
Next, we’ll understand how systems behave over time, by considering stocks and flows. This forms the basic foundation that lets us build up into more complex systems.
A stock represents the elements in a system that you can see, count or measure. It can be commonly thought of as an inventory, a store, or a backlog.
Flows are the means by which the stocks change over time. Inflows increase the level of stock, while outflows decrease the level of stock.
Let’s take a simple system: a bathtub.
This can be drawn on a stock-and-flow diagram, as here:

The clouds signify wherever the inflow comes from, and wherever the outflow goes to. To simplify our understanding of a system, we draw boundaries for what’s important for understanding the system, and ignore much of the outside world.
Many systems are analogous to the bathtub:
Systems like bathtubs and world populations are usually not static. They change over time—the inflows and outflows change, which causes the stocks to change.
Let’s consider how you can change the flows in a bathtub, by manipulating the inflow and the outflow:
This behavior can be put on a graph, which visualizes the system over time.

Systems thinkers use graphs to understand the trend of how a system changes, not just individual events or how the stock looks currently.
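(Shortform example: Here’s a minimal Python sketch, not from the book, of the bathtub as one stock with one inflow and one outflow. The numbers are made up purely for illustration.)

```python
# Minimal bathtub stock-and-flow sketch (illustrative numbers, not from the book).
# Each minute, the stock changes by (inflow - outflow).

def simulate_bathtub(initial_stock, inflow, outflow, minutes):
    """Return the water level (liters) at each minute, starting from minute 0."""
    levels = [initial_stock]
    stock = initial_stock
    for _ in range(minutes):
        stock = max(0.0, stock + inflow - outflow)  # the stock can't drop below empty
        levels.append(stock)
    return levels

if __name__ == "__main__":
    # Faucet adds 10 L/min, drain removes 6 L/min: the flows can be changed in an
    # instant, but the stock (the water level) rises only gradually.
    for minute, level in enumerate(simulate_bathtub(20.0, 10.0, 6.0, 10)):
        print(f"minute {minute:2d}: {level:5.1f} L")
```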
The bathtub should be an intuitive model, and it’s simple: just one stock, one inflow, and one outflow. But even this basic example reveals a few general properties of systems:
As humans, when we look at systems, we tend to focus more on stocks than on flows. (Shortform note: This might be because the stock is much more obvious and tangible than the flows. This is also related to how elements in a system are more easily recognizable than the interconnections.)
Furthermore, we tend to focus more on inflows than on outflows.
Changing the outflows can have very different costs than changing the inflows.
Stocks take time to change. In a bathtub, the flows can change almost instantly, but the stock cannot: it takes just a second to turn on the faucet, but minutes to fill the tub.
Why do stocks change so gradually? Because it takes time for the flows to accumulate. Stocks therefore act as buffers, delays, and lags; they are the system’s shock absorbers.
From a human point of view, this has both benefits and drawbacks. On one hand, stocks represent stability. They let inflows and outflows go out of balance for a period of time. A stock buys you time to solve the problem and experiment.
On the other hand, a slowly-changing stock means things can’t change overnight.
If you understand that stocks take time to change, you’ll have more patience to guide the system to good performance.
As agents in a system, we make decisions to adjust the level of a stock to where we think it should be.
If you look at the world through a systems lens, you’ll see a large number of stocks, and flows that adjust the levels of stocks.
Systems often produce behaviors that are persistent over time. In one type of case, the system seems self-correcting—stocks stay around a certain level. In another case, the system seems to spiral out of control—it either rockets up exponentially, or it shrinks very quickly.
When a behavior is persistent like this, it’s likely governed by a feedback loop. Loops form when changes in a stock affect the flows of the stock.
(Shortform note: Balancing feedback loops are also known as negative feedback loops or self-regulation.)
In balancing feedback loops, there is an acceptable setpoint of stock. If the stock changes relative to this acceptable level, the flows change to push it back to the acceptable level.
An intuitive example is keeping a bathtub water level steady.
Your bank account is another example of balancing feedback loops.
Balancing feedback loops tend to create stability, and they keep a stock within an acceptable range.
In a stock-and-flow diagram, balancing feedback loops look something like this:

A property of balancing feedback loops is that the further away the stock is from the desired level, the faster it changes.
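(Shortform example: Here’s a minimal Python sketch, not from the book, of a balancing feedback loop. The flow is proportional to the gap between the stock and its setpoint, so the stock changes fastest when it’s farthest from the desired level. The gain and numbers are illustrative only.)

```python
# Sketch of a balancing feedback loop (illustrative numbers, not from the book).
# The net flow is proportional to the gap between the stock and its setpoint.

def balancing_loop(stock, setpoint, gain=0.3, steps=15):
    history = [stock]
    for _ in range(steps):
        gap = setpoint - stock   # how far the stock is from the desired level
        stock += gain * gap      # the flow pushes the stock back toward the setpoint
        history.append(stock)
    return history

if __name__ == "__main__":
    for step, level in enumerate(balancing_loop(stock=100.0, setpoint=50.0)):
        print(f"step {step:2d}: {level:6.2f}")  # approaches 50, quickly at first, then slowly
```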
(Shortform note: Reinforcing feedback loops are also known as positive feedback loops, vicious cycles, virtuous cycles, flywheel effects, snowballing, compound growth, or exponential growth.)
Reinforcing feedback loops have the opposite effect of balancing feedback loops—they amplify the change in stock and cause it to grow more quickly or shrink more quickly.
Here are examples of runaway loops in the positive direction:
In a stock-and-flow diagram, reinforcing feedback loops look something like this:

Rule of 72
Since we think about compound growth a lot, there’s a rule of thumb called the Rule of 72 that lets you calculate the time it takes a stock to double. Take the current growth rate in percent, divide 72 by that number, and you’ll get the number of periods it takes to double.
For example, if your annual growth rate of investments is 4%, it will take 72 / 4 = 18 years to double. If it grows at 6%, it will take 72 / 6 = 12 years to double.
(Shortform note: The book cites the rule of 70, but 72 tends to be more commonly used, since more numbers divide evenly into 72 than 70.)
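(Shortform example: Here’s a small Python sketch comparing the Rule of 72 estimate with the exact doubling time for compound growth; the rates are arbitrary examples.)

```python
import math

# Rule of 72 vs. exact doubling time for compound growth.
# The exact doubling time solves (1 + r)^t = 2, i.e. t = ln(2) / ln(1 + r).

def doubling_time_rule_of_72(rate_percent):
    return 72.0 / rate_percent

def doubling_time_exact(rate_percent):
    r = rate_percent / 100.0
    return math.log(2.0) / math.log(1.0 + r)

if __name__ == "__main__":
    for rate in (2, 4, 6, 8, 12):
        approx = doubling_time_rule_of_72(rate)
        exact = doubling_time_exact(rate)
        print(f"{rate}% growth: rule of 72 -> {approx:5.1f} periods, exact -> {exact:5.1f}")
```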
Here are examples of runaway loops in the negative direction:
As you continue looking at the world through a systems lens, you’ll see feedback loops everywhere.
In fact, here’s a challenge: try thinking of any decision you make without a feedback loop of some kind. Can you think of any?
You might start finding that many things influence each other in reciprocal ways. Instead of pure cause and effect, you might see that the effect actually influences the cause.
Invert your thinking. If A causes B, does B also cause A?
This type of systems thinking makes blame much more complicated. It’s not an easy cause and effect relationship. It’s a system of interconnected parts and complicated feedback loops.
Think about everyday systems and how feedback loops affect them.
Can you think of any decision you make where a feedback loop is not involved? What is it?
Think harder—in what way does what you’re deciding about influence what you decide?
Think of a recent time where you blamed a problem on something. Phrase it as “A caused B.” (Try not to think about a single person making a mistake, but rather about a system-level problem, such as a political or societal issue.)
Now invert the situation. Is it possible that B caused A? How could this be true?
In reality, systems are much more complex than the simple examples we’ve covered so far.
For example, the world population has an inflow representing birth rate, but birth rate is influenced by a vast number of inputs, such as the overall economy, healthcare, and politics, which are themselves complex systems.
In this chapter, we’ll take what we’ve learned and build up to more complicated systems, which are simplistic models of real-world systems. The author calls this collection of systems a “zoo,” which is an appropriate metaphor. Like in a zoo, these animals are removed from their natural complex ecosystem and put in an artificially simplistic environment for observation. But they give a hint of patterns in the real world and yield surprisingly insightful lessons.
First, we’ll look at a system with one stock and two balancing loops that compete against each other. We know that a balancing loop tries to bring a stock back to a setpoint. What happens when two balancing loops have different setpoints but act on the same stock?
The concrete example we’ll use is a thermostat that heats a room that is surrounded by a cold environment.
The stock and flow diagram looks like this:

So how does the system behave? It depends on which balancing loop is stronger:
Where exactly the room stabilizes its temperature depends on the relative strength of the balancing loops. The stronger loop will drive the stock closer toward its setpoint. The general takeaway: in a system with multiple competing loops, the loop that dominates the system determines the system’s behavior.
One implication of two competing loops is that the stock levels off at a point near to the stronger loop’s setpoint, but not exactly at it. If a thermostat is set to 68°F, the room temperature will level off slightly below 68°F, because heat continues to leak from the stock even as the furnace generates heat. (For this reason, people often set the thermostat to a bit above the temperature they want, or thermostats temporarily overshoot to bring the temperature above the setpoint.)
The strength of the loops may also vary over time. For example, the outside temperature drops during night and rises during day. Thus, the cooling loop is stronger at night and weaker during the day. If the heating loop stayed at the same strength throughout the day, then you’d find the room temperature would drop more at night than during the day. By understanding the strengths of the loops and how they fluctuate over time, you can predict the room temperature.
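(Shortform example: Here’s a minimal Python sketch, not from the book, of one stock with two competing balancing loops: a furnace pushing the room temperature toward the thermostat setpoint, and heat leaking toward the outdoor temperature. The gains and temperatures are invented for illustration.)

```python
# One stock (room temperature) with two competing balancing loops
# (illustrative parameters, not from the book).

def simulate_room(setpoint=68.0, outside=10.0, hours=12,
                  furnace_gain=0.9, leak_gain=0.02, start=50.0):
    temps = [start]
    temp = start
    for _ in range(hours):
        heating = furnace_gain * max(0.0, setpoint - temp)  # balancing loop 1: furnace
        cooling = leak_gain * (temp - outside)              # balancing loop 2: heat leak
        temp += heating - cooling
        temps.append(temp)
    return temps

if __name__ == "__main__":
    for hour, t in enumerate(simulate_room()):
        print(f"hour {hour:2d}: {t:5.1f} F")
    # The room levels off a bit below 68 because heat keeps leaking out even
    # while the furnace runs: the stronger loop wins, but not exactly.
```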
A thermostat is fairly unimportant in the grand scheme of things, but think about how this system applies to paying off credit card debt. To pay off the debt, you need to cover not just the current balance but also the interest that accrues while you’re paying—otherwise, you will never quite pay off your bill. This applies equally to the national debt.
We’ll now take the system we just discussed—one stock with two balancing loops—and introduce delays. Delays are common in the real world. It takes time to transmit information, to process the information, to act on the information, and to get the result of your actions. All of these delays disrupt how the system behaves.
For this example, imagine the manager of a car dealership who has to stock her car lot for sale:
The manager wants to keep the inventory at a set number of cars to cover ten days’ worth of sales. In an ideal world, every time a customer drives a car off the lot, a new one instantaneously appears to replace it. The car stock is kept at exactly the same number.
Of course, reality doesn’t work this way. In practice, there are several key delays that hamper the response time:
At an equilibrium, these delays aren’t a big deal. The manager can figure out the typical rate of car sales and adjust for all the delays. The car lot stays at a steady number.
But let’s look at a new situation—say there’s a sudden, permanent increase in daily car sales of 15%. In the long run, since the manager wants to stock 10 days’ worth of cars in the lot, she needs to increase her inventory. Again, in an ideal world, the cars would just appear instantaneously on the lot without any delay, and she’d be set.
But the real world has delays. What’s the effect of all of these delays? Surprisingly, the delays cause oscillations:

Why does this happen? In essence, the delays cause decision making that falls out of sync with what the car lot really needs:
As an analogy, if you’ve ever taken a shower where turning the knob takes a long time to have any effect, you’ve experienced wild, unpleasant oscillations in water temperature. You don’t get quick feedback on your actions: if the water’s too cold, you turn the knob all the way hot; then it gets scalding, and you quickly clamp it down; then the water gets too cold again, and so on.
Oscillations can cause more problems.
How do you fix this? One intuitive reaction is to see the delays as bad. Surely, shortening the reaction time should solve the problem! The car manager decides to shorten her perception delay, reacting to two days’ data instead of three, and shortening her response delay, ordering the entire shortfall in one go, rather than spacing it out.
This actually makes things worse. The oscillations increase dramatically:

(Shortform note: this graph is an approximation to illustrate the point. System analysts build quantitative models to produce more accurate charts.)
In essence, shorter delays cause larger overreactions—at the car lot’s lowest points, the manager places large orders multiple times in succession. This causes a huge buildup of cars, which takes a long time to draw down. The cycle repeats.
Counter-intuitively, the better reaction to oscillations is actually to slow reaction time. Oscillations are a sign of overreacting. The manager should under-react—if she increased her delays from 3 days to 6 days, the oscillations would dampen quite a lot:
As you’ve seen, delays affect systems a lot, in somewhat unpredictable ways. That’s why systems thinkers are obsessed with identifying delays and studying their impact.
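(Shortform example: Here’s a rough Python sketch of the car-lot dynamics, using invented parameters rather than the book’s exact numbers: the manager perceives demand with a lag, spreads her correction over a few days, and deliveries take several days to arrive. The point is only to show how those delays produce oscillations.)

```python
from collections import deque

# Rough sketch of the car-lot restocking loop with delays (illustrative
# parameters, not the book's exact model).

def simulate_lot(days=60, perception_delay=3, response_delay=3, delivery_delay=5):
    inventory = 200.0                                    # ten days of 20 sales/day
    recent_sales = deque([20.0] * perception_delay, maxlen=perception_delay)
    pipeline = deque([20.0] * delivery_delay)            # orders already in transit
    levels = []
    for day in range(days):
        demand = 20.0 if day < 5 else 23.0               # permanent ~15% jump in sales
        sold = min(inventory, demand)
        inventory += pipeline.popleft() - sold           # today's delivery arrives
        recent_sales.append(sold)

        perceived = sum(recent_sales) / perception_delay     # perception delay
        desired = 10 * perceived                             # ten days of perceived sales
        correction = (desired - inventory) / response_delay  # spread the correction out
        pipeline.append(max(0.0, perceived + correction))    # arrives delivery_delay days later
        levels.append(inventory)
    return levels

if __name__ == "__main__":
    for day, cars in enumerate(simulate_lot()):
        print(f"day {day:2d}: {cars:6.1f} cars")
    # The inventory overshoots, undershoots, and keeps oscillating; in this model,
    # a smaller response_delay (a more aggressive correction) makes the swings larger.
```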
This example just represents a single simple car lot. Imagine things like this happening throughout the broader economy, in hundreds of thousands of places, all interconnected. All the car lots around the world are connected to the manufacturer, which is connected to the parts makers, who are connected to suppliers of steel, rubber, electronics. Every single member here has its own idiosyncratic delays and oscillations—imagine the coordination it takes to produce new cars and ship them nationwide!
Next, we’ll look at a system with one stock and two loops that compete against each other—one reinforcing, and one balancing.
The concrete example we’ll use is the world population.
The stock and flow diagram looks like this:

As with the previous system, the relative strength of the two competing loops determines how the stock changes over time.
Different circumstances can drive the relative strength of the birth or death loop:
The economy works similarly to the world population, in that it fits the same system model of a stock, one reinforcing loop, and one balancing loop:
As with the world population, the strength of the two loops determines the behavior of the system. If reinvestment is stronger than depreciation, the capital stock will rise. Otherwise, the capital stock will fall, and the economy will decline.
And, again, changing the circumstances can change the strengths of the loops. New technology that reduces the breakdown of industrial machines decreases depreciation. Inversely, political actions that reduce reinvestment can weaken an economy.
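(Shortform example: Here’s a minimal Python sketch, not from the book, showing that the population and the economy share the same structure: one stock, a reinforcing inflow proportional to the stock, and a balancing outflow proportional to the stock. The rates are invented for illustration.)

```python
# One stock with a reinforcing inflow and a balancing outflow, both proportional
# to the stock (illustrative rates, not real data).

def simulate_stock(initial, inflow_rate, outflow_rate, steps=10):
    """inflow_rate and outflow_rate are fractions of the stock per period."""
    values = [initial]
    stock = initial
    for _ in range(steps):
        stock += stock * inflow_rate - stock * outflow_rate
        values.append(stock)
    return values

if __name__ == "__main__":
    population = simulate_stock(1_000, inflow_rate=0.02, outflow_rate=0.01)  # births > deaths
    capital = simulate_stock(1_000, inflow_rate=0.01, outflow_rate=0.02)     # depreciation > investment
    print("population grows: ", [round(v) for v in population])
    print("capital shrinks:  ", [round(v) for v in capital])
```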
You can see how two systems that look very superficially different actually behave similarly. This is the type of insight that systems thinking yields.
And of course, the population and economy are inextricably intertwined. People contribute labor to the economy and they consume within it; likewise, the economy affects the population’s birth rates and death rates. Systems can get complicated very quickly.
So far we’ve focused on one-stock systems. In these models, we haven’t worried much about where the inputs came from or how plentiful they were—the population model assumes infinite food, the thermostat model assumes infinite gas for the furnace.
But in the real world, the inputs have to come from somewhere. In a system model, the inflow into a stock comes from another stock, which is finite. This finite stock causes a constraint on growth—the population can’t grow forever, and the economy can’t grow forever.
We’ll explore this here with two system models, one with a non-renewable stock (oil mining) and one with a renewable stock (commercial fishing). Changing whether the stock is renewable changes the implications of how growth ends.
Consider a reservoir of oil underground. The stock of oil is finite. There is an outflow as the oil is mined. (Shortform note: There is also a very slow inflow of generation of fossil fuels through geological processes, but this occurs over millions of years and is so slow that it’s irrelevant in this situation.)
The company that decides to mine this oil reservoir is a system. The system looks a lot like the system of the economy we just discussed: there is a capital stock (for example, building a mining rig), an inflow of investment in a reinforcing loop, and an outflow of depreciation in a balancing loop.
These two systems are connected by one element—the profit from the oil that is mined. Simplistically, the more oil that is mined, the higher the total profit, which is reinvested into the capital stock; this higher capital stock then mines the resource at a faster rate. In this simple model, the capital stock would just grow exponentially, mining faster and faster until the stock is depleted.

But we’ll add one complication to this system—the oil gets harder to extract as the stock goes down. In the real world, this is true—the oil gets deeper and needs more drilling to access, or it becomes more dilute and more costly to purify.
This builds a more complicated feedback loop that ties together the two systems:
This feedback loop leads to a predictable behavior of the system:
Oil companies clearly know this behavior, so before an oil field runs out, they’ve already invested in discovering new oil sources elsewhere.
Now that you understand the behavior of the system, nudges to different points in the system can influence its behavior:
The book explores a final consequence of this system—increasing the size of the stock does not proportionately increase the time that the resource lasts. If the size of the oil reservoir doubled, the reservoir would not last for double the amount of time. This is because the capital stock grows exponentially—even if the global oil reservoir were instantaneously doubled, at a capital growth rate of 5% per year, the peak extraction time would only be extended by 14 years (recall the rule of 72 from earlier). And the higher capital stock grows, the steeper the plummet when the stock becomes depleted (in real terms, these are jobs that disappear and communities that are battered).

(Shortform note: While the book steers away from explicit political statements, the implication is that the more we reinvest in harvesting a nonrenewable resource like fossil fuels and avoid developing alternatives, the higher the capital stock and extraction rate will grow, and the steeper the fall will be when the resource depletes.)
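(Shortform example: Here’s a rough Python sketch of the oil system, using assumed functional forms and rates rather than the book’s exact model: capital grows at roughly 5% net per year, and extraction per unit of capital falls in proportion to the remaining resource. It illustrates why doubling the reservoir delays the extraction peak by only about one doubling time, roughly 14 years at 5%.)

```python
# Capital stock exploiting a nonrenewable resource (assumed forms and rates,
# not the book's exact model). Extraction peaks and then falls as the resource
# gets harder to extract, even while capital keeps growing.

def peak_year(resource, capital=1.0, growth=0.07, depreciation=0.02, years=200):
    initial_resource = resource
    best_year, best_extraction = 0, 0.0
    for year in range(years):
        # Extraction per unit of capital falls as the remaining fraction shrinks.
        extraction = min(resource, capital * (resource / initial_resource))
        resource -= extraction
        # Reinforcing reinvestment loop vs. balancing depreciation loop (~5% net growth).
        capital += capital * growth - capital * depreciation
        if extraction > best_extraction:
            best_year, best_extraction = year, extraction
    return best_year

if __name__ == "__main__":
    print("peak extraction year, base reservoir:   ", peak_year(resource=1_000.0))
    print("peak extraction year, doubled reservoir:", peak_year(resource=2_000.0))
    # The doubled reservoir delays the peak by only about 14 years, because the
    # capital stock keeps growing exponentially in the meantime.
```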
We also depend on stocks that are renewable, meaning they replenish themselves after we harvest from them. Examples include lumber from trees we can replant, and farmed animals and plants we can replenish.
Let’s consider the example of fishing from the ocean. The extraction system of capital stock is the same as with oil mining, except instead of oil rigs, the capital stock represents fishing vessels. The capital stock has an inflow of reinvestment, and an outflow of depreciation. The capital stock determines the harvest rate of the fish.
However, the stock it is harvesting from is now renewable—the fish population has a runaway loop that creates an inflow of more fish.

There are a few nuances to the situation:
Depending on how the fishing industry behaves, we can arrive at very different system outcomes.
The fishing industry can balance its harvest rate so that it matches the inflow rate of fish. The fish population is held at a constant stock level.
This also requires that the industry constrain its level of capital stock to be steady. It reinvests in capital stock only to balance the rate of depreciation, thus keeping the harvest rate steady. If it invested any more, the capital stock would rise, and the harvest rate would increase beyond the replenishment rate of the population. This leads to the next system outcome.
Here, the industry’s harvesting rate exceeds the replenishment rate of the population. Oscillations appear because there is a natural feedback loop at work:
As with the car manager on her car lot, oscillations appear in the system due to delays. It takes time for the fishing industry to respond to a declining fish population by reducing investment, and it takes time to reinvest once fishing is good again.
This might not be a deliberate malicious action by the industry—a new fishing technology like deep-sea fishing nets may increase fishing efficiency and reduce costs. However, the oscillations have real human impacts—due to the long times it takes to replenish a fish population, this might introduce cycles extending over decades, which in turn affect jobs and communities.
Here, the industry harvests the fish population beyond the point of replenishment. When there are too few fish in the sea, they don’t find each other and they can’t reproduce enough to overcome the fishing rate.
What happens is a wipeout of the fish population, which in turn destroys the fishing industry. Without careful management, the industry may, without knowing it, drive itself out of business.
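(Shortform example: Here’s a rough Python sketch of the fishery, with invented numbers and a simple logistic regrowth rule rather than the book’s exact model. Because regrowth has a maximum, a fleet held at a modest size settles into a steady harvest, while a fleet that keeps expanding eventually pushes the harvest past what the fish can replace and the stock collapses.)

```python
# Renewable fish stock under two fleet policies (illustrative numbers and
# functional forms, not the book's model).

def simulate_fishery(fleet_growth, years=60, fish=1_000.0, capacity=1_000.0,
                     boats=6.0, regrowth_rate=0.3, catch_per_boat=10.0):
    for _ in range(years):
        fish += regrowth_rate * fish * (1 - fish / capacity)  # renewable inflow (logistic)
        harvest = min(fish, boats * catch_per_boat)           # outflow to the fleet
        fish -= harvest
        boats *= 1 + fleet_growth                             # reinvestment policy
    return fish

if __name__ == "__main__":
    steady = simulate_fishery(fleet_growth=0.0)    # fleet held constant
    runaway = simulate_fishery(fleet_growth=0.05)  # fleet expands 5% per year
    print(f"constant fleet:  about {steady:.0f} fish remain (harvest matches regrowth)")
    print(f"expanding fleet: about {runaway:.0f} fish remain (the stock collapses)")
```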
A few factors make extinction more likely:
We’ve seen a series of system models and connected them to real-life situations. You might now see the benefit of systems-level thinking—it can clarify a situation and how it is likely to unfold in the future.
As you try to view the world with a systems lens, these three lines of questioning will help you more accurately understand the system:
1. What are the driving factors of the system? Would the driving factors affect the system as described?
2. What is causing the driving factors?
3. How likely are the driving factors to unfold as predicted?
Systems are capable of accomplishing their purposes remarkably well. They can persist for long periods without any particular oversight, and they can survive changes in the environment. Why is that?
Strong systems have three properties: resilience, self-organization, and hierarchy.
We’ll discuss each one in more detail.
A resilient system is able to persist after being stressed by a perturbation.
Think of resilience as the range of conditions in which a system can perform normally. The wider the range of conditions, the more resilient the system.
Resilience doesn’t mean that the behavior is static or a flat line. Dynamic systems, like a tree’s yearly cycle of growing leaves in spring and shedding them in fall, can be resilient as well. Resilience just means that the normal behavior of a system persists through perturbations.
The stability of resilience comes from feedback loops that can exist at different layers of abstraction:
To understand this, consider the human again. The body has baseline feedback loops that regulate our breathing and injury repair without our thinking about it. Above this, we also have a brain that can consciously regulate our behavior, discover drugs that influence our bodily feedback loops, and design economies that help us discover drugs. The human body is thus a remarkably resilient system.
At times, we design systems for goals other than resilience. Commonly, we optimize for productivity or efficiency. This can make the system very brittle—it narrows the range of conditions in which the system can operate normally. Minor perturbations can knock the system out of balance.
Examples include:
Beware of designing a system that is brittle and cannot heal itself after a perturbation.
Self-organization means that the system is able to make itself more complex. This is useful because the system can diversify, adapt, and improve itself.
Our world’s biology is a self-organizing system. Billions of years ago, a soup of chemicals in water formed single-celled organisms, which then gave rise to multicellular organisms, and eventually to thinking, talking humans.
While self-organization can lead to very complex behaviors, its cause doesn’t need to be complex. In fact, a few simple rules can give rise to very complex behavior.
(Shortform examples: The biological development described above occurred through the combination of a few simple rules:
From these rules, unicellular organisms could ultimately give rise to humans.
Similarly, the economy, a complex system, largely works with a few simple rules, such as:
Simple rules like these can allow an apple farmer to trade with a furniture maker, and ultimately give rise to a complex economy, consisting of a vast web of relationships that functions productively without any global supervisor.)
Self-organization produces unexpected variations. It requires experimentation, and tolerance of whole new ways of doing things.
Some organizations quash self-organization, possibly because they optimize toward performance and seek homogeneity, or because they’re afraid of threats to stability. This can explain why some companies reduce their workforces to machines that follow basic instructions and suppress disagreement.
Suppressing self-organization can weaken the resilience of a system and prevent it from adapting to new situations.
In a hierarchy, subsystems are grouped under a larger system.
There are many nested layers in the hierarchy of our world. Let’s take you as an example:
As systems self-organize and increase their complexity, they tend to generate hierarchies naturally. For example:
In an efficient hierarchy, the subsystems work well more or less independently, while serving the needs of the larger system. The larger system’s role is to coordinate between the subsystems and help the subsystems perform better.
The arrangement of a complex system into a hierarchy improves efficiency. Each subsystem can take care of itself internally, without needing heavy coordination with other subsystems or the larger system.
This arrangement reduces the information that the subsystem needs to operate, preventing information overload. It also reduces delays and minimizes the need for coordination.
For example:
In a hierarchy, both the subsystems and the larger system have their role. If either deviates from the role, the system’s performance suffers.
The subsystems work to support the needs of the larger system. If the subsystem optimizes for itself and neglects the larger system, the whole system can fail. For example:
On the other side, the larger system’s role is to help the subsystems work better, and to coordinate work between them. In other words, the larger system is a supporter and an air traffic controller. It should not exercise more control than this, but it often does:
A good hierarchy has balance. There is enough central control to coordinate the subsystems to a larger goal, but not so much control that it suppresses the subsystem’s performance or self-organization.
Think about how to improve a system using the three properties that make systems perform well.
What’s a system you care about? If you need ideas, consider your employer, a group you belong to, yourself as an individual, or a physical system.
Is this system resilient? What is the range of conditions in which the system can perform normally? How can you improve the system’s resilience?
Is the system capable of self-organization? Does it freely allow experiments that allow it to adapt? How could you improve its self-organization?
Does the system have a capable hierarchy? Can the subsystems work independently with minimal communication with other systems? Does the larger system function to support the subsystems in pursuit of a larger goal? How can you improve the hierarchy?
We try to understand systems to predict their behavior and know how best to change them. However, we’re often surprised by how differently a system behaves than we expected. Systems thinking is counter-intuitive in many ways, even for trained systems thinkers.
At the core of this confusion is our limitation in comprehension. Our brains prefer simplicity and can only handle so much complexity. That prevents us from seeing things as they really are.
This chapter discusses a collection of such limitations. The underlying themes are:
When we try to understand systems, we tend to see them as a series of events.
While events are entertaining, they’re not useful for understanding the system. Events are merely the output of a system, and are often the system’s most visible aspects. But events often don’t provide clarity into how the system works or the system structure, which means they don’t help you predict how the system will behave in the future or tell you how to change the system.
In fact, an event-driven view of the world is entertaining precisely because it has no predictive value. If people could merely study daily events and predict the future, the news would lose its novelty and stop being fascinating.
Instead of individual events, the better way to understand a system is to see its performance over time, and to see patterns of behavior. Look for historical data and trends, and plot graphs over time.
Understanding how the system behaves over time can help you deduce the structure of the system, which can in turn help you predict the behavior into the future.
(Shortform examples:
In our daily lives, we see the world act linearly. There is a straight-line relationship between two things. For example:
However, much of the world doesn’t act linearly. Two elements can’t be related on a straight line. For example:
Nonlinearities often exist as a result of feedback loops. As we learned, a reinforcing feedback loop can lead to exponential growth. Even more complex nonlinearities can result when an action changes the relative strength of feedback loops in the system—thus, the system flips from one pattern of behavior to another.
Recognize that nonlinearities can exist.
Be wary of extrapolating linearly from a small part of the curve.
When we studied stock-and-flow diagrams above, we represented the inflows and outflows as clouds. These mark the boundaries of the system we’re studying.
We draw boundaries around systems to simplify them. This is partly because we can tolerate only so much complexity, but also because too many extra details can obscure the main question. When thinking about a fish population, it would be confusing to have to model the entire world economy and predict how a football team’s performance might work its way down to fishing behavior.
However, we tend to oversimplify systems. We draw narrow boundaries that cause us to ignore elements that materially affect the system. This can cause the system to behave against our expectations, because we had an incomplete model of the system. For example:
We also tend to draw natural boundaries that are irrelevant to the problem at hand. We draw boundaries between nations, between rich and poor, between age groups, between management and workers, when in reality we want happiness and prosperity for all. These boundaries can distort our view of the system.
(Shortform note: A hierarchy, discussed in the previous chapter, naturally introduces system boundaries. A subsystem is largely concerned with how it operates and can ignore the operations of other subsystems. This is one of the virtues of hierarchy that leads to efficiency, but it can also prevent a subsystem from perceiving the whole system accurately. For example, a corporation’s product development team may happily work on developing its new products, ignoring that the accounting department is engaging in fraud that will cause the entire company to collapse.)
The ideal balance is to set the system boundaries at the appropriate scope for the question at hand. Don’t exclude anything that’s important, and don’t include anything irrelevant.
Furthermore, when you move to a new question, readjust the boundaries to be appropriate for the new question. Don’t be stuck to your old boundaries.
In this world, nothing physical can grow without limit. At some point, something will constrain that growth. However, we naturally tend to avoid thinking about limits, especially when they involve our hopes and aspirations.
Consider a typical company that sells a product. At all points, it depends on a variety of inputs:
At any point, one of these factors can limit growth. No matter how much of the other inputs you supply, the output stays the same; the limit constrains the growth of the system. For example, the company may have more than enough labor and machines to produce the product, but lack the raw materials to make full use of them. (Shortform note: This limit is often called a “bottleneck.”)
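(Shortform example: Here’s a tiny Python sketch of a limiting factor, with invented inputs: the output is capped by whichever input runs out first, no matter how plentiful the others are.)

```python
# Output is capped by the scarcest input relative to what each unit requires
# (illustrative inputs and requirements).

def max_output(on_hand, needed_per_unit):
    """Units of product the current inputs can support."""
    return min(on_hand[name] // needed_per_unit[name] for name in needed_per_unit)

if __name__ == "__main__":
    on_hand = {"labor_hours": 4_000, "machine_hours": 2_500, "raw_material_kg": 600}
    per_unit = {"labor_hours": 2, "machine_hours": 1, "raw_material_kg": 3}
    print("units possible:", max_output(on_hand, per_unit))      # 200, limited by raw material
    on_hand["labor_hours"] = 40_000                               # add far more labor...
    print("after adding labor:", max_output(on_hand, per_unit))   # ...still 200
```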
We make two major mistakes around ignoring limits:
Mistake 1: Misidentifying the Limit
At times, we may simply incorrectly diagnose what the limit is. We may supply more of other inputs, then be confused about why the system hasn’t improved its output.
For example, an NGO may send aid as money to a developing country, thinking that capital is the limiting factor in the country’s growth. In reality, the limiting factor might be a corrupt government, a weak legal system, or human capital.
Mistake 2: Ignoring that the Limit Changes
In the process of solving one limiting factor, a new limiting factor may appear. This requires the intervention to change over time; doing more of the same won’t cause the system to grow.
For example, a company may have so much business that its limiting factor is having enough workers to keep up with demand. It hires rapidly, but the new workers aren’t trained well, and the product quality suffers. Customer demand drops, and management creates new worker training programs. Customer demand improves again, but the new bottleneck becomes its order fulfillment systems, which break under the heavy load.
This iterative process of finding the limit and relieving it may continue until the company reaches a more intractable limiting factor, such as saturating the customer demand for its product.
To find a system’s limit, consider all the relevant inputs into the system. Then find the input that is most limiting the system’s performance, and relieve it.
Be aware that the system won’t be able to grow forever, so make peace with what limits you are willing to live with. For example, a national economy may decide that it’s willing to grow only at a pace that reduces environmental consequences and social inequality.
As we saw in the car dealership example, delays happen at each step of a flow:
We habitually underestimate delays. You’ve experienced this personally whenever something took far longer than you expected, even when you tried to anticipate the delay.
Delays can cause surprising behavior with significant ramifications for society:
Delays cause us to make decisions with imperfect information. We may be perceiving information that is too old. In turn, our actions may overshoot or undershoot what the system needs. Like the manager of the car lot, this can lead to oscillations.
The author shares a rule of thumb when estimating delays: predict the delay to the best of your ability, then multiply that by 3.
When exposed to delays, recognize that your decisions will overshoot or undershoot.
Delays are often places to intervene in a system. In some cases, reducing the response delay can increase the accuracy of your actions. However, beware of the tendency to overreact to information—remember the car inventory lot, where reacting more aggressively to shortages caused more pronounced oscillations.
We can only make decisions with the information we comprehend accurately.
First, sometimes the information simply isn’t available, especially if it’s far removed from us.
Second, even if we had total information, we’re limited in the information we can take in and digest.
Third, even the information we can take in is biased.
It’s small wonder then that we can act with good intentions but cause bad results.
Don’t blame an individual for the behavior. If you were put in that situation, with exactly the information they had and their preferences, you would probably behave the same.
The way to fix a system is not to put in new people, but to change the system.
In previous chapters, we’ve explored a range of system models and how they relate to real-life situations, such as restocking a car inventory lot and managing renewable resources. We’ve explored how mismanaging a system can cause poor performance, whether that means wild oscillations in the car lot’s inventory or driving the fish population to extinction. And in the previous chapter, we covered our limitations in comprehending how systems work.
Taken altogether, it’s little surprise that we can design systems that completely fail to achieve the purpose we desire. Furthermore, when problems appear, we can fail at designing the right solution for the problem, and our behavior can make the situation worse.
In this chapter, we’ll describe system archetypes, which are system structures that produce problematic patterns of behavior (the author also calls them “system traps”). These archetypes are ubiquitous in the real world, explaining phenomena such as nuclear arms races, the war on drugs, and business monopolies. We regularly get mired in these problems, but by understanding how the system predictably produces the behavior, we can also find the right way to intervene.
A theme to keep in mind: beyond fixing the problem, it’s far better to avoid getting trapped in these problems in the first place.
Policy resistance is the failure of policies to achieve the desired outcome. It occurs when the actors resist the policies placed on them. The situation seems to be stubbornly stuck, regardless of what policies are passed to solve the problem.
From a system point of view, policy resistance occurs because the actors in the system have personal goals that differ from the policy. Visualize it as multiple actors pulling the system stock toward opposing goals. Each actor has a desired setpoint for the stock, and the actor takes action when the stock differs from its personal goal. However, each actor has a different setpoint, so all the actors are trying to pull the stock in different directions.
Furthermore, like typical balancing feedback loops, each actor’s behavior is proportional to how far the stock is from the actor’s setpoint. The stronger one actor pulls the stock to its favored direction, the stronger the other actors try to pull back to the center. You might think of this like a game of tug of war.
The system state thus is pulled tightly in multiple directions by all the actors. But since the system stock isn’t at any one actor’s preferred setpoint, everyone is dissatisfied with the situation.
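(Shortform example: Here’s a minimal Python sketch of policy resistance as a tug of war, with invented actors and numbers: each actor pulls the shared stock toward its own setpoint in proportion to how far away the stock is, and the stock settles at a weighted compromise that matches no one’s goal.)

```python
# Several balancing loops pulling one stock toward different setpoints
# (illustrative actors and strengths).

def policy_resistance(actors, stock=50.0, steps=30):
    """actors is a list of (setpoint, pull_strength) pairs."""
    for _ in range(steps):
        pull = sum(strength * (setpoint - stock) for setpoint, strength in actors)
        stock += pull
    return stock

if __name__ == "__main__":
    actors = [(0.0, 0.10),    # wants the stock driven to zero
              (80.0, 0.10),   # wants a high stock
              (30.0, 0.05)]   # content with a modest stock
    final = policy_resistance(actors)
    print(f"the stock settles near {final:.1f}, at none of the setpoints (0, 80, 30)")
```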
The war on drugs has multiple actors with different setpoints for the system stock of drug supply:
When one actor gains an advantage, the other actors pull the system back to where it was. For example, the police might increase border patrols to seize stockpiles of drugs. A number of events happen in sequence:
Thus the stock is back to where it began, though now with the actors pulling harder than before. Issuing additional policies will do little to change the situation.
In the 1960s, the Romanian government, headed by dictator Nicolae Ceausescu, was worried about the country’s low birth rate, which would stagnate the economy. They decided to outlaw abortion and contraception for women under 45. However, the real underlying problem of low birth rates was that parents were too poor to raise more children, and they didn’t want more children to be born.
After abortion was outlawed, the birth rate tripled for a period of time, but then the population practiced policy resistance:
The policy was so extreme that when the dictator’s government was overthrown, Ceausescu and his wife were executed, and the first law passed reversed the ban on abortion.
Much of what sustains policy resistance is the pushback itself: stronger policies provoke stronger counterreactions.
One counter-intuitive solution is to give up: cancel the policy. Things won’t get as bad as you think, because the other actors were only pulling so hard in response to you.
The ideal solution is to develop a policy that aligns all the actors’ goals and finds a way to satisfy everyone. This stops the actors from pulling against each other and gets them pulling toward the same goal.
Also known as: Keeping up with the Joneses, arms race
Two or more competitors have individual stocks. Each competitor wants the biggest stock of all. If a competitor falls behind, they try hard to catch up and be the new winner.
This is a reinforcing loop—the higher one stock gets, the higher all the other stocks aim to get, and so on. It can continue at great cost to all competitors until one or more parties bows out or collapses.
(Shortform note: Conceptually, this is similar to policy resistance, in that the agents in the system are responding to how the others are behaving. However, where policy resistance had balancing feedback loops driving the system stock toward a central point, escalation has reinforcing feedback loops driving the stocks as extreme as they can go.)
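As a rough illustration (ours, not the book’s), the following Python sketch models two competitors that each aim to stay 10% ahead of the other; every number is hypothetical.

```python
# A minimal sketch of escalation: each side aims to be 10% ahead of its rival,
# so the two stocks drive each other upward with no natural ceiling. All
# numbers are hypothetical.

a = b = 100.0            # starting stocks (e.g., two arsenals)
lead, rate = 1.10, 0.5   # each wants a 10% lead and closes half the gap per step

for year in range(20):
    goal_a, goal_b = lead * b, lead * a   # each goal is set by the rival's stock
    a += rate * (goal_a - a)
    b += rate * (goal_b - b)

print(round(a), round(b))
# Both stocks climb exponentially; the race ends only when one side quits
# (deliberately keeping a lower stock) or both sides agree to stop.
```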
Arms races are classic examples of escalation. During the Cold War, the Soviet Union and the United States monitored each other’s arsenals and pushed to amass the larger one. This was done at tremendous expense (trillions of dollars) and dragged on both economies, not to mention producing weapons that threaten humanity.
More pedestrian examples of escalation include:
Escalation can occur in the other direction as well, such as price wars in which competitors progressively undercut each other.
As in policy resistance, the solution is to dampen the feedback by which competitors respond to each other’s behavior.
One approach is to negotiate a mutual stop between competitors. Even though the parties might not be happy about it or may distrust each other’s intentions, a successful agreement can limit the escalation and bring back balancing feedback loops that prevent runaway behavior.
If a negotiation isn’t possible, then the solution is to stop playing the escalation game. The other actors are responding to your behavior. If you deliberately keep a lower stock than the other competitors, they will be content and will stop escalating. This does require you to be able to weather the stock advantage they have over you.
Also known as: competitive exclusion, success to the successful
Two competitors have access to the same limited pool of resources. The winner gets a greater share of resources, which in turn allows the winner to compete even better, which then earns it more resources. On the other side, the loser gets progressively fewer resources, which makes the loser even less able to compete. Reinforcing feedback loops are at play here.
If this cycle is allowed to continue unabated, the losers will eventually be forced out of the game entirely. This outcome may be unhealthy and run counter to the goals of the whole system.
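A minimal Python sketch (our illustration) shows how a small head start compounds; the firms, pool size, and fixed cost are hypothetical.

```python
# A minimal sketch of "success to the successful": each round, a fixed pool of
# new business is split in proportion to each firm's capital, while both firms
# pay the same fixed operating cost. All numbers are hypothetical.

capital = {"firm_a": 55.0, "firm_b": 45.0}   # a nearly even start
pool, fixed_cost = 20.0, 8.0

for year in range(30):
    total = sum(capital.values())
    for name, value in list(capital.items()):
        if value <= 0:
            continue                          # the loser has already been forced out
        share = value / total                 # the bigger firm wins more of the pool...
        capital[name] = max(0.0, value + share * pool - fixed_cost)

print({k: round(v) for k, v in capital.items()})
# The small initial edge compounds: firm_b's share shrinks until it exits,
# after which firm_a captures the entire pool.
```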
In ecology, one species can outcompete another, taking all the system resources to the point that the loser becomes extinct. (Shortform note: An example is an invasive species that reproduces so successfully it disrupts the local ecosystem.)
In commerce, a successful business can reinvest its profits into more technology, capacity, or political lobbying, which further enlarges its share of profits. Left unchecked, the business can become a monopoly.
The author also discusses this in the context of social inequality. The poor get poorer in a number of ways:
If you’re an individual agent losing against a growing competitor, you can choose to diversify, or play a different game. In business, this can mean entering a new market or seeking different resources. However, this doesn’t always work if the monopolist can remove all budding competition, say by destroying competitors or buying them up.
From a system point of view, the monopolist’s reinforcing feedback loop can be countered with balancing feedback loops that limit the monopolist’s growth. These include:
These have the effect of leveling the playing field and equalizing the players.
Also known as: eroding goals, boiled frog syndrome
The actor sets a performance bar that it tries to stay above. However, instead of being fixed at an absolute level, the bar is actually set relative to previous performance.
If the actor performs under the bar, the bar is then set a bit lower. This might be excused by sentiments like, “Well, it’s not that much worse than last year” or “Well, everyone’s in trouble too, so we’re not doing that badly.”
And if the actor has a tendency to undershoot the bar, this leads to a vicious cycle that drags the performance down, possibly to complete failure.
This drift tends to happen gradually. A sharp decline would cause alarm and prompt a forceful correction. In a gradual decline, however, the actor tends to become complacent and forget how good things used to be.
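To see how little drift it takes, here is a minimal Python sketch (our illustration); the 5% shortfall and the rate at which the bar erodes are hypothetical.

```python
# A minimal sketch of eroding goals: performance habitually comes in a bit
# under the bar, and the bar is then reset partway toward recent performance.
# The shortfall and drift rate are hypothetical.

goal = performance = 100.0

for quarter in range(20):
    performance = 0.95 * goal              # undershoot the bar by 5%
    goal += 0.5 * (performance - goal)     # "it's not that much worse than last year"

print(round(goal, 1), round(performance, 1))
# Both ratchet steadily downward, to roughly 60% of the original level here.
# Had the goal been held at an absolute 100, performance would have stayed at 95.
```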
A business that is losing market share can lower its targets and progressively lose share until it becomes bankrupt.
(Shortform note: Steve Jobs commented on why he was infamously harsh on hiring: “A players hire A players, but B players hire C players and C players hire D players. It doesn’t take long to get to Z players. The trickle down effect causes bozo explosions in companies.”)
Instead of setting performance standards relative to previous performance, set absolute standards that don’t change with performance.
Better yet, turn the vicious cycle into a virtuous cycle—set the goals relative to the best performance. If the system overperforms the goal, set the goal higher.
The actors in a system share a limited common resource. When the actor uses part of the resource, he receives all of the gain, but the cost of using the resource is spread among all the users. In sum, this usage is a net gain for the actor. Therefore, the rational behavior is to use as much of the resource as he can.
The problem is that all actors in the system are thinking this way too. As all actors act in their own self-interest, the common resource is eroded, possibly to the point of irrevocable destruction.
In essence, there is a weak feedback loop between the actor’s behavior and its future impact. It’s not clear the actor is behaving in a self-destructive way until it’s too late.
The classic example is a common pasture where farmers each maintain a herd of cattle. Each farmer is incentivized to grow his herd: he gets all the benefit of each additional animal, while the cost of its grazing is shared by everyone who uses the pasture. All the farmers grow their herds as much as they can, and the tragedy arrives when the pasture is destroyed and all the herds starve.
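The arithmetic behind each farmer’s decision can be made explicit in a minimal sketch (hypothetical numbers).

```python
# A minimal sketch of the commons payoff: one more animal gives its owner the
# full gain, while the grazing cost is spread across everyone using the
# pasture. All numbers are hypothetical.

farmers = 10
gain_per_animal = 100      # value captured entirely by the animal's owner
grazing_cost = 300         # damage to the shared pasture from that animal

individual_payoff = gain_per_animal - grazing_cost / farmers
collective_payoff = gain_per_animal - grazing_cost

print(individual_payoff)   # +70.0: adding the animal is rational for each farmer
print(collective_payoff)   # -200: ruinous for the group once everyone does it
```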
Other examples include:
The Tragedy of the Commons occurs because there is missing feedback between the actor’s behavior and its consequences. The solution is to tighten this feedback loop, which can be accomplished in a few ways.
The first method is to educate the actors. Help them see the effects of their usage, persuade them to moderate their behavior, and hope that they self-regulate.
The second method is to parcel out the resource so it’s no longer shared. Here, the actor owns her own share of the resource, so she bears the consequences of her behavior.
Some resources, like river water or fish in the ocean, can’t be divided, which leaves the third method: regulate the resource. Once the rules are set and enforced, each actor can trust that all the other actors will behave responsibly. This is also known as “mutual coercion, mutually agreed upon.” Examples include:
For regulation to succeed, it must be strictly enforced to discourage rulebreakers.
Also known as: dependence, shifting the burden to the intervenor
An actor in a system has a problem. In isolation, the actor would need to solve the problem herself. However, a well-meaning intervenor gives the actor a helping hand, alleviating the problem with an intervention.
This in itself isn’t bad, but in addiction, the intervenor helps in such a way that it weakens the ability of the actor to solve the problem herself. Maybe the intervention stifles the development of the actor’s abilities, or it solves a surface-level symptom rather than the root problem. Temporarily, the problem seems to be solved, and everyone pats themselves on the back.
But soon enough, the problem appears again, and in an even more serious form, since the actor is now less capable of solving the problem. The intervenor has to step in and help again to a greater degree. Thus the reinforcing feedback loop is set up—more intervention is required, which in turn further weakens the actor’s ability to solve it, which in turn requires more intervention. Over time, the actor becomes totally dependent on—addicted to—the intervention. (This is why the systems term for this archetype is “shifting the burden to the intervenor.”)
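A minimal Python sketch (our illustration, not the book’s) of the dependence loop; the problem size, starting capability, and erosion rate are hypothetical.

```python
# A minimal sketch of the addiction trap: each period, the intervenor covers
# whatever part of the problem the actor can't handle, and every unit of
# outside help slightly erodes the actor's own capability. Rates are hypothetical.

problem = 100.0       # size of the recurring problem each period
capability = 90.0     # how much of it the actor can solve on her own

for period in range(20):
    intervention = max(0.0, problem - capability)            # the intervenor fills the gap
    capability = max(0.0, capability - 0.2 * intervention)   # dependence erodes capability

print(round(capability, 1), round(problem - capability, 1))
# Capability decays toward zero, so the required intervention grows every
# period until the actor depends on the intervenor entirely.
```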
A system addiction behaves similarly to a narcotic drug causing a physical addiction:
Addictions occur when well-meaning interventions undermine the system’s ability to take care of itself:
Policies that cause addiction are deceptive—they are often well-meaning, seem like they solve the problem, and may have consequences that are hard to foresee (unless you map out the system).
Like many of these system archetypes, the best solution is to avoid getting addicted, or causing addiction, in the first place.
When you intervene in a system:
Once addiction takes hold, the only way to cure it is to remove the intervention and go through withdrawal. This can be done gradually or all at once, depending on how bad the withdrawal symptoms are. It’s better to withdraw as early as possible; the longer you wait, the more painful withdrawal will be.
Also known as: Following the letter of the law, but not the spirit
A system has a goal, and it sets rules in place to try to achieve the goal. However, the actors in the system will try to evade the rules. They’ll find technicalities that let them stay nominally compliant but behave in a way that violates the intent of the rules.
This effect gets worse when the rules are designed especially poorly, so that the evasive behavior totally contradicts the system’s goals.
The instinctive reaction is often to strengthen the rules or enforcement. However, this usually leads to worse evasion. (Shortform note: Remember policy resistance from earlier—the harder one actor pulls in one direction, the harder the other actors pull in the opposite direction.)
The better reaction is to design better rules. Try to foresee how actors will try to evade the rules and modify rules so that they better suit the goals of the system.
Also known as: “Be careful what you wish for.”
Optimizing for the wrong goal is the conceptual opposite of rule beating. Here, a goal is defined incompletely or inaccurately, so that progress toward the goal doesn’t lead to the desired result.
This can happen:
A system is often so effective at pursuing its goal that it can march steadily toward it before anyone realizes the goal was never what they wanted in the first place.
A common theme is confusing motion with progress.
National governments fixate on gross national product (GNP) as a key metric. The higher the growth rate, the better. But GNP is woefully incomplete in describing the welfare of the population. It measures the value of the goods and services produced; it doesn’t reflect happiness in a family, the quality of our political discourse, or social equality.
Counterproductively, GNP rewards inefficiency—car accidents increase GNP through medical bills and buying new cars; a new lightbulb that costs the same to manufacture but lasts twice as long decreases GNP. GNP measures throughput (the production rates of things) rather than capital stocks (the houses, computers, and things that constitute wealth).
Imagine a world where instead of competing for GNP, countries competed for the highest ratio of capital stocks to throughput (rewarding efficiency), or for the highest healthy life expectancy.
Create goals that lead to the intended result. Choose indicators that show the welfare of the system, not a narrow slice of its output. Don’t confuse motion with progress.
Leverage points are places to intervene in a system. It’s important to 1) find the right leverage point, and 2) push it in the right direction.
Counter-intuitively, people often find a good leverage point, but push it in the wrong direction. Remember the car lot, where reducing delays actually worsened the oscillations.
The author presents 12 leverage points in order of increasing effectiveness.
Before we dive in, some themes to keep in mind:
In addition, at a high level, we group the leverage points into three major categories, also in increasing order of effectiveness:
Parameters are the numbers in a system. These include the levels of stocks, the rates of flows, and, more broadly, the swapping of individual elements within the same structure.
We spend the vast majority of our time worrying about parameters, but the author argues this is definitively the least effective place to intervene. For example:
Adjusting parameters doesn’t work well because stronger system effects are at play, such as feedback loops, incentive structures, and delays. If a system is driven by a runaway loop, tweaking linear parameters does not meaningfully change its behavior.
A minority of parameters can become effective leverage points when they affect higher leverage points in this list, such as the growth rate in a reinforcing feedback loop or the time delay. But the author argues these are rarer than most people think.
We’ve learned that stocks are buffers that can stabilize the system over fluctuating flow rates. Your bank account is a stock of money that helps you withstand volatility in your income and expenses.
Changing the stock changes the behavior of the system. You can stabilize a system by increasing the stock, but this comes at the cost of efficiency—larger stocks cost more to build or maintain. In contrast, you can increase efficiency by decreasing the stock, but this comes at the cost of lower robustness.
Stocks can be effective leverage points, but they are often slow to change, especially when they’re physical in nature.
Changing the stock-and-flow structure means changing which flows are connected to which stocks. This can change the behavior of the system:
However, the stock-and-flow structure is hard to change once it’s set, especially if it’s physical. It’s better to design it well first—understand the structure’s limitations, and prevent fluctuations that exceed the structure’s capacity.
As we’ve seen, delays in feedback loops tend to cause oscillations. In turn, oscillations worsen the effectiveness of your decisions, because you’re making decisions with delayed information. Furthermore, the results of your actions are delayed—by the time your actions have results, the situation may have changed so that your actions have become inappropriate.
Changing the length of delays affects system behavior:
Changing delays can cause big changes to system behavior. Just make sure you change delays in the right direction—decreasing delays may seem intuitively good, but they can lead to unwanted overreactions.
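The oscillation is easy to reproduce with a minimal Python sketch (our illustration, not the book’s car-lot model); the target, delay, and correction strength are hypothetical.

```python
# A minimal sketch of a balancing loop acting on delayed information: the
# manager corrects toward a target inventory but only sees readings from
# several weeks ago. All numbers are hypothetical.

from collections import deque

target, inventory = 100.0, 60.0
delay = 4
readings = deque([inventory] * delay, maxlen=delay)   # the information pipeline

levels = []
for week in range(30):
    perceived = readings[0]                  # the oldest, i.e., delayed, reading
    inventory += 0.3 * (target - perceived)  # correct toward the target
    readings.append(inventory)
    levels.append(round(inventory))

print(levels)
# The stock overshoots the target and oscillates before settling; with a longer
# delay or a more aggressive correction, the swings grow instead of dying out.
```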
However, many delays can’t be changed—a baby takes a certain number of years to grow to become an adult; fish can only reproduce so quickly. In these cases, you’ll need to intervene elsewhere in the system.
This category of leverage points moves from the concrete layer of the system’s physical structure to a higher layer of information and control.
A balancing feedback loop keeps a stock at a setpoint. It can be broken down into components and parameters:
Each of these can be leverage points for changing how the balancing feedback loop works.
The balancing feedback loop should be designed to be strong enough to regulate whatever it’s regulating. If the feedback loop is too weak for the system changes, it will fail to keep the stock at its desired setpoint.
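This can be illustrated with a minimal sketch (ours, with hypothetical numbers): a balancing loop tries to hold a stock at a setpoint against a constant drain, and a weak loop simply cannot keep up.

```python
# A minimal sketch of a balancing loop holding a stock at a setpoint against a
# constant drain. If the loop's corrective strength (gain) is too weak, the
# stock settles far below the setpoint. All numbers are hypothetical.

def settle(gain, setpoint=100.0, drain=10.0, stock=100.0, steps=200):
    for _ in range(steps):
        inflow = gain * (setpoint - stock)   # the balancing loop's correction
        stock += inflow - drain              # the drain the loop must fight
    return round(stock, 1)

print(settle(gain=0.1))   # weak loop: the stock ends up near zero
print(settle(gain=0.9))   # strong loop: the stock holds close to the setpoint
```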
It’s also common to overlook the value of balancing feedback loops and to remove seemingly unnecessary ones. Some loops exist to protect against rare emergencies and may rarely or never be activated; thus, they might seem useless. Removing them can be a grave mistake, analogous to removing the emergency shutdown systems in a nuclear power plant. A system may have multiple feedback loops operating on different timescales and under different conditions, and they may all be necessary to ensure a resilient system.
Reinforcing feedback loops grow exponentially; left unchecked, they can cause serious, irreversible damage.
The leverage point for reinforcing feedback loops is its gain, or its growth rate.
Often, reinforcing feedback loops are countered by balancing feedback loops. An intuitive response to controlling a reinforcing feedback loop is to strengthen the balancing feedback loop. But it’s often easier to simply reduce the growth rate of the reinforcing feedback loop, to keep it more manageable.
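A minimal sketch (ours, hypothetical numbers) compares the two levers.

```python
# A minimal sketch comparing two ways to tame a reinforcing loop: push back
# harder with a balancing loop, or simply slow the loop's growth rate. All
# numbers are hypothetical.

def run(growth_rate, pushback, steps=50, stock=10.0, limit=100.0):
    for _ in range(steps):
        stock += growth_rate * stock                    # reinforcing loop: growth scales with the stock
        stock -= pushback * max(0.0, stock - limit)     # balancing loop resists any excess over the limit
    return round(stock)

print(run(growth_rate=0.10, pushback=0.5))   # fast growth is contained only by constant, strong pushback
print(run(growth_rate=0.02, pushback=0.0))   # slower growth stays manageable with no pushback at all
```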
In a system, decisions and well-functioning feedback loops often require information. The car lot manager requires information about current inventory; a democracy requires that voters have information about their government.
In contrast, missing information can lead to system malfunction. People without information cannot make decisions to meet the system goals. The leverage point is then to provide the right information in the right form.
Create new information flows, and you can create new, powerful feedback loops that can keep the system functioning and resilient.
A system’s rules define its boundaries. These include national constitutions, laws, incentives, punishments, and contracts.
In a well-functioning system, good rules are set that achieve the system’s goals. (Recall from the last chapter that bad rules invite counterproductive evasion or cause actors to chase the wrong goal.)
Changing the rules can dramatically change behavior.
To figure out why a system is dysfunctional, look at who’s setting the rules. The rules may be self-serving and contradictory to the purported goals of the system.
Beyond the system’s information and control layer—how it regulates itself—is an even higher level of abstraction—how a system defines its purpose and changes its purpose.
Self-organization is the ability of a system to change itself, and possibly reinvent itself. This leads to impressive resilience, as the system can adapt to changing circumstances. It’s also more powerful than any leverage point we’ve discussed so far, because a self-organizing system can improve its rules, information flow, and feedback loops.
As previously explained, vastly complex self-organization can result from a few simple rules:
The leverage point is therefore to modify a system’s ability to self-organize. On one hand, you can improve a system’s resilience by enabling self-organization.
(Shortform note: Taking the technology example above, this can mean:
On the other hand, you can make a system brittle by quashing self-organization. Invert any of the enablers above, and you reduce self-organization—this may mean eliminating biological diversity by killing species; suppressing access to human knowledge; or crushing new experiments as they arise.
Any system that becomes so calcified it can no longer self-organize, or that deliberately suppresses self-organization, is doomed in the long run.
A system’s goals determine its behaviors. Changing goals can mutate every lower leverage point to suit the system’s new goals.
Usually, changing the elements in a system has little effect, unless changing an element also changes a higher leverage point, such as the system’s goals.
Therefore, the author argues that no technology or system is inherently good or bad; it depends on the goals of whoever is using it.
A system’s paradigms are its implicit beliefs about how the world works. They’re often so deeply ingrained in our minds that we don’t consciously articulate them—there’s no need to, because we believe everyone else believes them too.
In Western societies, we have these paradigms:
These paradigms are so natural we don’t often think about them, yet they govern much of what we do. These paradigms drive every lower leverage point—the system goals, the information flows, and so on. These paradigms govern how we set up rules around property ownership, how we set up systems for social welfare, and how we reward system actors that produce growth.
Yet these paradigms can be completely foreign to other societies, either traditional societies that exist today or historical ones from just thousands of years ago.
It can seem harder to change system paradigms than stocks and flows. But a single person with a single idea can generate a paradigm change, which can in turn overturn the entire system:
Outside of the originator of the paradigm, getting a whole system to shift a paradigm can take considerable time and effort. But ultimately, a paradigm shift can completely revolutionize every element of a system.
The highest leverage point, even beyond system paradigms, is to free yourself from fixed paradigms. A paradigm is just an idea. No paradigm is absolute truth; each captures only a small slice of an endlessly complex universe. To transcend paradigms is to avoid being fixed to any particular paradigm, and to stay flexible.
(Ironically, the idea that there is no true paradigm is itself a paradigm, and the author finds this hilarious.)
To let go of paradigms is to be comfortable with not knowing, not having certainty. This is empowering—if you are no longer enslaved by paradigms, then you can choose whichever one you please, or none at all.
These 12 leverage points are just a guideline. The exact order is less important than which leverage points are more or less changeable in the system you’re studying.
Understanding leverage points in theory is just the beginning. Now you need to deeply analyze a system—understand its structure, its rules, its paradigms—to begin to know where you can push and prod. To get an accurate understanding, you may need to discard your own prior assumptions and paradigms about how the world works.
Learning to think in systems is a lifelong process. The world is so endlessly complex that there is always something new to learn. Once you think you have a good handle on a system, it behaves in ways that surprise you and require you to revise your model.
And even if you understand a system well and believe you know what should be changed, actually implementing the change is a whole other challenge.
The author ends with advice that she and other systems thinkers have learned over their lifetimes. (Shortform note: We’ve organized her points into three sections:
Before you eagerly dive in and try to repair a system, make sure you understand it well first.
We all have our favorite assumptions about how things work, and how problems should be fixed. To truly understand a system, we have to discard these and start from scratch.
To understand a system, first watch to see how it behaves. Get a sense of its beat.
This doesn’t necessarily mean stopping and watching it in real-time. Rather:
Our modern industrial society obsesses over things that are quantifiable: find a way to express things in numbers, then make the good numbers go up. (Shortform note: This is another paradigm we tend to accept without thinking about it.)
In your system model, don’t ignore the unquantifiables. These might drive actors in the system as strongly as the quantifiables, and omitting them will lead to an incomplete model. These unquantifiables might include things like:
If you want to build a quantitative system model, you may need to put your unquantifiables onto a quantitative scale (such as measuring human dignity on a scale of 1 to 10).
As discussed earlier, we are prone to drawing artificially narrow boundaries when understanding systems. To really appreciate a system’s complexity and guide it to a good outcome, you’ll need to relentlessly expand the boundaries by which you perceive the world.
The author focuses on three boundaries in particular.
As a society, we tend to fixate on the short term. In how few years can this investment pay off? How do we get faster growth, sooner?
Systems, of course, can persist over decades, centuries, and much longer time scales. Focusing on the short term is like hiking a treacherous path while staring down at your feet.
Instead, try thinking in centuries. How will this system behave 10 generations from now? System behavior 10 generations in the past affects your life today; system behavior today will affect lives 10 generations from now.
To make sense of a complex world, you’ll need to understand more disciplines beyond your major in college, your current profession, and what you’re comfortable with. You’ll need to seek wisdom in psychology, economics, biology, the arts, religion, and more.
As you learn from these other disciplines, be aware that their practitioners see the world through their own lenses and draw their own boundaries. As an interdisciplinary thinker, you will need to hurdle these boundaries.
Push yourself to care about more than you naturally do. The system is a cohesive whole and requires its parts to function properly. Your liver wouldn’t succeed if your heart failed, and vice versa. Likewise, the rich wouldn’t succeed if the poor failed; your country wouldn’t succeed if its neighbor failed; the world wouldn’t succeed if its natural ecosystems failed.
As you come to understand a system, put pen to paper and draw a system diagram. Lay out the system’s elements and show how they interconnect. Draw the feedback loops (both balancing and reinforcing) and mark the delays. Articulate the system’s goals and paradigms.
Drawing your system diagram makes explicit your assumptions about the system and how it works.
Next, expose this model to other credible people and invite their feedback. They will question your assumptions and push you to improve your understanding. You will have to admit your mistakes and redraw your model, which trains your mental flexibility.
You will encounter many contradictory ideas. Hold them in your head at the same time rather than fixating on your favorite one, and rule out an idea only when you have enough evidence to do so.
The allure of systems thinking is to gain a superpower to predict the world and to control it. Believing this is a great mistake. The world is complex, and many parts of it elude understanding.
But you can still play a role. You can’t predict a system exactly, but you can guide it to a good direction. You can’t control a system, but you can influence its design and evolution. The author describes this as “dancing with the system.”
For many systems thinkers, the goal of understanding a system is to fix a problem or make it work better. But even if you understand a system and believe you know the best intervention for it, actually implementing that change is another, possibly greater, challenge.
When probing a system and investigating why interventions don’t work, you may bring up deep questions of human existence.
For example, you might think, “If these boneheads were only to open their eyes and accept this data, then they’d see the value of the solution, and we could fix the system.” But this raises deeper questions: How does anyone process the data they receive? How do people view the same information through different cognitive filters? Why can the same data lead to two completely different conclusions from two different people?
Likewise, you may think a system needs to reorient its values. But where do values come from? Why does someone arrive at the values they have? Why does it seem easier to orient around quantitative values than qualitative ones?
Social systems are driven by deep human needs and emotions, and that’s why they can be so resistant to change. This complexity can be maddening, but understanding it is necessary to appreciate why systems fail and how to improve them.
As an intervenor, resist the tendency to bring in outside solutions that cripple the system’s ability to take care of itself (this is the system trap of addiction). Instead, try to empower the system to do more internally, to solve its own problems.
For instance, to nurture a developing economy, don’t bring in foreign corporations to build factories to “create jobs.” A more internally empowering solution would be to issue small loans to local entrepreneurs and teach business skills.
Actors in a system can behave poorly when they evade accountability for their actions. Tighten the feedback loop between the actor and the consequences of its behavior. Examples include:
Intervenors can issue rules, laws, and policies to influence systems. These policies are often static—change this parameter in perpetuity, regardless of what changes in the world.
However, systems behave dynamically, and a fixed policy that was appropriate at one stage of a system’s development may be inappropriate at another. Instead, consider dynamic, flexible policies that can operate nonlinearly over a range of conditions and that are open to changing over time.
For example, consider a law to reduce a new pollutant, based on preliminary, uncertain data that the pollutant is harmful. Instead of a static policy of a fixed tax, consider a dynamic policy:
With a dynamic policy like this, the policy can continue to achieve its goal even if the situation turns out to be better or worse than currently understood.
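As a toy illustration (ours, not the author’s), such a tax could be written to scale with the harm measured so far rather than being fixed at passage; the function and numbers below are hypothetical.

```python
# A minimal sketch of a dynamic policy: a per-ton tax on a new pollutant that
# is re-tuned each year as evidence of harm accumulates, instead of being
# fixed once when the law is passed. Everything here is hypothetical.

def next_years_tax(measured_harm_index: float, base_tax: float = 10.0) -> float:
    """Scale the tax with measured harm, within agreed lower and upper bounds."""
    return base_tax * min(4.0, max(0.5, measured_harm_index))

print(next_years_tax(1.0))   # 10.0: harm is as first estimated, so the tax is unchanged
print(next_years_tax(2.5))   # 25.0: later studies show more harm, so the tax rises
print(next_years_tax(0.2))   # 5.0:  the harm turns out to be minor, so the tax falls to its floor
```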
More generally, systems thinking can be applied at a societal level to improve how people think and behave.
The language we use affects our thinking, and how we think affects how we behave. If we use systems terms in our discussions, we can make everyone a better systems thinker.
For example, we use the terms “productivity” and “growth” pervasively. In contrast, we talk much less often about “resiliency,” “feedback loops,” and “self-organization.” You can see how using these terms can make us aware of systems, which can in turn guide us to improve the ones we belong to.
Information is vital to making good decisions. Bad system behavior can result from delayed, biased, or hidden information. In contrast, presenting accurate, transparent information can cause almost magical changes.
The author notes that in the 1980s, a new US law required companies to report their pollution levels. Reporters then published lists of the top local polluters. Within a few years, reported pollution decreased by 40%. This happened not through stringent penalties or enforcement, but merely by opening access to information.
Information is power. Be aware of who controls the information and how they may mold it to suit their own purposes.
We previously learned about the system trap of progressively declining standards, where the standards bar is pegged to recent performance, and so the bar can slip progressively lower.
The author notes that society’s morals are caught in this vicious cycle. Bad behavior might be shocking at first, but it goes unpunished, and over time we become desensitized.
Instead, we should maintain absolute moral standards and refuse to let them slip. The further our society deviates from these standards, the harder we should push to recover.