Antifragile is the fourth book in former options trader Nassim Nicholas Taleb’s five-book Incerto series, which explores randomness and its unpredictable effects. In this book, Taleb discusses strategies and principles that will allow us to be helped by unforeseen events rather than harmed by them.
People often think that the opposite of fragility is durability. If something is fragile, that means it’s easily broken. Therefore, if something isn’t easily broken, logically that should mean it’s the opposite of fragile. However, there’s another step beyond durability: something that actually gets stronger under stress. Since there isn’t an established English word for such a thing, let’s call it antifragility—not just the lack of fragility, but its true opposite.
Antifragile discusses the ideas summarized below.
We live in an unpredictable world. The models and theories we use to try to predict the future invariably fall apart as unforeseen events prove them wrong and, in turn, destroy the plans we made based on those models. Clearly, systems based on such flawed models are bound to be fragile—easily broken.
The solution to this problem is antifragility. Instead of a never-ending search for more accurate models and better predictions, all we need to do is make sure that we’re in a position to benefit from uncertainty and volatility instead of being harmed by it.
This is hardly a new concept; nature exhibits antifragility in almost everything she creates. An organism can strengthen itself through minor damage in the form of exercise. In a similar sense, a species can strengthen itself through minor damage in the form of natural selection, which leads to evolution.
However, unlike nature, humans try to control the world through models and rules. We think we can perfectly predict the future and avoid any shocks that would cause our fragile systems to fall apart. We think we can outsmart millions of years of evolution and antifragility, and we’re almost invariably wrong.
Instead of trying to predict the future, we should assume that there will be major events we can’t see coming—because, sooner or later, there will be. If we’re prepared for them, using the methods and practices explained in this book, we can make sure that such events work to our advantage instead of hurting us. By avoiding fragility and embracing antifragility wherever possible, we can set ourselves up to thrive in an uncertain world.
The phrase “necessity is the mother of invention” points to another, more immediate sort of antifragility. Basically, people overreact to setbacks. They use more energy and effort than they need to compensate for the problems they experience. The excess energy goes on to become innovation and progress.
For example, a speaker who’s quiet or hard to understand will capture his audience’s attention more effectively than one who sounds like a trained actor. By straining to hear and understand him, the audience will naturally pay more attention and retain the information better.
However, this is only effective up to a certain point. An audience that can barely understand the speaker will pay closer attention; an audience that can’t hear him at all will simply give up.
The opposite is also true: A lack of challenge causes people to undercompensate. For example, the automation of airplanes actually led to an increase in preventable flying accidents at first. The pilots were becoming complacent and—more dangerously—bored. Their skills and their attention waned, and as a result they got into accidents that more alert pilots wouldn’t have.
In short, people in a challenging situation will rise to the challenge and become stronger from it. However, people who get too comfortable miss out on the chance to benefit from antifragility, like the pilots who relied too much on their newly automated systems and ended up crashing their planes.
A basic guideline is that anything living—whether literally, as with organisms, or figuratively, as with an artist’s growing and changing popularity—has some degree of antifragility. Another way to think about antifragility is in terms of simple versus complex systems. A light switch is a simple system: You flip the switch and the light turns on. If any part of the system is damaged, like a faulty wire or a burned-out bulb, the system works less effectively or not at all. In other words, this system is fragile.
A human body, on the other hand, is a complex system. It has many different parts that communicate with each other and help each other compensate for stressors. When part of a system is able to communicate that it’s been damaged, and other parts of that system are able to compensate for that damage—or overcompensate, as discussed earlier—antifragility is the result. In humans, we can see a simple example with exercise: Weak muscles break down and become bigger and stronger as they heal.
So, in short, simple systems are easily broken while complex systems often have built-in ways to absorb shocks. When complex systems are able to strengthen themselves in response to the damage, they’re antifragile.
Modern society, in many ways, tries to remove the stress and randomness from life. We schedule our every move: When we work, when we eat, when we sleep, not to mention how we do all of those things. When every step is preplanned, there’s little struggle or danger except in carrying out that plan.
The side effect of this lack of struggle is a lack of personal growth, of artistic expression, and in many cases, of valuable lessons. Without the randomness and danger of life, there’s no chance for us to benefit from antifragility.
For example, one learns a language best by being immersed in it, by making mistake after mistake before learning how to communicate effectively. However, to avoid that embarrassment and struggle, we teach languages in classrooms using books and rules, and we get much worse results.
As complex living things, humans benefit from—and secretly hunger for—some amount of risk and chance. Someone who can't find the energy for a preplanned gym session could still lift a car off a trapped child if need be. Someone who can't stick to a diet will skip meals quite easily if there's simply no food to be had.
“Modern” societies are focused on making life last longer and longer, even as the people living those lives get sicker and sicker. By contrast, in “primitive” hunter-gatherer societies, the effects of old age didn’t take hold until very near to the end of life.
Modernization promotes uniformity, which people often confuse with stability. If something never changes, then naturally it seems stable.
However, what people miss is that when such a system does change, it’s usually catastrophic. Like the switch and the lightbulb, the only reason an unchanging system changes is because something breaks. In other words, it’s fragile.
A system with some natural variation might seem less stable and less reliable than an unchanging one, but it's usually more stable in the long run. A system that's naturally subject to change and randomness will be able to cope with variations. It may even improve as those variations destroy the weak and flawed parts of the system, leaving the best and most stable parts behind. Such a system would be antifragile.
For example, it might seem like a bank clerk has a more stable job than a cab driver. The clerk has a fixed schedule and brings home the same amount of money each week, while the cab driver may have good days and bad days that make his income seem less reliable.
However, the clerk’s stability is an illusion. He’s totally dependent on the bank and, on a larger scale, the economy to keep him employed. If something goes wrong he could find himself out of a job, suddenly bringing home no income at all. There’s no chance for the clerk to recover from the shock or grow stronger from it; his job is simply gone.
The cab driver, on the other hand, is relatively protected from shocks like that. There’s no chance that he could suddenly lose his job, because people will always need cab drivers. If his income starts to drop, he improves his skills—becoming a “stronger” cab driver—and continues about his business. The cab driver’s career is variable, and therefore has the chance to be antifragile.
Modern society also tends to create large, rigidly controlled systems: centralized governments, multinational corporations, and so on. Large systems are more vulnerable to fragility because they have, and need, many more resources. The effects when those systems break are also much more pronounced.
For example, if a large bank gets into financial trouble and has to suddenly sell off billions of dollars in stock, it creates a shock that affects the entire global market. However, a bank a 10th of that size in a similar situation might only have to sell stock valued at millions, not billions, which the market could absorb much more easily.
Along with size, another common cause of fragility is rigid, top-down control. Large organizations such as major banks, corporations, and even (or especially) governments are prone to this type of fragility. They’re run based on rules, predictions, and models, which almost inevitably fail sooner or later. When that happens, as in the above example of the bank, the results can be disastrous.
Perhaps the best “federal” government in the world is Switzerland’s, which can hardly be called a federal government at all. It’s a loose collection of small cantons or regional governments, each of which deals primarily with its own local issues. There’s no grand supreme leader handing down edicts to the cantons.
While there is occasional friction between cantons when an issue impacts more than one of them, that actually serves to strengthen the system as a whole. There’s some natural variation in how they run their affairs, which gives the entire system a chance to benefit from antifragility. Good policies survive, bad policies are eventually thrown out. This seems to be the ideal system of government: small, decentralized, and with some chance for small struggles.
Modern society tends to rely on predictive models to plan for the future. Rather than setting ourselves up for antifragility, we try to foresee everything that might go wrong and plan specifically for those events. This is reflected in everything from weather forecasts to quarterly projections at a corporation.
However, predictions and models are notoriously fragile and easily destroyed by a single piece of data that doesn’t fit. Decisions based on those models are similarly fragile, and often have harsh consequences: People lose jobs, stock markets crash, and sometimes people even die.
In spite of predictive models’ terrible track records, many people insist on using them to decide on courses of action. This is especially dangerous when people want to intervene in something that would be just fine if left alone.
Perhaps the best example of this tendency is in medicine. Medicine has always had risks and harm associated with it, which are called iatrogenics—unintended harm from treatments. For example, before we knew what germs were, countless people died of so-called “hospital fever.” Even now, medicines and treatments almost always have side effects, some of which can be more devastating than the disease they’re supposed to treat.
Therefore, ideally we’d only intervene when potential benefits outweigh potential risks. Someone who will die without heart medication should, of course, have it. Someone whose blood pressure is sometimes a bit high would probably do better without it—the side effects of the medication may be more harmful than the condition.
Iatrogenics and its non-medical equivalents happen because of a common logical fallacy: People mistake absence of evidence for evidence of absence. Smoking, for example, was once thought to have health benefits. There wasn't yet evidence that it was harmful, so people took that as evidence that it wasn't harmful. It wasn't until decades later that people realized its horrific effects.
We could avoid many harmful side effects by simply shifting the burden of proof: Rather than having to prove that something is harmful, we should have to prove that it isn't harmful before starting to use it—guilty until proven innocent, so to speak.
The key to antifragility is having more potential benefits available than risks. When that’s the case, on average you’ll gain more than you lose from random events. For example, you might put a small amount of money into stocks with high potential to increase or decrease in value. If the stocks go down, you haven’t lost anything you couldn’t afford to lose. However, if they go up, you could stand to make a great deal of money.
One way to make sure you’re in an antifragile situation is to have many options available to you, so you can make the best choice at any given time.
This principle applies even in very mundane situations. For example, someone invited to a dinner party that isn’t especially interesting, but is better than eating alone, might call around first to see if any other friends are available that evening. If a more interesting opportunity presents itself, the person can skip the party and do that instead. If not, the party is available as a fallback option, with the key word being option.
This person has very little to lose and a fair amount to gain. At worst, he winds up at a party that’s less exciting than he’d hoped. At best, he finds something more fun to do and has an excellent evening.
Making proper use of options doesn’t take any great intellect or education, only rationality. Nature itself uses optionality to always find the best outcomes—or at least, the outcomes that will allow life to continue. Countless mutations and evolutionary trends fail, and the organisms carrying them die out. However, these failures don’t harm nature itself. At the same time, the beneficial mutations and trends propagate, making it ever-more-likely that life will continue to exist on Earth. You wouldn’t call nature intelligent, but there’s no arguing that it’s rational.
Related to the intelligence versus rationality point is the idea of the long shot. Many intelligent people will point out that lotteries and casinos—which offer very small chances of very large payouts—aren't worth it. And they're correct: The options that lotteries and casinos sell are overpriced, so the steady cost of playing will almost certainly outweigh any winnings over time.
Where these intelligent-but-irrational people go wrong is in applying the same logic to any kind of long shot. For example, it’s extremely unlikely that buying stock in any given company will make you rich. However, if you invest a few dollars each in a lot of different companies—including, say, Amazon or Apple when they were first starting out—you’ll have limited cost with the potential for immense profits.
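The arithmetic behind this kind of long shot can be sketched with some made-up numbers; the stakes, odds, and payoff multiple below are illustrative assumptions, not figures from the book:

```python
# Hypothetical long-shot portfolio: 100 small stakes of $10 each.
# Assume 99% of the bets go to zero and 1% return 500x the stake,
# as an early investment in a future giant might have.
stake = 10        # dollars per bet
n_bets = 100
p_win = 0.01      # chance that any single bet pays off
multiple = 500    # payoff multiple when one does

max_loss = stake * n_bets                            # downside capped at $1,000
expected_payoff = n_bets * p_win * stake * multiple  # $5,000 on average

print(max_loss)          # 1000
print(expected_payoff)   # 5000.0
```

The worst case is known in advance and affordable, while the best case is open-ended; that asymmetry, not the odds of any single bet, is what separates this from a lottery ticket.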
In short, decisions made based on traditional intelligence—which is to say, book smarts—tend to be fragile. They can easily go wrong if you misunderstand something, or incorrectly apply your knowledge, like how people who understand the lottery try to apply the same logic to the stock market. However, decisions based on rationality tend to benefit from antifragility; rationality is simply about examining your options and choosing the best one at any given time.
Both fragility and antifragility have exponential effects. In other words, as the significance of an event increases, the effect of that event increases even faster.
For example, if you drummed your fingers against a window, you wouldn’t damage it at all; however, you could break it quite easily with a punch. Thousands of tiny impacts from your fingers don’t add up to the same effect as one large impact from your fist.
In the same way, a fragile situation has a limit to how good it can be, but the negative impacts increase infinitely (or near-infinitely). A window can never be anything more than a window, and once broken, it’s useless.
However, an antifragile situation is exactly the opposite, with limited risk but potentially infinite (or near-infinite) benefits. The previous example of many small stock purchases shows this principle: $100 invested in Amazon back in 1997 would be worth around $120,000 now.
We can sketch these situations as a pair of curves, one for fragility and one for antifragility. In both cases, as the significance of an event increases (in other words, as we move farther to the right), the outcome becomes either much worse (for fragility) or much better (for antifragility).

The two curves also illustrate why fragility dislikes randomness and antifragility loves it. Imagine picking random points along each curve; depending on where a point falls on the significance axis, it may have a mild or extreme outcome. Keep picking random points over and over again, and eventually you'll land on one with enough significance that the outcome is either hugely negative (on the fragility curve) or hugely positive (on the antifragility curve).
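A small simulation shows why repeated random events treat the two shapes so differently. The quadratic payoff functions here are illustrative stand-ins for the curves described above, not formulas from the book:

```python
import random

def fragile_outcome(significance):
    # Losses accelerate as events grow larger; there's no matching upside.
    return -(significance ** 2)

def antifragile_outcome(significance):
    # A small fixed cost per event, but gains accelerate with event size.
    return (significance ** 2) - 1

random.seed(0)  # fixed seed so the sketch is reproducible
shocks = [random.uniform(0, 10) for _ in range(1000)]

fragile_total = sum(fragile_outcome(s) for s in shocks)
antifragile_total = sum(antifragile_outcome(s) for s in shocks)

# Facing identical random events, the fragile position is steadily
# ruined while the antifragile one comes out far ahead.
print(fragile_total < 0)       # True
print(antifragile_total > 0)   # True
```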
Finally, fragility and antifragility have an ethical component as well. Modern society makes it possible to give the benefits of a situation to one person or group and the harm of that situation to another—in other words, it makes one person’s situation antifragile by making others’ situations fragile, which is blatantly unfair.
The problem is imbalanced agency. Some people have the power to make decisions that affect many others, and the others have no choice but to accept the outcomes of those decisions. To give an extreme example, this imbalanced agency is how we end up with countries that have a handful of billionaires, and millions of people in poverty.
When we say agency, what we really mean is optionality. The people who have the power are the ones who have all the options and therefore, all the antifragility. They can shunt the fragility onto those who are less powerful, generally the poor and the marginalized. This is why the CEO of a failing bank can be let go with a severance package worth tens of millions, while taxpayers are being gutted to pay for his bank’s bailout.
The best way to correct this imbalance is by making sure that everyone has something to lose—some kind of fragility that they need to defend against. Remember that antifragility of the whole depends on the fragility of the parts, like muscles getting stronger after the weak parts of them break down. By allowing some people to gain all the benefits while others take on all the harm, modern society has overturned that principle.
Rather than correcting this imbalance with complex rules and massive enforcement agencies (remember, large size and rigid controls make systems fragile), we could address it with simple laws. In fact, the Code of Hammurabi found an ideal solution almost four millennia ago.
One provision in the Code stated that any injuries or deaths caused by the collapse of a house would also be inflicted on the one who built that house. If the owner of the house died, the builder would be executed; if the owner's son died, the builder's son would be executed, and so on.
It might sound brutal to our modern-day sensibilities, but the punishment wasn’t the point. The point was to give the builder a stake in what he was building. A craftsman knows his craft better than any government-sponsored team of safety inspectors, so this is the simplest and most effective way to make sure that what he makes is safe and of good quality.
So, in short, the best way to ensure ethical behavior is by making sure everyone has something to lose. Someone who has to shoulder the risks herself, rather than diverting them to those with fewer options, will most likely act in good faith.
The fact that there’s nothing in the dictionary to describe antifragility doesn’t stop us from experiencing and using it in our everyday lives. For instance, every time we exercise, damaging our muscles so that they’ll become stronger, we’re taking advantage of our inherent antifragility.
To truly understand the difference between fragility, durability, and antifragility, consider some figures from mythology. First, in the tale of Damocles, the titular character is seated at a feast with a sword hanging over him. The sword is suspended from the ceiling by a single horsehair. This is fragility: It’s only a matter of time before that hair breaks and Damocles dies.
Now think about the phoenix, the legendary bird that’s reborn from its own ashes after it dies. This is durability: No matter what happens to it, the phoenix will always return to the same state as before.
Finally, consider the hydra, a monster from ancient Greek mythology that grows back two heads if one is cut off. Unlike the phoenix, which is reborn into an identical body, the hydra actually gets stronger by being hurt. This is antifragility.
While each book in Taleb’s Incerto series can be read on its own, they have some common themes and build on one another. In particular, Antifragile builds on ideas from The Black Swan, which is about the unexpectedly large impact of unpredictable and unlikely events. Antifragile teaches us how to turn such events to our advantage.
(Shortform note: You can read our summary of The Black Swan here.)
We can find another form of antifragility in the legend of Mithridates VI, king of Pontus. According to the story, after his father's assassination, Mithridates began taking small doses of various poisons to build up a tolerance for them. This practice came to be called Antidotum Mithridatium—in English, we could call it Mithridatization.
Nero’s mother Agrippina also used Mithridatization when she suspected that her son would try to poison her. It worked, and Nero’s assassins were unable to kill her with poison; unfortunately for Agrippina, that just meant they had to resort to blades. Mithridatization appears even in modern medicine: Vaccines could be considered a form of it. However, Mithridatization isn’t true antifragility. It makes you more durable, less likely to be harmed in the future, but it doesn’t actually make you stronger than you were before.
More relevant to the current topic is hormesis, a phenomenon where a small dose of a toxic substance actually stimulates an organism’s growth or improves its health. German toxicologist Hugo Schulz noticed that small doses of a particular poison would cause yeast to grow faster, while larger doses would slow the growth or kill off the yeast.
There are also claims—unconfirmed—that restricting calorie intake causes a variety of healthy reactions in the body. Note that this is not about cutting down on overeating, but reducing calories to starvation or near-starvation level, at least temporarily. However, another possible interpretation is that too much food, eaten too regularly, is bad for you, and reinstating the stressor of hunger helps return humans to a more natural and healthier state. In this case, hunger itself is the “toxin” that makes the organism stronger.
As a side note, hormesis lost a great deal of clout in the scientific community after the 1930s, when it became conflated with homeopathy. Homeopathy is the belief that diseases can be cured by introducing extremely diluted doses of whatever causes the disease: viruses, poisons, and so on. However, homeopathy uses such vanishingly small amounts of these agents that it can’t possibly cause hormesis—or do much of anything, for that matter. Homeopathy, unlike hormesis, has very little scientific evidence backing it up.
The key point is that certain types of stress are actually good for you, and taking them away can actually be harmful. For a classic example, consider exercise: You slightly damage your body so that it will become stronger by repairing itself.
People struggle to identify concepts outside of the context in which they originally learned those ideas. This concept is called domain dependence, with “domain” referring to a particular subject or type of activity. If you’ve ever bumped into a coworker or teacher in the supermarket and had trouble recognizing each other, then you already understand this problem.
Regarding antifragility, the idea of certain systems needing stressors to function isn’t a new one, but people who understand this fact in some aspects of life fail to transfer it to others. That’s why, for example, a scientist who’s familiar with hormesis might not be able to recognize the same sort of antifragility in economics, or the process of invention and discovery.
No matter the field, occasional problems provide valuable information and experience that strengthen our overall understanding and ability in that field. However, the average person will only see these incidents as setbacks and failures.
Doctors have observed extreme cases of antifragility in humans, which they call post-traumatic growth. In simple terms, it’s the opposite of post-traumatic stress disorder (PTSD). While those who experience the much better-known PTSD continue to suffer from past events, those who experience post-traumatic growth are somehow improved by them. In fact, there’s a short assessment that catalogs the forms that improvement can take: the Post-Traumatic Growth Inventory.
However, while many people may not have heard of post-traumatic growth, almost everyone has heard some variation of the expression “what doesn’t kill me makes me stronger.” Perhaps this is just another example of domain dependence at work.
Another phrase, “necessity is the mother of invention,” points to another, more immediate, sort of antifragility seen in humans. Basically, what happens is that people overreact to setbacks—they’re spurred to use more energy and effort than is needed to compensate for the problems. That excess energy goes on to become innovation and progress.
The hardship does not have to be extreme nor dramatic—although dramatic hardships have certainly led to some remarkable developments, such as John Hetrick inventing the airbag after being in a near-disastrous car crash with his wife and daughter. However, minor difficulties can also spur people on to greater results.
A speaker who is quiet or hard to understand will capture his audience’s attention more effectively than one who sounds like a trained actor—by straining to hear and understand him, the audience will inevitably pay more attention and retain the information better. Naturally, though, this only works up to a point. An audience that can barely understand the speaker will pay closer attention; an audience that can’t hear him at all will simply give up.
The opposite is also true: A lack of challenge causes people to undercompensate. For example, the automation of airplanes at first actually led to an increase in preventable flying accidents. The pilots were simply becoming complacent and—more dangerously—bored. Their skills and their attention waned, and as a result they got into accidents that more alert pilots wouldn’t have.
Many social scientists will talk about equilibrium, or balance between opposite ideas like supply and demand. In equilibrium, any shift in one of those ideas will be countered by a shift in the other—so if there is suddenly a greater demand for particular goods, the companies who make those goods will start producing more to meet it. Equilibrium is commonly thought of as a desirable state, if not the absolute goal of a healthy economy.
However, if we consider an economy to be a complex, living system (which we’ll discuss in more detail later), then equilibrium could mean death. If the economy is never subjected to stresses or shocks, then it’s deprived of the chance to benefit from antifragility. A system that becomes too stable, too fixed, becomes weak and may be badly damaged or destroyed when such a shock inevitably happens.
Redundancy is a form of overcompensation and therefore of antifragility. Physically, humans (and many other organisms) have redundant systems that improve their chances of survival. For example, we have two lungs and two kidneys, though we could survive with only one of each. Carrying a spare kidney is not so different from the hydra growing an extra head after being injured: It’s preparation for worse events that may happen in the future, and in the meantime it makes us stronger and more efficient at filtering waste from the blood.
In fact, all organisms follow a similar pattern of redundancy and overcompensation. If you were to trace evolutionary trends throughout history, you’d find that natural selection tends to favor species that overcompensate for their environments, rather than those that adapt to meet the exact challenges of their surroundings.
We can also see redundancy in many places outside of organisms. Countries that stock supplies of food or oil in preparation for a future disaster are practicing redundancy, storing away more than they need now in case hard times hit later. Those supplies can also, if the opportunity arises, be sold at great profit to other countries who find themselves in dire need of them—this shows again that the overreaction to hardship can strengthen the system in the long run.
Even abstract ideas can demonstrate antifragility. Riots and rebellions, for example, respond to any attempt to put them down with force by becoming stronger—the people are outraged about their situation, and oppressing them more only fuels their rage.
At the other end of the emotional spectrum, there are countless stories of love demonstrating antifragility. Romeo and Juliet is perhaps the most famous example—the young lovers’ feelings become stronger with every attempt their families make to keep them apart.
Information, too, is antifragile. Attempts to ban books invariably make those books more popular—just look at The Adventures of Huckleberry Finn or the Harry Potter series. Similarly, harsh criticism of books or ideas only serves to draw more attention to them; millions of people have read Ayn Rand’s extreme libertarian works despite—or because of—all the negative attention they receive.
Even some (though not all) careers are antifragile. There are accounts of actors paying journalists to write about their performances; many actors would pay for positive reviews, but the truly clever ones would pay for negative reviews, knowing that those would attract much more attention. On the other hand, someone like a bank manager needs to have a clean reputation and would suffer terribly from bad press.
A humorous rule of thumb is that the more flamboyantly someone dresses, the more antifragile his or her job is.
A basic guideline is that anything living—whether literally, as with organisms, or figuratively, as with an artist’s growing and changing popularity—will have some degree of antifragility. Inanimate objects like glasses, cars, and the like, will be durable at best; they might be able to stand up to some amount of stress, but they can never be strengthened by it. Living things can grow stronger after being damaged.
It’s true that, in spite of their ability to repair and strengthen themselves, organisms will eventually age and deteriorate. However, it’s possible that what we observe as “aging,” particularly in humans, is a combination of natural senescence and an environment without enough stressors to keep people strong.
(Shortform note: In biology, senescence means the loss of a cell’s ability to divide and replicate itself. More generally, it means the natural degeneration of an organism that comes with age.)
Senescence might be unavoidable, but many of the problems aging people experience come from being poorly adjusted to a too-comfortable environment. As people age, in many cases, caregivers (whether professionals or family members) see to more and more of their needs. The elderly then suffer the effects of too much comfort—with nothing to keep themselves physically or mentally strong, they decline. In short, too much caregiving stifles their antifragility.
“Modern” societies are focused on making life last longer and longer, even as the people living those lives get sicker and sicker. However, in “primitive” hunter-gatherer societies, the effects of old age didn’t take hold until very near to the end of life.
Instead of thinking about living versus nonliving things, perhaps a better way to think about antifragility is in terms of simple versus complex systems. A light switch is a simple system: You flip the switch and the light turns on. If part of the system is damaged—there’s a faulty wire or a burned-out bulb—the system works less effectively or not at all.
A human body and a civil rights movement are complex systems: They have many parts that depend on and communicate with each other. For example, the various parts of your body don't exchange information through logical observations and thoughts, but through chemical stressors in the form of hormones. In a civil rights movement, the people involved communicate through the usual human channels: conversation, speeches, the press, and so on.
Political and economic systems are similarly complex, made up of many parts interacting with one another. To Adam Smith, the economy was a clock, a complex mechanism that could be wound up and left to run on its own. Plato, on the other hand, talked about the now-famous ship of state, likening a country to a ship that needs constant tending and a competent captain.
Regardless of who—if anyone—is in charge of the system, communication and interdependency are two of the keys to antifragility. When part of a system is able to communicate that it’s been damaged, and other parts of that system are able to compensate for that damage—or overcompensate, as discussed earlier—antifragility is the result.
Modern life, in many ways, tries to remove the stress and randomness from life. We schedule our every move: when we work, when we eat, when we sleep, not to mention how we do all of those things. Even our entertainment is scripted and scheduled, like going to the bar after work and to movies on the weekends.
This process could be called touristification: Tourism is like adventure with all of the risk and unpredictability removed, and modern society tries to make us tourists in our own lives. Someone who doesn't take well to touristification—who finds herself unhappy or unable to focus—will probably seek out medication to help herself fit in.
The side effect of this lack of struggle is a lack of personal growth, of artistic expression, and, in many cases, of valuable lessons. For example, we learn a language best by being immersed in it, making mistake after mistake before learning to communicate effectively. Yet to avoid that embarrassment and struggle, we teach languages in classrooms using books and rules, and we get much worse results.
As complex living things, humans benefit from—and secretly hunger for—some amount of risk and chance. Someone who can’t find the energy to go to a preplanned gym session could lift a car off of a trapped child, if need be. Someone who can’t stick to a diet will skip meals quite easily if there’s simply no food to be had.
A chronic stress injury happens when someone repeats the exact same movement too many times in a row, like walking on a randomness-free treadmill. A great deal of modern life consists of chronic stress injuries, both literal and metaphorical—such as long-term dissatisfaction with one’s job.
In short, people will rise to naturally occurring challenges and grow stronger from them; conversely, they become weak and unfocused when the environment doesn’t provide such challenges.
While a complex system may be antifragile, that antifragility often depends on the fragility of individual pieces of it. People couldn’t become stronger if muscles didn’t first break down, and species couldn’t evolve if individual organisms didn’t die.
There’s a sort of hierarchy of antifragility, wherein individuals can become stronger by destroying the weaker parts of themselves, and populations can become stronger as the weak individuals are culled. However, no matter what level of that hierarchy you’re studying, antifragility functions on the same basic principle: Harm at the micro level leads to improvement at the macro level.
In the case of the species, it's the genetic information that is preserved and strengthened as individuals die; the biologist Robert Trivers explored this conflict between individuals and their genes, an idea Richard Dawkins popularized as the "selfish gene."
To see the necessity of evolution, which is fueled by danger and unpredictability, consider a hypothetical organism that doesn’t reproduce and never ages. Sooner or later, it would be bound to encounter a situation—perhaps a predator, an accident, or a disease—that it’s not prepared for. Unless this organism could somehow predict the future with perfect accuracy, or was prepared for literally any possibility, something would eventually kill it and that would be the end. There would be no chance for antifragility.
However, if we consider an organism’s genetic information rather than the organism itself, we can see antifragility at work. Organisms that aren’t well suited to their environment will die out, while the ones that are suited to it will survive and pass on their genes. Therefore, the population as a whole will become stronger.
(Shortform note: To further explore the ideas of selfish genes, read our summary of Richard Dawkins’s book The Selfish Gene; Trivers wrote the original foreword for that book.)
We can see here that there are actually two kinds of randomness at work: randomness in the species and randomness in the environment. The random environmental stressors will see which of the random differences in the species is most fit, and those will be the ones to survive.
These concepts also extend beyond biology. For example, a town known for its great restaurants could only get to that point if the mediocre restaurants went out of business. Environmental stress—in this case, in the form of competition—is necessary to keep the local restaurant scene strong.
Therefore, while many entrepreneurs and small business owners will see their businesses fail (possibly at great cost to themselves), we should admire them for taking the risk and thank them for making the economy as a whole stronger by their failure.
Whether organisms or businesses, the core concept of evolution is that individuals die, and as a result the “species” becomes more fit for the environment. Therefore, evolution is a sort of higher-level antifragility; rather than helping the individual facing the challenge, it helps the population as a whole.
This chapter started with the point that antifragility at a higher level depends on fragility at a lower level. Therefore, it stands to reason that the opposite is also true: Protecting the lower level from harm will weaken the system as a whole—or, more accurately, prevent it from getting stronger. Science has no way—yet—to prevent cells from dying and being replaced by stronger ones, or to make individual organisms immortal, but we can see this principle function in modern economics.
When a government bails out companies to protect them from their own mistakes or weaknesses, it drains resources from everyone else and stops other companies from learning from their competitors' mishaps. For example, consider the enormous bank bailout following the 2008 financial crisis: The U.S. Federal Reserve committed an estimated $7.77 trillion to artificially stabilize the economy, rather than allowing the banks to collapse and the economy to naturally strengthen itself by recovering from the shock.
In many ways, this struggle between the group and the individual is strictly a human phenomenon, and it’s fairly new even to us. In the past, mainly prior to the Enlightenment, individual people were practically irrelevant. Groups—kingdoms, churches, and so on—were all that mattered. If those groups ended up slaughtering huge numbers of individuals in wars, or witch trials, or what have you, then so be it.
People even considered families more important, by far, than the individual members of those families. It wasn't unheard of for someone to wipe out his savings to clear the debt of a distant cousin, just to protect the family name. In fact, doing so was seen as a duty. This same lack of self-interest can be seen in soldiers who willingly march off to war to die for their country, or in rioters who lose all fear of authority when caught up in the energy of the group.
While the modern focus on the individual’s security and happiness is, of course, helpful and healthy at the personal level, individuals can’t survive without the system that they live in. Therefore, valuing the individual too much—that is, at the expense of the system as a whole—will ultimately be harmful to everyone.
In the case of the restaurants that go out of business, thereby strengthening the local restaurant scene—or the U.S. banks, had they been allowed to collapse as they should have—we can see a particular type of weakness: the kind that strengthens others. It doesn't have to be companies going out of business. Imagine a person who watches someone else slip on the ice and fall, and so knows to be careful on that spot; that's an example of this type of error. So is a plane crashing due to a systems failure, thereby providing valuable information about a weakness that the manufacturer can then (hopefully) correct. This is antifragility at work.
However, there’s another type of weakness, a systemic weakness where one point of fragility can bring down the entire system. This happens when, rather than letting a fragile individual die, the system tries to bring in more and more resources to protect it. Imagine if a prey animal were being attacked by a predator and, instead of simply letting it die, other prey animals tried to save it. They would likely be slaughtered, and the damage to the population would be much worse.
Therefore, to reiterate the key point, antifragility of a system or group depends on the fragility of its parts.
In spite of its great potential, antifragility naturally has limits. Something that takes damage too quickly, or too frequently, won’t be able to repair itself. For example, a boxer who takes some hits will toughen up to withstand those kinds of hits in the future—however, someone who gets shot through the heart will die almost immediately, with no chance to heal or grow from the damage.
Similarly, for evolution—that is, higher-level antifragility—to function, some of the species must survive the damage. For that to happen, there must be diversity within the species. If a single stressor wipes out the entire population (of organisms, businesses, or whatever), then there’s no way for that population to evolve; some members of that population have to be resistant to that stressor in order for the population to strengthen itself.
For example, consider the phenomenon of antibiotic resistance: A complete round of antibiotics should wipe out the bacteria population totally (or close enough that the immune system can clean up what’s left). However, an incomplete round of treatment can actually make the infection harder to cure, as the bacteria with some degree of antibiotic resistance would be the ones to survive and reproduce. This is the ideal situation—for the bacteria, not for the diseased organism, of course.
The ideal form of stress is short-lived and followed by plenty of time to recover. This allows the system to respond to the new information and compensate, or overcompensate, for the stress. If the organism has no chance to recover, or the population is wiped out entirely, then antifragility fails.
Of course, even if an entire population does disappear, there is yet another level of antifragility that we could consider: the world as a whole. If a species goes extinct, or a certain type of business is no longer needed (for example, now that we have alarm clocks there’s no need for someone to go around tapping people’s windows in the morning), the world benefits from the loss of this unfit “species.” In the case of the window-tappers, the benefit is that people no longer need to pay for that service.
To understand the difference in stability between fragile and antifragile systems, consider a hypothetical pair of twin brothers. They grew up in the same house, live in the same area, and have fairly comparable lives except for their careers. One is a middle manager at a large bank; the other is a cab driver.
The banker seems to have a perfectly stable income. He makes the same amount of money every month, which is enough to cover his expenses with a bit to spare. However, this apparent stability is an illusion; at any moment, an upheaval in the market could render him jobless, with no income at all. His banking career is fragile.
Now consider his brother, the cab driver. He has good days and bad days, so there’s some fluctuation in how much money he brings in per month, but annually his income is comparable to the banker’s. On the surface, his income seems to be less stable than his twin’s, but the key is that the cab driver is his own boss.
There’s no chance that a minor upheaval in the cab-driving market will leave him unemployed because nobody else employs him. If there’s a dip in his income, then he updates either his routes or his driving skills—in other words, he improves from the damage. This career is antifragile.
The cab driver’s career seems less secure because there’s a small element of chance to it—a bit of variability in daily or monthly income, even though his annual income tends to average around the same as his brother’s. However, that variability is exactly what makes it resilient.
A small shock to the economy might leave the banker unemployed, but to a cab driver, it's just an opportunity to improve. Unless the upheaval is so extreme that people stop taking cabs altogether, it's unlikely that the driver will suffer much from it.
The key difference between the banker and the cab driver is the size of the system each belongs to. The banker is a fragile piece of a much larger system ruled from the top with a focus on eliminating risk and randomness. Those attempts to eliminate chance are another reason the system is so fragile—shielding the system from minor mistakes makes larger mistakes inevitable.
To illustrate this point, let's move away from the bank job for a moment. Imagine if a mechanic, rather than fixing a damaged vehicle, just slapped a fresh coat of paint on it and put it back on the road. It would only be a matter of time before the vehicle failed in some far more catastrophic way. That's what the large, top-down banking system is doing: hiding and protecting itself from minor issues while setting itself up for larger failures down the road—failures that may cost our poor banker his job.
On the other hand, the cab driver works within a very small system. Instead of having a single employer, he's employed by many different people for short periods of time—just long enough to get them to their destinations. More crucially, none of them has the sort of power over him that the bank has over his twin; at worst, a client cancels a fare, and he moves on to the next one. The variability—perhaps a better word would be flexibility—grants his income a stability that the banker's income lacks.
Another example of the antifragility of small, self-governed systems is the country of Switzerland. Switzerland has a unique system of government—or, perhaps more accurately, a lack of central government. The country is made up of cantons, small regional governments that work together in a loose coalition. Naturally, the cantons have conflicts with each other, but they tend to be of the small, boring variety: who has the rights to precisely which bits of land, and so on.
This is significant because Switzerland is, without doubt, the most stable country in the world. It has come through all kinds of global upheavals, both economic and military, practically untouched. It’s known as the best place for the wealthy to store their riches and the safest country for political refugees to flee to.
In both cases, career and government, what works for a small system doesn’t scale to a large one. People used to live in small tribes or family units, and that’s what we’re wired for. We handle conflicts and disagreements very differently when it’s matters of nations and millions of dollars, rather than cantons and cab fares.
Those large issues tend to seem abstract, just dealing with numbers on a page rather than people, and yet the fallout from them can be much more severe. A squabble between cantons might result in a couple of angry meetings; a squabble between nations might result in economic sanctions or even war. Even so, people are more swayed by a single suffering person in front of them than by mass adversity on the other side of the planet. As the old saying goes: One death is a tragedy, a million deaths is a statistic.
When power becomes centralized, as in a large bank or a large government, we inevitably end up with people making decisions based on abstract concepts, rather than having to face the people those decisions affect. Furthermore, they become weak points in the system: A corporate lobbyist, for example, can sway a single house of Congress much more easily—and to much greater effect—than a hundred small municipalities. Large systems, centralized power, and abstraction of the people and concepts in the system all create fragility.
We’ve said repeatedly that variations and randomness lead to stability, but the key to this is that the variations are small. Let’s consider two hypothetical countries: Extremistan and Mediocristan. In Mediocristan, there are constant, small variations in the economy, political parties, and so on. These changes might seem significant, even frightening, but over time they tend to average themselves out, and not much changes.
You could consider various things in your own life to be “from” Mediocristan. One example might be your eating habits: While there are inevitably small (or large) variations in how many calories you eat in a day, on the whole they will average out to a reasonable number. Put another way, no one day’s calories will have a significant impact on your overall average calories. Even if you vastly overindulge one day—on Thanksgiving, perhaps, if you’re American—it’ll still be a tiny fraction of the calories you eat in a year. This is the core concept of Mediocristan: small variations that have little impact in the long run.
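The arithmetic behind this Mediocristan claim is easy to check. Here's a minimal sketch (the calorie figures are purely illustrative, not from the book) showing how little a single extreme day moves an annual average:

```python
# Illustrative numbers only (not from the book): a year of 2,000-calorie
# days plus one 6,000-calorie feast day.
daily_calories = [2000] * 364 + [6000]

# Annual average: 734,000 / 365 ≈ 2,011 calories per day.
average = sum(daily_calories) / len(daily_calories)

# The feast day shifts the average by only about 11 calories per day,
# relative to the 2,000-calorie baseline: no single observation dominates.
shift = average - 2000
```

Even tripling one day's intake nudges the yearly average by roughly half a percent—exactly the "small variations that have little impact" behavior that defines Mediocristan.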
Extremistan, on the other hand, has relatively few changes, but the changes that happen are extreme. The economy suddenly soars or plunges, or major political parties are overthrown and replaced. Between these events are periods of apparent peace and stability but, as with the banker's income, that stability is an illusion. It's only a matter of time before another major event upends everything all over again.
One example of something “from” Extremistan would be novel sales. Over half of all sales come from 0.1 percent of novels. Therefore, if we consider each novel to be an “event,” we’ll see something that looks very stable until one of these exceptionally successful novels comes along and completely skews the data.
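Contrast that with a sketch of the Extremistan pattern (the sales figures here are illustrative, not Taleb's data): with one blockbuster among a thousand titles, that single 0.1 percent dominates the total.

```python
# Illustrative numbers only (not from the book): 1,000 titles, of which
# exactly one (0.1 percent) is a blockbuster.
sales = [1_000] * 999 + [5_000_000]

# Fraction of all sales captured by the single biggest title.
blockbuster_share = max(sales) / sum(sales)

# One "event" accounts for well over half of the total—unlike the
# calorie example, a single observation skews the whole data set.
```

In Mediocristan, dropping any one observation barely changes the total; in Extremistan, dropping the right one changes everything.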
The other problem with Extremistan is that it’s unpredictable. There’s no telling when an extreme event will happen, and they can be devastating when they do. Consider novels again—it’s often hard to say which novels will be the super-successful ones, and when such a book does come along, it pulls sales away from thousands of other books.
Therefore, when studying something that’s prone to sudden, extreme jumps, one can’t try to predict what will happen based on evidence—by the time you have that evidence, the event has already happened. Instead, you have to look at the potential damage such an event could cause.
For perhaps the most extreme example of this, think about the proliferation of nuclear weapons. Someone predicting the future based on evidence might say that nuclear weapons are quite safe—after all, none has been used in war since 1945. In fact, that's the crux of the "nuclear deterrence" argument: Nuclear weapons are safe because everyone has them and therefore, no one would dare to use them.
However, someone predicting the future based on potential damage will see quite a different picture: Never in history has the world been in so much danger. If those nuclear weapons are ever used, the damage would be incalculable—and it’s only a matter of time before they’re used. It’s the nature of Extremistan that the extreme events will happen sooner or later.
Another perfect example of Extremistan is the life of a turkey raised by a butcher. The turkey spends months being fed and cared for every day, without fail. The turkey trusts, perhaps even loves, the butcher. Every day the turkey gets more confident that the butcher will keep feeding and caring for it forever; and it’s right, until the fateful day when it’s wrong.
The irony here is that the “system” falls apart right at the moment when the turkey is most confident in it. The same thing happens with large corporations, authoritarian governments, and even overprotected children—a child who lives in a bubble (literal or metaphorical) may be very confident in her own invulnerability, only to learn that she’s actually dreadfully vulnerable once that bubble is punctured.
The turkey and the child both made the same crucial error: They saw no evidence that something was dangerous, and took it as evidence that the thing was not dangerous. The turkey was blindsided by the butcher’s knife, and the child by disease, or injury, or the general cruelty of the outside world. Both used their past experiences to try to predict the future, only to learn in the harshest possible ways that the future is unpredictable.
(Shortform note: For more of Taleb’s thoughts on unpredictable events and the unusually large impacts they can have, read our summary of The Black Swan.)
The key to avoiding being a turkey—or a child in a bubble—is to recognize the difference between genuine and artificial stability. Is a system stable, or even antifragile, because it can absorb shocks and adapt to them? Or does it only seem stable because outside forces are temporarily keeping it that way?
The previous chapters have shown the trends of antifragile systems: They improve after encountering some small degree of randomness, and they get stronger from (minor) damage. We’ve also discussed how controlling such a system too tightly will eventually backfire, perhaps catastrophically. In this chapter we’ll further explore the benefits of adding randomness to a system and, conversely, the dangers of over-intervening in naturally antifragile systems.
The medieval philosopher Jean Buridan proposed a thought experiment: A donkey that is equally hungry and thirsty, placed at an exactly equal distance from water in one direction and food in the other, won't be able to decide which way to go. It'll be stuck in that spot until it dies of hunger or thirst.
However, we can help Buridan’s donkey by adding a small dose of randomness to the system. Giving the donkey a small push in one direction or the other will put it closer to either the food or the water, allowing it to finally make a decision and thereby saving its life. Which direction you push doesn’t matter—as long as something changes, the balance is broken and the donkey is saved.
In metallurgy, annealing is a process used to make metal stronger and remove impurities. It involves heating the metal and then carefully cooling it. The increased energy from heating the metal excites the atoms and breaks them out of their current positions, then the controlled cooling gives them the chance to find new, stronger arrangements. In other words, introducing a short period of randomness to the system produces a stronger product.
Mathematicians have developed a computer search technique inspired by annealing and called—appropriately enough—simulated annealing. Rather than greedily accepting only improvements, the algorithm sometimes accepts worse solutions along the way, which lets it escape dead ends and keep searching for the true best solution.
To visualize simulated annealing, imagine a graph with numerous peaks and dips, some higher than others. A simple hill-climbing search might find one of those peaks, check to either side of it, find that the graph goes down in both directions, and conclude that it has found the best solution—even if a much higher peak lies elsewhere. Simulated annealing, by contrast, occasionally accepts random downhill moves, with that willingness gradually shrinking over time (the "cooling"). Those occasional downhill steps let the search escape a minor peak, making it far more likely to find the actual highest point.
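A minimal Python sketch of the idea (the two-peak landscape, step sizes, and cooling schedule are all illustrative choices, not from the book): a search that starts near the lower peak and occasionally accepts downhill moves can escape it and find the higher one.

```python
import math
import random

def landscape(x):
    # Two peaks: a local one near x = -2 (height ~0.5) and the
    # global one near x = 2 (height ~1.0).
    return math.exp(-(x - 2) ** 2) + 0.5 * math.exp(-(x + 2) ** 2)

def simulated_annealing(f, x0, steps=20_000, temp0=2.0, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for i in range(steps):
        # "Cooling": the temperature falls toward zero over the run.
        temp = temp0 * (1 - i / steps) + 1e-9
        x_new = x + rng.gauss(0, 0.5)   # random nearby candidate
        f_new = f(x_new)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature drops.
        if f_new >= fx or rng.random() < math.exp((f_new - fx) / temp):
            x, fx = x_new, f_new
            if fx > best_f:
                best_x, best_f = x, fx
    return best_x, best_f
```

Started at x = -4 (right next to the lower peak), the early high-temperature phase lets the search wander downhill past the valley, so it ends up near the true maximum at x = 2—whereas a purely greedy climber from the same start would stop at the lower peak.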
The annealing process applies to politics, too, and has since ancient times. Athenian assemblies, for example, were chosen by lottery. Much more recently, the physicist and mathematician Alessandro Pluchino used a computer simulation to show that a certain number of randomly selected members can make a parliament function more effectively, perhaps because those people bring different perspectives and ideas with them.
Nor was politics the only area where the ancients took advantage of randomness; they used various methods of getting random results and called it divination. A Roman faced with a difficult choice might open Virgil's Aeneid to a random page and interpret the first line she saw as advice from the gods. This would allow her to proceed with confidence that her path had been ordained by a higher power, without anxiety over the outcome—after all, even if it's disastrous, it's not her fault.
Another way to reshuffle political structures is the time-honored tradition of assassination. Killing an important political figure often shakes others loose from their established roles as they scramble to fill the sudden void in the power structure. One very public example was when mafioso John Gotti had the boss of his crime family killed and took his place.
In both metals and politics, long periods of stability—or, worse, enforced stability, as we discussed earlier—allow impurities to gather beneath the surface and destabilize the whole structure. In the case of metals, this simply leads to a weaker material; in the case of politics, it often leads to violent and messy blowups, up to and including civil wars.
We touched on how modern society enforces stability back in Chapter 3 with the concept of touristification—removing the risk and randomness from everything, thereby making us tourists in our own lives. But how does this happen?
Remember the ancients and their reliance on divination. They believed that it was the gods, not humans, who had ultimate control. They may not have truly believed that randomness was running the world, but they attributed control to some vast, unknowable, and inhuman power. Functionally, it was the same thing.
However, this changes with the onset of rationalization: the idea that the world, and society itself, can be understood and controlled in every facet. Rationalization replaces reliance on the gods with reliance on science and predictive models, and worship at altars with worship of flags. It is, in short, humans trying to take power and agency away from random chance (or the gods, or whatever you want to call it).
Once people start thinking that they can control society, they start working to force everyone into the roles they imagine. They reduce people to what is useful, and they try to optimize every aspect of civilization to run as efficiently as possible.
The problem is that such people don’t understand antifragility. Rather than seeing randomness and minor problems as ways to ultimately strengthen their society, they see only inefficiency and obstacles. They try to suppress or eliminate any such obstacles that get in the way of their society’s gears turning smoothly, forever. Of course, as we’ve said before, that only leads to a buildup of invisible problems that eventually tear their orderly society apart.
The real downfall of modernity and rationalization is people’s inability to distinguish between major and minor problems. That’s not a personal failing, it’s simply a human limitation. The sheer amount of information, conflict, and general noise in the world makes it impossible for anyone to properly track and categorize it all; therefore, instead of calmly and methodically dealing with the most serious issues, people try to fix all of them. In doing so, they undermine society’s natural antifragility, and they drive themselves crazy in the process.
Modernity and rationalization are born from the naive idea that human intervention can improve on natural randomness. One example of how this is fatally flawed is the field of iatrogenics: the harm caused by medical intervention.
Throughout history, many people who would have survived unattended have died under the care of doctors. Part of the problem was medical error, but iatrogenics reached its height in the 19th century with the rise of large-scale clinics and hospitals. Before people knew what germs were, patients were subjected to unsanitized beds, floors, and hands, and died in droves from so-called "hospital fever." It was only from the 20th century onward that medicine, on average, did more good than harm, largely thanks to antibiotics.
However, modern medicine in particular has a different form of iatrogenics, which arises from conflicts of interest between the professionals and their patients. The most familiar example of this may be the perceived overprescription of painkillers, psychiatric meds, and so on. A doctor who gets kickbacks from drug companies for prescribing their drugs has a clear conflict of interest, one that is obviously harmful to his or her patients.
This is called the agency problem: The one making decisions (the agent) is doing so for his or her own benefit rather than the benefit of the clients. While iatrogenics originated in medicine, it and the agency problem occur in numerous fields. No matter the field, though, it starts with modern, “rational” people thinking that their intervention will make things better.
Iatrogenics also has an opposite; that is, when attempting to harm something makes it stronger. An example is the fierce critics of Ayn Rand who, as we mentioned earlier, ended up spreading her ideas by attacking them. There’s not a word for this specific situation, but it’s clearly an application of antifragility.
Scientific theories are a key underpinning of rationalization, and a dangerous one. Scientists frequently promote theories without stopping to consider the damage that might happen if those theories are wrong.
Think back on the clinics from the previous section: The theory (albeit not a scientific one) was that having a large number of medical professionals and equipment together in the same place, and bringing patients there, would make treatment more efficient and effective. Instead, countless patients died of “hospital fever.”
Furthermore, science—that is, rigorous testing and observation—can be done without resorting to theories at all. There’s even a name for it: phenomenology, the study of phenomena that currently have no theories explaining them. While theories are fragile and often vanish as quickly as they appear, phenomena are durable; they happen the same way every time.
Social sciences are especially vulnerable to flawed theories, and even the theories themselves seem to vary wildly from one school of thought to another. In the Cold War years, the University of Chicago was pushing laissez-faire economics while the University of Moscow taught the complete opposite. Even calling such ideas “theories” stretches the definition a bit, because theories are supposed to be closely examined and supported with concrete evidence.
At any rate, it seems clear that our modern world needs a method to deal with the fragility of theories and the actions we take based on them. Such errors were harmful enough in medicine, where the risk and cost were spread out over large numbers of people. In social sciences and politics, where power is so highly concentrated, things tend to go wrong quickly and explosively.
All this discussion of modernity and rationalism may have given the false impression that human intervention is always harmful, but that’s not the case. We just need to think more carefully about where, when, and how we intervene.
In medicine, doctors who overprescribe drugs and order excessive tests are over-intervening—and, ironically, under-intervening at the same time—because all that extra work has minimal results. It would be better to save those resources for where intervention is truly needed, like emergency situations.
The general argument here is that some areas need human intervention, while others are harmed by it. For instance, we need strong legal interventions to keep large corporations from ruining both the economy and the environment. However, minor injuries and illnesses don’t need medical intervention, and in fact, seeking it may cause more harm.
So, clearly, the next step is to identify exactly what areas we need to intervene in and when. As a rule of thumb, limiting size, concentration, and speed are helpful methods of harm reduction.
A corporation that gets too big has an unreasonably large impact on the economy; therefore, breaking up monopolies helps keep prices down and wages up. We’ve already discussed how concentrating too much power in one place is dangerous and ultimately harmful (a powerful top-down government versus Switzerland’s collection of local cantons). Finally, imposing speed limits on highway drivers has proven to reduce the frequency and severity of accidents. These are only examples—there are many other ways in which limiting size, concentration, and speed are helpful forms of damage control.
However, while intervening in highway driving has proven to be helpful, intervening in city driving may be just the opposite. A town called Drachten, in the Netherlands, ran an experiment where it removed all of its street signs. Surprisingly, this deregulation actually led to increased road safety. Then again, perhaps it isn’t surprising at all; it’s simply an example of the antifragility of attention. Drivers performed better when subjected to the stress of observing and reacting to their surroundings, rather than placidly obeying street signs.
The difference is, again, a matter of speed. Regulations or no, it’s simply not possible to reach highway speeds while driving on surface streets, and therefore highway driving comes with different (and increased) risks.
In short, what we need is a methodical and organized protocol that we can use to determine where, when, and how much to intervene. Given the massive potential (and actual) iatrogenics of modern life—environmental damage, stifled antifragility, and the inevitable threat of nuclear war, to name a few—the time will come when we need to intervene. Intervening in the right ways might be the difference between a living, thriving world and annihilation.
Given modern humans’ tendency to intervene in everything, we view hesitating to intervene—that is, procrastinating—as some kind of defect or sickness. The supposed “cures” range from changing your environment, to pursuing a new career, to simply powering through it. The problem is compounded by the fact that inaction is hardly ever rewarded; you get a bonus for the problem you solve, not the one that never happens in the first place.
However, in many cases, procrastination is your body trying to tell you something important. For example, an author who procrastinates on writing a book may be doing so because it’s not what he or she should be writing about, either because of lack of expertise or lack of interest. For a more dramatic example, the Roman general Fabius Maximus irritated the conqueror Hannibal to no end by repeatedly delaying and avoiding military engagements that he was sure to lose. In Maximus’s situation, procrastination was the best strategy and indeed the only viable one.
In other situations, most notably emergencies, there’s no sense of procrastination at all. A firefighter doesn’t “get around to” putting out a house fire, and someone who’s severely injured doesn’t put off going to the hospital. When immediate action is needed, people will act.
All of this suggests that procrastination is a natural risk-management mechanism, an ingrained instinct to let antifragile systems work out their own problems. Millions of years of evolution have given us the tendency to procrastinate, so there must be a good reason for it. Usually that reason is not that the procrastinator is defective, but that his or her environment is.
Forecasting or predicting the future is notoriously hard to do. Consider the infamous inaccuracy of weather forecasts, and then realize that those are based on much more concrete information than, for example, a large company’s financial projections for the coming year.
So much of our modern obsession with control is based on forecasting—predicting upcoming events so that we can be ready for them or, better still, prevent them. However, actions taken based on these inaccurate forecasts are often more harmful than helpful. If you’ve ever trusted a forecast that says the day will be clear and sunny, then gotten rained on because you didn’t bring an umbrella, you understand how that can happen.
Fragile systems, such as modern societies, rely almost exclusively on forecasting. They can’t withstand shocks, so they must know how to avoid them. Of course, given how unreliable those forecasts are, the systems are inevitably damaged by unforeseen problems. That’s one of the main reasons why hidden instabilities build up in tightly controlled systems.
However, durable and antifragile systems don’t have any such reliance. For example, a country that always keeps surplus food stores—remember that antifragility often works by overcompensation—won’t be ruined by a famine. However, a country that waits until it thinks a famine is coming to start storing extra food will, inevitably, be caught off-guard sooner or later and suffer terribly for it.
Since predicting the future is difficult, if not impossible, it makes much more sense to seek out and embrace durability and antifragility. In other words, we don’t need to know exactly what will happen next; we just need to minimize the damage we suffer from unforeseen events. The question shouldn’t be why we didn’t see a disaster coming, but why we built a system that was so vulnerable to it. The fragile country from the previous example might ask why it never saved any food in case of an emergency—the answer, most likely, is “efficiency.” It was more efficient (in the short term) not to waste resources growing extra food, or to sell off any surplus.
On a similar note, the answer to society’s problems can’t be to make people “better,” by eliminating greed, for example. Humanity has been trying to do that for millennia and gotten nowhere. Rather, we must rethink society so that it is immune to, or even strengthened by, people’s shortcomings. Notably, that’s what capitalism was supposed to do: turn human greed into a driving force for improvement by letting people create and sell what they could, as best as they were able to.
At any rate, this naturally raises the question of how we can create something that is resistant or antifragile in the face of failure. The answer is to plan with failure in mind. Nuclear engineers realized this after the Fukushima disaster, and they started building smaller reactors with better protections around them. They designed these new reactors under the assumption that they would fail and melt down, and therefore they made sure that the damage would be minimal when that happened. The added protections are costly—inefficient, one might say—but more than worth it.
This section moves away from talk of science, economics, and politics. Instead, it brings the topic down to a personal level by introducing two characters: Nero Tulip and Tony DiBenedetto, also known as Fat Tony. Nero and Tony are two men with almost nothing in common, except a shared love of going out for lunch and a lack of anyone else to dine with.
Nero is an intellectual. He comes from old money and spends his days buying, reading, and discussing books about nearly everything. Tony, on the other hand, might never have read a book in his life. He’s an outgoing man with a powerful personality who makes friends easily and, more importantly, reads people easily.
The one thing these two agree on, aside from the importance of lunch, is the fact that predictions and the people who rely on them are bound to fail. Nero came to this conclusion through long years of study and observation, while Tony had a more intuitive understanding of the situation—in his words, that people who rely on predictions are suckers.
These two men made fortunes during the worldwide recession of 2008 by using a sort of meta-prediction: They didn’t foresee the details of the economic crisis, but they knew that sooner or later, the predictive models that the economic world ran on would fail catastrophically.
They didn’t know that it would happen in 2008 specifically, nor exactly what form that failure would take. They didn’t need to. Their strategy was simply to do the opposite of what the fragile capitalists and economists were doing at any given time. They might lose small amounts of money along the way, but when the predictions inevitably collapsed, Tony and Nero would win big.
In 2008 the wager paid off, and they raked in millions of dollars. In short, by recognizing and rejecting fragility, they became antifragile.
One of the great fallacies people fall prey to is assuming that they know what they want, and where they want to go in life. We can call this the teleological fallacy (from the Greek telos: “end” or “purpose”).
The teleological fallacy is related to the earlier topic of touristification—after all, the entire premise of tourism is that you know where you want to go and what you want to see. A tourist, especially one traveling with a tour group, has a set schedule for every moment of the trip, designed to maximize how much she can see and experience while she’s there.
However, people who lock themselves into such rigid plans become fragile; unexpected changes or problems can have disproportionately large impacts, simply because the plans can’t adapt. If a destination on the tour suddenly becomes unsafe, or is under construction, or can’t be visited for whatever reason, there’s no opportunity to go someplace else instead. The chance is simply lost, and the time and money wasted.
Flexibility is more important than efficiency. Someone who plans the next step of her trip while enjoying the current one is much better able to adapt to changing circumstances and make the most of every situation as it comes up. This holds true in business, economics, politics, and almost every other area of life—though not in personal relationships, where loyalty and commitment should be the order of the day.
In short, avoiding the teleological fallacy requires embracing its opposite: optionality, the ability to consider and choose from among the available options at any given time. This concept will be explored in detail in the next chapter.
A couple thousand years before Nero and Tony, the philosopher Lucius Annaeus Seneca, better known as Seneca the Younger, found antifragility in a different way. Seneca was a prominent Stoic; Stoicism is a school of philosophy whose followers try to remain indifferent in the face of hardship and good fortune alike.
His method for doing so was to consider his material goods already lost, even though he was fabulously rich. That way, if his fortunes suffered a slight decline, he wasn’t concerned with the loss because he’d mentally and emotionally written off those goods long ago.
This set him in contrast to most people who, once they reach a certain level of wealth, become obsessed with holding on to as much of it as possible. At this point the pain of losing their money far outweighs the satisfaction of making more; they have much more to lose than they have to gain. Therefore, they are fragile, vulnerable to chance and misfortune. Seneca, by considering his possessions already lost, avoided this trap.
However, rejecting his goods—and therefore fragility—would only bring Seneca to the level of durability, not antifragility. How could he actually benefit from the random whims of fate? As it turns out, that question goes hand in hand with another: Why, if he placed no value on his riches, did he bother to keep them?
The answer is, simply, that he wanted to experience the good that his riches would bring him. The fact that he was (supposedly) able to do so without the fear of losing them speaks to his success as a Stoic. Therefore, Seneca benefited from chance without being harmed by it. He always had more to gain than to lose. In short, he was antifragile.
This gets to the very nature of antifragility. Something or someone who has more to gain than to lose will, on average, benefit from volatility and random chance. The next section will address how to put this method of reducing downsides while increasing upsides into practice.
Consider a fragile package that gets dropped and broken. It doesn’t magically repair itself when you pick it up off the ground. The damage is irreversible; therefore, getting the package safely to its destination requires performing the right sequence of tasks in the right order. You have to pad and protect it, carry the package carefully, and store it securely. Doing any of those things out of order greatly increases the chance that the object inside will break.
A situation like this, where it’s not just the destination but the entire sequence of events that matters, is called path-dependent. For another example of path-dependence, imagine getting surgery first and then receiving anesthesia. Clearly, your experience would be a lot different—and a lot worse—than if those events had happened in the other order.
Therefore, the first step toward reducing downsides in business, economics, or anywhere else, is to consider the path that your venture is going to take. If your country’s economy grows rapidly, but is left vulnerable to a sudden collapse and recession (like America in 1929), then your growth is meaningless. It would be better to grow more slowly—or not at all—and put safeguards in place to make sure that a sudden downturn doesn’t wipe away all of your profits.
To see how this is path-dependent, consider that a country can very easily grow its GDP by saddling future generations with immense debt (again, America provides a good example with student loans). However, this will inevitably cause harm in the long term when those debts need to be repaid by people who can’t afford to do so. On the other hand, GDP growth in Europe before and during the Industrial Revolution was quite slow—relatively speaking—but it wasn’t fragile. The continent is still reaping the benefits of the growth it experienced, and it’s one of the dominant powers in the world even today.
A good antifragile model—again, for business, economics, or what have you—is in the shape of a barbell. A barbell is a long pole loaded with weights near either end; similarly, an antifragile model is a combination of extremes with an appropriate amount of distance between them.
One end of the “barbell” is extremely safe measures, the other is high-risk, high-reward behavior. For example, if you keep 90% of your savings safe in the bank, and invest 10% in extremely volatile stocks with a chance of massive payoff, then you can never lose more than 10% of your money—while, at the same time, having the chance to earn a lot more. However, if you put all of your money into medium-risk, medium-return stocks (putting your weights in the middle of the barbell, as it were), you run the risk of losing everything to a market downturn, or simply an unexpectedly bad investment.
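The asymmetry of the 90/10 split can be checked with a small simulation. This is a minimal sketch with invented return distributions (a volatile asset that usually goes to zero but occasionally pays off twentyfold, and a medium-risk asset that swings between a 60% loss and a 30% gain); the specific numbers are illustrative assumptions, not anything from the book.

```python
import random

def barbell_outcome(wealth, volatile_return, safe_frac=0.90):
    """Barbell: keep safe_frac in cash (assumed to hold its value),
    and expose only the remainder to the volatile asset."""
    safe = wealth * safe_frac
    risky = wealth * (1 - safe_frac) * (1 + volatile_return)
    return safe + risky

def all_in_outcome(wealth, medium_return):
    """Everything in a single medium-risk position."""
    return wealth * (1 + medium_return)

random.seed(42)
barbell_results, all_in_results = [], []
for _ in range(10_000):
    # Hypothetical distributions: the volatile asset usually loses
    # everything (-100%) but occasionally returns 20x (+1900%);
    # the medium asset returns anywhere from -60% to +30%.
    volatile = 19.0 if random.random() < 0.05 else -1.0
    medium = random.uniform(-0.60, 0.30)
    barbell_results.append(barbell_outcome(100.0, volatile))
    all_in_results.append(all_in_outcome(100.0, medium))

print(min(barbell_results))  # 90.0 -- the loss is capped at 10%
print(min(all_in_results))   # far lower: a bad draw can take most of it
```

The point is structural, not numerical: however badly the volatile asset does, the barbell portfolio can never fall below the safe 90%, while the “sensible” middle-of-the-road portfolio has no such floor.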
Seneca used this same model to manage his emotional investments, though he never used the term “barbell” to describe it. By emotionally detaching himself from his wealth (risk-avoidance) while still spending and enjoying it (risk-taking) he was able to maximize his upsides and minimize his downsides. A Yiddish proverb gives similar advice: Prepare for the worst, and let the best take care of itself.
Many artists and writers also adopt a barbell strategy, often without even realizing it. They invest most of their time and energy into some stable, non-artistic job that will provide them a steady income. With whatever time and energy they have left, they pursue their art and hope to hit it big (or just enjoy the process, depending on whom you ask).
In contrast, pursuing some middle-of-the-road strategy, which generally means creating their art by someone else’s standards and for someone else’s benefit, inevitably changes and corrupts their artistic visions—what some people would call “selling out.”
An effective barbell strategy is any combination of two extremes, with strict avoidance of the middle. Another way to think of it: antifragility equals aggressiveness plus paranoia. Protect yourself from disaster first, then take any resources you have left and use them to chase after the greatest gain possible. With this strategy, you can make sure that random chance will, on average, work in your favor. “Cautious optimism,” where you put all your resources into something that “should” work and then hope for the best, is to be avoided at any cost.
This concept also applies to social policies. The ideal social policies would protect and take care of the poor and weak, those most in need of protection, while letting the rich and strong get on with their business. Policies that protect the middle class while ignoring the poor inevitably lead to economic harm, and stifle growth.
Finally, remember that antifragility needs stress and danger in order to work. Therefore, while putting the weights on your barbell, make sure you’re not being too paranoid. Protect yourself from disaster, but don’t stress yourself out about small dangers. If, for example, your investments lose you a few dollars instead of getting you more money, then so be it. The investments are worth the risk.
Think about the old adage of the child who touches the hot stove once, and only once. That small amount of danger was necessary in order for the child to learn, to become stronger and more prepared for the future. Remember that trying too hard to manage small risks actually increases your fragility. Therefore, the barbell model isn’t about getting rid of uncertainty, it’s about taming it—making sure that it can’t hurt you (much), and that it’ll generally work to your advantage.
The barbell model is meant to maximize gains and minimize losses. Think about an area of your life where you could apply this model. It could be in business, fitness, or something else.
What aspect of your life did you choose?
What are your resources in this case, and how will you protect them?
What high-risk, high-reward opportunities will you pursue using the resources you can afford to lose?
As stated in the previous chapter, optionality—having a variety of choices available—is one key aspect of antifragility. An anecdote in Aristotle’s Politics shows this, although, oddly, Aristotle seems to have gotten exactly the wrong point from his own story.
The philosopher Thales of Miletus was known to be poor and unconcerned with money. However, those around him suspected that the reason for his apparent lack of interest in money was that he simply didn’t have the skill or intelligence to make any. Thales decided to prove them wrong.
He rented out every olive press in the area, which he was able to do at a fairly low cost. The olive harvest that year was especially good, and there was great demand for olive presses. Therefore, Thales was able to make a huge profit by renting the machines back to their owners, on his own terms. Having made his point (and his money), Thales returned to philosophizing.
Aristotle argues that Thales’s knowledge was what made this possible—that he studied the stars and learned through astronomy that the harvest would be a bountiful one. However, the truth is the exact opposite; it didn’t matter how good the harvest was, because Thales was practicing optionality.
With a relatively small investment, he gained temporary control of every olive press in the area. If the harvest that year had been poor, he could have simply held onto the presses and wouldn’t have been out too much money. Since the harvest happened to be excellent, he sold the rights back and made a tidy return on his small investment. In short, Thales had options, and because of that, the randomness of the harvest worked in his favor.
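Thales’s position has the classic shape of an option: a small fixed cost, a capped downside, and a large open-ended upside. The sketch below uses entirely made-up numbers (the deposit and rent figures are hypothetical) just to show the asymmetry.

```python
def olive_press_option(deposit, harvest_is_good, good_harvest_rent):
    """Thales's position, with invented numbers: a small deposit buys
    the right -- not the obligation -- to sublet the presses at
    harvest time."""
    if harvest_is_good:
        return good_harvest_rent - deposit  # large, open-ended upside
    return -deposit  # downside capped at the deposit, no matter what

print(olive_press_option(deposit=10, harvest_is_good=True,
                         good_harvest_rent=200))   # 190
print(olive_press_option(deposit=10, harvest_is_good=False,
                         good_harvest_rent=200))   # -10
```

Whatever the harvest does, the most Thales can lose is the deposit; the payoff table is lopsided in his favor, which is why the randomness of the harvest worked for him rather than against him.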
This is one example—possibly the oldest one recorded—of optionality leading to antifragility.
Remember that the basic pattern of antifragility is to have more upside available than downside—that being the case, on average you’ll gain more than you lose from random changes. The best way to make sure that happens is to have options available to you, so you can make the best choice at any given time.
This principle applies even in very mundane situations. For example, someone invited to a dinner party that isn’t especially interesting, but is better than eating alone, might call around first to see if any other friends are available that evening. If a more interesting opportunity presents itself, the person can skip the party and do that instead. If not, the party is available as a fallback option, with the key word being option. This person has very little to lose—at worst, he winds up at a party that’s rather less exciting than he’d hoped. At best, he finds something more fun to do and has an excellent evening.
Another person who benefits from optionality is any tenant in a rent-controlled apartment. This person can stay in the apartment for as long as he wants, largely protected from inflation. On the other hand, if rents in town somehow drop—or if the tenant suddenly decides to start a new life in Mongolia, or what have you—leaving the place is a fairly easy matter. All he has to do is notify the landlord a given number of days in advance and start packing.
The key in both scenarios is that the person in question has the option—but not the obligation—to stay in his current situation. There’s no risk that things will get worse, and if he happens to find better offers elsewhere, he’s able to take them. Also in both situations, there’s minimal cost for the options. The person looking for a fun evening is only out the time and effort of calling some friends, while the tenant only needs to keep paying the same rent he always has. Therefore, in both cases, there is a significant chance for improvement with little risk or cost.
Because of the high chance of benefits and limited downsides, options love dispersion—that is, they like being distributed across a large number of possibilities and outcomes. The negative outcomes don’t matter nearly as much as the positive ones, because when you have options available you simply don’t choose the negative outcomes.
This is another reason why things like art and ideas are antifragile: There’s no downside to people not liking them. People who hate a certain book can’t give the author negative sales, and people who find an idea distasteful can’t rip it from the heads of other people. For artists and authors, therefore, it’s actually better to have a small but devoted following, regardless of how many haters their work also attracts.
Having people’s opinions distributed in this way, from total contempt to adoration, is a net benefit for the artist. The worst possible thing for such people is to have everyone find their work acceptable, or “just okay,” because people don’t tend to buy things that are just okay.
It’s likely that dispersion is also how society will make progress. Societal improvements won’t happen by raising the average intelligence or education level of the population; they’ll occur because a few rare people have the imagination and courage to make that progress happen. Their outcomes will be distributed from utter failure to resounding success, and this is the key: The failures will not harm society at large, while the successes will benefit it.
Making proper use of options doesn’t take any great intellect. Remember Thales and his rented olive presses: He didn’t have any brilliant insight or foreknowledge about the olive harvest, he simply had the options available to take advantage of the opportunity when it arose. Similarly, Fat Tony didn’t come out on top of the market crash because he was a scholar or some great thinker—he was quite the opposite—he simply set up his investments with an eye toward minimizing losses while he waited for the right moment to seize his profits.
Nature itself uses optionality to always find the best outcomes—or at least, the outcomes that will allow life to continue. Countless mutations and evolutionary trends fail, and the organisms carrying them die out. However, these failures don’t harm nature itself. At the same time, the beneficial mutations and trends propagate, making it ever-more-likely that life will continue to exist on Earth. You wouldn’t call nature intelligent, but there’s no arguing that it’s rational.
The point is this: Rationality is simply the ability to choose the best option at any given time, and that’s all that’s needed for antifragility. You don’t need to be smart or learned, you simply need to know what your options are and which ones will benefit you.
In fact, in many ways it seems like intelligence and education are uniquely unsuited to recognizing and choosing options. The trouble is that scientists and scholars look for things that are exciting and newsworthy, things that will get their names printed in important journals and awards hung around their necks. However, when it comes to rational, practical discoveries—and, even more so, applications—these highly intelligent people often miss what’s staring them in the face.
For example, consider the invention of the wheeled suitcase. It’s an incredibly simple application of an incredibly simple tool. Wheels have existed for thousands of years, and suitcases for hundreds. However, it wasn’t until decades after we put people on the moon that wheeled suitcases became common. Furthermore, comparing wheeled suitcases to the moon landing, the suitcases have a much more immediate and obvious impact on the average person’s life.
The point is this: For all of humans’ vaunted intelligence and imagination, we’re often incredibly shortsighted and stupid. The obvious only becomes obvious in hindsight, like putting wheels on heavy suitcases. Until then, it’s a matter of random chance that drives invention and discovery. Who can say what random event sparked the insight that led a luggage maker—not some NASA genius—to put wheels on his creations?
The role of randomness naturally means that, once again, antifragility is preferable to intelligence. This is why having many options, and the rationality to see their potential, is so much rarer and more important than traditional intelligence and book learning.
Related to the intelligence versus rationality point is the idea of the long shot. Many intelligent people will point out that lotteries and casinos—which offer very small chances of very large payouts—aren’t worth it. And they’re correct; the options that lotteries and casinos sell are too expensive, and the steady cost of playing adds up to a downside that your winnings are unlikely ever to offset.
Where these intelligent-but-irrational people go wrong is in applying the same logic to any kind of long shot. For example, it’s extremely unlikely that buying stock in any given company will make you rich. However, if you invest a few dollars each in a lot of different companies—including, say, Amazon or Apple when they were first starting out—you’ll have limited cost with the potential for immense profits.
Therefore, it’s better to keep your options open (like Thales renting the olive presses) rather than try to pinpoint the most profitable company through superior knowledge and reason, as Aristotle might have done. By limiting your costs and maximizing your chance to profit, you can be all but certain that at least one of these so-called long shots will pay off.
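The arithmetic of many small bets can be made concrete. In this sketch the stake, the number of companies, and the outcome multiples are all invented for illustration: most bets go to zero, a handful triple, and a single runaway winner returns 500 times the stake.

```python
STAKE = 5.0  # a few dollars in each company
# Hypothetical outcomes for 200 small bets: 180 go to zero,
# 19 triple, and one (an early Amazon, say) pays off 500-fold.
outcomes = [0.0] * 180 + [3.0] * 19 + [500.0]

total_cost = STAKE * len(outcomes)
payoff = sum(STAKE * multiple for multiple in outcomes)

print(total_cost)  # 1000.0 -- the most that can possibly be lost
print(payoff)      # 2785.0 -- one runaway winner pays for every failure
```

The downside is known and capped at the total stake, while the upside is open-ended; even with 90% of the bets failing outright, the single large winner makes the whole portfolio profitable.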
With all this talk of options and chance, one could be excused for thinking that the world progresses almost completely randomly. However, that’s not the case. Whether you’re looking at evolution, politics, or scientific discovery, the world progresses through trial and error.
While it can look a lot like pure randomness, trial and error must be guided by rationality. For a mundane example, if you’ve misplaced your wallet, finding it is a matter of trial and error. You start with where it’s most likely to be, and work your way through ever-less-likely locations until you find it. You also don’t tend to look in the same place multiple times, which could easily happen if your search pattern were completely random.
The treasure hunter Greg Stemm used a more codified version of the same strategy to make his finds, including the sunken Spanish frigate Nuestra Señora de las Mercedes, which was carrying cargo worth a billion dollars in today’s money. Stemm would start by outlining the area where his target could possibly be, and break it down into smaller chunks by how likely he thought it was that the ship would be there. He’d start with the most likely spot, and search each of those subsections extremely thoroughly, only moving on when he was completely certain the ship wasn’t there.
In both situations, the key point is that every error yields useful information. While Stemm might have spent weeks searching a particular area and come up (seemingly) empty, he would then know that the treasure wasn’t in that spot. With that information, every subsequent place he checked had a greater and greater chance of paying off. The time and money he spent searching those places wasn’t lost; it was invested.
Searching for a lost item is an easy way to illustrate this concept, but it applies to practically all forms of discovery and invention. Everything that doesn’t work gets you closer to figuring out what does. The key to making it work is, as we’ve discussed, minimizing your costs and downsides while maximizing your profits and upsides.
Another common mistake that intelligent people make is falling victim to epiphenomena—the incorrect belief that one thing causes another. In other words, confusing correlation with causation. For example, if you knew nothing about ships, you might come to the conclusion that the ship’s compass is directing it rather than simply showing you which direction it’s going.
One epiphenomenon that pervades modern life is the belief that greed causes economic crises. Intelligent people note that economic depressions and extreme wealth inequality are often seen together, and they conclude that people hoarding money is the reason for the depression. This leads to the conclusion that greed is a new problem, and that if we could eliminate it, then our economies would be stable.
However, greed goes much further back in history than our current fragile economic systems. We see greed mentioned in texts from more than two millennia ago: The ancient Roman poet Virgil wrote of the greed for gold, and the Latin version of the New Testament warns that greed is the root of evil—radix malorum est cupiditas.
In all the time since, nobody’s found the cure for human greed. The much simpler and more practical approach is to build systems that won’t be destroyed by it. Remember that one major cause of fragility in a system is size; our economic systems today are gargantuan, complex systems that are terribly fragile as a result.
The simplest way to debunk epiphenomena is to look at sequences of events, and check whether one always comes before the other. If so, you still can’t positively conclude that one causes the other—however, if the pattern doesn’t hold, you can prove that one doesn’t cause the other.
One place to look carefully for such patterns is in academia—and specifically when academic theories are used to run the real world. There’s an old joke about a Harvard professor who lectured birds about how to fly, and he was convinced that his brilliant teaching was why they were so good at it. That’s clearly ridiculous, but people often credit scientists and mathematicians for systems that worked perfectly well without them.
For example, the mathematician Michael Atiyah, best known for his work on string theory, once came to New York on a fundraising mission. He made a speech listing numerous ways that math had been usefully applied to the real world, such as in traffic signals. Notably absent from his speech, however, were times when mathematics had caused immense harm, such as grim economic forecasts leading to self-fulfilling panics.
A side note: The kind of cherry-picking that Atiyah used in his speech is yet another example of options leading to antifragility. The one doing the picking gets to choose which facts to include and which to leave out. The wider the spread of facts is—in other words, the more options there are—the better (or worse) the cherry-picker can make the situation look. The cherry-picker’s story becomes stronger thanks to the randomness of the information he can choose from.
These two chapters cover a wide range of topics, and they go into more detail about some that have already been touched upon in earlier chapters. Chapter 14 continues the discussion of epiphenomena. It focuses especially on the false belief that formal education leads to practical skills and economic prosperity.
Next, Chapter 15 offers several rules of thumb for finding and leveraging antifragile situations.
One popular epiphenomenon is that widespread formal education boosts the economy. Strong economies and formal education tend to be seen together, so some people draw the conclusion that education leads to prosperity. However, it’s the other way around: Rich countries tend to institute formal education, not become rich because of formal education.
There are, of course, many good reasons to educate the populace. Education tends to reduce income inequality, to instill good values into people, and to make them more interesting conversation partners. However, to say that these things cause economic growth is a fallacy.
It’s also a fallacy to think that formal education makes people skilled in practice. Taleb relates a time when he was fresh out of college and entering the professional world for the first time. He’d studied risk and probability, and he was, at the time, focused on exchange rates between currencies.
Coming from a polished Ivy League environment, Taleb was shocked by the money changers. Far from the refined, educated, politically aware people he expected, they were (to use Taleb’s words), “Street. Very street.” Many of them spoke English with so much slang or such heavy accents that it was barely recognizable as English. A man introduced as one of the biggest traders of Swiss francs in the world didn’t know the first thing about Switzerland.
Taleb recalls feeling his formal education vanishing in front of his eyes—a sign of its fragility. He’d been trained to think that knowledge and education were crucial for success, but these uneducated, barely literate men were handling enormous sums of money with ease.
Now recall Fat Tony from Chapter 9. He didn’t get rich from any fancy formal education, but from the simple understanding that sooner or later, the economy would take a hit—in other words, by betting on fragility.
Specifically, he made his fortune in 1991, when the U.S. attacked Iraq near the end of the Gulf War. Economists, analysts, and journalists were all predicting that the price of oil would rise if the U.S. went to war. However, Tony bet the other way. He reasoned that, if everyone was predicting that the price of oil would go up, then the market must have already adjusted for that. Therefore, a sudden war would indeed raise the price of oil, but not a planned war.
Tony’s prediction turned out to be right. Rather than skyrocketing, the price of oil dropped by half, and Tony turned a $300,000 investment into $18 million. When asked how he’d known, he simply responded that war and oil were not the same thing. In other words, he avoided the epiphenomenon that war causes oil prices to rise.
So there were two parts to Fat Tony’s good fortune, and neither had anything to do with education or intelligence. First was his innate understanding that people are suckers and economies are fragile. Second was the rationality to apply that understanding to current events. With everyone hoarding oil, waiting for the expected price boom, they were suckering themselves; there was suddenly much more supply than demand, and the price crashed instead.
Fat Tony’s lesson that war and oil aren’t the same thing can be generalized to fit many situations. In the most general sense, theory isn’t the same thing as practice, and the two shouldn’t be conflated. The theory was that oil’s price would go up in the event of a war; in practice, it plummeted.
Put another way, reality and narratives aren’t the same things. Reality is what happens, and narratives are the stories we tell ourselves after the fact. In reality, rising oil prices and wars often happen at the same time; the narrative became that war caused the increased oil prices, and it always would.
However you look at it, the main difference is optionality—thus antifragility. Theories and narratives are fragile, and very easily destroyed by information that doesn’t agree with them. Worse, they make the ones who believe in them fragile, as seen in the oil situation. Reality and practices, on the other hand, are antifragile. They develop based on a sort of natural evolution, as the following anecdote will show.
The economist Ariel Rubinstein tells a story about when he learned the difference between theory and practice. He was in the Middle East, in the region known as the Levant, trying to haggle with a vendor in the marketplace. Rubinstein tried to get the vendor to haggle based on game theory, a field of mathematics built on interactions between purely rational parties and practiced through thought experiments and computer simulations.
The vendor played along for a little while, but Rubinstein’s experiment didn’t get them to an agreement that was acceptable for either party. The vendor then questioned why Rubinstein thought he could change the methods that merchants had developed over generations. Rubinstein was embarrassed, and he left.
The merchant’s haggling methods were based on generations of natural evolution, keeping what works and discarding what doesn’t. Recall that evolution (biological or otherwise) also tends to build in redundancies and fail-safes—even something that works well most of the time will eventually be squeezed out of the population, unless it’s backed up by something else for those times when it doesn’t.
Rubinstein’s game theory could only proceed in certain ways—it was rigid and fragile. However, the merchant had many different options to fall back on if things weren’t going the way he hoped—his sales techniques were flexible and antifragile. For all of Rubinstein’s formal education, his theories couldn’t hold up against the merchant’s practice.
If fragile theories and narratives don’t hold up against antifragile practices, one might wonder why the theories and narratives are still so prevalent. The reason is that the ones who like to write theories and tell stories are the ones best able to spread their ideas, much more so than those who practice a trade or craft.
Academics like to claim that they create theories, which people then put into practice. The truth is a bit more complicated, and it goes in the opposite direction. Practitioners perfect a craft through countless generations of trial and error-style evolution, then academics create theories based on it. Those academics then turn around and claim that the practitioners are using their theories.
We can see evidence of this in accounts going as far back as the Elizabethan era of the 1500s and 1600s. There were two competing schools of medical practitioners back then—although “school” may be the wrong word for one of those, since it consisted of practitioners who learned how to treat people simply through trial and error. The more organized, more formally educated doctors had any number of names for such practitioners: quacks, charlatans, and so on.
However, these “quacks” had a great deal of support among the common folk. Naturally, the trained doctors didn’t appreciate the intrusion into their business, and so they used the weight of their education and titles to denounce the ones who didn’t share their training and theoretical knowledge. It’s worth noting that the academics didn’t have significantly better outcomes than the ones they called charlatans. They were simply the ones with the authority—and the mentality—to control the discourse and write the medical books.
The problems begin when the next generation of practitioners believes the academics and bases their practice on (inevitably flawed) theories. When this happens in an important field—say, stock trading—it leads to rigidity and fragility in the practice that ends up damaging the whole system, perhaps even causing a recession. In short, theories aren’t put into practice; they grow from practice.
Any number of supposedly scientific topics show, on a closer look, that they developed from trial and error rather than theories building upon theories. For instance, people building jet engines needed the original engineers present because they didn’t have a solid theory of how the things worked until much later. Many people believe that precise mathematics is responsible for great works of architecture like ancient cathedrals, but the architects relied almost entirely on their tools and some rules of thumb.
Most “scientific” pursuits look more like cooking than physics. Cooking might be the perfect example of a craft that evolved naturally, based on optionality. If you like an ingredient, you keep it; if not, you don’t use it in that recipe anymore. It’s truly antifragile because every mistake makes future efforts better. We have thousands of years of cooking history—mistakes and improvements—worked into our cultures. Finally, at almost no point does pure theory enter into cooking. You can’t invent an entire new dish based on theories about ingredients you’ve never used.
There are some exceptions to the “cooking” model of science. Physics and its offshoots, such as astronomy, are often advanced through pure theory. The astronomer Le Verrier determined that Neptune existed long before anyone observed it, simply based on the behavior of the astronomical bodies around it. Similarly, the Higgs boson was believed to exist because it had to exist—our understanding of physics simply wouldn’t work without it.
However, note that these are exceptions. In general, science advances through mistakes and improvements, much like an old family recipe.
The main mistake that investors, particularly government investors, make is looking for a particular outcome. They start with a goal in mind and try to put money toward reaching it, rather than supporting the tinkering that’s led to so many unexpected breakthroughs.
Consider many of the advancements that led to the Industrial Revolution. For example, the flying shuttle revolutionized the textile industry, but it wasn’t invented by a scientist working with mechanical theories in a sterile lab. It was invented by John Kay, a craftsman looking to improve the productivity of his factory.
Similarly, the power loom was invented by Reverend Edmund Cartwright—neither a scientist nor a craftsman, merely a tinkering hobbyist. In fact, a surprising number of clergymen have made their marks on the sciences: Reverend George Garrett helped pioneer submarine design, while Reverend Jack Russell bred the hunting dog that shares his name.
Time and time again, it’s hobbyists and tinkerers who invent and discover exceptional things. In spite of that fact, governments insist on funding scientific research. Even worse, they fund scientific research aimed at a particular goal.
There are some examples of this method getting the desired results—the Space Race being one notable example—but on the whole this outcome-focused investment has a terrible rate of return.
The U.S. provides a perfect illustration of the failures of results-focused research. In the early 1970s, President Nixon declared a “war on cancer.” Morton Meyers, a doctor and researcher, gives one account of the results in his book Happy Accidents.
Meyers writes that over 144,000 plant extracts were screened and tested over two decades. During that time, not even one plant-based cancer drug was approved by the FDA. In contrast, an important group of anticancer drugs called the vinca alkaloids were discovered in the 1950s purely by chance. Going even further back, doctors discovered the potential of chemotherapy by examining soldiers who had been exposed to mustard gas—the chemical agent had a noticeable effect on soldiers suffering from various blood cancers.
John LaMattina, a pharmaceutical industry insider, published statistics showing that nine out of 10 drugs were developed by private industry. The tax-funded National Institutes of Health found that fewer than 10 percent of drugs on the market were developed with government funding.
Time after time, we see that government-funded, outcome-focused research fails to give the results that privatized tinkering delivers.
The reason for this goes back to the previous discussion about theory versus practice. By its nature, tinkering has a lot of options available at any time, and the tinkerer can pursue whichever one looks most promising. However, if people are trying to work toward a particular outcome, their methods tend to be rigid and theory-based—in other words, fragile. If their theory doesn’t pan out, then all the work they put into it was for nothing.
Especially notable is the fact that, as our theoretical understanding of medicine increased, the number of new drugs coming out actually decreased. By using theoretical knowledge to pursue specific results, scientists and researchers made themselves less effective in developing and discovering new treatments.
Therefore, rather than backing ideas that sound good, one should back good people. Invest in the type of people who never stop tinkering, who stumble onto good ideas and then recognize them for what they are. Better yet, invest in a lot of people like that. This operates on the same theory as stock market investments from Chapter 11: A little money invested in a lot of places will have a much better rate of return than a lot of money invested in a few “safe” options.
The reason is, all you need is for one of those inventors to hit upon something great, and you’ll make massive returns on your small investment. You might lose a small amount of money from other investments, but you don’t want to miss betting on the winning horse, as it were. You may not end up funding the cure for cancer, but who’s to say you won’t happen to invest in the next Viagra?
Recall the so-called Turkey Problem from Chapter 5. In short, the farm-raised turkey believes that it’s being well cared for and doesn’t foresee anything changing. It bases its life decisions on that idea. Every day that passes strengthens the turkey’s belief that it’s in a good situation, right up until the day when the farmer kills it for meat. The turkey’s problem is that it doesn’t consider that there may be an unseen, rarely encountered, but devastating downside to its situation.
Fragile situations like the turkey’s tend to have all their positive attributes on full display, and they hide the enormous downsides. Though not quite as dramatic as a turkey having its neck wrung, this is also the situation with results-focused research: The potential to develop a new life-saving drug is enticing, but it’s also possible to dump nearly limitless money into the search and have nothing to show for it.
Now, what would the opposite of the turkey’s situation be? Something that looks unappealing but has the potential for almost unlimited gains. This is how many antifragile things look: long chains of errors and failures, but once in a while there’ll be a significant return on investment, as with the chance discovery of the vinca alkaloids.
Remember the key difference between fragility and antifragility: For fragile things, random events are a net negative. They have more to lose than to gain from rare or unforeseen events. For antifragile things, it’s the opposite: They have more to gain than to lose, so random events are a net positive. Therefore, even though the fragile situation seems more appealing—and may look that way for quite a long time—in the long run, the antifragile situation is more desirable.
Many people have hobbies that they enjoy tinkering with. For example, maybe you enjoy experimenting with new recipes, or you like to build things in your garage. As we’ve just seen, it’s often the tinkerers—not the professionals—who make exciting new discoveries.
What do you like to tinker with?
What might you do with more funding and free rein to indulge your hobby?
Is there a way that you could either obtain that funding, or proceed without it?
Chapter 16 takes a closer look at why modern, formalized education doesn’t lead to the expected outcomes of practical skills at the individual level and economic growth at the national level. It comes back to the earlier concept of touristification, or removing randomness from life.
People who learn from textbooks and worksheets will have rigid, fragile knowledge that they can’t apply in real-world settings. To illustrate the point, Taleb discusses his own learning, which was split between formal schooling and voraciously reading whatever interested him at any given moment.
Chapter 17 discusses the important difference between knowledge and probability—or “truth”—and outcomes. The key point is that people base their decisions on outcomes, no matter how unlikely those outcomes may be. Furthermore, every outcome has either more upside than downside, or vice versa; in other words, we’re fragile or antifragile to the result. Seeking good outcomes and avoiding bad ones guides our actions far more than any concept of objective truth.
There are two main types of learning: ludic, which is arranged like a game with rules and scorekeeping; and ecological, the natural method of learning by doing. Notably, there’s very little crossover between these two areas—skills that one develops through a game often don’t carry over into real life. For example, there’s little evidence that chess grandmasters think or strategize any better than anyone else when away from the chessboard. However, many people don’t apply that same idea to skills learned in school, or through closely structured activities.
The biologist E.O. Wilson was quoted as saying that the biggest threat to children’s development was the soccer mom. Wilson said that soccer moms repress children’s natural love of living things; they stop their kids from playing in the dirt, picking up insects, and so on.
However, the real problem with soccer moms is that they try to eliminate trial and error from their children’s lives. They make a map and demand that the kids follow it exactly, which might turn them into good students, but makes them unable to handle the ambiguity and changeability of real life.
The best kind of education is the one kids pursue themselves: ideally, browsing a library at home, picking out whatever topics interest them, and augmenting that with trial and error in the real world. Children are natural autodidacts—self-teachers—and that should be encouraged rather than repressed.
In discussing his own experience with education, Taleb first describes himself as an autodidact, but quickly amends it to note that he does have formal degrees. He describes his learning in terms of the barbell model: He devoted exactly as much time and energy as he needed to pass his classes, and he used the rest to study whatever caught his interest.
Taleb quickly noted the limited material available in schools and how schools tried to force students to study certain topics and authors and cultivate specific ways of thinking. This is harmful and discouraging to a lot of students—those who aren’t interested or able to focus on the material might grow discouraged with the entire act of learning, instead of simply with those subjects and methods. Taleb reads voraciously, and always has, but the moment he feels his attention drifting he closes the book and looks for another. He allows natural interest and avoiding boredom to guide both his study and his life.
In his 20s, Taleb became a very wealthy man (getting what he describes as “f** you money”). His father attributes Taleb’s success to the breadth of education he allowed and encouraged Taleb to pursue.
In the end, Taleb was drawn to the study of probability and risk. He spent years reading everything he could about it, and nothing else. As always, he was guided by passion and the avoidance of boredom.
Five years after starting on that new topic in earnest, he was set for life; his barbell method of learning paid off in a big way. Moreover, his fortune came largely from what he learned outside of the classroom, which left him with the impression that what people really need to know about a topic is found outside of the prescribed literature.
Socrates, one of the most famous and influential thinkers of all time, was obsessed with defining things to determine what they really are. Many of his debates—or at least his debates as framed by Plato—involve him hounding someone to define things, such as in Euthyphro where he pesters a supposed religious man to give a definition for the word piety. These conversations invariably end with Socrates’s opponent giving up and Socrates concluding that the other man had no idea what he was talking about.
Socrates’s dialogues are at the very heart of philosophy: the search for truth and meaning in the world. However, what if someone more practical—and less prone to being suckered by leading questions—were to engage Socrates in conversation? What if Fat Tony were to take the place of Socrates’s debate partner in Euthyphro?
They would reach a point where Fat Tony says that he can’t define piety, and that doesn’t matter, because he knows what it is anyway. All that Socrates is doing is confusing people by forcing them to take what they know and put it into words. It’s not necessary; a baby doesn’t need to know the definition of “milk” in order to drink it, and nobody needs to have a textbook understanding of aerodynamics in order to ride a bike.
In fact, far from expanding knowledge in the search for truth, Socrates was intentionally limiting it. He was trying to force people to use only knowledge that could be explained with concrete definitions. Like scientific theories and outcome-focused research, such knowledge is fragile. That was demonstrated plainly enough as Socrates himself repeatedly broke the knowledge that people offered, simply by questioning it.
Truths are less important than results. Perhaps a holy man can’t define piety in a way that satisfies Socrates, but that doesn’t make him any less pious. The merchant who defeated Ariel Rubinstein’s game theory probably couldn’t make a scientific flowchart of his haggling process, but that didn’t make it any less effective.
In short, people live their lives based on the outcomes of their actions, rather than any compulsive need to define those actions. Furthermore, outcomes are almost always asymmetric: They have more upside than downside, or vice versa. In other words, we are fragile or antifragile to the results.
Consider what happens at airports. It’s almost certain that any given passenger isn’t a terrorist; that’s the truth. However, the outcome of even a single terrorist slipping through is so bad that we check everyone for weapons anyway.
No doubt you could think back on the past week and recognize the same thing in your personal life. Perhaps your decisions were partly based on how likely it was for something to happen, but probably they were based mostly on the outcome if that thing did happen. Did you brush your teeth each night because they’re likely to rot and fall out of your mouth if you skip a night, or did you do it simply to avoid that small risk?
In Chapter 18, we start with a graphical representation of fragility and antifragility. Using that simple illustration as a guide, we revisit exactly why fragile systems hate random events while antifragile systems love them.
After that, we return to the discussion of how size causes fragility, now with an added dimension: concentration. A centralized system is much more fragile than a decentralized one, even if they add up to the same size. For example, a single large bank is more vulnerable to mistakes and bad deals than 10 banks that are each a tenth the size; the simple reason is that the large bank has more resources, and therefore more to lose.
We’ll also touch on the idea that large, influential systems can cause damage even outside of themselves. When that large bank got itself into trouble, the global stock market took close to a 10% hit—one of the 10 hypothetical smaller banks doing something similar would have caused a much smaller shock to the market, if any at all.
Chapter 19 expands on the fragility of size, and how large systems like banks cause harm to those who rely on them. It also explores how we could mitigate the damage by dividing up our investments and our consumption. For example, large tuna fisheries cause harm to the environment by overfishing, but they only do so because people keep demanding tuna. If we returned to a more natural method of consuming what’s readily available, our large systems wouldn’t cause as much damage.
Both fragility and antifragility have exponential effects. In other words, as the significance of an event increases, the effect of that event increases even faster. For example, if you punched a window you could easily break it; however, you could drum your fingers on the glass all day without damaging it. The thousands of tiny impacts from your fingers wouldn’t add up to the same effect as one large impact from your fist.
As we’ve previously discussed, when those significant events have negative effects, the situation is fragile. When they have positive effects, the situation is antifragile. We can roughly sketch this general concept as seen below:
A fragile situation has a limit to how good an outcome can be, but no limit (or almost no limit) to how bad an outcome can be. A graph of the situation has a concave shape: it bulges upward in the middle. An antifragile situation is exactly the opposite, and the graph has a convex shape: it dips downward in the middle. An easy way to remember this is that fragility makes a frown, and antifragility makes a smile.
The two graphs also illustrate why fragility dislikes randomness, and antifragility loves it. Imagine picking random points on each of the graphs; depending on where the point falls on the significance axis, it may have a positive or negative outcome. Now imagine that you keep picking such random points over and over again. Eventually you’re going to land on a point with enough significance that the outcome is either hugely negative (for the fragility graph) or hugely positive (for the antifragility graph).
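The effect of repeated random shocks can be sketched with two illustrative payoff curves. These exact formulas are assumptions for the demonstration, not taken from the book: a concave curve where losses accelerate, and its mirror-image convex curve where gains accelerate.

```python
import random

def fragile_payoff(x):
    """Concave: small shocks give small gains, large shocks give accelerating losses."""
    return x - x**2

def antifragile_payoff(x):
    """Convex: small shocks give small losses, large shocks give accelerating gains."""
    return x**2 - x

def average_payoff(payoff, trials=100_000, seed=1):
    """Average outcome after many random shocks of varying significance."""
    rng = random.Random(seed)
    return sum(payoff(rng.uniform(0, 2)) for _ in range(trials)) / trials
```

Over many random shocks the concave curve averages out negative and the convex one positive, which is the sense in which fragility “hates” randomness and antifragility “loves” it. Note also that the convex curve here is just the concave one with a minus sign in front.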
A side note: Any line on a graph can be represented by an equation. Putting a negative sign in front of that equation will result in the same graph, but upside-down. This also holds true for our graphs of fragility and antifragility; the opposite of concavity is convexity. Fat Tony made his fortune in the oil slump by putting a minus sign in front of the banks’ equation, so that whenever they lost a dollar, he made one.
An intrinsic property of concave and convex graphs is that, the farther along the horizontal axis you go, the steeper the slope becomes. In other words, the outcome changes by greater amounts over the same horizontal distance.
This gives us an easy way to check systems for fragility (or antifragility). Take, for example, travel time from point A to point B on the thruway. If there’s no traffic, you’ll drive from A to B quickly and easily. Now let’s say, hypothetically, you note that when there are 10,000 cars on the thruway, travel time increases by 10 minutes. If traffic increases by another 10,000 cars, your travel time now increases by 30 minutes. Add another 10,000 and you may be stuck in traffic for hours.
Though the increases in traffic are the same, the increases in travel time get bigger and bigger. Since the change is undesirable, we say that the degree of harm is accelerating. This is a fragile system. If more traffic somehow meant you got to your destination sooner, it would be an antifragile system; it would have accelerating benefits as the number of cars on the road increases.
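This check, equal steps in load against growing steps in harm, takes only a few lines to write down. The first two increments come from the thruway example above; the final figure of 160 minutes is an assumed stand-in for “stuck for hours.”

```python
def has_accelerating_harm(total_delays):
    """True if equal increases in load cause ever-larger increases in delay
    (accelerating harm, i.e., a fragile system)."""
    increments = [b - a for a, b in zip(total_delays, total_delays[1:])]
    return all(later > earlier for earlier, later in zip(increments, increments[1:]))

# Total delay in minutes at 0; 10,000; 20,000; and 30,000 cars.
# 10 and 40 follow the example's figures; 160 is an assumed value for "hours."
thruway = [0, 10, 40, 160]
```

Here `has_accelerating_harm(thruway)` returns `True`: the increments of 10, 30, and 120 minutes keep growing, flagging the thruway as fragile, whereas a linear system such as `[0, 10, 20, 30]` returns `False`.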
We’ve talked before about the fragility of relying on forecasting. Whether in finances, weather, votes, or anything else, trying to predict the future is notoriously unreliable. However, these forecast models could be made a great deal more accurate—and the decisions based on them made much less fragile—by subjecting them to a simple acceleration of harm test.
In short, take the model and ask, “What if it’s wrong?” Change some of the key assumptions by small increments and see what happens to the results. If the positive changes outpace the negative ones, then you’ve got a model for an antifragile system.
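One way to sketch that "what if it's wrong?" test in code: nudge a key assumption up and down by the same small increment and compare the two changes. The profit model, its capacity figure, and the probe function below are all hypothetical, invented only to illustrate the check.

```python
def profit(demand):
    """Toy model: revenue grows linearly, but congestion costs accelerate past capacity."""
    capacity = 100
    overflow = max(0, demand - capacity)
    return 10 * demand - 2 * overflow ** 2

def fragility_probe(model, assumption, step):
    """Change a key assumption by a small increment in both directions.
    A negative sum means the losses outpace the gains: concave, hence fragile.
    A positive sum means the gains outpace the losses: convex, hence antifragile."""
    up = model(assumption + step) - model(assumption)
    down = model(assumption - step) - model(assumption)
    return up + down

print(fragility_probe(profit, 105, 10))  # negative: this model describes a fragile system
```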
One key thing that many models get wrong is that they rely on averages. The problem, in light of the acceleration of harm effect, is that averages don’t take devastating extremes into account.
For example, imagine you booked a hotel room that’s kept at an average of 70° Fahrenheit. No doubt that sounds pretty reasonable. However, it could be that the room is 0° half the time, and 140° the other half. The average temperature is still 70° but, far from being comfortable, the room is downright dangerous. The variability is more important than the average.
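The arithmetic behind the hotel-room example takes only a few lines; the four-sample temperature lists are our own simplification of "half the time":

```python
comfortable = [70, 70, 70, 70]
dangerous = [0, 140, 0, 140]

mean = lambda temps: sum(temps) / len(temps)
assert mean(comfortable) == mean(dangerous) == 70  # identical averages...

spread = lambda temps: max(temps) - min(temps)
print(spread(comfortable), spread(dangerous))  # 0 140 — variability tells the real story
```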
Finally, imagine that there was a third graph with a straight line; for every “unit” that the significance of an event goes up by, the effect also goes up by one “unit.” This would represent an “average” situation, as hypothesized by any number of predictive models. If you were running a business, for example, your sales model might predict that more sales equals more money in a linear fashion, represented by such a graph.
First of all, that model is probably extremely inaccurate. Does it account for bulk buying and production? Supply shortages? In short, are there extraordinary benefits or downsides to selling much more product than usual?
However, aside from that, it’s plain to see that a convex model would quickly outpace a linear one. An antifragile system will, in the long term, perform better than the hypothetical average. Similarly, a fragile system will sooner or later fall victim to random chance or unforeseen events, and will underperform it.
We’ve said before that fragility increases as a system gets bigger. That’s because a larger system naturally has more resources available, but it also needs and uses more of them. The end result is that larger systems can go farther down the “significance” axis on the above graphs, and therefore encounter larger effects from random events.
For example, in January 2008, a Parisian bank called Société Générale realized that one of its employees had been playing in the stock market with enormous sums of money and hiding that fact from their systems. As a result, they had to rush to sell an incredible amount of stock—about $70 billion worth—and took billions of dollars in losses on the sales. At the same time, stock markets worldwide dropped by almost 10% as a result of the sudden event.
A smaller bank in a similar situation would have had much less of an impact, both on itself and on the world market. Imagine a bank that’s a 10th the size of Société Générale doing a similar emergency sell-off: $7 billion worth of stock and losses measured in millions rather than billions. It still wouldn’t be a good situation for the bank, of course, but the losses would be much smaller, and the effect on the global stock market would be much less—perhaps there wouldn’t even be one, as the market can absorb shocks of that size quite easily.
Aside from the fragility of size, the example of Société Générale shows one other important concept: squeeze. Squeeze is when you’re forced into a bad situation because it’s the only option available. The bank had no choice but to sell off the positions it hadn’t known it held; otherwise, its records would have been egregiously wrong.
Squeeze is another type of unexpected event that reveals fragility in a system. For the reasons explained above, larger systems are also more vulnerable to squeeze—simply put, they need more resources and have more to lose than smaller systems.
Think about an elephant during a drought. The elephant needs a great deal of food and water to survive, both of which are in short supply. It’s forced to starve, possibly to death—that’s the squeeze. With no food to be had, there’s no other option available.
Now imagine a mouse in the same situation. The mouse needs a tiny fraction of the food and water that the elephant does, so it’s much less likely to be squeezed by the drought.
One way to offset the size and squeeze effects is to decentralize or defocus. As we mentioned, a bank one 10th the size of Société Générale wouldn’t have been as vulnerable to the unexpected event of a backroom trader gone rogue. So, if the bank were to split up into 10 smaller banks, the system as a whole would be much more stable.
Similar effects are seen in major projects, ranging from construction to military (and no doubt many other areas as well). Construction projects become disproportionately more expensive as the size of the project increases. However, if the project can be broken up into many smaller tasks, managed individually, that disproportionate cost increase doesn’t happen.
For example, building a tunnel through a mountain is a single, large-scale project—there’s no way around it. Unexpected delays or problems hugely increase the cost, because now you have lots of people who need to be paid while the project isn’t progressing, increased supply and maintenance costs, and so on. However, repairing a city’s roads is something that can be easily broken up and assigned out to various project managers. If one manager’s project runs into problems, it’s a relatively small cost increase and doesn’t hold up progress anywhere else.
One of the most dramatic modern examples of runaway costs is the U.S. invasion of Iraq under George W. Bush. It was expected to cost somewhere between $30 billion and $60 billion—the price tag has since risen to over $2 trillion. The enormous size and complexity of the U.S. military lead to chain reactions of increasing cost, compounding each other almost without limit.
Interestingly, excessively large systems can also cause fragility—or, more simply, damage—to other systems. For example, in our modern lifestyle, people tend to get more or less the same things every time they go shopping. On top of individuals repeatedly buying the same things, many individuals will purchase the same “staples” as everyone else; things like bread, milk, and butter.
However, it’s well known that overconsumption of a particular resource—say, tuna fish—can cause harm to the environment. Overfishing tuna disrupts the ecosystem. Other animals that prey on tuna have their food source depleted, while the organisms that tuna eat suddenly have explosive population growth.
This problem didn’t exist for our hunter-gatherer ancestors, and examining what records we have of them shows why. In simple terms, they weren’t as picky as we are today; they would eat whatever was available. If they overhunted a particular animal and had trouble finding more of it, they’d simply switch to hunting something else. This would naturally let the formerly hunted population recover, while keeping down the numbers of whatever animals were benefiting from its absence.
In this chapter we introduce the idea of making progress by taking away what’s fragile and flawed, rather than by adding new ideas and inventions to our systems. In other words, we can move toward antifragility by eliminating fragility.
This chapter also discusses people’s illogical obsession with new things. Looking through the lens of antifragility, it’s clear that something that’s withstood the test of time is almost guaranteed to be better than something new, whose fragility hasn’t yet been tested. Old ideas and technologies will generally outlive new ones. In spite of that, many people are constantly chasing after the “hot new thing.”
Readers may have noticed that, for all the talk of antifragility, most of the examples we’ve discussed so far have been fragile situations. The reason is that, in many cases, antifragility is more about what you avoid than what you seek out.
This is hardly a new idea, though the label “antifragility” was never attached to it before. The Arab scholar Ali Bin Abi-Taleb said that staying away from foolish people was as good as keeping company with wise men. The author Jon Elster, in his book Preventing Mischief, wrote that the purpose of legislation is to prevent things that might harm the people. In a more modern example, Steve Jobs said that innovation and focus aren’t about saying yes to the thing you’re working on, but saying no to everything else.
In short, antifragility can be about getting rid of fragility. Remember one of our earliest examples of antifragility: how people get stronger by exercising. It’s not done by somehow seeking out stronger muscle tissue. Instead, weak muscles break down and stronger muscles grow in their place.
People tend to have a problem with the concept of negatives, the idea that what isn’t there is as important as what is. That’s why people who solve problems are praised, but people who avoid them in the first place often go unnoticed. In the same vein, people like being given concrete, actionable advice. That’s why there are countless self-help books out there advertising that they’ll give you a 10-step process for whatever you think you need help with.
However, some of our most concrete advancements—personal, scientific, or what have you—come from negation. That is, getting rid of what isn’t working or what we know to be wrong. We, as a species, know much more about what’s untrue than what’s true.
For example, if someone thinks that all swans are white, seeing a single black swan is enough to disprove that theory. However, seeing a thousand white swans isn’t enough to prove it. No matter how many white swans that person sees, there’s always the chance that a swan of another color could be out there somewhere.
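The asymmetry of falsification can be shown with a toy sketch: any number of white swans is merely consistent with the claim, while a single black swan settles it.

```python
# A thousand confirming observations: consistent with "all swans are white," but not proof.
observed = ["white"] * 1000
assert all(swan == "white" for swan in observed)

# One counterexample is enough to disprove the universal claim.
observed.append("black")
assert not all(swan == "white" for swan in observed)
```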
We can also improve our own health and happiness by negating—that is, getting rid of—things that negatively impact them. Just like antifragility is often about avoiding fragility, happiness is often about avoiding unhappiness. Consider what people think is necessary for happiness: good health (that is, lack of sickness), money (lack of worry about material goods), and purpose (lack of boredom), to give a few common answers.
As for health, the doctor Druin Burch wrote in Taking the Medicine that getting rid of smoking would be a greater net benefit to humanity than curing every type of cancer in the world. That would be an enormous example of progress by negation.
Modern society, corporations, and governments have a tendency to look for ever-more-complicated predictive models and solutions to the problems that they foresee. However, this isn’t necessary—in fact, it’s actively harmful.
Such models lead people to over-prepare for the wrong things (like when the predicted oil boom turned into an oil crash). At the same time, they completely miss the rare events that’ll have the greatest impact; the ones far out on the “significance” axis.
“Less is more” is a common adage that’s remarkably easy to apply. Instead of relying on complex models and calculations to predict when and how problems will occur, just make sure that you’re prepared for when they inevitably do. For example, if you’re in business, don’t waste your effort trying to predict the next market downturn; simply ensure that you’ve got money set aside for when (not if) it happens.
When there is a problem that needs to be solved, small-scale solutions can also have enormous effects if applied in the right places. For instance, economist and author Bent Flyvbjerg once demonstrated that most corporate cost overruns are due to large-scale technology projects. Therefore, rather than obsessing over models and predictions, these corporations could easily just not take on such projects, thereby avoiding the financial hit.
“Less is more” can, and should, also be applied to making personal decisions. If you need to make lists of reasons why you should do something, then you shouldn’t do it. Something worth doing should only need a single, convincing reason. Similarly, people who talk themselves up with long biographies and lists of achievements should be avoided; if they’d done something worthwhile, they’d lead with that, and it would be enough to get your interest.
Antifragility, and the related concept of progress by negation, naturally lead to the conclusion that the old is stronger than the new. There’s a reason that we talk about things “standing the test of time.” Time is, in essence, random events. The more time passes, the more random events will happen. Fragile things will, sooner or later, be destroyed by randomness. Therefore, things that survive for a long time must be durable or antifragile.
The next time you eat a meal, think of how many discoveries and technologies go into making it possible and how ancient many of them are. Silverware has been saving our hands from getting greasy and burned since ancient Mesopotamia. Wine has existed for around 6,000 years and glasses to drink it from for around 3,000. The fire to cook the food is typically considered one of the earliest discoveries that humans ever made.
Progress by negation is counterintuitive. People tend to be drawn to the newest and shiniest things. If asked to imagine the future, they’ll think about what new technologies and discoveries might be present; rarely do they think about what won’t survive the test of time.
Any number of science fiction novels prove this point. Such books are almost always about what’s new, not about what’s still there from the old days. Almost all of them envision a world that’s almost unrecognizable, instead of one that still closely resembles the time when the book was written—though admittedly, that may just be because it wouldn’t make for a very interesting story.
Since the old is stronger than the new, it leads to another counterintuitive point—things that have survived for a long time already will outlast things that haven’t been around as long. People often overlook this fact for two reasons:
It’s true that individual items, or people, or many other things will break down over time. Even items we don’t usually think of as perishable, like cars or books, will eventually rust or rot and stop being usable. However, the ideas and technologies behind the automobile and the printed word are much older than any individual car or book, and they will survive much longer. That’s not to say that all ideas and technologies are immortal—only that the ones most vulnerable to time are probably already dead.
It’s also not to suggest that this holds true 100% of the time. Of course there are counterexamples: The Internet, which is newer than landline phones, seems likely to survive long past the time when the last phone cord is unplugged. The point is that, on average, things that have existed for a long time will outlive the newest fads.
A teacher named Paul Doolan once asked how he could teach children the skills they’d need in the 21st century, when he didn’t know which skills those would be. In light of the reverse-aging effect, the best answer is to have them read the old classics; the future will be found in the past.
In spite of antifragility all but ensuring that old ideas and techniques are better than new ones, many people seem to have an obsession with everything new. This obsession is called neomania.
One reason for neomania is that humans in general are highly aware of changes and differences. Someone whose phone was top-of-the-line a month ago might see people using sleeker models with slightly bigger screens and suddenly feel that her own (still perfectly good) phone is outdated and due to be replaced. It’s not about the phone she has; it’s about the changes that have happened since she bought it.
This constant desire to have the newest thing leads to what’s been called a treadmill effect. People feel an initial rush of joy when they buy the hot new thing. However, that joy fades quickly, and they grow bored with their new toys. Then a new model hits the market, and the cycle continues.
For some reason, people seem especially prone to neomania when it comes to gadgets, but not as much with things that we don’t consider to be “tech.” Someone who upgrades her television every chance she gets might have the same oil painting hanging next to it for years. She might sit on the same couch for decades and only replace it if it wears out—not because a new model with slightly larger arms showed up in the store.
The reason for this could be related to fragility and antifragility. Gadgets tend to be fragile, both literally and in the sense that they quickly become outdated and replaced by new tech. Many other items have some inherent antifragility, like designer shoes that only become comfortable after wearing them for a while.
Scientific fields are especially prone to neomania. That’s not surprising, given that much of science is devoted to finding new information, but the people touting each new discovery don’t take antifragility into account. Hundreds of thousands of scientific papers, many lauded as “visionary” or “groundbreaking,” end up just being so much background noise. Their exciting new ideas amount to nothing.
One disappointing example was the work of Judah Folkman, who was developing a new method to treat cancer. His theory was that, since cancer cells need a great deal of blood and tend to build new blood vessels to get it, one could cure cancer by simply cutting off the blood flow and starving the tumor. It looked brilliant on paper, but it had virtually no impact on cancer treatment. However, it did lead to new treatment methods for macular degeneration. In a way, this is an example of both the fragility and antifragility of information. When Folkman’s research was shiny and new, it amounted to nothing. However, over time, people found a different application for his work that was quite effective.
When it comes to science, we should apply the same rule of thumb as we do with any kind of technology or information: The old is, on average, better than the new. New innovations and theories are usually fragile, just waiting for a single piece of data that disproves them. Only science that has withstood the test of time should be relied upon.
That doesn’t just apply to science, but all kinds of information. You could take any high school or college textbook and flip to a random page; odds are that the information you find there is still relevant. However, if you looked up a conference from five years ago about the same topic, featuring all the heady young professionals of the day, chances are that there won’t be anything of interest there. Whatever they were discussing will have either become obsolete or irrelevant in the time since then.
If the old will, most of the time, outlive the new, then what might the future look like? Presumably, much like the present. Perhaps there will be even more of the things that define our lives now: more books, more computers, more artisans and craftspeople (based on the assumption that they’ll outlive the modern obsession with cheap, mass-produced goods).
However, if we’re to progress by negation, then certain fragile things must be destroyed or weakened in the years to come. Remember what makes something fragile: size, over-efficiency, and top-down leadership—which are inevitably based on flawed theories and models. We’ll likely see the decline of major corporations, which are unusually vulnerable to random events as shown by the fragility graph. Powerful nation-states and centralized banks may still exist in name, but it’s likely that they’ll be severely weakened.
As with Fat Tony’s prediction of the oil crash, foresight is less about specific predictions and more about spotting vulnerability. In fact, that was the classical role of the prophet: not to look into the future, but to examine the present and issue warnings based on that. The future was more the domain of soothsayers and fortune-tellers—and the classical wisdom was not to listen to such people.
To branch into philosophy for a moment, consider the story of Empedocles’ dog. Someone asked the philosopher Empedocles why his dog seemed to prefer to always sleep on the same tile of his floor. Empedocles answered that there had to be some kind of match between the dog and the tile—something that couldn’t be measured or defined in any traditional way, but that was nonetheless real.
It seems likely that there’s a similar sort of match between people and the parts of our culture that have survived the longest, like reading, writing, and possibly religion. We may never understand it in a way that would satisfy Socrates’s obsession with definitions, but the test of time will reveal where that connection lies.
In Chapter 21 we explore how and why human intervention so often causes more problems than it solves. In short, it’s because we’re pitting our fragile models and theories against nature’s countless years of evolution and antifragility. We also explore the idea of exponential benefit and harm—effects that rapidly outpace the size or apparent significance of the events that caused them. Medicine provides excellent examples of both points.
Chapter 22 is about how human efforts to live forever are doomed to fail—and why that’s a good thing. Without individual fragility, there can be no societal antifragility. This argument grows from the points laid down in Chapter 21 about how human interventionism does more harm than good.
Remember the definition of iatrogenics: unintended harm from medical treatment. Iatrogenics is common due to two logical flaws.
The first flaw is the human need to do something. Even if someone has a minor injury or disease that will heal perfectly well on its own, many people—especially doctors—feel like they have to intervene. Someone with a mild fever may take aspirin to bring it down to normal, or put ice on a swollen nose to take the swelling down—even though there’s no evidence that doing so helps it to heal faster.
The second flaw is mistaking a lack of evidence for evidence. For example, smoking was once considered to have mental and physical health benefits. The harm that it caused wouldn’t become obvious for many years, and people mistook that lack of evidence for evidence that smoking isn’t harmful.
These two tendencies combine to create a situation where harmful drugs and procedures—which we don’t yet know are harmful—are given to people who would have been just fine without them.
Statisticians have estimated that reducing medical expenditures (only up to a point and only on elective treatments) would actually increase the average lifespan in wealthy countries like the U.S. by reducing iatrogenics. This is one large-scale application of the “less is more” principle.
This isn’t an argument that medical care should never be given, just that we should be much more discerning about when and how much we intervene. The iatrogenics of any given drug or treatment are linear—they increase or decrease consistently with how much of the treatment is given. However, the benefits of that treatment can be convex (having accelerating benefits) based on how severe the patient’s condition is.
For example, there’s a particular drug that treats high blood pressure. When a patient suffers from mild hypertension, the effectiveness rate of this drug is only 5.6%. However, when the patient’s blood pressure is in the “high” range, the effectiveness rate is 26%; in the “severe” range it climbs to 72%. However, the side effects of the drug are consistent across all of those categories.
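The effectiveness rates below come straight from the text; the fixed side-effect rate is an invented stand-in for harm that stays "consistent across all of those categories."

```python
# Convex benefit: effectiveness accelerates with severity (figures from the text).
effectiveness = {"mild": 0.056, "high": 0.26, "severe": 0.72}

# Linear harm: a hypothetical constant side-effect rate at every severity level.
side_effect_rate = 0.10

# Intervene only where the benefit outweighs the (roughly constant) harm.
worth_treating = {severity: rate > side_effect_rate
                  for severity, rate in effectiveness.items()}
print(worth_treating)  # {'mild': False, 'high': True, 'severe': True}
```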
Clearly, then, the trick is to only intervene when the benefits outweigh the risks. In the case of the blood pressure medication described above, it doesn’t make sense to give it to someone who only has mild hypertension; the patient will get all the downsides and probably none of the benefits. However, for someone who would die or have severely reduced quality of life without medical intervention, the iatrogenics are relatively small. The person has little to lose, so there’s not much harm that can be done.
However, instead of this rational approach, people will try to intervene whenever there’s the slightest hint of a problem. Someone whose blood pressure is perfectly normal may, at times, be slightly above or slightly below normal just through random chance and variation. If it happens to be high when he’s at the doctor’s office, that doctor might prescribe totally unnecessary medication to lower it. In the end, this will probably cause more harm than good. This is a classic example of being fooled by randomness.
People are fooled by randomness when they mistake a single data point, like an elevated blood pressure, for a trend. Perhaps the man had too much coffee that morning, or he was nervous about something; that one reading isn’t proof of a trend of high blood pressure, but the doctor took it as such.
A larger-scale example of being fooled by randomness would be the oft-cited statistic that the average lifespan used to be just 30 years, up until the 19th century or so. The key word there is average: It’s skewed heavily by people who died young, either at birth or in early childhood.
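A hypothetical cohort (not historical data) shows how heavy early mortality drags the average down while saying little about how long adults actually lived:

```python
# Invented ages at death: two infant/child deaths alongside long-lived adults.
ages_at_death = [1, 2, 70, 75, 80]

average = sum(ages_at_death) / len(ages_at_death)
print(average)  # 45.6 — the "short" historical lifespan

# Among those who survived childhood, longevity looks entirely different.
adults = [age for age in ages_at_death if age >= 20]
print(sum(adults) / len(adults))  # 75.0
```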
There’s also a case of epiphenomena here: confusing correlation with causation. It’s true that medicine has advanced greatly in the last few hundred years, and it’s also true that the average lifespan is much longer than it used to be. However, that doesn’t mean that one necessarily caused the other. There are many other possible explanations, such as improved sanitation and increased law enforcement, that could also account for (or at least contribute to) people’s longer lives.
The reason iatrogenics are so common—and why everything that humans try to control from the top-down tends to fall apart—is because people are trying to outsmart nature. Nature proceeds by evolution, which has redundancies and fail-safes built in as we’ve discussed before.
People tend to think that science is rigorous, while nature isn’t. In fact, the exact opposite is true. Things that exist in nature have been subjected to countless random events over thousands, if not millions, of years, allowing fragile things to break and antifragile things to become stronger. Though we may not always understand how or why something natural works, we can rest assured that it does.
Compared to that, human attempts to control the world through mathematics, models, and science are hopelessly flawed. So many major catastrophes, from wars to global warming, are the result of humans trying to impose their will where it is neither wanted nor needed.
Therefore, we need to reverse the burden of evidence. When people introduce some unnatural thing into the world, like smoking or nuclear power plants, there will always be people who argue that it’s dangerous. The burden of evidence currently rests with those people; they have to somehow prove their point that it’s dangerous, when really only time can do that. Instead, the burden should be on the ones pushing the innovation to prove that it isn’t harmful—or, at least, that the benefits outweigh the harm.
Some ways to do that are what we’ve discussed in previous chapters. For example, tinker with the models a bit; see what happens if you change some of the fundamental assumptions, and whether the benefits increase more quickly than the damage. Alternatively (or even better, additionally), rather than trying to eliminate risk—which is impossible—make sure that you’re fully prepared for the worst-case scenario.
It’s not just in medicine per se where human intervention seems likely to have some iatrogenics. As previously mentioned, our current trend of global warming was an unforeseen effect of industrialization. However, there are also many personal choices people make that go against what nature has concluded from millennia of trial and error.
For example, modern people have access to a steady supply of food. They have no chance to enjoy the benefits of intermittent fasting unless they specifically avoid food for a while. Beyond fasting, people also tend to avoid variety in their diets; ancient humans would have had to eat whatever they could catch or forage, while modern people suffer from eating the same over-processed foods day after day.
Nowadays, people also tend to avoid walking as much as possible. Their lives are lived in front of monitors, with (perhaps) occasional breaks to exercise. However, our ancestors were persistence hunters; they would unhurriedly and easily walk after their prey, possibly for days at a time. There may be some unknown benefits to easy, stress-free walking that differ from the benefits of power walking or jogging.
These things may have iatrogenics or they may not. The point is simply that humans have an abysmal track record when they ignore what natural evolution has designed them for.
One major way we could avoid such risks is to take an empirical, rather than theoretical, approach. In other words, to acknowledge that we know what happens, and it doesn’t matter why it happens.
Consider what happens when you lift weights: Your muscles become bigger and stronger. One explanation is that your muscles experience microtears and increase in size as they heal. A more recent idea has to do with hormonal signaling in response to stress. No doubt at some point in the future, there will be other theories. However, none of that changes the empirical observation that lifting weights makes you stronger.
Proceeding based on what we know to be true, rather than what we think are the reasons behind it, would avoid the problem of relying on fragile theories and models. For example, people found that eating fats and carbohydrates together led to weight gain. That was the empirical observation.
However, upon an analysis of that fact, doctors and mathematicians decided that the fats were to blame. This led to the “fat-free” craze in food service and marketing. However, they made an elementary statistical mistake: When two factors are jointly responsible for an observed outcome, sometimes it seems that only one of them is to blame.
In short, we need to accept and understand that there are—and probably always will be—things we don’t know. In trying to reduce calories in food, scientists developed and introduced all kinds of artificial sweeteners, only to learn later that they caused everything from weight gain to cancer. The theory was that, since calories are associated with gaining weight, reducing the calories would make the food healthier. However, the things they didn’t know—the side effects—outweighed those benefits.
Another modern trend—only dating back to the Enlightenment—is the fear of death. Ancient literature is filled with stories of great heroes seeking not immortality, but a good and honorable death.
People back then saw themselves as part of the larger whole of humanity, defined by what they contributed to the world and the children who survived them. The idea that the individual is the most important thing, and that each individual should be preserved as long as possible, is quite a recent one.
An individual life is naturally a fragile thing. It has to be, in order for the species to be antifragile—remember that antifragility can only occur after damage. Though scientists continue to artificially lengthen life, and some even seek the keys to immortality, we’re not meant to live forever as the sick, fragile animals that we are. We’re meant to live well, and then die to make room for others.
Chapters 23 and 24 are about ethics and its connection to fragility. Specifically, they discuss modern situations where people are able to split benefit and harm—in other words, fragility and antifragility. Wealthy bankers and business executives, for instance, are often able to rake in great profits for themselves while shunting costs to the less fortunate.
Chapter 25 is the conclusion of this long essay. It starts by restating the thesis: Everything is either helped or harmed by random events and damage. It then goes on to briefly show how every point that we’ve discussed grows out of that original concept.
We’ve talked a lot about benefit and harm and how many situations have some of each. The trick, as we’ve said before, is to get yourself into situations where you’re likely to be benefited more than harmed.
Unfortunately, modern society in particular makes it possible to give the benefits to one person or group and the harm to another. The problem is imbalanced agency: Some people have the power to make decisions that affect many others, and those others have no choice but to accept the outcomes of those decisions. Taken to the extreme, this imbalanced agency is how we end up with countries that have a handful of billionaires and millions of people in poverty.
When we say agency, what we really mean is optionality. The people who have the power are the ones who have all the options and, therefore, all the antifragility. They can shunt the fragility onto those who are less powerful, generally the poor and the marginalized.
The best way to correct this imbalance is by making sure that everyone has something to lose—some kind of fragility that they need to defend against. Rather than enforcing this principle with complex rules and massive enforcement agencies (remember, size and rigid controls make systems fragile), it could be addressed with simple laws. In fact, the Code of Hammurabi found an ideal solution almost four millennia ago.
One tenet in the Code stated that any injuries or deaths caused by the collapse of a house would also be inflicted on the one who built that house. If the owner of the house died, the builder would be executed; if the owner’s son died, the builder’s son would be executed, and so on.
It might sound brutal to our modern-day sensibilities, but the punishment wasn’t the point. The point was to give the builder a stake in what he was building. A craftsman knows his craft better than any government-sponsored team of safety inspectors, so this is the simplest and most effective way to make sure that what he makes is safe and of good quality.
Fat Tony has a more modern take on this issue, expressed in his two rules of thumb about flying.
The first rule is about avoiding the transfer of fragility—in simpler terms, making sure the pilot has something to lose before getting on the plane yourself. The second rule goes back to redundancy—making sure there’s a backup plan in case something goes wrong anyway.
One major source of fragility in our society is that we give a free pass to the talkers: the people who spread ideas but do nothing with them personally. Even if the ideas turn out to be harmful, only the people who act on those ideas face consequences—in other words, they take on the fragility of the situation. This effect is especially pronounced in the Information Age, where ideas can spread to millions of people almost instantly.
One extreme example of this transfer of fragility from the talkers to the doers can be seen in the work of New York Times journalist Thomas Friedman. His op-eds about terrorism helped push the United States toward the Iraq War in 2003. However, it was other people who faced the consequences of Friedman’s ideas. Friedman himself got to sit safely behind his computer and continue writing.
Many of our systems operate on punishment. For example, someone who drives drunk could face heavy penalties like fines and jail time. If the charges are severe enough, that person might never be allowed to drive again. However, for some reason, the Thomas Friedmans of the world don’t face any consequences at all.
The talkers have another advantage as well: They get the benefit of hindsight. While prediction is a notoriously unreliable art, postdiction (talking after an event has happened) makes one look infallible. It gives them the chance to cherry-pick events or outright lie about them.
Given the chance, an armchair expert will tell you that he saw it coming (whatever it was). Furthermore, he’ll say that given his great track record, you’d be foolish not to listen to everything he has to say in the future. Of course, any future predictions he makes will cause the usual fragility.
Our modern preoccupation with knowledge plays into this source of societal fragility. Knowledge means talk; academics, journalists, and everyday people can just talk about their ideas and their predictions, and there’s little (if any) need to back up their statements with evidence. Winning a debate nowadays isn’t about being correct, it’s about being charismatic. The weight of an argument is measured in Facebook likes.
Like the builder in Hammurabi’s time, everyone should have a stake in what they put into the world, whether that’s concrete goods or words on paper. For example, if you write an article calling for war, you should be the first one volunteering to fight it.
We’ve talked about journalists and talkers transferring fragility onto those who put their ideas into practice. However, nothing transfers more fragility than the stock market. The U.S. stock market in particular gives all the benefit to corporate executives and managers, while shunting all the harm to the taxpayers.
The problem is that the corporate types have no stakes. This seems counterintuitive, since they work for and represent the companies in question, but we’ve seen time and again how major corporations get bailed out (with taxpayer money) anytime the market takes a serious hit.
As a result, the corporations—and especially their high-ranking executives—become highly antifragile. They benefit directly from random events and shocks to the market. Even if something so significant happens that the company lets an executive go, you can be sure they’ll leave with a hefty severance package. Executives get all of the upside and none of the downside.
That’s because the downside falls to the ordinary folk, the ones who don’t have so many options available. The U.S. stock market has cost retirees trillions of dollars, while corporate managers have gotten richer by hundreds of billions over the same time period. The banks are even worse—they’ve lost more money than they’ve ever made, but the cost falls to the taxpayer while the CEOs rake in enormous bonuses.
The reason for all this is that the largest corporations and banks have disproportionate control over the government through their lobbyists. With the state more or less in their pockets, they have free rein to take our money with their products and then take it again when the stock market falls.
Clearly, something needs to change to give bankers and executives some stakes in their own companies. Back in the 1300s, Catalonia had a tradition of beheading bankers whose banks failed; however, modern people may find that distasteful. Still, in some countries like Brazil, bankers are held personally liable for any losses up to the value of their own assets. If U.S. bankers were held to similar standards, perhaps the market wouldn’t crash so often.
As it stands, the problem is that these transfers of fragility from executives and bankers to the common people are perfectly legal (though certainly not ethical). Insiders can take advantage of the system’s enormous size and complexity to exploit weaknesses and loopholes—remember, size and complexity increase fragility.
For example, a former vice chairman of the U.S. Federal Reserve—now employed by a private bank—came up with a way to get around the usual limits on investment insurance by breaking up a large investment into numerous different accounts. This would benefit his clients, and therefore himself, while putting the risk on taxpayers who might have to bail out the bank again.
It was legal within the rules of the system, but by no means ethical. To make matters worse, this same person used the prestige of his former position to argue against the government increasing the insurance limit on individuals.
Again, these ethical issues arise because of stakes. This case is actually worse than that of the thinkers who have no stake in their ideas. This former vice chairman had an inverse stake: He had nothing to lose but a lot to gain.
However, you’ll often see justifications after the fact about why these actions were ethical. With the options available through hindsight and cherry-picking, it’s easy to fit selfish actions to a narrative about the common good. This is simply applying the benefits of optionality to ethics.
We see this commonly from American gun lobbyists, for example, who claim that widespread ownership of guns improves public safety and benefits the country (despite all evidence to the contrary). More generally, people often fit their beliefs to their actions rather than their actions to their beliefs.
Another legal-but-unethical argument that comes up a lot is “everyone’s doing it.” This is especially common in academia: Teachers have to teach based on the textbooks and the curriculum, even if they know what they’re teaching isn’t accurate.
Then students take that incorrect knowledge into their professional lives, where companies expect them to use the same theories and rules that everyone else uses—again, even when those ideas are wrong and dangerous. That happens a lot in economics, with many of the results we’ve previously discussed in this section.
In short: Legal doesn’t mean ethical, and people often put their ethics aside when they have something to gain. However, those people might use the options granted by arguing after the fact to make their actions seem ethical. One of the most common after-the-fact arguments is that it’s natural to want to make money, as if that somehow excuses the harm done to others in the pursuit of wealth.
As in many other areas, we should progress by negation. If we removed these sources of fragility—that is, the huge and complex organizations managed by people with nothing to lose and a lot to gain—we’d create a much better system.
Everything that exists is either helped or harmed by randomness and uncertainty. Every idea and example in the preceding essay is derived from that one precept. If there are cases that don’t seem to flow from it, that’s only because nature often operates at a level that baffles our limited understanding—incidentally, the same reason why our predictions and models fail so often.
Systems that are large, rigidly controlled, and grounded in theories and predictions are fragile. They break easily under stress, and often cause harm to the people who rely on them. Large corporations, governments, and the stock market are all fragile.
Systems that grow and evolve naturally—systems that are alive, whether literally or metaphorically—are antifragile. Rather than breaking, they’re strengthened by random shocks that eliminate the weak parts and allow the strong parts to grow.
Antifragile systems tend to be smaller, like the loose collection of cantons that make up the government of Switzerland. While they may cause some apparent instability on a small scale, on a large scale they’re much more stable and less prone to collapse than large systems with top-down controls.
Perhaps the best way to seek antifragility is progress through negation; rather than trying to seek out new, antifragile ideas and practices, we can start by simply eliminating the fragile ones.
Modern society has attempted to take the risk out of life and, in doing so, has accidentally introduced a great deal more. We should remember the barbell model—a combination of protection and risk-taking in our endeavors—and reject our modern ideas of perfect efficiency and prediction.
Risk and variation are the essence of life. Food only tastes good because of hunger; joy is meaningless without sadness; results are worthless without effort. Finally, ethics only exist when there’s some personal risk to holding them. The best way to check whether you’re still alive is to ask yourself if you like variety—are you made stronger by random events, or do they break you?