In Algorithms to Live By, Brian Christian and Tom Griffiths make the case that computer science, a field that’s typically seen as highly specialized, actually contains a wealth of practical knowledge we can use to improve our lives. Computers can process tasks with blinding efficiency and quickly come up with creative solutions to complex problems. The authors of Algorithms to Live By argue that, by utilizing the same strategies as computers, we can do the same.
The authors assert that this is true because humans and computers face very similar problems. Both humans and computers are motivated to use their limited resources (which include memory, attention, and time) as optimally as possible. Consequently, many of the algorithms, or sets of instructions, that computers use to solve their problems work just as well in our own lives.
We’re going to discuss all eleven of Christian and Griffiths’s “algorithms to live by,” which we’ve divided into four categories: First, we’ll take a look at algorithms intended to help you make better decisions. Second, we’ll detail some algorithms to help you organize your life. Third, we’ll show off algorithms to help you solve difficult problems. Finally, we’ll discuss a couple of miscellaneous algorithms that don’t fit into the other categories.
Is the Brain Really a Computer?
Christian and Griffiths aren’t the first to compare the brain to a computer as the basis for their argument—researchers have been using this analogy for decades. Today, the question of whether or not we should view the brain as a complex computer is at the heart of a fierce debate.
Some experts claim that the metaphor is limiting insights on the cutting edge of neuroscience more than it’s aiding them. They poke holes in the metaphor, pointing out ways in which the brain behaves unlike a computer and arguing that such inaccuracies will lead researchers to misguided assumptions.
On the other hand, other experts argue that the brain-as-computer metaphor has not yet outlived its usefulness. In their eyes, it doesn’t matter if the brain doesn’t act like a computer—what matters is the fact that the brain accomplishes many of the same functions as computers do: It intakes, processes, and exports information. The fact that your brain computes makes it a computer.
Christian and Griffiths’s first algorithm is: To choose the best from a series of options, explore without committing for the first 37%, then commit to the next top pick you see. This algorithm is designed to solve something mathematicians call an “optimal stopping problem”—when faced with a series of options, when do you settle down and commit to the opportunity in front of you if you don’t know what opportunities will be available in the future?
For example, imagine you’re looking for a job and know your skills are in high demand. After a couple of days of searching, you receive an offer out of the blue that’s better than any of the available positions you’ve seen so far. However, it doesn’t have everything you’re looking for. Do you take it or keep searching for better options?
According to Christian and Griffiths, statisticians have determined that the optimal way to solve this problem is to initially reject all opportunities, exploring your options to get a sense of what quality looks like. Then, at a certain point, commit to the next option that’s better than any you’ve seen so far. By calculating the probability of picking the best option available for every possible “pivot point” from exploration to commitment, researchers have determined that you should explore for the first 37% of options, then commit to the first subsequent opportunity that beats everything from your exploration.
This Optimal Solution Still Falls Short
Mathematician Hannah Fry pokes holes in Christian and Griffiths’s strategy, demonstrating how likely it is to fail: If, following the algorithm, you’re unlucky enough to encounter the best available option during your exploratory period, you’d have to reject it and go on to reject every other option available, as none will be better than what you’ve encountered already. Even though Christian and Griffiths are offering a mathematically optimal algorithm, the odds of you finding the best option, she states, are a dismal 37%.
Fry does, however, offer a solution. Christian and Griffiths define success as claiming the best opportunity available, but if you’re willing to accept an opportunity that’s good, but not the best, you can vastly increase your chance of ending up satisfied.
If you’re okay with an option in the top 5%, for example, you should begin your commitment period just 22% of the way through. According to Fry, this raises your chance of success from 37% to 57%. If you’re willing to accept an option in the top 15%, you can pivot 19% of the way through for a whopping 78% chance of success.
Christian and Griffiths’s next algorithm is a broader directive that applies to any area of your life you want to improve: To optimize your life, pursue whatever opportunity has a chance to be the greatest.
The authors frame life as a complex “multi-armed bandit” problem, referring to a model computer scientists use in machine learning. The multi-armed bandit is a theoretical experiment in which a decision-making agent is presented with a row of slot machines (“one-armed bandits”) and must try out different machines, learning from the outcomes to figure out which will pay off the most.
Christian and Griffiths explain that the most successful solutions to the multi-armed bandit problem are “Upper Confidence Bound” algorithms, which recommend making decisions based on your options’ best-case scenarios. Pursue whatever opportunity in life has the potential to pay out the most, even if you think that payoff is extremely unlikely, since the only way to know for sure whether or not it’ll pay off is to test it yourself. Then, if you’ve given something a shot and determined that it’s not worth your while, adjust accordingly and shoot for the moon somewhere else.
Preparing for Black Swans
Nassim Nicholas Taleb supports Christian and Griffiths’s advice to pursue opportunities with a low chance of outrageous success, pushing this logic even further in The Black Swan. Taleb offers a version of this idea that’s more extreme than Christian and Griffiths’s, asserting that you should entirely ignore an opportunity’s track record and expected gains, instead focusing on the boundaries of possible outcomes. This includes considering extremely negative outcomes, which Christian and Griffiths don't focus on as much as positive ones.
For example, if a bank has consistently made millions giving out loans over the last forty years, you might claim that it has proven to be a well-paying “bandit” worth investing in. However, Taleb would argue that this track record means nothing and that the nature of loans comes with the ever-present devastating risk that borrowers will default. In other words, even if an opportunity presents an extremely good best-case scenario, you shouldn’t invest if it carries an equally extreme worst-case scenario—a point Christian and Griffiths neglect to consider.
The next algorithm posed by Christian and Griffiths addresses the problem of an unpredictable future: To make better predictions, first, use your prior knowledge of the situation to estimate the chances of something happening, then adjust based on observable data. This strategy of using your prior beliefs to analyze the evidence you have is called “Bayes’s Rule.”
For example, if you want to predict when you’ll receive a raise at work, you might begin by asking a coworker how long it took for them to get a raise, then adjust that estimate based on how you think your boss views your performance.
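To make the mechanic concrete, here’s a rough Python sketch of a Bayesian update for the raise example. It’s our own illustration, not from the book, and every probability in it is a made-up assumption; the point is simply that you multiply your prior belief by the likelihood of the evidence and renormalize.

```python
# Prior: based on a coworker's experience, you guess a raise most likely
# arrives in 6-12 months. (All numbers here are illustrative assumptions.)
prior = {"within 6 months": 0.2, "6-12 months": 0.5, "over a year": 0.3}

# Likelihood: how probable is a glowing review from your boss under each
# hypothesis? (Assumed: the sooner the raise, the likelier the praise.)
likelihood_of_praise = {"within 6 months": 0.9, "6-12 months": 0.6, "over a year": 0.3}

# Bayes's Rule: the posterior is proportional to prior times likelihood.
unnormalized = {h: prior[h] * likelihood_of_praise[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: round(p / total, 2) for h, p in unnormalized.items()}

print(posterior)
# The praise shifts belief toward an earlier raise, roughly:
# {'within 6 months': 0.32, '6-12 months': 0.53, 'over a year': 0.16}
```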
Focus Only on the Information That Matters
In Superforecasting, Philip Tetlock and Dan Gardner agree that proper application of Bayes’s Rule is necessary to make accurate predictions. However, most people are bad at this kind of thinking because to use Bayesian inference, you not only need accurate knowledge of the situation, but you also need to know how impactful each piece of knowledge is. This is where many people trip up: They’re bad at determining what information actually matters. In our example above, you might overestimate how much your job performance hastens your pay raise and assume you’re due for a raise much sooner than you actually are.
Tetlock and Gardner explain that the best “superforecasters” make significantly smaller adjustments in light of new information than the average predictor. In most cases, only a few key facts will have a major impact on your forecast—so, when adjusting your prediction, ignore the vast majority of observable evidence.
Christian and Griffiths’s final algorithm to aid decision-making is as follows: To make better decisions, consider less information.
With this algorithm, the authors address the problem of overfitting. In statistics and machine learning, “overfitting” occurs when a model takes too many variables into account and ends up fitting the quirks of its limited data rather than the underlying pattern, which makes its predictions worse. Christian and Griffiths argue that, in the same way, if you consider too many variables when making a decision, you’ll “overfit,” overestimating the impact of insignificant information and underestimating the details that really matter.
According to Christian and Griffiths, the trick to conquering overfitting is to consciously restrict the amount of information you consider when making decisions. Identify one or two factors that matter the most and ignore everything else. For example, you may decide what job to take solely based on how much you expect to enjoy the work.
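As a rough illustration of why restraint helps, here’s a short Python sketch (our own, with invented data) contrasting a simple one-factor model with a model that fits every quirk of its limited observations. The “overfit” model matches its past data perfectly but typically does worse on new cases.

```python
import random

random.seed(0)

# Made-up scenario: job satisfaction mostly tracks one factor (enjoyment),
# plus noise. We "observe" a handful of past jobs.
def true_satisfaction(enjoyment):
    return 2.0 * enjoyment

train = [(x, true_satisfaction(x) + random.gauss(0, 1.0)) for x in range(1, 6)]
test = [(x, true_satisfaction(x)) for x in (1.5, 3.5, 5.5)]

# Simple model: a straight line through the origin, fit by least squares.
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
def simple(x):
    return slope * x

# Overfit model: a polynomial that passes through every noisy training point
# exactly (Lagrange interpolation), fitting the noise along with the signal.
def overfit(x):
    total = 0.0
    for i, (xi, yi) in enumerate(train):
        term = yi
        for j, (xj, _) in enumerate(train):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def mean_squared_error(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print("simple model error on new cases: ", round(mean_squared_error(simple, test), 2))
print("overfit model error on new cases:", round(mean_squared_error(overfit, test), 2))
# The overfit model nails the training data but usually does worse on new cases.
```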
Minimalism: Stop Overfitting Your Life
Christian and Griffiths assert that to conquer overfitting, you must focus on what matters and ignore everything else. In Minimalism, Joshua Millburn and Ryan Nicodemus apply this logic to life itself.
Modern humans have a tendency to overfit, trying to make themselves happier by adding more to their lives instead of focusing on the few factors that matter. Goods like luxury cars, fancy homes, and picturesque vacations do nothing but distract us from the things in life that offer the most value, like personal health, loving relationships, and a sense of contribution to others.
In general, removing things in your life that don’t add value is a more sustainable path to happiness than constantly trying to add bigger and better new pleasures.
Unlike in other chapters, Christian and Griffiths don’t offer a single algorithm to handle scheduling. Computers use algorithms chosen for their specific needs to determine what tasks to focus on first. Likewise, the authors state that your optimal scheduling algorithm differs based on your goals and priorities.
Christian and Griffiths assert that most of the time, your highest priority is to complete whatever tasks earn you the most value. They advise you to assign a “weight,” a numerical measurement of value, to every item on your to-do list. By dividing this weight by the amount of time it’ll take you to complete the task, you can calculate how much value each task generates per hour of work. Then, at any given moment, simply work on whichever task gives you the most value per hour.
Another scheduling algorithm Christian and Griffiths recommend is “Shortest Processing Time,” which tells you to work on whatever task will take the shortest time to complete. The authors argue that you may choose to use this algorithm if you require motivation, or if you’re stressed and overwhelmed by a large quantity of tasks.
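Here’s a minimal Python sketch (our own, with invented tasks and numbers) of both rules. The first ordering maximizes value per hour; the second, Shortest Processing Time, simply clears the quickest tasks first.

```python
# Each task gets an assumed "weight" (value) and an estimated duration in hours.
tasks = [
    {"name": "write report",  "weight": 10, "hours": 4},
    {"name": "answer emails", "weight": 1,  "hours": 1},
    {"name": "prep slides",   "weight": 6,  "hours": 2},
    {"name": "file expenses", "weight": 1,  "hours": 0.5},
]

# Rule 1: work on whatever earns the most value per hour (weight / time).
by_value_per_hour = sorted(tasks, key=lambda t: t["weight"] / t["hours"], reverse=True)

# Rule 2: Shortest Processing Time -- tackle whatever finishes fastest.
by_shortest_first = sorted(tasks, key=lambda t: t["hours"])

print([t["name"] for t in by_value_per_hour])   # ['prep slides', 'write report', 'file expenses', 'answer emails']
print([t["name"] for t in by_shortest_first])   # ['file expenses', 'answer emails', 'prep slides', 'write report']
```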
Do Your Hardest Tasks First
In the productivity bestseller Eat That Frog!, Brian Tracy places even more importance on the need to weigh your tasks by value. He argues that your most valuable tasks are almost always the most difficult to complete—as a result, most people procrastinate on these major tasks, filling their time with easy busywork that ends up accomplishing very little. Tracy’s thesis is that unless you intentionally tackle difficult high-value tasks first, life will hand you a never-ending supply of easy low-value tasks, and you’ll never get around to doing what’s truly important.
Tracy’s advice conflicts with Christian and Griffiths’s Shortest Processing Time algorithm in that he doesn’t find much use in tackling the shortest tasks first. He argues that, by breaking your most important tasks down into a series of shorter steps, you can translate everything you have to do into tasks that take approximately the same amount of time. Then, all you need to do is rank them by importance.
Christian and Griffiths’s next algorithm is intended to give you easy access to the things you need: To efficiently access any collection, segment it based on frequency of use.
Computers can efficiently search their vast stores of data by grouping what needs to be accessed most frequently and searching these “caches” first. In the same way, Christian and Griffiths recommend you “cache” your physical belongings by creating small piles of your most frequently used clothes, books, and files within arm’s reach of where you’ll need them.
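As a toy illustration of the idea (our own sketch, not the authors’), the snippet below tracks how often each item gets used and keeps only the most-used items in a small “arm’s reach” cache; everything else stays in deeper storage.

```python
from collections import Counter

CACHE_SIZE = 3           # how many items fit within arm's reach
use_counts = Counter()   # running tally of how often each item is used

def use(item):
    use_counts[item] += 1
    # The cache is simply the most frequently used items so far.
    return [name for name, _ in use_counts.most_common(CACHE_SIZE)]

for item in ["stapler", "laptop", "notebook", "laptop", "pen", "laptop", "notebook"]:
    cache = use(item)

print(cache)  # e.g. ['laptop', 'notebook', 'stapler'] -- the pen stays in deep storage
```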
Marie Kondo Rejects This Algorithm
In The Life-Changing Magic of Tidying Up, Marie Kondo argues that sorting your belongings by frequency of use is a common organizational mistake. In her eyes, the seconds you may save by storing everything in “caches” within arm’s reach incur a greater cost: the clutter of countless piles around the house.
Kondo asserts that this kind of “organization” is really disorganization in disguise. More often than not, we’ll drop our belongings wherever we are, then build our routines around the location of these new caches. Additionally, this system gives you no easy way to remember where everything is, so if you need something that’s stored in an unusual place, you’ll struggle to find it.
Christian and Griffiths’s final organizational algorithm dictates the most efficient way to sort a group of items into a specific order. They argue that we should use the same sorting algorithms computers use to arrange files to efficiently sort physical collections in our own lives.
The best sorting strategy Christian and Griffiths have to offer is to first divide the collection into small categories, then rearrange individual items—a computer algorithm known as “Bucket Sort.” This algorithm is based on the fact that sorting gets more difficult with scale. Sorting a large group takes significantly more time than sorting four groups one-fourth of its size.
For example, if your boss were to ask you to sort twenty years’ worth of archived meeting recordings on VHS tapes by date, you would want to use a Bucket Sort: First, divide them into piles by year, then sort each smaller pile by hand.
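Here’s what that might look like as a rough Python sketch (the tape data is invented): a cheap, coarse grouping by year first, then a careful sort within each small pile.

```python
from collections import defaultdict

tapes = [(1997, "Q3 review"), (2002, "budget meeting"), (1997, "offsite"),
         (2010, "all hands"), (2002, "kickoff")]

# Step 1: toss each tape into a coarse bucket by year (fast, no comparisons needed).
buckets = defaultdict(list)
for year, title in tapes:
    buckets[year].append((year, title))

# Step 2: sort the small piles individually, then lay them out in order.
sorted_tapes = []
for year in sorted(buckets):
    sorted_tapes.extend(sorted(buckets[year], key=lambda tape: tape[1]))

print(sorted_tapes)
# [(1997, 'Q3 review'), (1997, 'offsite'), (2002, 'budget meeting'),
#  (2002, 'kickoff'), (2010, 'all hands')]
```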
How Other Sorting Algorithms “Divide and Conquer”
Since sorting gets more difficult with size, many of the most efficient sorting algorithms involve separating the collection into smaller groups, just like Bucket Sort. These are known in computer science as “divide-and-conquer algorithms.”
One of the most popular divide-and-conquer algorithms that Christian and Griffiths chose to exclude is called “quicksort.” With quicksort, you pick an item to be your “pivot” and divide the entire collection into two groups based on whether they should be before or after the pivot. You repeat with a new “pivot” within each group until the whole list is sorted.
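A bare-bones version of quicksort looks something like this (our own sketch): pick a pivot, split the rest into “before” and “after” groups, and recurse on each group.

```python
def quicksort(items):
    if len(items) <= 1:
        return items                       # a group of 0 or 1 items is already sorted
    pivot, rest = items[0], items[1:]
    before = [x for x in rest if x <= pivot]
    after = [x for x in rest if x > pivot]
    return quicksort(before) + [pivot] + quicksort(after)

print(quicksort([7, 2, 9, 4, 4, 1]))  # [1, 2, 4, 4, 7, 9]
```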
Divide-and-conquer algorithms come in handy in situations where Bucket Sort doesn’t work well. If many of the items in your collection are too similar, you won’t be able to come up with buckets that evenly divide them. And if the buckets you create divide your collection less evenly than you expect, much of the time you spend bucket sorting is wasted.
The world is incredibly complex, and many problems can’t be solved precisely, at least not in any reasonable amount of time. Christian and Griffiths argue that the best way to solve problems like these is to strategically embrace imperfection.
Even when experts use computers to make exact calculations, they often trade away accuracy to save time. From this, Christian and Griffiths conclude that simply lowering your standard for success is often necessary to keep moving forward. In some cases where it’s impossible to find the perfect solution, getting close is just as good.
In other cases, Christian and Griffiths suggest you employ the mathematical problem-solving strategy of “constraint relaxation.” By removing some constraints and solving an easier version of your problem, you spark new ideas to help solve the original problem.
Quantum Computing Could Solve Impossible Problems
Christian and Griffiths argue that imperfect strategies such as constraint relaxation are necessary because some problems are simply impossible for us to solve. However, in the near future, we may not need to make this compromise. Some believe that we’re quickly approaching a watershed moment in computer science in which many of the problems we see as impossible will become solvable—thanks to “quantum computing.”
Instead of processing information as definite ones and zeroes like a traditional computer, quantum computers work with quantum bits (“qubits”) that can exist in a blend of both values at once, behaving like something totally new. This isn’t just theory: Quantum computers already exist, and they can, in principle, perform certain calculations far faster than any traditional machine. For now, they make too many errors to be of much practical use, but engineers are working hard to fix this in the near future.
Christian and Griffiths’s next algorithm is all about the power of randomness: To move past dead ends, act randomly.
The authors explain that computers use something called the “hill-climbing” algorithm to solve problems. They calculate a solution, then slowly improve it by testing out small adjustments. When developing their problem-solving strategies, people naturally follow a similar process. However, both computers and humans run into the same issue with hill climbing: Eventually, they hit a “local maximum”—a solution that can’t be improved by small adjustments yet is far from the best solution available.
According to Christian and Griffiths, the way to escape local maxima is an injection of irrational randomness. By making a few random, intentionally suboptimal decisions, you can discover new solutions you couldn’t see before and get unstuck. Feeling stagnant and directionless in life? Pick up a random new hobby or move to a random new town.
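Here’s a toy Python sketch (our own, with an invented “landscape”) of the idea: pure hill climbing gets stuck on the nearest small peak, but mixing in occasional random leaps lets the search find the better peak.

```python
import random

# Made-up landscape: a small peak near x = 2 (height 4) and a better one near x = 8 (height 9).
def score(x):
    return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 9

def climb(steps=1000, leap_chance=0.1):
    x = random.uniform(0, 10)
    best = x
    for _ in range(steps):
        if random.random() < leap_chance:
            candidate = random.uniform(0, 10)            # occasional random leap
        else:
            candidate = x + random.uniform(-0.2, 0.2)    # small local tweak
        if score(candidate) > score(x):                  # only keep improvements
            x = candidate
        if score(x) > score(best):
            best = x
    return best

random.seed(1)
print(round(climb(leap_chance=0.0), 1))  # often stuck near 2, the local maximum
print(round(climb(leap_chance=0.1), 1))  # usually ends up near 8, the better peak
```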
Enlightenment Through Extreme Randomness
In How to Live, Derek Sivers takes Christian and Griffiths’s argument to the extreme, advocating for a fulfilling life entirely built around randomness.
Like Christian and Griffiths, Sivers points out that random decision-making lets you encounter valuable experiences that you never would have intentionally chosen. According to Sivers, these random experiences will transform you. You’ll no longer base your identity or self-worth on your career or the way you dress since you didn’t choose them.
In fact, he argues that by making your decisions randomly, you can live a life entirely without ego. You never need to worry about whether or not you’re making the responsible choice, or if you’re doing everything you can to ensure a good future. Instead, you’re free to live wholly in the present, enjoying life as it is instead of how it could be.
This next algorithm shows us how we should view the rules that govern our society: To prevent collective harm, design the rules of the game to create win-win scenarios.
Christian and Griffiths explain that we can view many systems in our society as competitive games and analyze them using game theory. In game theory, a “Nash equilibrium” occurs when every player is using the best strategy available to them given what everyone else is doing, so no one has an incentive to change course and the game’s outcome stabilizes. According to the authors, it’s the job of policymakers to ensure that the system’s Nash equilibrium results in an outcome that benefits all players—a process known as “mechanism design.”
For example, laws that regulate overfishing are meant to adjust the Nash equilibrium of the “game” of commercial fishing. If unregulated, the optimal strategy for each fisher is to catch and sell as many fish as possible. Unfortunately, the fishers in this Nash equilibrium may drive a population of fish to extinction, which harms all players. By penalizing overfishing, the laws cause sustainable fishing to become the new optimal strategy, creating a new Nash equilibrium.
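To illustrate the mechanics (with our own invented payoffs, not the authors’), the sketch below checks every strategy pair in a two-fisher game and reports which pairs are Nash equilibria. Without a penalty, overfishing is every fisher’s best response; adding a penalty shifts the equilibrium to sustainable fishing.

```python
from itertools import product

CHOICES = ["overfish", "sustain"]

def payoff(my_choice, their_choice, penalty):
    base = {"overfish": 10, "sustain": 6}[my_choice]   # overfishing earns more up front
    if my_choice == "overfish":
        base -= penalty                                # regulation penalizes overfishing
    if their_choice == "overfish":
        base -= 4                                      # a depleted fishery hurts everyone
    return base

def best_response(their_choice, penalty):
    return max(CHOICES, key=lambda c: payoff(c, their_choice, penalty))

def nash_equilibria(penalty):
    return [(a, b) for a, b in product(CHOICES, repeat=2)
            if a == best_response(b, penalty) and b == best_response(a, penalty)]

print(nash_equilibria(penalty=0))  # [('overfish', 'overfish')] -- everyone overfishes
print(nash_equilibria(penalty=5))  # [('sustain', 'sustain')] -- the rules changed the equilibrium
```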
Nash Equilibria Are Imprecise Tools
Christian and Griffiths frame the Nash equilibrium as a useful tool for policymakers. However, some argue that Nash equilibria are nearly useless for this purpose. Even Christian and Griffiths admit that most of the time, Nash equilibria are impossible to predict or calculate algorithmically, hindering their practical use.
On the other hand, the concept of Nash equilibria is, at the very least, useful as a general intellectual framework, even if they can’t be precisely calculated. The concepts and vocabulary of Nashian game theory help decision-makers ask the right questions and glean new insights. For example, a lawmaker introducing a new policy doesn’t need to mathematically calculate its exact equilibrium—all game theory needs to do is spark the question: “Will following this policy be the optimal strategy for everyone?” If not, others will find a way around it, and the policy likely won’t function properly. In short: Vague, imprecise game theory is still useful.
To conclude, we’ll cover an algorithm that Christian and Griffiths draw from the Internet’s networking protocols: To communicate effectively, listeners need to signal that they’ve received the message.
The authors explain that when a computer connects to a server, both sides exchange “acknowledgment packets,” or “ACKs,” to ensure that the connection is stable. These are short messages that tell the other computer its message has been received. ACKs are a vital part of the communication process, and they make up a huge portion of all uploaded data.
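The snippet below is a toy “stop-and-wait” sketch of the pattern (our own simplification, not the real TCP protocol): the sender keeps retransmitting each message until an acknowledgment comes back.

```python
import random

random.seed(3)

def unreliable_send(message, loss_rate=0.4):
    """Deliver the message over a lossy channel; return an ACK only if it arrives."""
    if random.random() > loss_rate:
        print(f"  receiver got {message!r} -> sending ACK")
        return "ACK"
    return None  # the message (or its ACK) was lost in transit

for message in ["hello", "did you get my last message?"]:
    attempts = 0
    while True:
        attempts += 1
        if unreliable_send(message) == "ACK":
            break
    print(f"delivered {message!r} after {attempts} attempt(s)")
```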
Christian and Griffiths assert that, similarly, acknowledgment is an extremely important part of human communication, and it’s one that we often overlook. Recent research in the field of linguistics has put renewed focus on “backchannels,” a listener’s short interjections that acknowledge a speaker’s message without ending their turn to speak. It takes more than being quiet and polite to be a good listener—if you don’t give active feedback, communication falls apart.
What Does a Good Listener Really Look Like?
Common advice on being a good listener often contradicts Christian and Griffiths’s perspective—for example, Dale Carnegie’s classic How to Win Friends and Influence People argues that the best conversationalists do nothing but listen attentively while the other person talks. However, recent research indicates that this isn’t the whole picture: As Christian and Griffiths maintain, the best listeners are much more active in conversation than Carnegie claims.
What makes the situation confusing is the fact that many kinds of listener interjections are unwelcome. For example, a common criticism of bad listeners is that they attempt to solve problems right away instead of just listening—unlike in network transmission, not all “acknowledgment packets” make your conversation partner feel heard.
A good rule of thumb is to verbally confirm you understand the situation and how the speaker feels before offering any suggestions or advice. By rephrasing the speaker’s message to confirm your understanding, you not only demonstrate that you were listening with care, but you also help the speaker better understand their own situation. Your conversation partner will appreciate you on both counts.
Algorithms to Live By is an instruction manual for life: a collection of unconventional wisdom drawn from the field of computer science. Computers and humans share many of the same problems, and the same solutions that have allowed us to optimize the field of computing may be the ticket to optimizing our own lives. Bestselling author Brian Christian and cognitive scientist Tom Griffiths team up to create a library of “algorithms to live by”—sets of instructions to help you make smarter decisions, organize your life, and get the most out of your day.
Brian Christian is one of the rare few to find critical success in both scientific and artistic fields. His academic work on human biases has been published in periodicals such as Cognitive Science, and his popular science writing has been featured in Best American Science & Nature Writing. His poetry, on the other hand, has been nominated for the Pushcart Prize and adapted into a short film.
Christian has received the most public recognition for his non-fiction writing career—along with Algorithms to Live By, he is the author of the award-winning books The Most Human Human (2011), an exploration of what it means to be human in light of rapidly advancing artificial intelligence, and The Alignment Problem (2020), a diagnosis of what Christian sees as the most serious danger posed by the rise of artificial intelligence—the threat that computers will do what we tell them to do, but not what we truly want or need.
He holds a bachelor’s degree in computer science and philosophy from Brown University, as well as an MFA in poetry from the University of Washington.
Tom Griffiths is an esteemed career academic specializing in human cognition. Most notably, Griffiths served at UC Berkeley for 12 years, entering as an assistant professor in 2006, becoming director of its Institute of Cognitive and Brain Sciences in 2010, and being promoted to full professor in 2015. He took a job at Princeton University in 2018 and currently heads the Computational Cognitive Science Lab there.
Griffiths has a Ph.D. in Psychology as well as master’s degrees in Psychology and Statistics from Stanford University.
Publisher: Macmillan Publishers
Imprint: Henry Holt and Co.
Algorithms to Live By was published in 2016 and would eventually become Brian Christian’s most popular book, gaining more attention than The Most Human Human and The Alignment Problem. It’s Griffiths’s first and only published book.
Algorithms to Live By is the odd one out in Christian’s bibliography—it’s far more prescriptive and less exploratory than his other works. Additionally, Tom Griffiths’s contributions make Algorithms feel more academic, like an entry-level computer science course, compared to the creative nonfiction style of Christian’s other books.
MIT Technology Review named Algorithms to Live By one of their best books of the year. The book also received positive reviews in The Guardian, The New York Times, and Forbes (earning a spot on the magazine’s list of the “Must-Read Brain Books of 2016”). The book became Amazon’s #1 bestselling book in Science, as well as Audible’s #1 bestselling nonfiction audiobook.
Audiences praise the book for its accessible approach to computer science, giving the authors credit for effortlessly deconstructing complicated ideas without oversimplifying or talking down to the reader. Many readers, even those who were initially skeptical of the premise, claim they were surprised by how practical and useful the authors’ algorithms proved to be. Additionally, audiences enjoy the broad range of topics, which keeps the book consistently fresh and interesting.
Those who dislike the book feel that the authors’ claims don’t translate well into the real world, arguing that the book works better as an introduction to computer science than as a practical guide to live by. Some found Christian and Griffiths’s tone too dry to hold their attention for the entirety of the book. Finally, some disagreed with the book’s assumption that productivity and efficiency are universal human goals, arguing that Algorithms’s advice focuses less on how to live a good life and more on how to complete tasks like a machine.
Algorithms to Live By draws its ideas from the field of computer science, as well as adjacent mathematical disciplines such as statistics and game theory. Christian and Griffiths describe the strategies and solutions that scientists in these fields use to solve computational problems, then speculate as to how humans can apply these same strategies and solutions to their own lives.
That being said, Christian and Griffiths approach Algorithms to Live By as a primer in computer science first and a collection of life advice second. Within each chapter, the authors trace their way through a concept from computer science, connecting it to real life when convenient but prioritizing a coherent lesson in computing. Chapters 3 (“Sorting”) and 10 (“Networking”) especially emphasize computer science over real-life practicality.
Each chapter of Algorithms to Live By is centered around a different area of computer science with very little overlap. We’ve grouped the chapters into thematic parts based on the function of each algorithm in your life to make them easier to apply.
Within each part, we’ve reordered the chapters to create a more logical progression, beginning with the topics that are the most practical and usable in everyday life and advancing to those more specialized to the field of computer science.
Each chapter in this guide follows the same format: We’ll begin by explaining the chapter’s “algorithm to live by” and establishing the computer science background necessary to understand it. Then, we’ll examine the complexities of each algorithm with “additional instructions” for you to follow to best apply them to your specific circumstances.
Along the way, we’ll compare Christian and Griffiths’s computer science life advice to more traditional self-help perspectives on the same topics—for example, we’ll show how Marie Kondo disagrees with Christian and Griffiths’s organizational methods in The Life-Changing Magic of Tidying Up. We’ll also enrich Christian and Griffiths’s lessons in computer science with relevant topics not found in the book, such as quantum computing.
In Algorithms to Live By, Brian Christian and Tom Griffiths make the case that the field of computer science has given us mathematically-proven sets of instructions, or algorithms, that show us how best to live our lives.
We’ve divided this guide into parts, grouping these algorithms according to purpose. We’ll begin with Part 1, a collection of algorithms to help you make better decisions. In Part 2, we’ll discuss algorithms to help you organize your life; in Part 3, we’ll detail algorithms you can use to solve difficult problems; and in Part 4, we’ll cover a couple of miscellaneous algorithms that don’t fit into our other categories.
Within each chapter, we’ll start by defining an algorithm and providing necessary context from the field of computer science. Then, we’ll offer a set of additional instructions to help you best apply the algorithm in various situations.
In this chapter, we’ll include an introduction to the idea of algorithms to live by and explain why we’d want to use them. Then, we’ll discuss our first decision-making algorithm.
Christian and Griffiths anticipate an objection to their argument for algorithms to live by: Why would we want to live our lives like computers?
The authors assert that humans and computers face very similar problems. Both humans and computers are motivated to use their limited resources (which include memory, attention, and time) as optimally as possible. Consequently, many of the strategies computers use to manage their resources work just as well in our own lives.
Some may argue that making all our decisions with the cold logic of a machine sounds unappealing. We don’t want to become robots—we’d rather be happy than “efficient,” right? Christian and Griffiths reply that computer science has become quite sophisticated, and computers think more like humans (and less like robots) than we would imagine.
When dealing with complex problems, computers sacrifice accuracy to save time and guess when they don’t know the answer, just like we do. In a way, we’ve taught computers to operate on “instinct.” Thus, consciously adopting algorithms like the ones in this book is closer to tweaking our existing habits than obeying the dictates of an all-knowing machine.
In fact, Christian and Griffiths point out that we already operate according to algorithms every day. Any set of instructions, whether it be sage wisdom from your grandfather or a YouTube video on how to fold a paper airplane, is an algorithm. The algorithms offered here are the same—they’re just based on mathematics.
Christian and Griffiths assert that the understanding of computer science you’ll gain from their book will equip you to think critically and better navigate the world, even outside of their prescribed “algorithms to live by.”
Is the Brain Really a Computer?
Christian and Griffiths aren’t the first to compare the brain to a computer as the basis for their argument. Researchers have been using this analogy for decades, and it’s an accepted framework used in thousands of scientific papers every year. Today, the question of whether or not we should view the brain as a complex computer is at the heart of a fierce debate.
Some experts claim that the metaphor is limiting insights on the cutting edge of neuroscience more than it’s aiding them. They poke holes in the metaphor, pointing out ways in which the brain behaves unlike a computer and arguing that such inaccuracies will lead researchers to misguided assumptions. For example, scientists have referred to brain processes being dictated by computer-like “neural code” since the 1920s. However, interpreting the brain through this lens leads researchers to focus on the parts of the brain that are easier to “decode” and ignore what they can’t as easily see.
On the other hand, other experts argue that the brain-as-computer metaphor has not yet outlived its usefulness. In their eyes, it doesn’t matter if the brain doesn’t act like a computer—what matters is the fact that the brain accomplishes many of the same functions as computers do: It intakes information from your environment, processes that information, and sends out signals based on that information to the rest of your body. Even if we don’t know how it works, and even if it doesn’t work like any kind of computer we’ve built, the fact that your brain is computing makes it a computer.
Needless to say, Christian and Griffiths would fall into this latter camp. In their eyes, brains and computers are more alike than they are different. This makes it possible for you to successfully apply algorithms from computer science in your own life and reap the benefits, like enhanced critical thinking, that they claim doing so provides.
Christian and Griffiths’s first algorithm is: To choose the best from a series of options, explore without committing for the first 37%, then commit to the next top pick you see. This algorithm is designed to solve something mathematicians call an “optimal stopping problem”—something we often encounter in daily life.
As Christian and Griffiths explain, optimal stopping problems require you to find the ideal stopping point—to know when to settle down and commit to the opportunity in front of you, without knowing what opportunities will be available in the future. For example, imagine you’re looking for a job and know your skills are in high demand. After a couple of days of searching, you receive an offer out of the blue that’s better than any of the available positions you’ve seen so far. However, it doesn’t have everything you’re looking for. Do you take it or keep searching for better options?
Christian and Griffiths argue that the crux of an optimal stopping problem is the trade-off between the information you gain from exploring your options and the increasing risk of passing up the best opportunity. If you take the job and quit searching, you might miss out on a dream job you didn’t know was available. If you pass on the job, you may waste months searching and never find anything better. It doesn’t seem like there’s an easy answer.
Algorithms Like This May Reduce Decision Fatigue
Using a precisely calculated algorithm to make these kinds of life decisions may seem overly restrictive—it doesn’t give you much flexibility to adapt to unusual circumstances. However, Christian and Griffiths’s strict instructions may benefit you by reducing the number of decisions you have to make. Research shows that the decisions you make throughout the day may drain your mental energy, making it more likely that you’ll give in to unhealthy impulses—for example, watching TV instead of going out for a nighttime jog. By cutting decisions out of your life, you preserve mental energy for the decisions that matter.
This theory of “decision fatigue” supports Christian and Griffiths’s overall mission with this book: to make life better through algorithms. Once you’ve chosen to follow the predetermined instructions of an algorithm, your decisions have been made for you—in theory, eliminating the mental burden and making your life easier.
Imagine you’re faced with the optimal stopping problem of which job to accept. Instead of spending stressful hours weighing an exhaustive list of pros and cons, you can trust that the algorithm will make the best possible decision for you and move on. This way, you’ll be left with more mental energy to spend on other areas of your life.
Christian and Griffiths explain how to solve this problem: First, estimate how many opportunities you’ll be offered, or how long your search will last. If you plan to be job hunting for no more than three months, you can use that timeframe as your baseline. Next, find the point exactly 37% of the way through that range (roughly 34 days into a three-month job hunt). This point separates your exploratory period from your commitment period.
For the duration of your exploratory period, refrain from committing to any opportunity, no matter how good it seems. You don’t yet have the perspective necessary to determine if it’s truly a good opportunity. Once you enter your commitment period, commit to the next option you find that’s better than any you’ve encountered so far. Christian and Griffiths assert that this opportunity has the statistically highest chance of being the best in the entire range.
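To see the rule in action, here’s a rough simulation in Python (our own sketch, not from the book). It skips the first 37% of randomly ordered candidates, then takes the first one that beats everything seen so far, and it estimates how often that choice turns out to be the very best candidate.

```python
import random

def pick_with_37_rule(scores):
    cutoff = int(len(scores) * 0.37)
    benchmark = max(scores[:cutoff], default=float("-inf"))   # best seen while exploring
    for score in scores[cutoff:]:
        if score > benchmark:
            return score                                      # first option that beats the benchmark
    return scores[-1]                                         # otherwise you're left with the last option

def success_rate(n_options=100, trials=10_000):
    wins = 0
    for _ in range(trials):
        scores = random.sample(range(1_000_000), n_options)   # candidates arrive in random order
        wins += pick_with_37_rule(scores) == max(scores)
    return wins / trials

random.seed(0)
print(success_rate())  # hovers around 0.37, the figure Christian and Griffiths cite
```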
This Optimal Solution Still Falls Short
Mathematician Hannah Fry pokes holes in Christian and Griffiths’s strategy, demonstrating how likely it is to fail: If, following the algorithm, you’re unlucky enough to encounter the best available option during your exploratory period, you’d have to reject it and go on to reject every other option available, as none will be better than what you’ve encountered already. Even though Christian and Griffiths are offering a mathematically optimal algorithm, the odds of you finding the best option, she states, are a dismal 37%.
Fry does, however, offer a solution. Christian and Griffiths define success as claiming the best opportunity available, but if you’re willing to accept an opportunity that’s good, but not the best, you can vastly increase your chance of ending up satisfied.
If you’re okay with an option in the top 5%, for example, you should begin your commitment period just 22% of the way through. According to Fry, this raises your chance of success from 37% to 57%. If you’re willing to accept an option in the top 15%, you can pivot 19% of the way through for a whopping 78% chance of success.
Why do the authors pick 37% as the point to pivot? Christian and Griffiths explain: First, understand that at any point, you’ll want to pick the best option that you’ve seen so far. The more options you consider, the better an opportunity needs to be for it to be the best—you raise your standards, and your final choice will be higher quality. However, since you can’t go back to previously rejected options, your chance of finding good opportunities goes down as the number of options remaining drops.
Christian and Griffiths argue that this situation requires a balance—a point where you’ve explored enough to know high quality when you see it but have reasonable standards that keep you from rejecting your best available options.
By calculating the chance of finding the best option available for every possible time to pivot from exploration to commitment, statisticians have found that 37% of the way through is the best time to pivot no matter how many options there are. This provides a remarkably precise rule of thumb to use whenever you’re unsure when to commit to something.
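A similar simulation (again our own sketch) makes the trade-off visible by estimating the success rate for several possible pivot points. The estimates peak near the 37% mark: pivot earlier and your standards are too low; pivot later and you’ve already rejected too many good options.

```python
import random

def success_rate(cutoff_fraction, n_options=50, trials=20_000):
    wins = 0
    for _ in range(trials):
        scores = random.sample(range(1_000_000), n_options)
        cutoff = int(n_options * cutoff_fraction)
        benchmark = max(scores[:cutoff], default=float("-inf"))
        choice = next((s for s in scores[cutoff:] if s > benchmark), scores[-1])
        wins += choice == max(scores)
    return wins / trials

random.seed(0)
for fraction in (0.10, 0.25, 0.37, 0.50, 0.75):
    print(f"pivot at {fraction:.0%}: ~{success_rate(fraction):.2f} chance of landing the best option")
```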
At the Beginning of Your Career, Don’t Be This Choosy
Christian and Griffiths argue that you should avoid committing until the 37% point and only accept opportunities that meet your lofty standards, as this is the most mathematically optimal way of doing things. In your professional life, however, this may not always be the best advice: Experts agree that, early on in your career, any opportunity is better than nothing, and you should say “yes” to everything you can. At this stage, it’s better to quickly settle for any opportunity that will help you build up experience and connections rather than spending time holding out for the perfect opportunity.
At a certain point, however, you should begin saying “no” to everything except the best opportunities. As you gain experience and skills, you become more valuable, and as a result, fewer professional opportunities become worth your investment as time goes on. It’s at this point that Christian and Griffiths’s commitment advice (to wait until you come across an opportunity that’s better than anything you’ve seen so far) is the most useful. By accepting subpar work early on, you gain access to better opportunities down the line.
Christian and Griffiths explain that if the opportunities in front of you aren’t a sure thing, lower your standards and begin committing sooner. If there’s a 50/50 chance you’ll be refused—for instance, after a job application—start attempting to commit after 25% of your options instead of 37%. The greater the chance you’ll be rejected, the earlier you should start trying to commit.
(Shortform note: Christian and Griffiths stick to the math here and don’t consider that in real life, you can reduce your chance of future rejections by changing your approach, in turn reducing your need to commit sooner. After every rejection, give yourself some time to emotionally recover, then deliberately reflect on what went wrong and try to understand what you could have done differently. Seek advice from others if you need a more objective perspective. If you learn and grow from your rejections, you transform them from a tragedy into a gift.)
Christian and Griffiths explain that if you have a metric that allows you to rate each opportunity against an objective scale, your decision becomes much easier. If you’re trying to decide what airline to fly with, for example, you can know the price range typically offered for your destination before exploring your options. This way, you can tell if a ticket price is good without having to directly compare it to other options.
In this case, Christian and Griffiths state that the best course of action is to set a threshold at the beginning of your search and accept the first opportunity that beats that threshold. If the first airline you consider is offering an objectively low price, you don’t need to have an exploratory period at all—buy it!
The Downside of Objective Metrics
As Christian and Griffiths explain, using an objective measurement makes it easier for you to compare options. However, this strategy can easily backfire. If the metric you’re using misaligns with what you’re trying to accomplish, you could end up chasing the wrong goals and sabotaging yourself. For example, if you judge airline tickets based on their price and nothing else, you may find yourself on a cheap but miserable sixty-hour-long string of connecting flights, ruining your vacation.
This kind of misalignment is very difficult to spot—humans have a powerful cognitive bias that causes them to equate the metric used to measure a strategy’s progress with the strategy itself. Psychologists have termed this phenomenon “surrogation”: Instead of thinking through what airline ticket will result in the best vacation, you implicitly assume that the cheapest airfare means the best vacation.
How can you prevent this from happening to you? Research has shown that using multiple metrics instead of just one greatly reduces the risk of surrogation. When you’re evaluating a series of options, rating them on multiple scales keeps your brain focused on the overarching goal instead of getting attached to any one measurement.
Finally, if every additional option you consider comes with a cost, Christian and Griffiths advise you to commit earlier: Stop searching once you expect the cost of waiting for a better option to outweigh the improvement that option would bring.
For example, if a farmer’s trying to sell a cow, every offer he turns down means he’ll have to continue to pay for its food and care until the next offer comes in. If the expenses of waiting outweigh the additional cash expected from the next better offer, the farmer should commit to selling the cow before the “optimal” time. This is easier to calculate if you have a way to objectively measure your options, as we just discussed.
Christian and Griffiths note that in real life, all optimal stopping problems involve some exploration cost. Every week you go without accepting a job costs you significant living expenses. Additionally, even if there’s no other cost involved, you always incur the cost of lost time. Christian and Griffiths theorize that this perceived time cost is why most people in the real world commit earlier than 37% of the way through their options.
Memento Mori
Many use this idea of the ever-present cost of waiting as the center of their life philosophy. Memento mori, Latin for “remember death,” has in recent years become a rallying cry for those who wish to live life to the fullest. This idea is a central tenet of ancient Stoic philosophy and as such has gained recent popularity alongside the modern Stoic movement. By continually reminding themselves of their mortality, adherents to this philosophy estimate a high cost of waiting and consequently try to commit to the life they want to live as soon as possible.
Christian and Griffiths might argue that this attitude has its weaknesses. If you do everything you can to live life to the fullest every day, you could be missing out on other opportunities with value that aren’t immediately apparent. For example, if you remind yourself that you could die at any moment and use that as justification to stay up all night partying seven nights a week, you may find your life less satisfying than if you had taken more time to explore your options, perhaps discovering a love of science and going to grad school.
Compare your habitual style of commitment to Christian and Griffiths’s “optimal” standard.
Think back to a time when you had to commit to one thing from a series of choices—for example, choosing a project at work or a vacation destination. Are you more the type of person to get excited and rush to commit too soon, or do you spend too much time painstakingly deliberating before committing to something? Explain your answer.
Do you tend to commit before or after the 37% point that Christian and Griffiths recommend? Why do you think this is—do you over- or underestimate the cost of time, perhaps?
Do you feel the need to change this habit? Why or why not?
Describe a decision, big or small, that you’ll have to make in the near future. How do you anticipate Christian and Griffiths’s algorithm will impact this decision, and why? (For example, if you note that you tend to commit far earlier than Christian and Griffiths recommend, you might decide to spend more time comparing the reviews of different laptop computers before buying one.)
Christian and Griffiths’s next algorithm to help you make decisions is a broader directive that applies to any area of your life you want to improve: To optimize your life, pursue whatever opportunity has a chance to be the greatest.
Christian and Griffiths frame life as a complex “multi-armed bandit” problem, referring to a model computer scientists use in machine learning. The multi-armed bandit is a theoretical experiment in which one decision-making agent is presented with a row of slot machines (“one-armed bandits”) and must determine how to maximize their winnings without knowing the odds for any machine. The agent must try out different machines and learn from the outcomes to figure out which will pay off the most.
To do so, the agent strikes a balance between using the machines that have proven to pay out in the past and trying new machines to see if they pay out more—balancing “exploitation” and “exploration,” as they say in computer science.
Christian and Griffiths argue that life works in much the same way. The only way to know for certain if something will make you happy is if you try it for yourself. This could be a place to live, a relationship, or a career—life is about spending more time on the things that make you happy and less on the things that don’t.
Just like in the multi-armed bandit problem, you have to find a balance between trying new things and staying in your comfort zone. The question is: How much of each is best?
Can We Really Measure Our Own Happiness?
Some challenge Christian and Griffiths’s claim that humans can quantify and systematically maximize their happiness like a computer following a multi-armed bandit algorithm. Christian and Griffiths assume that people know what makes them happy and will rationally spend more time on the activities that make them happier. In reality, this isn’t necessarily true.
In Flow, psychologist Mihaly Csikszentmihalyi describes a study in which his team asked participants to report what they were doing and how they were feeling at random times throughout the day. When the participants were engaged in challenging situations, they reported feeling “motivated” and “strong”—a positive state of mind Csikszentmihalyi terms the “flow state.” Conversely, when they lacked challenges, the participants felt “weak” and “dissatisfied.” Overall, the participants reported being far happier in the flow state.
On average, participants reported being in flow 55% of the time at work and only 18% of the time during leisure hours. However, even though they reported being happier at work, the participants universally expressed a desire for more leisure time. These results suggest that if the participants got their wish and no longer had to work, their overall life satisfaction would arguably decrease. In other words, these people didn’t know what made them happy—so much so that they wanted less of it.
This is one example of how human problems (“How do I live a good life?”) may not be as straightforward and solvable as computer problems, despite Christian and Griffiths’s claims to the contrary.
Christian and Griffiths describe the closest thing to an optimal solution to this problem that computer science has to offer. In 1985, mathematicians developed a new approach to the multi-armed bandit problem with the goal of minimizing regret: making it the agent’s first priority to avoid passing up valuable opportunities.
Christian and Griffiths explain that the most successful algorithms at accomplishing this goal are known as “Upper Confidence Bound” algorithms. These algorithms recommend making decisions based on your options’ best-case scenarios. Pursue whatever opportunity in life has the potential to pay out the most, even if you think a jackpot is extremely unlikely, since the only way to know for sure whether or not it’ll pay off is to test it yourself. Then, if you’ve given something a shot and determined that it’s not worth your while, adjust accordingly and shoot for the moon somewhere else.
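Here’s a minimal sketch of UCB1, a standard algorithm from this family (the payout rates below are invented). Each machine’s score is its observed average plus an optimism bonus that shrinks as the machine gets tried more, so promising unknowns get explored and proven winners get exploited.

```python
import math
import random

random.seed(2)
true_payout_rates = [0.3, 0.5, 0.7]   # hidden from the player
pulls = [0, 0, 0]
wins = [0.0, 0.0, 0.0]

def ucb_score(machine, total_pulls):
    if pulls[machine] == 0:
        return float("inf")                                        # untried machines come first
    average = wins[machine] / pulls[machine]
    optimism = math.sqrt(2 * math.log(total_pulls) / pulls[machine])
    return average + optimism                                      # judge by the best-case estimate

for t in range(1, 1001):
    machine = max(range(3), key=lambda m: ucb_score(m, t))
    pulls[machine] += 1
    wins[machine] += random.random() < true_payout_rates[machine]

print(pulls)  # the lion's share of pulls ends up on the last machine, the best payer
```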
Christian and Griffiths suggest you practice this by putting greater faith in the unknown. When you know nothing about a situation—for example, if you’re set up on a blind date or offered an intriguing but unfamiliar job—act as if the best-case scenario is going to happen.
(Shortform note: By this logic, another way you should put faith in the unknown is by persevering when the odds seem stacked against you. If there’s even a slim chance that some opportunity will radically improve your life, you should test the odds for yourself to minimize regret. A friend asked you to start a company with them? Give it your best effort—you may end up millionaires.)
Preparing for Black Swans
Nassim Nicholas Taleb supports Christian and Griffiths’s advice to pursue opportunities with a low chance of outrageous success, pushing this logic even further in The Black Swan. The thesis of The Black Swan is that some influential events are impossible to predict (events he calls “Black Swans”). With this in mind, Taleb advises you to take advantage of “positive Black Swans” by putting yourself in situations where unpredictable events are more likely to help you than harm you.
Taleb offers a version of this idea that’s more extreme than Christian and Griffiths’s, asserting that you should entirely ignore an opportunity’s track record and expected gains, instead focusing on the boundaries of possible outcomes. This includes considering extremely negative outcomes, which Christian and Griffiths don't focus on as much as positive ones.
For example, if a bank has consistently made millions giving out loans over the last forty years, you might consider it a well-paying “bandit” worth investing in based on this success. However, Taleb would argue that this track record means nothing and that the nature of loans comes with the ever-present devastating risk that borrowers will default. In other words, even if an opportunity presents an extremely good best-case scenario, you shouldn’t invest if it carries an equally extreme worst-case scenario—a point Christian and Griffiths neglect to consider.
One exception to this rule involves how much time you have left to act on what you find. Christian and Griffiths state that the value of new discoveries decreases as you run out of time to take advantage of them. Old favorites, on the other hand, are never going to get any worse.
As a result, Christian and Griffiths argue that you should eagerly explore the unknown until a certain point, then do nothing but revel in the favorites you’ve already discovered. If it’s the last day you have with your best friend before they move away, you’re better off spending time with them instead of meeting up with a stranger from a dating app.
Christian and Griffiths note that this correlation between experimentation and time is reflected in the stages of human life. Young children do nothing but explore and experiment. Parents and caretakers provide kids everything they need so they can be free to discover what activities and behavior are most likely to make them happy.
In contrast, elderly people typically quit experimenting altogether. They don’t usually try to meet new people, instead spending most of their time with the close friends and family that they already know will make them happy. Christian and Griffiths argue that this is why studies show that older people are more satisfied with their social lives and overall well-being than young adults. They’re reaping the rewards of a lifetime of successful experimentation.
The Enduring Value of Exploration
Christian and Griffiths make the case that at a certain point, you should stop exploring because you don’t have as much time to take advantage of new discoveries. However, some argue that the act of exploration is fulfilling in and of itself, even if you’re unlikely to discover something life-changing.
Earlier, we discussed the flow state—the feeling of total immersion in the task at hand—and how it’s one of the chief sources of overall life satisfaction. Getting in the flow state requires a certain degree of difficulty: the learning curve and uncertainty involved in trying something new. Unlike machines, we can’t maximize the value in our lives by doing the same thing over and over again—so, despite what Christian and Griffiths say, it may be best to never stop exploring.
Is the Tranquility of Old Age Really Due to Less Exploration?
The lifelong shift away from experimentation that Christian and Griffiths describe isn’t the only existing theory as to why the elderly report greater life satisfaction. Research indicates that older people may simply acquire a healthier mindset as they age, one that makes it easier to cope with stressful situations.
During the Covid-19 pandemic that began in 2020, all age groups were faced with the same struggles—cut off from loved ones and fearful of the dangers of a worsening pandemic. Researchers theorized that if the elderly were happier because of their chosen lifestyle, as Christian and Griffiths claim, then they would report being just as stressed as younger adults once the pandemic put them in the same, totally new (arguably “experimental”) situation. However, the older population maintained their relatively high feelings of life satisfaction throughout the pandemic, suggesting that the source of this emotional stability is largely internal.
Christian and Griffiths cite studies showing that humans tend to explore and experiment beyond what is optimal, unduly trusting what is new and neglecting to take full advantage of opportunities that have proven to be reliable in the past. For example, if the average person were trying out a new pop-up clothing shop, comparing it with the stores they already shop at, they would typically flip-flop between the two instead of figuring out which is better and sticking with it—even if the new store sells them cheaply-made clothes.
However, the authors propose that this human tendency may be more rational than the studies indicate, given the fact that we live in a constantly changing world. It’s often worth giving previously disappointing experiences a second chance in case things have gotten better. Just because one item from the clothing store fell apart after a week doesn’t mean that all of them will.
Despite this, Christian and Griffiths make the case that in today’s world, the payoffs for various options in our lives change less than ever before. Mass production and corporate standards mean that the products and services you choose to buy will be remarkably consistent. According to the authors, your instinct to over-explore is more likely to harm than help you.
Consequently, when you’re making decisions that matter, it’s worth overriding your instincts and trusting the objective records of your past experience. If you got salmonella from a food truck, it’s unlikely you’ll have a satisfying experience eating there again.
The Treadmill of Endless Exploration
According to Christian and Griffiths, the human instinct to over-explore explains our bias toward what is new. In Antifragile, Nassim Taleb similarly criticizes this bias, which he calls “neomania.” According to Taleb, neomania inevitably leads to a “treadmill effect”—the constant pursuit of whatever is newest. Instead of searching for lasting value and sticking with it, you crave the satisfaction of upgrading to something new. As such, you can never be satisfied.
This is another reason why Christian and Griffiths’s suggestion to trust your past experience may be good advice: Focusing on the good things you’ve already discovered keeps you from endlessly obsessing over inconsequential changes.
Part of the reason modern companies stick to the same corporate standards for years, as Christian and Griffiths note, is that they too have realized the value of escaping the treadmill of endless change. For example, despite massive company growth, Jack Daniel’s maintains the same distillation process responsible for years of its whiskey’s success, sourcing its water from the same Tennessee spring. On the other hand, companies with a winning product that seek change for its own sake (the treadmill effect) often end up shooting themselves in the foot—as the legendary failure of Coca-Cola’s 1985 “New Coke” illustrates.
The next decision-making algorithm posed by Christian and Griffiths addresses the problem of an unpredictable future: To make better predictions, combine prior knowledge with existing data.
One of the biggest obstacles preventing us from making good decisions is our inability to reliably predict the future. For example, it would be easy to decide whether or not to quit your job if you knew you were getting a raise within the next three months. As it is, the world’s uncertainty prevents us from making decisions with much confidence. However, Christian and Griffiths argue that by properly utilizing probability theory, you can come up with a surprisingly reasonable prediction in any situation. Let’s explore this idea in detail.
The authors use a theorem of statistics called “Bayes’s Rule” to make predictions. Instead of diving into the formal mathematics behind the theorem, Christian and Griffiths simply use it to express the idea that you should first use your prior knowledge of the situation to estimate the chances of something happening, then adjust based on observable data. For example, if you want to predict when you’ll receive a raise at work, you might begin by asking a coworker how long it took for them to get a raise, then adjust that estimate based on how you think your boss views your performance.
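To make the mechanics concrete, here’s a minimal sketch of a Bayesian update in Python. The scenario and every number in it are invented for illustration (they aren’t from the book), but the structure, a prior multiplied by a likelihood and then normalized, is the core of Bayes’s Rule.

```python
# A minimal Bayesian update: prior beliefs adjusted by observed evidence.
# All numbers are hypothetical, chosen only to illustrate the mechanics.

def bayes_update(prior, likelihood):
    """Return posterior probabilities given a prior and the likelihood
    of the observed evidence under each hypothesis."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypotheses: will I get a raise within three months?
prior = {"raise": 0.3, "no_raise": 0.7}        # e.g., based on a coworker's experience

# Evidence: my boss praised my last project.
# How likely is that praise under each hypothesis? (guesses, not data)
likelihood = {"raise": 0.8, "no_raise": 0.4}

posterior = bayes_update(prior, likelihood)
print(posterior)   # {'raise': ~0.46, 'no_raise': ~0.54}
```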
Focus Only on the Information That Matters
In Superforecasting, Philip Tetlock and Dan Gardner agree that proper application of Bayes’s Rule is necessary to make accurate predictions. However, most people are bad at this kind of thinking because to use Bayesian inference, you not only need accurate knowledge of the situation, but you also need to know how impactful each piece of knowledge is. This is where many people trip up: They’re bad at determining what information actually matters.
For example, one of the most common mistakes forecasters make is overreacting to new information, adjusting their predictions more than the extraneous details warrant. The best “superforecasters” make significantly smaller adjustments in light of new information than the average predictor. In most cases, only a few key facts will have a major impact on your forecast—so, when adjusting your prediction, ignore the vast majority of additional details.
Christian and Griffiths explain that all random events fall into one of three categories: they follow either a normal distribution, a power-law distribution, or an Erlang distribution. By first determining which kind of event you’re dealing with, you have a much greater chance of predicting its outcome.
We’ll take a look at each of these in turn, explaining when and how to use them in your predictions.
As Christian and Griffiths explain, events that follow a normal distribution create a “bell curve”—the overwhelming majority of outcomes fall within a small range, with rare extremes falling on either side of that range. Shoe size, IQ, and human height all follow a normal distribution.
Normal distributions are the easiest to predict. Christian and Griffiths advise that since most outcomes cluster near the average, you should simply predict the average, and you’ll be close to correct the majority of the time. For example, if you’re trying to guess someone’s IQ, you should just estimate the average score of 100.
If evidence indicates that an outcome might not fit the average, you should adjust your prediction while still keeping it heavily weighted toward the average. To expand on our example, if you discover that someone has a PhD, you might reasonably guess that they have an IQ of around 110.
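As a rough illustration of what “heavily weighted toward the average” might look like in practice, here’s a tiny Python sketch that blends the population average with an evidence-based guess. The 80/20 weighting is an arbitrary choice for illustration, not a figure from the book.

```python
# Predicting under a normal distribution: start from the population average
# and only partially adjust toward what the evidence suggests.
# The 0.8 weight on the average is an arbitrary illustrative choice.

POPULATION_MEAN_IQ = 100

def normal_prediction(evidence_based_guess, weight_on_average=0.8):
    """Blend the population average with an evidence-based guess,
    keeping the prediction heavily weighted toward the average."""
    return (weight_on_average * POPULATION_MEAN_IQ
            + (1 - weight_on_average) * evidence_based_guess)

# Someone has a PhD, so your gut says "140"; the blended prediction
# stays much closer to the average.
print(normal_prediction(140))   # 108.0
```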
Why People Ignore the Average and Trust Stereotypes
Instead of basing their predictions on the average, most people ignore it when the average contradicts their intuitive stereotypes. For example, if you met someone with a doctorate, you might see them as a genius and guess they have an IQ of 140. However, the average IQ is 100—only about one in 200 people have this genius-level IQ. Using Bayes’s Rule, even if you estimate that someone with a PhD is ten times more likely to have a genius-level IQ than the average person, that would mean roughly 95% of people with PhDs still do not have a genius-level IQ.
In Thinking, Fast and Slow, psychologist Daniel Kahneman explains why our brains make this kind of irrational mistake. People are instinctively driven to identify cohesive narratives of cause and effect, and for this reason, we invent stereotypes to describe certain groups. Our brains do this because overall, stereotypes help us make better predictions by motivating us to focus on broad group statistics instead of potentially misleading specific details.
Obviously, negative stereotypes are harmful when they’re detached from reality and inspire hostility, but Kahneman asserts that neutral stereotypes help us by counteracting our instinct to over-rely on immediate evidence. For example, if we imagine that all doctors are more intelligent than average, we’re more likely to trust that our physician knows what they’re doing, even if specific details contradict the stereotype—for instance, if your doctor is poorly dressed.
In contrast, Christian and Griffiths explain that events that follow a power-law distribution don’t cluster around an average outcome. Instead, a few outcomes are so extreme that calculating the average of the entire group is unlikely to accurately describe any individual. Website views, record sales, and individual income follow a power-law distribution—there will consistently be a few websites, records, and individuals who vastly outpace the rest of the herd.
Christian and Griffiths assert that the more extreme a sample initially seems, the more extreme you can expect it to be in the future. This is the most important thing to remember when attempting to predict events that follow a power-law distribution.
For example, imagine you were asked to predict the likelihood of a YouTube video hitting a million views. A video with 40,000 views will be far more likely to reach a million than one with 400 views. Since they vary so wildly, power-law distributions require you to rely far more on observable data for your predictions than normal distributions.
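One way to see why is with a quick sketch under an assumed power-law (Pareto) model of view counts. The model and the tail index are assumptions made for illustration, not details from the book; the point is simply that under a power law, the chance of eventually clearing a threshold scales with how far a video has already gotten.

```python
# Under an assumed power-law (Pareto) model of view counts, the chance of
# eventually exceeding a threshold scales with how far a video has already
# gotten. The model and alpha are illustrative assumptions, not book claims.

def prob_exceeds(target, current_views, alpha=1.0):
    """P(final views > target | already > current_views) for a Pareto tail
    with index alpha. Valid only when target > current_views."""
    return (current_views / target) ** alpha

print(prob_exceeds(1_000_000, 40_000))  # 0.04
print(prob_exceeds(1_000_000, 400))     # 0.0004 (100x less likely)
```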
Power-Law Distributions in the Digital Marketplace
Since the turn of the twenty-first century, digital markets have revolutionized the world of commerce by removing the need for sellers to accurately predict power-law distributions. Unlike physical storefronts, companies like Netflix and Spotify are not limited to only shipping the few superstar products that will sell massively. Instead, companies can make a sustainable profit off of a massive inventory of products that each sell relatively little—for the first time ever.
Physical marketplaces need to predict which products will top the power-law distribution because stocking physical stores with products requires exorbitant overhead costs. When you remove the cost of physical inventory, however, digital marketplaces can profitably offer products that only appeal to a select few.
For example, Walmart has to sell 100,000 copies of a CD to make a profit on that CD’s production and distribution. If they stock a CD that isn’t a hit, they can potentially lose hundreds of thousands of dollars of sunk cost. On the other hand, it costs Spotify relatively little to simply stock everything, earning themselves subscriptions from listeners who primarily enjoy obscure bands with tiny audiences.
Historically, physical sellers spent millions on market research, attempting to gather observable data and make accurate predictions of which products would top the power-law distribution. In the digital marketplace, this investment is unnecessary—instead, you can just offer everything to the consumers and see what works.
Typically, it’s difficult to make accurate predictions of power-law distributions without a good amount of background knowledge. For example, to accurately predict how many views a YouTube video will have, you’ll at least need to know the rate at which view counts typically grow, if not more specific details about the content itself. However, according to Christian and Griffiths, the exception is when you’re predicting how long something will last.
Christian and Griffiths offer up a helpful rule of thumb called the Copernican Principle: When predicting the longevity of something that follows a power-law distribution, multiply its current age by two.
The Copernican Principle states that we’re unlikely to be observing something at a special point in its lifespan—for example, the first or last year of a nation that will last a thousand years. Instead, on average, we’re most likely to arrive exactly halfway through any phenomenon.
We can use this principle to predict the lifespan of anything that follows a power-law distribution: anything that could last for either an extremely long time or an extremely short time (for instance, companies, technologies, customs, and nations). A start-up that was founded a month ago is likely to only last a month more. One that was founded four years ago has proven itself to be viable and is likely to last for around four more years. A company founded a hundred years ago has become a mainstay, and it’s likely to be around for a hundred more.
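The rule is simple enough to express in a few lines of Python; the example ages below are invented.

```python
# The Copernican Principle as a rule of thumb: on average we observe
# something halfway through its lifespan, so predict a total lifespan of
# roughly twice its current age.

def copernican_estimate(current_age):
    """Predicted total lifespan and remaining time, given current age."""
    total = 2 * current_age
    remaining = total - current_age
    return total, remaining

examples = {"startup founded 1 month ago (age in months)": 1,
            "startup founded 4 years ago (age in years)": 4,
            "company founded 100 years ago (age in years)": 100}

for label, age in examples.items():
    total, remaining = copernican_estimate(age)
    print(f"{label}: predict ~{remaining} more, ~{total} total")
```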
The Copernican Principle as a Judge of Quality
In Skin in the Game, Nassim Taleb writes extensively on an important aspect of the Copernican Principle that Christian and Griffiths don’t cover. Taleb argues that the Copernican Principle can be used to judge the quality and effectiveness of anything nonperishable—in fact, he is adamant that it is the only way to objectively judge quality. (In his book, Taleb refers to the Copernican Principle as the “Lindy effect,” named after Lindy’s deli in New York, where Broadway actors realized you could accurately predict that the oldest shows would last the longest.)
Taleb argues that in systems where the least effective components are eliminated, such as a competitive market, the fact that something has lasted a long time means that it’s good at accomplishing its purpose. This means that it will continue to last because it’s proven to be well-designed. By this logic, you can use the Copernican Principle in your own life to predict the quality of nearly anything. For example, if a movie is still famous, the older it is, the better it probably is.
Taleb asserts that the Copernican Principle’s test of time is the only trustworthy judge of quality—he’s skeptical of all subjective human judgment. In his eyes, humans rate quality based on superficial, often misleading appearances. Only after time has swept away everything without lasting quality can we objectively know what is effective.
Events that follow the Erlang distribution have an equal probability of happening at any time. Christian and Griffiths explain that strings of entirely independent events, such as consecutive spins on a roulette wheel or callers to a customer service line, follow an Erlang distribution.
In contrast to the other probability distributions, observable data in an Erlang distribution should have no impact on your predictions. Christian and Griffiths note that you can calculate the outcome that is most likely to occur—for example, you can predict that a roulette wheel should land on black in the next one or two spins—but this prediction should never change, no matter what the evidence seems to indicate. Even if a roulette wheel hits red ten times in a row, you should still predict that it will hit black in the next one or two spins—the probability of each outcome stays the same, no matter what.
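A small sketch makes the point. Assuming a standard American wheel with 18 black pockets out of 38 (a detail not specified in the book), the chance of hitting black within the next two spins is identical whether the wheel just came up red ten times or black ten times.

```python
# Independent spins: the chance of hitting black within the next two spins
# never changes, no matter what just happened. Assumes an American wheel
# with 18 black pockets out of 38 (an illustrative assumption).

P_BLACK = 18 / 38

def prob_black_within(n_spins):
    """Probability of at least one black result in the next n spins."""
    return 1 - (1 - P_BLACK) ** n_spins

print(round(prob_black_within(2), 3))   # 0.723 -- after ten reds in a row
print(round(prob_black_within(2), 3))   # 0.723 -- after ten blacks in a row
```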
Christian and Griffiths explain that many people mistakenly apply normal or power-law distribution prediction strategies to games like roulette that follow an Erlang distribution. If they expect that their winnings will equalize toward an average (normal distribution), they’ll continue playing after losing money, convinced that their luck is due to turn around. On the other hand, if they expect that big wins indicate a streak of luck that is likely to continue (power-law distribution), they’ll seek to capitalize on “hot streaks” and continue playing after wins.
(Shortform note: The former irrational strategy is known in psychology as the “gambler’s fallacy,” while the latter is known as the “hot hand fallacy.”)
Are Gambling Fallacies Really Irrational?
You may question Christian and Griffiths’s advice to ignore observable evidence in Erlang games after learning that winning streaks (or hot hands) are very real statistical phenomena. A study of 565,915 sports bets found consistent patterns of hot streaks of consecutive wins as well as cold streaks of consecutive losses. After every win, a player’s chance of winning their next bet increased. Those who had won their last six bets had a whopping 76% chance of winning their seventh.
However, despite the fact it’s backed up by statistics, the hot hand fallacy is still an illusion. The researchers behind the study theorize that hot hands are a phenomenon caused by a widespread belief in the gambler’s fallacy.
When gambling, most people use a prediction strategy better suited for normal distributions—they assume that if they’re winning, it won’t last long, so they make safer bets that will lose them less. These safer bets are more likely to win, creating the illusion of a hot streak of luck. The same goes for losers: If you’re on a losing streak and assume that your luck is bound to turn around, you’ll make riskier bets that would earn you more but are less likely to pay out. The gamblers adjust their level of risk unconsciously, so all they see is the existence of winning and losing streaks.
After learning that hot hands are real, you might try to use this knowledge the next time you go gambling. However, if you were to try and capitalize on a hot hand by taking a bigger risk, you would eliminate the statistical effect you were trying to take advantage of. Christian and Griffiths are still correct: The rational way to gamble on an Erlang distribution is to completely ignore the observable data and treat every new bet as the independent event it is.
Christian and Griffiths find that people do a great job at internalizing the differences between distribution categories and instinctively utilizing the appropriate prediction strategy. Studies show that people can make estimates that are extremely close to the results of large-scale data analysis.
Christian and Griffiths interpret this to mean that humans are extremely good at accumulating knowledge that will help us make better predictions in the future, and we do so automatically. With this in mind, if you’re unable to identify a distribution pattern, trust your instincts for a fairly accurate prediction.
However, we shouldn’t blindly trust our instincts, as they’re far from infallible. Christian and Griffiths warn that warped beliefs and false perceptions can easily lead us to inaccurate predictions. For example, some argue that social media distorts life to seem far more exciting and carefree than it actually is since people typically only post the highlights of their lives. In theory, this causes users to instinctually predict that their lives will also be continually exciting and carefree (and become bored and dissatisfied when this prediction inevitably doesn’t come true).
How Much Should We Trust Our Intuition?
Christian and Griffiths conclude that human instinct is surprisingly rational and precise, despite the fact it operates entirely subconsciously. Other experts agree, advocating for a greater reliance on gut instinct in disciplines that typically eschew it. For instance, psychologist Gerd Gigerenzer argues that intuition is one of the most practical tools at our disposal to help us navigate the complex, unpredictable world.
Intuition is for the most part based on heuristics—extremely simple rules of thumb that we follow either consciously or unconsciously. Even if we don’t understand it, our gut feelings are based on some kind of internal logic, and we can comfortably rely on these heuristics to guide us in times of uncertainty.
Gigerenzer agrees with Christian and Griffiths that intuition isn’t infallible—however, he asserts that in many cases, it’s the best option we have. Often, intuition is far less error-prone than complex data-driven decision-making models. Because these models require strings of exact calculations, they suffer many more opportunities for error than a simple heuristic does. For example, German tennis amateurs were able to predict the outcomes of the Wimbledon tournament better than complex algorithms by simply guessing that whoever they recognized would win.
Christian and Griffiths offer a specific step-by-step process to follow while making predictions—try it out now.
Think of a prediction to make about something in your life. Does this event follow a normal, power-law, or Erlang distribution? What led you to this conclusion? (For example, if you wanted to predict whether or not your sister is going to get engaged in the next six months, you would begin by noting that this event follows a normal distribution. This distribution isn’t power-law because it’s constrained by human life—no one waits a thousand years to get engaged—and it’s not Erlang because other events can influence how long it will take for her to get engaged, such as a recent breakup.)
What evidence, if any, is available for you to base your prediction on? Based on the probability distribution you’ve identified, how important do you think this information is to your prediction? Why? (In our example above, you might consider the fact that your sister has been seriously dating someone for three years. Since you identified a normal distribution, you’d want to base your guess on a statistical average—for instance, the US national average of dating for around 30 months before getting engaged—and put slightly less emphasis on the specifics of the situation.)
Based on your answer to the previous question, what do you think will happen? Does your prediction differ at all from what you would have instinctively guessed? Explain why (or why not). (In our example, your sister is already overdue to be engaged after three years of dating, so unless you have reason to believe otherwise, yes, she is likely to get engaged in the next six months—even if it feels to you like their relationship just started.)
Christian and Griffiths’s final algorithm to aid decision-making is as follows: To make better decisions, consider less information.
With this algorithm, the authors address the problem of overfitting. In statistics and machine learning, “overfitting” is when a statistical model takes too many variables into account, resulting in a faulty model that fails to accurately predict new data. When programming AI, telling it what data to ignore is just as important as telling it what to learn from.
(Shortform note: What makes overfitting such a dangerous problem is the fact that its flaws are hidden. An overfitted model can predict the exact data you feed it, creating the illusion of success, but in the real world, any new influence will cause it to fall flat. It’s also possible to “underfit” a model, but it’s not nearly as big of a problem as overfitting because an underfitted model is obviously wrong—it can’t predict anything.)
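Here’s a minimal sketch of overfitting using synthetic data (none of it from the book): a degree-9 polynomial matches ten noisy training points almost perfectly, yet a plain straight line typically predicts new points far better.

```python
# Overfitting in miniature: a degree-9 polynomial matches ten noisy training
# points almost perfectly, but a plain straight line usually predicts new
# points far better. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 10, 10)
y_train = 2 * x_train + 1 + rng.normal(0, 2, size=10)   # true trend: y = 2x + 1
x_new = np.linspace(0.5, 9.5, 10)                        # points the model never saw
y_new = 2 * x_new + 1 + rng.normal(0, 2, size=10)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)        # fit the model
    train_error = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_error = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree}: training error {train_error:.2f}, new-data error {new_error:.2f}")
```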
Christian and Griffiths establish that whenever you make a decision, you’re using the existing data of your life so far to predict what choice will result in the best outcome. For instance, if you’re trying to decide whether or not to take a job, you’ll think back to other jobs you’ve had in the past and what you enjoyed about them.
Christian and Griffiths explain that overfitting occurs when you overestimate the impact of insignificant information and underestimate what is truly significant. You would be overfitting if you conclude that since the worst job you ever had stuck you inside a cramped cubicle, you never want a job that requires working indoors ever again. Overfitting to this one criterion may cause you to reject your dream job—perhaps designing public parks—just because it requires you to work indoors.
According to Christian and Griffiths, the trick to conquering overfitting is to consciously restrict the amount of information you consider when making decisions. Base your decisions on only the most relevant facts and ignore everything else to avoid jumping to unfounded conclusions.
Minimalism: Stop Overfitting Your Life
Christian and Griffiths assert that to conquer overfitting, you must focus on what matters and ignore everything else. In Minimalism, Joshua Millburn and Ryan Nicodemus apply this logic to life itself. The authors argue that in our consumer culture, we assume that we need to buy expensive goods to be happy: a luxury car, a fancy house, picturesque vacations. In other words, modern humans have a tendency to overfit, trying to make themselves happier by adding more to their lives instead of focusing on the few factors that matter.
If you want to live a simpler, more fulfilling life, Millburn and Nicodemus recommend identifying your “anchors”—responsibilities that take up your time and attention without substantially improving your life. These could be “major anchors” like an exhausting, miserable job or an enormous mortgage, or “minor anchors” like unwanted clothing or a front lawn that often needs mowing. Removing the things in your life that don’t add value is a more sustainable path to happiness than constantly trying to add bigger and better new pleasures.
Once you get rid of all your anchors, you’re free to invest all your time and energy into the five values that, according to Millburn and Nicodemus, contribute the most to a happy life: health, relationships, passions, growth, and a sense of contribution to others. In other words, you focus on the only details that matter to avoid overfitting.
Christian and Griffiths support their argument by citing Occam’s razor—a well-known principle stating that the simplest hypotheses are most likely to be true and the simplest solutions are often the best. They argue that this principle should be a key influence on your decision-making strategy: Cast more doubt on decisions that require a complex explanation to justify them.
For example, if your son gets bad grades, but you suspect that all his teachers are conspiring to keep him back a grade and want to make a complaint to the school board, you’re probably overfitting. You’re attributing too much significance to the wrong evidence: the personalities of his teachers. The simpler explanation—that your son isn’t studying very hard—is more likely to be true.
Misusing Occam’s Razor
Science writer Philip Ball objects to the way Christian and Griffiths portray Occam’s razor. In his eyes, a key distinction needs to be made when utilizing the principle: The simplest explanation doesn’t necessarily have a greater chance of being correct, since many parts of the universe work in very complicated ways. Rather, simple explanations are valuable because they omit small complexities that don’t matter, conceptualizing the intricate world in a simplification that’s less accurate to reality but has more practical use.
Ball argues that this distinction is important because he doesn’t want us placing undue faith in simple explanations. In reality, science isn’t a straightforward path toward simplicity and clarity—it’s a complicated web of theories that never quite explain everything about our complex universe. We shouldn’t be afraid to challenge theories just because they’re elegantly simple. Sometimes, the only way to advance our understanding is by getting more complex.
Christian and Griffiths make it clear: This isn’t to say that you should never embrace complex solutions, just that you should seek out more supporting evidence before committing to them. With this in mind, they argue that the most reliable form of evidence is found through a process called “cross-validation.”
The authors explain that in statistics and machine learning, cross-validation is the practice of testing a model’s predictive power. By creating a model from a set of data and then making sure it can accurately predict data points it hasn’t seen yet, you can verify that it accurately interpreted the first set of data.
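Continuing the synthetic polynomial example from earlier, here’s a bare-bones sketch of k-fold cross-validation: repeatedly hold out one chunk of the data, fit on the rest, and score the model on the chunk it never saw. The data and model are invented purely to illustrate the procedure.

```python
# Bare-bones k-fold cross-validation: repeatedly hold out one chunk of data,
# fit on the rest, and score on the held-out chunk. Synthetic data again.
import numpy as np

def cross_validate(x, y, degree, k=5, seed=1):
    """Average held-out squared error of a degree-`degree` polynomial fit."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), k)
    errors = []
    for held_out in folds:
        train = np.setdiff1d(np.arange(len(x)), held_out)
        coeffs = np.polyfit(x[train], y[train], degree)
        preds = np.polyval(coeffs, x[held_out])
        errors.append(np.mean((preds - y[held_out]) ** 2))
    return float(np.mean(errors))

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 15)
y = 2 * x + 1 + rng.normal(0, 2, size=15)

for degree in (1, 9):
    print(f"degree {degree}: cross-validated error {cross_validate(x, y, degree):.2f}")
```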
How to Use Cross-Validation
Christian and Griffiths don’t go into detail on how you can utilize cross-validation in your everyday life. We can frame this process as simply: “Pay attention to the outcomes of your decisions.” Every time something happens that you didn’t expect (a data point that falls outside of your mental model), question whether one of your beliefs is incorrect. If you use a complex justification to talk yourself into making a specific decision, see this as a red flag and put extra scrutiny on whether or not things turn out as you anticipated. Most people use complex explanations to rationalize as much as necessary to confirm their preexisting beliefs—don’t fall into this trap.
One strategy that Christian and Griffiths offer to combat overfitting is to make your decisions sooner rather than later. This prevents you from overthinking and adding complexity that is likely to mislead you. More often than not, the first factors we consider when making a decision are the ones that will have the greatest impact on the outcome.
Imagine someone you know well asks you on a date. If your first instinct is to say no, that’s probably the right call. In those first few seconds, you only have time to evaluate the most important factors—for instance, whether or not you’re attracted to and respect them. Given time, you could likely rationalize saying yes by considering less important details like their income level or how polite they are, but you would be falling victim to overfitting.
Split-Second Decisions
In Blink, Malcolm Gladwell takes this argument further, writing that in many cases, the best way to make decisions is by only thinking for a fraction of a second. He claims that humans instantly intuit a significant amount of accurate information about anything they come in contact with, a process psychologists call “thin-slicing.”
Thin-slicing works so well because our brains can instantly filter out information they deem unhelpful. If you override this instinct and give yourself the time to consciously mull over a decision, you’ll frequently focus too much on the wrong details. Once again, we’re faced with the paradox that trusting your gut is the closest we can come to making decisions like finely tuned machines.
So far, we’ve taken a look at four algorithms intended to help you make better decisions. Next, we’re going to explore algorithms related to an area of life that computers had to master long ago: organization.
Christian and Griffiths show that by arranging your schedule, belongings, and other collections in specific ways, you can manage your life with the efficiency and precision of a computer.
The first organizational algorithms we’ll discuss relate to scheduling. How should you optimally spend your working hours every day? What tasks should you complete first?
Unlike in other chapters, Christian and Griffiths don’t offer a single algorithm to handle scheduling. Rather, they state that your optimal scheduling algorithm differs based on your goals and priorities. Christian and Griffiths offer four different algorithms for you to choose from, each with a unique advantage. Computers use these same algorithms to dictate which tasks to process and the order to process them in. There, too, designers choose an algorithm according to the specific needs of the machine.
(Shortform note: While Christian and Griffiths’s scheduling algorithms are intended to suit your specific goals and priorities, they’re still all meant to help you complete work tasks in particular as productively as possible. Consequently, some readers criticize the book for offering “algorithms to live by” that don’t acknowledge the value of life outside of productivity and work.)
Now, we’ll examine each scheduling algorithm in turn and explain what makes each one useful.
Christian and Griffiths assert that most of the time, your highest priority is to complete whatever tasks earn you the most value.
Typically, the tasks we need to accomplish come with rewards for completing them and/or consequences for leaving them incomplete. The authors explain that if all your tasks are equally important, you can maximize your value by always completing whichever available task will take you the shortest amount of time. In computer science, this algorithm is called “Shortest Processing Time,” or “SPT.”
However, Christian and Griffiths point out that in real life, the items on our to-do list are never completely equal. Clearly, you wouldn’t go grocery shopping instead of attending your wedding just because it would take less time. Therefore, Christian and Griffiths advise you to assign a “weight,” a measurement of value, to every item on your to-do list. By dividing this weight by the amount of time it’ll take you to complete the task, you can easily calculate how much value you’re generating every hour that you’re working.
By continuously working on whatever task gives you the most value per hour (a “Weighted SPT” algorithm), you can squeeze as much value out of your workday as possible.
Money is the simplest way to measure the “weight” of a given task—calculating your effective hourly rate for every task on your to-do list will give you an easy way to schedule for maximum value—but assigning your own “importance values” to each task works just as well.
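A minimal sketch of Weighted SPT: divide each task’s weight by its estimated duration and always work on whatever has the highest rate. The tasks, weights, and durations below are invented.

```python
# Weighted Shortest Processing Time: always work on the task with the highest
# value per hour (weight divided by estimated time). Example data is invented.

tasks = [
    {"name": "file expense report",  "weight": 2,  "hours": 0.5},
    {"name": "prepare client pitch", "weight": 20, "hours": 4},
    {"name": "answer routine email", "weight": 1,  "hours": 0.25},
    {"name": "fix billing bug",      "weight": 15, "hours": 2},
]

# Sort by value generated per hour, highest first.
schedule = sorted(tasks, key=lambda t: t["weight"] / t["hours"], reverse=True)

for task in schedule:
    rate = task["weight"] / task["hours"]
    print(f'{task["name"]}: {rate:.1f} value/hour')
```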
Do Your Hardest Tasks First
In the productivity bestseller Eat That Frog! Brian Tracy places even more importance on the need to weigh your tasks by value than the authors do. He argues that your most valuable tasks are almost always the most difficult to complete—as a result, most people procrastinate on these major tasks, filling their time with easy busywork that ends up accomplishing very little. Tracy’s thesis is that unless you intentionally tackle difficult high-value tasks first, life will hand you a never-ending supply of easy low-value tasks, and you’ll never get around to doing what’s truly important.
Instead of following Christian and Griffiths’s suggestion to calculate value per hour, potentially getting bogged down in unnecessary math, Tracy recommends that you sort each of your tasks into tiers of importance. Assign each task on your to-do list a letter: A, B, C, D, or E. Complete “A,” “B,” and “C” tasks in descending order of importance. “D” stands for “delegate”—find a way for someone else to do this task for you. “E” stands for “eliminate”—habitual tasks that do nothing but waste your time should be ruthlessly cut out of your schedule.
Tracy’s advice diverges from Christian and Griffiths’s Weighted SPT algorithm in that he doesn’t find much use in tackling the shortest tasks first. In his view, by breaking your most important tasks down into a series of shorter steps, you can translate everything you have to do into tasks that take approximately the same amount of time. Then, all you need to do is rank them by importance.
An Unweighted Shortest Processing Time algorithm involves simply doing the shortest tasks first, no matter what they are. Christian and Griffiths state that you may choose to use this algorithm if you require motivation or if you’re stressed and overwhelmed by a large quantity of tasks.
Crossing items off your to-do list provides real psychological benefits. If you need to feel like you’re getting things done or you want to cut your to-do list down to an emotionally manageable size, tackling your shortest task first may be the right move.
Psychological Relief Through Scheduling
Christian and Griffiths argue that Unweighted SPT is the best way to relieve mental stress because it’s the fastest way to shrink your to-do list. However, in Getting Things Done, David Allen argues that you don’t actually need to complete pending tasks in order to lighten their mental burden—you just need to feel as if they’re under control.
In Allen’s famous “Getting Things Done” task management system (or “GTD,” as it’s widely known), you create feelings of control over your time by “capturing” every one of your tasks, goals, and commitments in an external organizational system. Every single action you need to take should exist in some form outside of your brain. This alleviates your mental burden by making sure you never need to worry that you’re forgetting something.
Christian and Griffiths argue that what we call procrastination is just the use of an Unweighted SPT algorithm in a place it doesn’t belong. For example, when someone checks their email instead of assembling an important presentation or decides to clean their room instead of writing a term paper, they’re prioritizing what they can get done soonest over what’s most important.
This means that procrastination doesn’t necessarily reflect laziness—in many cases, procrastinators are simply aiming to lessen the mental burden of their incomplete tasks and choosing the most appropriate algorithm for this goal.
However, in cases of unhealthy procrastination, this goal isn’t the one the procrastinator should be pursuing. Christian and Griffiths argue that the way to avoid procrastination is to consciously identify what you’re trying to achieve and deliberately select the right algorithm.
Overcoming Procrastination by Rebalancing Your Emotions
Mark Manson (The Subtle Art of Not Giving a F*ck) agrees that procrastination isn’t necessarily a sign of laziness. In a blog post, Manson argues that procrastination occurs when your negative emotions toward a task outweigh your positive emotions. This lines up with Christian and Griffiths’s theory, as they see the misapplication of Unweighted SPT as an attempt to reduce an unpleasant mental burden.
With this in mind, one strategy you can use to reverse this balance is to intentionally make it more painful to not do something than it is to do it. For example, if you’re procrastinating on your thesis paper, you could write a $1,000 check to a friend and tell them that if you don’t show them two new pages every day, they can cash it. The fear of losing that much money is a powerful motivator.
The third algorithm we’re going to focus on is what’s known in scheduling theory as “Earliest Due Date,” or “EDD.” With this algorithm, you work on whatever task needs to get done soonest, no matter how long it’ll take to complete.
Use this algorithm if your top priority is to minimize total lateness in terms of hours or minutes—if there are penalties for not getting your jobs done on time that get worse the longer tasks remain unfinished.
This strategy might work best for a student trying to stay on top of homework assignments from many different classes or an intern balancing tasks from multiple departments within a company—situations where lateness is a primary concern. As we’ll see next, however, this strategy is not without its drawbacks.
Lastly, Christian and Griffiths offer a strategy called “Moore’s Algorithm.” This algorithm is a variation on EDD and attempts to accomplish a similar goal. Instead of minimizing total lateness in terms of time, Moore’s Algorithm attempts to minimize the total number of late tasks.
With Moore’s Algorithm, you begin by laying out your entire to-do list in order of Earliest Due Date. Look over your schedule. If it’s obvious that you won’t currently be able to complete all of your tasks by their due dates, identify the task that will take you the longest and either schedule it to be done last or throw it out altogether. Repeat until you have a schedule with fully manageable due dates, with a few exceptions either thrown out or collected at the end.
Christian and Griffiths explain that this algorithm has one specific advantage over Earliest Due Date. If you fall behind using EDD, you’ll be stuck turning in one late task after another, struggling to dig yourself out. On the other hand, with Moore’s Algorithm, you’ll have to turn in a few tasks extremely late or abandon them completely, but you’ll finish the majority of tasks on time or early. For example, if you’ve checked out a stack of library books but don’t have time to read them all before they’re due, Moore’s Algorithm will work better than EDD. It’s better to skip reading one book than to pay late fees for each one.
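Here’s a small sketch of Moore’s Algorithm applied to the library-book example, with invented reading times and due dates: schedule by Earliest Due Date, and whenever you’d be late, set aside the longest task scheduled so far.

```python
# Moore's Algorithm in miniature: schedule by Earliest Due Date, and whenever
# the schedule falls behind, set aside the longest task scheduled so far.
# Reading times and due dates (in days) are invented for illustration.

def moores_algorithm(tasks):
    """tasks: list of (name, days_needed, due_day).
    Returns (tasks kept on schedule, tasks deferred to the end)."""
    kept, deferred = [], []
    elapsed = 0
    for task in sorted(tasks, key=lambda t: t[2]):      # Earliest Due Date order
        kept.append(task)
        elapsed += task[1]
        if elapsed > task[2]:                           # we'd miss this due date
            longest = max(kept, key=lambda t: t[1])     # drop the longest task so far
            kept.remove(longest)
            deferred.append(longest)
            elapsed -= longest[1]
    return kept, deferred

library_books = [("book A", 6, 8), ("book B", 2, 9),
                 ("book C", 3, 10), ("book D", 4, 14)]
kept, deferred = moores_algorithm(library_books)
print("finished on time:", [t[0] for t in kept])        # ['book B', 'book C', 'book D']
print("returned unread:", [t[0] for t in deferred])     # ['book A']
```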
Choosing the Right Algorithm to Prevent Burnout
“Burnout” is a debilitating state of stress that prevents you from enjoying work or performing efficiently, typically caused by overwork. Christian and Griffiths don’t factor burnout into their advice because their algorithms were designed for machines. However, you’re only human, and your productivity is heavily dependent on your mental state.
Many of the authors’ scheduling algorithms, including Earliest Due Date, result in never-ending streams of tasks that could quickly lead to burnout. If it causes you to work yourself into a state of exhaustion, this algorithm won’t even accomplish its goal of minimizing your tasks’ total lateness.
On the other hand, Moore’s Algorithm is arguably the scheduling algorithm most likely to prevent burnout, as it’s the only one that forces you to question whether or not each task is necessary. By using Moore’s Algorithm to intentionally set a limit on the number of tasks you’re willing to tackle each day and throwing out additional tasks rather than forcing yourself to power through them, you can preserve the mental and physical health necessary for peak performance.
Experts state that the best way to prevent burnout is to prioritize self-care. If you’re consistently “late” on tasks like sleep, exercise, and necessary social connection, your health and productivity will suffer. You should treat these activities as the urgent priorities they are and instead use Moore’s Algorithm to deliberately select a few work-related tasks that you can afford to turn in late.
Christian and Griffiths emphasize the fact that we live in an unpredictable world, and sometimes, your tasks will change before you’ve had the chance to complete them. You could be given a new task out of the blue, or you might not know exactly what tasks you’ll have to do in the near future. Tasks may take longer than you expect or require you to do something else first.
Fortunately, if you adjust your algorithm on the fly, you can effectively adapt to any unforeseen circumstances. Christian and Griffiths suggest that you reapply your algorithm whenever circumstances change and switch tasks if the algorithm dictates you should. For example, if you’re doing office work according to the Earliest Due Date algorithm and you’re suddenly asked to present at tomorrow’s meeting, you should immediately drop what you’re doing and switch to the task with the new earliest due date.
The Dangers of Overplanning
Christian and Griffiths include this guidance because, due to the unpredictability of life, it’s often impossible to perfectly plan and execute a schedule, no matter what algorithm you use. In fact, failing to recognize this fact and investing too much in the perfect plan can be just as counterproductive as not planning at all.
If you devote too much time to planning every detail of your project or workday, you’ll be more easily stressed when things inevitably fail to go exactly according to plan. Additionally, overplanning makes you less flexible—you’ll be less likely to take advantage of new opportunities in the moment.
The fact that the strategy of reapplying the same simple scheduling algorithms is so effective shows how little planning is truly necessary. To get things done, you don’t need to predict what’s going to happen—you just need to know how to evaluate new options as they present themselves. Christian and Griffiths’s simple rules are intended to make this evaluation as easy as possible.
Often, however, there’s a cost that comes with switching tasks. The time it takes for you to orient yourself to a new task or revise your changing schedule can consume a large portion of your workday. Thus, Christian and Griffiths suggest that you set a minimum working time for each task before allowing yourself to switch, to ensure that you stay productive.
Christian and Griffiths recommend a strategy from computer engineering called “interrupt coalescing”—instead of allowing many small tasks to interrupt you, schedule a working time to address them all at once. For example, instead of allowing your co-workers to enter your office any time they need something, designate an hour every day to handle all their concerns.
Task Switching Prevents Deep Work
In Deep Work, Cal Newport argues that frequent task switching is the single biggest killer of productivity in our modern economy. Social media alerts, continuous channels of communication between coworkers, and open-floor office spaces all contribute to a culture of continuous distraction at work. In Newport’s eyes, the most valuable “deep work” is done in uninterrupted blocks of several hours, and those who learn to cultivate deep work have an extreme competitive advantage over their peers.
Like Christian and Griffiths, Newport is a staunch advocate of interrupt coalescing—for example, he recommends the extreme measure of setting aside a short block of time to use the Internet and going completely offline in the interim. In other words, he recommends coalescing distraction. He frames the resultant blocks of time without the Internet as exercises for your attention span: training to keep you from interrupting yourself.
Take some time to determine which of Christian and Griffiths’s scheduling algorithms is best suited for your pending tasks.
Evaluate your current scheduling habits. How do you typically decide which task to work on first? Are you already instinctively following any of Christian and Griffiths’s scheduling algorithms? (For example, you might habitually follow Earliest Due Date, avoiding difficult, important tasks until someone forces you to meet a deadline.)
If you could change anything about your current productivity habits, what would it be? (For example, do you procrastinate more than you’d like? Are you struggling to find the time to start a side business?)
Which of Christian and Griffiths’s scheduling algorithms would help you make this change? (For instance, if you aren’t spending as much time on your side business as you’d like, you could shift to Weighted SPT and assign a higher value to the tasks you’d like to prioritize.)
The next algorithm to help you organize your life gives you easy access to the things you need: To efficiently access any collection, segment it based on frequency of use.
Christian and Griffiths explain that a computer can store vast amounts of data, but the more memory it has, the more data it has to search and the longer it takes to retrieve anything specific. Engineers solved this problem by inventing the “cache.” By grouping the information that needs to be accessed most often and searching through that smaller cache first, computers can find the data they need much faster.
Christian and Griffiths argue that we can employ this same strategy with anything that needs to be organized in our lives—our closets, files, bookshelves, you name it. To “cache” any collection, create a smaller collection of frequently used items as close as possible to the place you’ll need them. Leave a couple of your favorite board games underneath the coffee table. Keep a bowl of fruit on the kitchen counter. Put the contacts you call and text the most on your phone’s “favorites” list. Real-life applications of caching are everywhere.
The authors argue that the most common organizational method, organizing based on category, isn’t actually very efficient. For example, you might organize your DVD collection by genre so when you’re in the mood to watch a specific comedy, you know where to look. However, if you have a favorite movie that you show to someone new every few months, it won’t be efficient to navigate through the genres to find it every time. Instead, the authors would argue that you should have a “cache” of your favorite movies next to the TV at all times—a hall of fame with easy access.
Marie Kondo Rejects This Algorithm
In The Life-Changing Magic of Tidying Up, Marie Kondo argues that sorting your belongings by frequency of use is a common organizational mistake. In her eyes, the seconds you may save by storing everything in “caches” within arm’s reach incur a greater cost: the clutter of countless piles around the house.
Kondo asserts that this kind of “organization” is really disorganization in disguise. More often than not, we’ll drop our belongings wherever we are, then build our routines around the location of these new caches. Additionally, this system makes it hard to remember where everything is, so if you need something that’s stored in an unusual place, you’ll struggle to find it.
Instead, Kondo is a strong proponent of sorting by category, which Christian and Griffiths condemn as inefficient. Her “KonMari” system involves grouping together belongings such as clothes or books in a single location where each item is immediately visible. Kondo’s logic is that as long as you know where everything is, it takes very little time and effort to retrieve anything from storage.
Christian and Griffiths assert that if you’re trying to be efficient, you’ll want to cache everything you’ll need in the immediate future. This is easier said than done. It’s impossible to perfectly predict what you’ll need—how do you know what to put in the cache?
Christian and Griffiths explain that computer engineers had to solve this exact problem and did so by developing caching algorithms, or “replacement policies.” These are rules that help computers determine what to cache and what to pass down to a larger base of storage.
The most frequently used cache replacement policy, and the one Christian and Griffiths suggest practicing in real life, is called “Least Recently Used,” or “LRU.” This algorithm dictates that when the cache is full, you should replace the item you haven’t used for the longest time. The item you used most recently is the one you’re most likely to need again soon. Keep the board game you played least recently in a cupboard rather than under the coffee table, get rid of the fruit in your kitchen bowl that’s always last to be eaten, and un-favorite the contacts on your phone you don’t talk to anymore.
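A minimal sketch of an LRU cache, here imagined as the board games kept under the coffee table (the games and the capacity are invented):

```python
# A tiny Least Recently Used cache: when the cache is full, evict whatever
# was used longest ago. collections.OrderedDict remembers insertion order,
# which makes the bookkeeping simple.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def use(self, item):
        """Record a use of `item`, evicting the least recently used if full."""
        if item in self.items:
            self.items.move_to_end(item)                 # freshly used -> most recent
        else:
            if len(self.items) >= self.capacity:
                evicted, _ = self.items.popitem(last=False)   # oldest item out
                print(f"move '{evicted}' back to long-term storage")
            self.items[item] = True

# A "cache" of three board games under the coffee table:
shelf = LRUCache(capacity=3)
for game in ["chess", "catan", "scrabble", "chess", "codenames"]:
    shelf.use(game)
print("under the coffee table:", list(shelf.items))   # ['scrabble', 'chess', 'codenames']
```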
If It’s Not in the Cache, It Belongs in the Trash (or Donation Pile)
Christian and Griffiths argue that if you don’t use something frequently, you can afford to stash it somewhere relatively inaccessible. However, minimalists make the case that if you don’t use something frequently enough to cache it, you shouldn’t own it at all. Here are a few reasons you might be better off severing ties with your least-used belongings rather than saving them for a rainy day:
It’ll lower your stress. Having fewer things to worry about storing, organizing, finding, and losing will lift a significant weight off your mind. Plus, after purging your house of unnecessary belongings, you’ll see how little they impact your happiness and grow less attached to material possessions altogether.
You can focus on what matters. By getting rid of unnecessary possessions, you’ll have more space around the house for caches of things you use often. Similarly, when you own less, you’ll appreciate the belongings you have left even more.
It’ll smooth out your life. Permanent decluttering will save you massive amounts of time in the long run and ease the burden of the tiny, annoying tasks that come with owning a lot of stuff. You’ll be able to declutter the house in minutes and immediately find everything you need.
Christian and Griffiths argue that, despite its unattractive and disorderly appearance, stacking cached files or clothes in a pile is an extremely efficient organizational system.
When your belongings are stacked in a pile, you typically throw whatever you were using most recently on top. This turns the pile into a “self-organizing list,” another concept the authors borrow from computing. As you use the items in the pile, this naturally-occurring LRU algorithm will eventually arrange them approximately in order of what you use most frequently. Then, if you start searching from the top of the pile, you’re likely to find what you need within the first few items.
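A tiny sketch of a self-organizing, move-to-front list: each time you use something, it goes back on top, so frequently used items drift toward the top of the pile. The items are invented.

```python
# A pile as a self-organizing list: every time you use something, it goes back
# on top ("move to front"), so frequently used items drift toward the top.

def use_item(pile, item):
    """Search the pile from the top; put the item back on top after use."""
    pile.remove(item)        # pull it out of wherever it currently sits
    pile.insert(0, item)     # drop it back on top of the pile

pile = ["tax forms", "sweater", "notebook", "phone charger"]
for needed in ["phone charger", "notebook", "phone charger"]:
    use_item(pile, needed)
print(pile)   # ['phone charger', 'notebook', 'tax forms', 'sweater']
```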
Marie Kondo Disapproves of Piles
This is another way Christian and Griffiths’s advice conflicts with that of Marie Kondo, who specifically argues against storing your belongings in piles. While Christian and Griffiths assert that the best place for something you don’t use often is at the bottom of a pile, Kondo makes the case that storing an item at the bottom of a pile causes you to use it less. Additionally, items such as clothes and papers deteriorate much quicker if they’re stuck under a heavy pile.
Kondo states that, instead, you should turn those piles on their side and organize everything you can in vertical rows. You won’t be able to store as many possessions as when you could stack piles to the ceiling, forcing you to only keep what is essential. Additionally, you’ll be able to survey your entire inventory at a glance, allowing you to easily retrieve what you need.
Christian and Griffiths’s final organizational algorithm we’ll be discussing dictates the most efficient way to sort a group of items into a specific order.
We ask computers to sort lists countless times a day—more than we’d realize. Every time Netflix offers you a menu of options or you search for a YouTube video, the results have to be sorted in a specific order. Christian and Griffiths argue that we should make use of these same sorting algorithms to efficiently sort collections in our own lives.
For most collections that you’d want to sort in life, computer-inspired algorithms are unlikely to save you more than a few minutes. However, if you’ve been tasked with sorting an enormous group—for example, if your company asks you to organize twenty years’ worth of archived meeting recordings on VHS by date—the right algorithms will help you finish in a fraction of the time.
Automated Sorting in the Real World
Christian and Griffiths imply that since the majority of automated sorting only happens digitally, it’s valuable for us to learn how to efficiently sort physical objects. Soon, however, this may no longer be the case—advancements in robotics and artificial intelligence are quickly allowing us to automate sorting in the real world.
Amazon and other package delivery services around the world have already implemented free-roaming sorting robots in their facilities—imagine a Roomba that knows a package is going to Philadelphia and drives it over to the correct chute. Alphabet, the company that owns Google, is working hard to produce AI robotic assistants to perform tasks around the house—for instance, setting the table or organizing your bookshelf. Robots using Christian and Griffiths’s efficient sorting algorithms may soon sort the physical world as cleanly as they sort our digital files.
This algorithm is based on the fact that sorting gets more difficult with scale. Sorting a large group takes significantly more time than sorting four groups one-fourth of its size.
Christian and Griffiths explain why: In computer science, the efficiency of a sorting algorithm is measured by how many times the computer must compare items against one another. The bigger a collection is, the more items the computer has to check every time it determines where to place one.
Since sorting is easier at a smaller scale, the best strategy Christian and Griffiths have to offer involves dividing a large collection into smaller groups—an algorithm known as “Bucket Sort.” First, divide the collection into categories that will make sorting easier, then sort individual items.
For example, you would begin ordering the VHS archives by dividing them into piles by year. At this point, using an “Insertion Sort” algorithm to insert each tape in its proper place on a shelf (most human sorters’ preferred method) will take far less time than if you were dealing with one huge pile.
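Here’s a small sketch of Bucket Sort for the VHS example, with invented tapes: group by year first, then insertion-sort each small pile.

```python
# Bucket Sort for the VHS archive: first toss each tape into a bucket by year,
# then insertion-sort the much smaller piles. Tape data is invented.

def bucket_sort_tapes(tapes):
    """tapes: list of (year, month) tuples. Returns them in date order."""
    buckets = {}
    for tape in tapes:                      # pass 1: group by year
        buckets.setdefault(tape[0], []).append(tape)

    ordered = []
    for year in sorted(buckets):            # pass 2: sort each small pile
        sorted_pile = []
        for tape in buckets[year]:          # simple insertion sort by month
            i = 0
            while i < len(sorted_pile) and sorted_pile[i][1] <= tape[1]:
                i += 1
            sorted_pile.insert(i, tape)
        ordered.extend(sorted_pile)
    return ordered

tapes = [(2003, 7), (1999, 2), (2003, 1), (1999, 11), (2010, 5)]
print(bucket_sort_tapes(tapes))
# [(1999, 2), (1999, 11), (2003, 1), (2003, 7), (2010, 5)]
```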
How Other Sorting Algorithms “Divide and Conquer”
Since sorting gets more difficult with size, many of the most efficient sorting algorithms involve separating the collection into smaller groups, just like Bucket Sort. These are known in computer science as “divide-and-conquer algorithms.”
One of the most popular divide-and-conquer algorithms that Christian and Griffiths chose to exclude is called “quicksort.” With quicksort, you pick an item to be your “pivot” and divide the entire collection into two groups based on whether they should be before or after the pivot. You repeat with a new “pivot” within each group until the whole list is sorted.
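A compact sketch of quicksort as described above:

```python
# Quicksort: pick a pivot, split everything into "before" and "after" groups,
# then repeat within each group until the whole list is sorted.

def quicksort(items):
    if len(items) <= 1:
        return items
    pivot = items[0]
    before = [x for x in items[1:] if x < pivot]
    after = [x for x in items[1:] if x >= pivot]
    return quicksort(before) + [pivot] + quicksort(after)

print(quicksort([33, 4, 51, 17, 4, 92, 8]))   # [4, 4, 8, 17, 33, 51, 92]
```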
Divide-and-conquer algorithms come in handy in situations where Bucket Sort doesn’t work well. If many of the items in your collection are too similar, you won’t be able to come up with buckets that evenly divide them. And if the buckets you create don’t divide your collection as evenly as you expect, the time you spend bucket sorting may largely be wasted.
The purpose of sorting is to make future searches more efficient. If the time it would take you to sort a group outweighs the total amount of time you will ever spend searching through it, you’re better off not sorting it at all.
Christian and Griffiths argue that people typically sort their belongings more than is necessary. Taking the time to alphabetize your spice cabinet would likely be a waste since most of the time it’ll only take you a few seconds to spot the spice you need. (Shortform note: As we discussed in the previous chapter, you’re better off caching the three or four spices you use the most in their convenient location.)
The Hidden Benefits of Sorting
Christian and Griffiths assert that people sort too much because it often doesn’t save them time in the long run. However, they may be overlooking the mental health benefits of order and neatness.
Research shows that people who live in chaotic, cluttered homes have higher levels of cortisol, the stress hormone. Many can’t help but see an unsorted collection as an unfinished task and become distracted and irritated as long as it remains unorganized. Unsorted collections can be particularly obtrusive if they’re in your field of vision while you’re trying to do something else.
Organizing your surroundings, on the other hand, will give you an empowering sense of control over your environment. Knowing that you’ve sorted your belongings exactly the way you want them may help keep you centered and stable, especially in times of stress and uncertainty.
So far, we’ve taken a look at decision-making algorithms to help you determine what to do in life and organizational algorithms to help you do it as efficiently as possible. The next two algorithms we’ll be discussing will help you get yourself unstuck when you’re faced with complex problems with no obvious solutions.
In this chapter, Christian and Griffiths introduce “constrained optimization problems.” These are problems in which you need to find the arrangement of a set of variables that achieves the best outcome under specific constraints. If you’re trying to purchase the cheapest airline tickets that get you the most vacation days with good weather, or trying to minimize risk in your investment portfolio, you’re solving an optimization problem.
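In standard textbook notation (ours, not the authors’), such a problem asks you to

$$\min_{x}\; f(x) \quad \text{subject to} \quad g_i(x) \le b_i, \quad i = 1, \dots, m$$

where f(x) might be the total ticket price of itinerary x, and each constraint might cap something like the number of travel days or the amount of portfolio risk you can tolerate.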
Christian and Griffiths explain that when optimization problems reach a certain level of complexity, they become effectively impossible to solve: calculating the perfect solution would take too long even for the fastest computer on earth. Since we don’t have access to that level of computing power in our lives, perfect solutions are often far out of our reach, even for relatively simple optimization problems.
Christian and Griffiths argue that the best way to solve problems like these is to strategically embrace imperfection.
Sometimes, simply lowering your standard for success is necessary to keep moving forward. In some cases where it’s impossible to find the perfect solution, getting close is just as good. Christian and Griffiths argue that, at a certain point, the additional benefits of the “perfect solution” aren’t worth the time it would take to find it. Even when experts use computers to solve optimization problems, they often trade away accuracy to save time.
This strategy is far more effective than you would imagine. Christian and Griffiths explain that when computers are dealing with extremely complex optimization problems, they might calculate an answer half as good as the perfect solution in one quadrillionth of the time. Finding a solution that’s merely workable and moving on is often the best use of your time.
Imperfection Is Its Own Kind of Perfection
Christian and Griffiths make the case that imperfect solutions can still be optimal. Seth Godin takes this argument further, asserting that “perfect” is contextual—anything can be perfect if it meets the requirements expected of it, even if it didn’t turn out exactly the way its creator intended. For this reason, striving for “perfection” as it’s defined in computer science is nothing more than a waste of time.
When it comes to human problems, the vast majority of solutions aren’t perfect. You don’t care whether the individual gears of your bicycle are perfectly machined; all you care about is whether it can get you where you need to go, comfortably and quickly. This is what true perfection looks like—the fulfillment of someone’s reasonable expectations. In this sense, a solution that takes years to discover and execute is far less “perfect” than an efficient compromise.
Godin argues that the idealized “perfection” we typically imagine is either a way to hide—an excuse to avoid having to earnestly aim for success because being perfect is impossible—or a weapon used to dominate others by criticizing them when they don’t live up to your perfect standards.
Quantum Computing Could Solve Impossible Problems
Christian and Griffiths argue that imperfection is necessary because some problems are simply impossible for even the fastest, most powerful computer to solve efficiently and easily. However, in the near future, we may not need to settle for imperfect solutions. Some believe that we’re quickly approaching a watershed moment in computer science in which many of the problems we see as impossible will become solvable—thanks to “quantum computing.”
Instead of processing information in ones and zeroes like a traditional computer, quantum computers perform calculations on quantum bits, or “qubits,” each of which can exist in a superposition of one and zero at the same time. Quantum computers will be able to do more than solve optimization problems like planning your vacation—they’ll theoretically be able to simulate inconceivably complex systems like financial markets and weather systems.
In many cases, quantum computing may entirely remove the need for us to resort to imperfect solutions. For example, investing “rules of thumb” would be worthless if you had access to a computer that could predict the exact return on any investment. This isn’t just theory: prototype quantum computers have already demonstrated dramatic speedups on certain narrow tasks. For now, they make too many errors to be of much practical use, but engineers are working hard to fix this in the near future.
Even after accepting an imperfect solution, you may find it still isn’t good enough. In this case, Christian and Griffiths describe other ways of embracing imperfection to get closer to perfection.
First, by adjusting the rules of impossible problems, you can create solvable analogous problems that give you valuable information about the original problem. This strategy is known in mathematics as “relaxation.”
Christian and Griffiths explain that the most common form of relaxation is “constraint relaxation,” and it involves temporarily reducing or removing limitations on the problem. People using this strategy imagine the ideal form of the problem they have, solve it, and learn from that solution.
For example, imagine you’re struggling to compose a poem for a contest, and entries can only be a maximum of fifty words long. In a sense, this is an impossible optimization problem. You want to find the right combination of words that will earn you the win, but it’s obviously impossible to calculate the perfect poem.
If you’re feeling creatively stumped, with several drafts of “solutions” that aren’t good enough, you could try removing some constraints. You might drop the word-count limit and freewrite a thousand-word poem to see if that gives you any ideas. Alternatively, you could try removing some constraints on the language you’re choosing to use—dip into made-up words, or start using profanity.
Christian and Griffiths argue that the insights you gain from solving easier iterations of the problem will help you get closer to the original solution. The poem you wrote full of profanity might be your best one yet after you’ve reapplied the constraints and taken out the expletives.
In the Tech World, Relax Constraints to See the Future
While Christian and Griffiths present constraint relaxation as a way to generate new insights for the problem you’re solving now, tech investor Howard Morgan twists it into a way to predict the future. Morgan sees constraint relaxation as one of the most valuable tools available for entrepreneurs looking to navigate the cutting edge of technology. He argues that by relaxing constraints while designing your business, you’ll be prepared for a future in which those constraints have disappeared.
We can predict that as time goes on, new technologies will be discovered, and existing technology will become cheaper and more efficient. By coming up with ideas that are currently impossible, you’ll be the first to utilize them when they become possible.
For example, in the late 1990s and early 2000s, app developers began work on GPS for mobile phones, even though the 2G cellular networks of the time didn’t have the bandwidth to properly support it. These developers knew it was only a matter of time before the network would be upgraded. By relaxing constraints and developing something that was impossible at the time, they positioned themselves to take their product to market before anyone else.
If you need ideas, Morgan suggests you practice relaxing four kinds of constraints:
Technology constraints—Imagine your business wasn’t limited by today’s technology.
Cost constraints—Imagine your business wasn’t limited by your current expenses.
Knowledge constraints—Imagine your business wasn’t limited by what is currently thought to be possible.
Time constraints—Imagine your business wasn’t limited by its current production schedule.
The final relaxation technique, and perhaps the most easily applicable one, is called “Lagrangian relaxation.” With this strategy, you remove some constraints on the problem as a part of your ultimate solution, penalizing broken rules instead of forbidding them entirely. By allowing the rules to bend, you’re not just relaxing constraints as a thought exercise anymore—you’re doing it in real life. Bending the rules and suffering the consequences is sometimes the only way to solve a complex problem.
Imagine a pregnant woman going into labor in the passenger seat of a car as her husband desperately looks for a place to park at the hospital. The closest lot is full, but there is a handicapped parking spot open nearby. Normally, the driver would never consider parking in that spot, but, using Lagrangian relaxation to solve the problem, he decides that it’s worth it to take the spot and pay whatever fine is necessary.
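In optimization terms, Lagrangian relaxation turns a hard rule into a price. Here’s a toy Python sketch of the parking decision; the dollar figures and the single “violation” count are made up purely for illustration:

```python
def total_cost(option, penalty=250):
    """Lagrangian-style scoring: rule-breaking options aren't forbidden outright,
    they just pay a penalty for every constraint they violate."""
    return option["cost_of_delay"] + penalty * option["violations"]

options = [
    {"name": "circle the block for another spot", "cost_of_delay": 10_000, "violations": 0},
    {"name": "take the open handicapped spot", "cost_of_delay": 0, "violations": 1},
]
print(min(options, key=total_cost)["name"])  # here, the penalty is worth paying
```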
When Bending the Rules Is the Right Thing to Do
In many settings, people see “bending the rules” as an outright immoral thing to do and would scorn the idea of Lagrangian relaxation. However, even history’s noblest heroes had to make compromises to achieve a greater good. As described in Malcolm Gladwell’s David and Goliath, even Martin Luther King Jr., a moral absolutist who famously denounced violence for a higher cause, stretched the limits of morality in his push for civil rights. For example, he deliberately allowed hundreds of child volunteers to be imprisoned and threatened with police dogs so the resultant publicity would garner sympathy for their cause.
How do you know when bending or breaking the rules is the right thing to do? The answer to this question will vary depending on your values, but one thing is certain: When you bend the rules, be aware of what you’re sacrificing. Lagrangian relaxation requires you to pay a penalty because most constraints confer certain benefits that you lose by bending the rules. For this reason, it’s important to fully understand rules before breaking them. For example, Pablo Picasso mastered the rules of traditional painting—perspective, light, and shadow—before throwing them out the window. He knew the exact effect that these techniques gave his paintings, which allowed him to intentionally achieve a specific groundbreaking effect by sacrificing them in his cubist work.
Christian and Griffiths argue that the relaxation strategies mathematicians use to solve their complex problems are a useful tool in any context. Try using constraint relaxation to solve one of your ongoing personal problems.
Describe an ongoing problem in your life that doesn’t seem to have a solution. What makes it so difficult to solve? (For example, if you feel like you’re being ignored at work, you may find the problem difficult to solve because you don’t want to confront your coworkers and come across as needy or insecure.)
Try using constraint relaxation: Imagine the ideal version of this problem. Remove or completely change the characteristics that make it so difficult. (To return to our example, imagine you could say anything to any of your coworkers with no consequences—what would you tell them?)
What fresh insights did removing constraints yield about the problem or how you’re relating to the problem? (In our example, you may have found that, after writing them out, the things you want to tell your coworkers aren’t so unreasonable after all, and you decide to confront them.)
Christian and Griffiths’s next problem-solving algorithm is all about the power of randomness: To move past dead ends, act randomly.
Just like a traveler who points to a globe to decide where to live, computers act randomly when they don’t know what else to do. Such behavior seems like an irrational human quirk, but Christian and Griffiths explain that pure randomness has several unexpected uses, in and outside of computer science.
(Shortform note: Even though computer scientists use it frequently, randomness doesn’t come naturally to computers. Most “random number generators” are actually “pseudorandom number generators,” because their random-looking numbers are actually created by performing the same mathematical calculations on a predetermined number called a “seed.” True random number generators typically draw their numbers from naturally-occurring random phenomena, such as atmospheric noise.)
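For a quick sense of what “pseudorandom” means, here’s how seeding Python’s built-in random module makes it produce the exact same “random” numbers every time:

```python
import random

random.seed(42)                                    # same seed...
first_run = [random.randint(1, 6) for _ in range(5)]

random.seed(42)                                    # ...same "random" dice rolls
second_run = [random.randint(1, 6) for _ in range(5)]

print(first_run == second_run)                     # True: the sequence is reproducible
```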
To understand why randomness is such a useful tool in our lives, we first need to explain why it’s so useful in computer science. To solve an optimization problem like the ones we studied in the previous chapter, computers sometimes utilize something called the “hill-climbing” algorithm. This algorithm is extremely effective at finding solutions, but it’s subject to a fatal flaw that’s only curable through randomness.
Christian and Griffiths explain how the hill-climbing algorithm functions: When following the hill-climbing algorithm, a computer calculates a solution, makes a small adjustment, compares the two solutions, and discards whichever one is worse. It then repeats this process, testing countless small adjustments until it lands on a solution that’s better than any other adjacent alternatives.
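Here’s a minimal hill-climbing sketch in Python; the scoring function and the size of each adjustment are stand-ins for whatever problem you’re actually solving:

```python
import random

def hill_climb(score, start, step=0.1, iterations=10_000):
    """Repeatedly try a small random adjustment and keep it only if it scores
    better, stopping at a solution no neighboring tweak can improve."""
    current = start
    for _ in range(iterations):
        candidate = current + random.uniform(-step, step)  # small adjustment
        if score(candidate) > score(current):              # keep the better of the two
            current = candidate
    return current

def score(x):
    return -(x ** 4) + 4 * x ** 2 + x   # a bumpy landscape with two peaks

print(hill_climb(score, start=-2.0))    # often gets stuck near the lower, local peak
```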
The authors argue that humans, too, use the hill-climbing algorithm to solve problems. When a strategy is working, we’re far more likely to make small adjustments than completely start over in hopes of finding something better. If you’re trying to improve your favorite meatloaf recipe, you’re more likely to switch out a single spice than start entirely from scratch.
Christian and Griffiths explain that hill climbing is an effective strategy if your problem has a single optimal solution and the closer you get, the better your solution performs. However, if the problem has multiple unrelated solutions, hill climbing will likely get you stuck on a solution that’s decent, but not perfect. It’ll be better than any immediately adjacent solution, but worse than the truly optimal solution—potentially, much worse. When this happens, your problem-solving hits a dead-end. Every immediate possibility you can see would be a step backward from where you are now.
For example, in life, you might feel stuck at a dead-end job that pays enough for you to survive but is otherwise unfulfilling. You feel like you would be even more miserable if you had to hunt for a new job or move to a less expensive city, so you end up doing nothing. In computer science, this kind of imperfect solution is called a “local maximum.” You’ve climbed the closest hill, but there’s a taller one—a better solution—that you can’t see from where you are.
Genetic Algorithms: Solutions That Evolve
Hill climbing is just one of many types of optimization algorithms—there are many fascinating algorithms that Christian and Griffiths choose not to discuss. One closely-related kind of algorithm is the genetic algorithm, an optimization technique closely modeled after biological evolution across several generations.
Genetic algorithms are similar to the hill-climbing algorithm in that they procedurally generate many solutions to an optimization problem, getting progressively closer to the ideal over time. However, instead of incrementally adjusting the same solution (hill climbing), the genetic algorithm “breeds” together pairs of solutions to create new, better solutions that “inherit” useful features from their parents.
Genetic algorithms begin by generating a “population” of many decent (but not perfect) solutions. Each solution is scored on “fitness”—how well it solves the optimization problem. Then, the algorithm randomly pairs the highest-scoring solutions into couples and combines them into a new “child” solution. Since only the fittest solutions from a given population are selected to reproduce, the new population of children is (on average) fitter than the original population. The algorithm repeats this process over several generations until it accepts a solution.
Like the hill-climbing algorithm, genetic algorithms run the risk of getting stuck on local maxima. If the perfect solution requires a component that isn’t found in the gene pool, the genetic algorithm will never be able to find it. Genetic algorithms fix this issue by implementing mutations, random variations in the child solutions they generate. This added randomness ensures that no solution is permanently out of reach. (As we’ll see next, hill-climbing algorithms use randomness in much the same way.)
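Here’s a compact genetic-algorithm sketch in Python. The bit-string “genes,” the fitness function, and every parameter are illustrative assumptions chosen to keep the example small:

```python
import random

def fitness(genes):
    return sum(genes)                      # toy goal: maximize the number of 1s

def genetic_algorithm(gene_length=20, population_size=30, generations=50,
                      mutation_rate=0.02):
    # Start with a population of random, decent-but-imperfect solutions.
    population = [[random.randint(0, 1) for _ in range(gene_length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Keep only the fittest half as parents.
        parents = sorted(population, key=fitness, reverse=True)[:population_size // 2]
        children = []
        while len(children) < population_size:
            mom, dad = random.sample(parents, 2)
            cut = random.randrange(1, gene_length)          # "breed" two parents
            child = mom[:cut] + dad[cut:]
            # Mutation: rare random flips keep no solution permanently out of reach.
            child = [g if random.random() > mutation_rate else 1 - g for g in child]
            children.append(child)
        population = children
    return max(population, key=fitness)

print(fitness(genetic_algorithm()))        # usually at or near the maximum of 20
```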
Genetic algorithms are effectively a more complex version of the hill-climbing algorithm—if they were surgical tools, hill climbing would be a scalpel, while genetic algorithms would be a surgical laser. While humans can intuitively use the hill-climbing algorithm in their own lives, we’d struggle to wrap our heads around a genetic algorithm without a computer.
Furthermore, genetic algorithms require far more processing power than the hill-climbing algorithm, and since they’re so complex, it’s easier to make mistakes when implementing a genetic algorithm. For these reasons, genetic algorithms are only used by professionals with significant time and resources, like engineers who need to design aircraft that are as close to perfectly aerodynamic as possible. For the rest of us, hill-climbing algorithms remain the optimal solution.
According to Christian and Griffiths, the way to escape local maxima is an injection of irrational randomness. By making a few random, intentionally suboptimal decisions, you can discover opportunities you couldn’t see before and get unstuck.
Christian and Griffiths explain how researchers use a modified hill-climbing technique called the Metropolis Algorithm that incorporates a random chance for the computer to accept worse solutions to avoid getting stuck at local maxima. By pushing through periods of temporary backtracking, the computer can find new optimal solutions it didn’t know existed.
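Here’s a sketch of the Metropolis-style acceptance rule, which replaces the strict “keep only the better of the two” step in the hill-climbing sketch above. The exponential acceptance probability is the standard formulation; the rest is illustrative:

```python
import math
import random

def metropolis_step(current, candidate, score, temperature=1.0):
    """Accept any improvement; accept a worse candidate with a probability that
    shrinks the worse it is, so the search can back out of dead ends."""
    gain = score(candidate) - score(current)
    if gain >= 0 or random.random() < math.exp(gain / temperature):
        return candidate
    return current
```

Swapping this step into the earlier hill-climbing loop occasionally lets the search walk downhill, which is exactly what it needs to escape a local maximum.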
Similarly, the authors imply that a sufficient amount of randomness in your real-life problem-solving leads you down paths you would have never chosen, some of which lead to greater success. In our dead-end job example, a random change, whether it be a new job or a new home, would force you out of your comfortable but unfulfilling local maximum and give you a chance to find a better “new normal.” Your living conditions may become temporarily worse, but unforeseen new opportunities to improve your life will hopefully leave you better off in the long run.
Enlightenment Through Extreme Randomness
In How to Live, Derek Sivers takes Christian and Griffiths’s argument to the extreme, advocating for a fulfilling life entirely built around randomness. Sivers argues that the world is already mostly random—we assume that the world is made up of countless series of causes and effects, but in reality, it’s mostly coincidences and chance. In the grand scheme of things, making your decisions randomly won’t take much away from your life.
Like Christian and Griffiths, Sivers points out that random decision-making lets you encounter valuable experiences that you never would have intentionally chosen. According to Sivers, these random experiences will transform you. You’ll no longer base your identity or self-worth on your career or the way you dress since you didn’t choose them.
In fact, he argues that by making your decisions randomly, you can live a life entirely without ego. You never need to worry about whether or not you’re making the responsible choice, or if you’re doing everything you can to ensure a good future. Instead, you’re free to live wholly in the present, enjoying life as it is instead of how it could be.
This is an intentionally extreme argument—How to Live offers 27 extreme manifestos like this that all contradict one another, to show how no one guiding philosophy is capable of fully capturing the complexity of life. It probably wouldn’t be wise to make all your decisions randomly as Sivers recommends here, but it’s possible that adding randomness to your life in moderation, as Christian and Griffiths suggest, will ease the burdensome expectations you put on yourself.
Random experimentation is necessary to discover new options and avoid getting stuck, but Christian and Griffiths argue that fully unconstrained randomness can be destructive. If you let chance make all of your decisions, it’ll quickly ruin your life. Instead, to take advantage of randomness, you need to deliberately select the best ideas you discover through random experimentation.
Christian and Griffiths advise that when you find an idea that looks promising, you should override random generation and pursue it to its conclusion. Don’t let your dependence on randomness prevent you from exercising good judgment.
Christian and Griffiths argue that this balance is at the heart of all creativity and innovation. To be successfully creative, you need to alternate between randomly pursuing the irrational and purposefully eliminating bad ideas. It’s impossible to predict what ideas are good while you’re brainstorming—you need to generate as many random ideas as possible and wait until later to evaluate them.
Christian and Griffiths point out that randomness has played a major role in groundbreaking discoveries by scientists and masterpieces by legendary artists. Creative masters require both an abundance of random ideas and the keen rationality to select and execute them.
How to Be More Creative: Ten Ideas a Day
Christian and Griffiths argue that random idea generation is the key to creativity, but they don’t offer specific advice on how to generate random ideas. In theory, it sounds simple—just write down anything that crosses your mind—but as you’re probably aware, generating ideas is easier said than done.
Entrepreneur James Altucher argues that your capacity for coming up with creative ideas is a muscle—the more you use it, the better you’ll get. Specifically, he prescribes a simple daily habit that he asserts will permanently boost your creativity to a world-class level within six months: Write down ten ideas a day. These ideas can be anything—Altucher creates lists like “ten old ideas to make new,” “ten book ideas,” and “ten things I want to get better at.” The content of the lists doesn’t matter. What matters is that you’re practicing coming up with ideas past the point where it gets hard.
If this is too difficult, Altucher offers a technique to help you get unstuck: If you can’t come up with ten ideas, come up with twenty. Altucher argues that the reason most people can’t come up with ideas is that they’re putting too much pressure on themselves. Doubling the size of your list forces you to lower your standards—if you can’t come up with a good idea, use a bad one.
Altucher’s advice lines up with Christian and Griffiths’s analysis that 50% of creativity is coming up with ideas, but it seems to ignore the other half: idea selection. What good is this exercise if you only come up with bad ideas? Altucher downplays the role of idea selection—in his eyes, there’s no way to know if an idea is good or not unless you test it. Creative geniuses don’t know for certain which ideas are winners; they just execute one idea after another until one succeeds.
At first glance, this seems to contradict Christian and Griffiths’s assertion that deliberate selection is necessary for creative success. However, you could see Altucher’s “testing” process as its own kind of selection—one where an external judge, such as a base of customers, does the selecting for him.
Christian and Griffiths assert that over the course of your attempts to solve a problem, you should start with a high level of randomness and slowly become more rational and consistent over time. This strategy is called “simulated annealing” because it’s based on annealing, the slow cooling of metals to organize their atomic structure.
Christian and Griffiths explain that in computer science, simulated annealing has proven to be one of the most effective optimization algorithms to date. The initial high level of randomness helps you discover as many different potential solutions as possible without spending the resources to fully commit to any single direction. This also helps you avoid getting stuck at any local maxima. As time goes on, you’ll become less likely to pivot and more likely to refine the best solution you’ve found so far.
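Combining the earlier hill-climbing and Metropolis sketches gives a minimal simulated-annealing routine: the “temperature” starts high, so many downhill moves are accepted, and it cools until the search behaves like plain hill climbing. The cooling rate, step size, and iteration count below are arbitrary choices for illustration:

```python
import math
import random

def simulated_annealing(score, start, temperature=10.0, cooling=0.995,
                        step=0.5, iterations=5_000):
    current = start
    for _ in range(iterations):
        candidate = current + random.uniform(-step, step)
        gain = score(candidate) - score(current)
        # Early on (high temperature), even bad moves are often accepted;
        # later (low temperature), the search behaves like strict hill climbing.
        if gain >= 0 or random.random() < math.exp(gain / temperature):
            current = candidate
        temperature = max(temperature * cooling, 1e-6)
    return current

def score(x):
    return -(x ** 4) + 4 * x ** 2 + x    # the same bumpy, two-peaked landscape as before

print(simulated_annealing(score, start=-2.0))  # often crosses the valley to the taller peak
```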
You can apply this schedule for randomness to any period of problem-solving, whether you’re writing a novel, launching a new product at your business, or just living your life. Start by pursuing any idea that crosses your path, then slowly narrow your focus to the most promising solutions.
Simulated Annealing Your Life Purpose
In Range, David Epstein advocates for a form of simulated annealing to help you accomplish your life purpose. He argues that everyone who wants to be a world-class achiever should begin their careers with a “sampling period” in which they test out many different activities before picking one to master. By randomly experimenting before committing to specialized training, you have a better chance to discover the long-term pursuit that fits you best.
This contradicts the widely-held assumption that to be the best in the world at something, you need to train for it from birth. For example, Tiger Woods’s father taught him how to putt at 10 months old, and he won his first tournament at age two. However, for every Tiger Woods, Epstein argues that there’s a Roger Federer—the tennis player who spent six years ranked as the best in the world. Federer played soccer, basketball, and many other sports before deciding as a teenager to focus on professional tennis, and he still managed to become a master. The need to specialize as early as possible is a myth.
Golf might have been a local maximum for Tiger Woods. He never had the chance to test out other vocations, so it’s possible he would have been happier trying to be an architect or a scientist. Simulated annealing—a period of early exploration before committing to a single path—would have ensured that he was pursuing the best solution available.
Christian and Griffiths argue that when you don’t know what else to do, random action is better than nothing. Try using a random suggestion to solve one of your problems.
Describe a problem you’ve been unable to solve or an area in life in which you feel stuck. Then, pick a number from one to six.
Consider a random solution to your problem. If you picked the number:
What would happen if you followed this random suggestion? (Seriously consider doing it—randomness is meant to get you to do something you would never normally do.)
Did reflecting on a random solution spawn any new insights about your problem? Did this process convince you to consider using randomness when solving problems in the future? Why or why not?
To conclude, we’ll take a look at a few of Christian and Griffiths’s algorithms that don’t quite fit into any of our previous categories.
This next algorithm shows us how we should view the rules that govern our society: To prevent collective harm, design the rules of the game to create win-win scenarios. Here, Christian and Griffiths introduce us to the idea of game theory, the mathematical study of competition between strategic decision-makers—people, computers, or even animals.
In these competitions, each player is trying to maximize their benefits, but importantly, their behavior has a direct impact on the other players. This typically results in endless strategizing and re-strategizing as each player attempts to predict what the others will do, knowing that everyone else is trying to do the same thing.
The Impact of Game Theory on Economics
Christian and Griffiths claim that game theory was one of the most influential theoretical advances of the twentieth century. What made it such a significant discovery?
By far, game theory’s largest impact has been in the field of economics. For most of modern history, economic theory was severely limited in its application to the real world. The generally laissez-faire principles of classical economist Adam Smith were eventually mathematically formulated and proven to be true, but only in “perfectly competitive markets”—that is, in theoretical markets without costs of entry, economies of scale, or any other side effects of the messy real world. Without game theory, economists had to assume that the closer a market was to this theoretical ideal, the more efficient it would be—which, as it turns out, isn’t the case.
While it had been a topic of conversation in academia for a couple of decades, most agree that game theory was truly founded in 1944, when John von Neumann and Oskar Morgenstern published the book Theory of Games and Economic Behavior. The book marked a sea change in economics—ever since its publication, numerous theoretical breakthroughs based on game theory have allowed us to more accurately model the nuanced mechanics of real-world economics. The most significant of these breakthroughs are the discovery of Nash equilibria and the mathematical formulation of mechanism design, both of which we’ll discuss shortly.
Before we discuss the specifics of this algorithm, we need to understand a little more background from the field of game theory. One of the most important concepts in game theory is the “Nash equilibrium”—a theoretical game state in which every player is fully aware of their opponents’ strategies yet sees no need to change their own. The game becomes stable and unchanging because each player has settled into the best possible strategy given what everyone else is doing.
This concept is important because if you can calculate an equilibrium, you can predict the inevitable stable outcome of any game’s rules and incentives. Christian and Griffiths assert that this function makes knowledge of Nash equilibria invaluable to policymakers of all kinds who want to bring about positive changes to the status quo.
Nash Equilibria Are Imprecise Tools
Christian and Griffiths frame the Nash equilibrium as a useful tool for policymakers. However, some argue that Nash equilibria are nearly useless for this purpose. Even Christian and Griffiths admit that most of the time, Nash equilibria are far too computationally demanding to predict or calculate in practice, hindering their practical use.
On the other hand, the concept of Nash equilibria is, at the very least, useful as a general intellectual framework, even if they can’t be precisely calculated. The concepts and vocabulary of Nashian game theory help decision-makers ask the right questions and glean new insights. For example, a lawmaker introducing a new policy doesn’t need to mathematically calculate its exact equilibrium—all game theory needs to do is spark the question: “Will following this policy be the optimal strategy for everyone?” If not, others will find a way around it, and the policy likely won’t function properly. In short: Vague, imprecise game theory is still useful.
Christian and Griffiths detail one major type of problem that game theory is good at solving: mutually harmful equilibria. When the optimal strategy for each player causes more harm to others than it helps the one using it, the system incurs a collective loss. In this case, the current Nash equilibrium hurts all players involved.
For example, bluefin tuna fishing has a mutually harmful equilibrium. This tuna makes valuable sashimi, so each fisher’s optimal strategy is to catch and sell as many tuna as possible. However, after years of overfishing, the tuna is in danger of going extinct—if it isn’t protected, it’ll disappear, harming all the fishers who used to profit from it.
Mutually Harmful Cultures
In Sapiens, Yuval Noah Harari frames human culture as a series of games with often arbitrary rules—and sometimes, mutually harmful equilibria. The goals that humans are culturally compelled to achieve may end up doing collective harm to the human race. For example, two countries who believe they need military superiority over the other engage in an arms race, spending vast amounts of their nations’ wealth on progressively more powerful weapons. In the end, the balance of power remains the same as when it began, but the countries have wasted money that could have been used to improve their citizens’ quality of life.
Christian and Griffiths argue that the way to ensure beneficial outcomes is to adjust the system’s rules to yield a better equilibrium.
The authors explain that the rules of a game have a far greater impact on its outcome than the intentions of its individual players. This is because the rules determine the Nash equilibrium—whether individuals’ optimal self-serving strategies help or harm the collective.
The process of designing rules to accomplish specific outcomes is known in game theory as “mechanism design.” By adding negative consequences to dissuade behavior that’s harmful to the collective, you change the optimal strategy and create a more beneficial equilibrium.
For example, a father who wants his children to share their toys might tell the kids that if they ever start fighting over a toy, it’ll get taken away. This way, if all the kids want to play, sharing is the optimal strategy and taking turns is the Nash equilibrium.
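Here’s a toy Python sketch of that rule change, with made-up payoffs: each child chooses to “share” or “grab,” and the father’s rule takes the toy away from any child who grabs. Checking each player’s best response shows how the penalty moves the equilibrium:

```python
def payoff(my_move, other_move, grab_penalty=0):
    """Hypothetical payoffs. Without the father's rule, grabbing always pays;
    with it, any child who grabs has the toy taken away (the penalty)."""
    base = {("share", "share"): 3, ("share", "grab"): 1,
            ("grab", "share"): 4, ("grab", "grab"): 2}
    return base[(my_move, other_move)] - (grab_penalty if my_move == "grab" else 0)

def best_response(other_move, grab_penalty):
    return max(["share", "grab"], key=lambda m: payoff(m, other_move, grab_penalty))

for penalty in (0, 3):
    # A strategy pair is a Nash equilibrium when it's each player's best response to the other.
    responses = {other: best_response(other, penalty) for other in ("share", "grab")}
    print(f"penalty={penalty}: best responses -> {responses}")
# penalty=0: grabbing is the best response to everything, so "everyone grabs" is the equilibrium
# penalty=3: sharing is the best response to everything, so "everyone shares" is the equilibrium
```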
Incentives Don’t Always Work the Way We Expect
Mechanism design is a tricky business—since rule adjustments have such a significant impact on the game’s outcome, small unforeseen side effects can have extreme consequences. In Freakonomics, Steven Levitt and Stephen Dubner describe an unusual way that mechanism design can go wrong.
The Freakonomics authors categorize incentives into three types: economic, social, and moral. Economic incentives are when you save money by behaving well, social incentives are when you can maintain a good reputation by behaving well, and moral incentives are when you can satisfy your conscience by behaving well. The authors argue that economic incentives have the potential to backfire by inadvertently removing social and moral incentives.
For example, a study of ten daycares in Israel discovered that when parents were fined $3 as a penalty for arriving late to pick up their children, the number of late parents doubled. Levitt and Dubner theorize that the fine caused parents to see the obligation as economic instead of social and moral—after paying a few bucks, they no longer felt guilty for being late. This kind of unpredictable human psychology shows how difficult it is to design proper incentives.
Christian and Griffiths note that one simple way to improve the equilibrium of a competitive game is to change the rules to make honesty and transparency the optimal strategy.
The authors argue that when you design a game in which players are incentivized not to hide their intentions, it eases the burden on everyone. Players no longer have to stress about predicting what others are going to do, or what others think they’re going to do. Unnecessary cognitive exertion fatigues and frustrates us, and thoughtful system designers do what they can to minimize it.
(Shortform note: These principles are especially true in the world of business. In Radical Candor, Kim Scott argues that the key to business management is to create an environment in which total honesty is the optimal strategy. If everyone shares brutally honest feedback while maintaining genuine care and empathy, team members never have to stress about what information others are hiding from them, allowing them to build trusting relationships with one another.)
Additionally, Christian and Griffiths argue that making honesty the optimal strategy prevents a harmful phenomenon known as the “information cascade.” In games of hidden information, players can see each other’s actions but don’t know the logic and strategy behind others’ decisions. In an information cascade, one player makes a suboptimal decision based on misinformation or poor judgment. Others see this decision and, assuming that player is competent or knows something they don’t, copy the same suboptimal decision.
As more and more players make the same decision, it starts to look like a consensus of many experts, and others become even more likely to join the herd. Soon, a large crowd is all making the wrong decision, suffering mutual losses caused by a faulty set of rules that reward hidden information. Christian and Griffiths assert that this can be avoided if players are incentivized to share their knowledge.
The Dangers of Social Proof
The information cascade is an example of social proof—the human tendency to imitate other people in uncertain situations. In Influence, Robert Cialdini describes other instances of this phenomenon and offers advice on how to avoid being swept up in harmful herd behavior.
Social proof causes something called the bystander effect—the social phenomenon that people are less likely to help someone in crisis if other bystanders are nearby. Just like in an information cascade, each person assumes that others know what they’re doing and copies their behavior. For example, if someone were to have a heart attack and collapse in a public park, it’s possible every witness would notice no one is panicking and conclude that there is nothing to panic about, leaving the victim without help of any kind.
Cialdini recommends that whenever you catch yourself blindly following the lead of those around you, question whether you have a good reason to do so. Additionally, if you’re the one who needs assistance from a crowd, you should specifically single out one individual to go get help to negate the bystander effect.
To conclude, we’ll cover a set of algorithms that Christian and Griffiths have adapted from the Internet’s networking protocols. Transmission Control Protocol (“TCP”) is the main body of rules dictating how computers communicate over the Internet. These algorithms ensure that both sides understand the information being sent, make sure it’s successfully received, and manage congestion resulting from unpredictable quantities of transmitted data.
(Shortform note: Even though this protocol was established way back in 1974, its basic structure is exactly the same today. According to Vint Cerf, co-creator of TCP, the protocol has lasted this long because they specifically designed it to never become obsolete—they wrote a language that could be used regardless of what medium was transmitting it. This way, the protocol would continue to function effectively through networking technology that hadn’t been invented yet, like Satellite Internet.)
In this chapter, we’ll discuss four algorithms from TCP that, according to Christian and Griffiths, we can and should apply in various areas of human life.
Christian and Griffiths explain that digital “connections” between computers aren’t continuous streams of data, even though many activities like video streaming appear continuous. Rather, data flows in bursts called “packets” that take only milliseconds to send.
To ensure that a connection is stable, both sides exchange “acknowledgment packets,” or “ACKs.” These are short messages that tell the other computer that its message has been received. These ACKs are a vital part of the communication process, and they make up a huge portion of all uploaded data.
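Here’s a bare-bones Python sketch of the packet-and-ACK exchange. The lossy “channel” and its loss rate are invented for illustration; real TCP is far more involved:

```python
import random

class LossyChannel:
    """A toy receiver that occasionally 'loses' a packet, so no ACK comes back."""
    def __init__(self, loss_rate=0.3):
        self.loss_rate = loss_rate
        self.received = []

    def send(self, number, data):
        if random.random() < self.loss_rate:
            return None                           # packet lost in transit
        self.received.append(data)
        return f"ACK {number}"                    # short acknowledgment of receipt

def send_message(text, channel, packet_size=8):
    """Split a message into small packets and resend each one until it's ACKed."""
    packets = [text[i:i + packet_size] for i in range(0, len(text), packet_size)]
    for number, data in enumerate(packets):
        while channel.send(number, data) != f"ACK {number}":
            pass                                  # no acknowledgment yet, so try again

channel = LossyChannel()
send_message("Data flows in short bursts called packets.", channel)
print("".join(channel.received))                  # the full message, reassembled
```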
Christian and Griffiths assert that, similarly, acknowledgment is an extremely important part of human communication, and it’s one that we often overlook. Recent research in the field of linguistics has put renewed focus on “backchannels,” a listener’s short interjections that acknowledge a speaker’s message without ending their turn to speak.
Backchanneling is a powerful determinant of success in communication. Christian and Griffiths cite studies showing that when someone is telling a story, consistent, engaged responses from their listeners have a drastic impact on how well the story is told. When distracted listeners fail to fully acknowledge stories they’re being told, the storytellers tend to get defensive, justifying why their story is interesting instead of trying to tell it well.
In sum, Christian and Griffiths argue that if you want to connect to the people around you, you need to become a good listener. Being quiet and polite isn’t enough—if you don’t give active feedback, communication falls apart.
What Does a Good Listener Really Look Like?
Common advice on being a good listener often contradicts Christian and Griffiths’s perspective—for example, Dale Carnegie’s classic How to Win Friends and Influence People argues that the best conversationalists do nothing but listen attentively while the other person talks. However, recent research indicates that this isn’t the whole picture: As Christian and Griffiths maintain, the best listeners are much more active in conversation than Carnegie claims.
You can sum up much of the confusion surrounding active listening in one question: Do good listeners offer their opinions and suggestions or do they just let the other person vent? A common criticism of bad listeners is that they attempt to solve problems right away instead of listening—unlike in network transmission, not all “acknowledgment packets” make your conversation partner feel heard.
To determine what kinds of listener interjections are welcome, researchers have deconstructed the act of listening into six “levels.” Each level is built on the previous one, indicating a deeper connection between speaker and listener. If you interject in the wrong way before you reach a high enough listening level, you may come across as insensitive or disingenuous.
The deeper you want a conversation to be, the further you should advance from Level One to Level Six:
Level One: Create a safe environment for the speaker, where they can be comfortable being vulnerable.
Level Two: Put away all distractions and give the speaker your full attention.
Level Three: Verbally acknowledge that you understand what the speaker is saying, without offering ideas of your own. Ask questions if you need clarification. This is the “backchanneling” that Christian and Griffiths discuss.
Level Four: Actively interpret the speaker’s body language and nonverbal cues and gain a full understanding of their emotional state.
Level Five: Verbally empathize with and validate the speaker’s emotions. Make them feel supported, without judgment.
Level Six: Offer your thoughts and opinions that you think would be useful to the speaker.
With these levels in mind, follow these rules in your next conversation:
Only acknowledge that you’re listening and understanding the speaker (Level Three) after you’ve made them feel comfortable to talk (Level One) and given them your full attention (Level Two).
Only convey that you understand how they feel (Level Five) after the speaker has confirmed you accurately understand the situation (Level Three) and you’ve actively observed their emotional state (Level Four).
Most importantly: Only offer suggestions or advice (Level Six) after doing all of the above. The speaker needs to know you understand the situation, acknowledge how they feel, and support them unconditionally before they’ll be receptive to your advice.
Christian and Griffiths explain that TCP utilizes an algorithm called “Exponential Backoff” so computers can determine how much of their limited resources to invest in unreliable connections.
If your computer doesn’t receive an ACK when it tries to connect to a server, it retries. If it still gets no response, indicating that the server may be down, it waits a random amount of time within a small window before trying again. After every failed connection attempt, that window doubles, so the computer waits longer and longer between retries until it finally gets through. This is the Exponential Backoff algorithm.
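Here’s a minimal sketch of randomized Exponential Backoff in Python; the flaky “server,” the base delay, and the cap are placeholders for illustration:

```python
import random
import time

def connect_with_backoff(attempt_connection, base_delay=1.0, max_delay=3600):
    """Keep retrying a failing connection, doubling the window of random wait
    times after each failure: less and less effort wasted, but never giving up."""
    window = base_delay
    while not attempt_connection():
        time.sleep(random.uniform(0, window))     # wait a random time within the window
        window = min(window * 2, max_delay)       # double the window after every failure

# Example: a flaky "server" that only comes back up on the fourth attempt.
attempts = {"count": 0}
def flaky_server():
    attempts["count"] += 1
    return attempts["count"] >= 4

connect_with_backoff(flaky_server, base_delay=0.01)
print("connected after", attempts["count"], "attempts")
```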
Christian and Griffiths explain the value of this technique: With this algorithm, your computer doesn’t waste resources constantly trying and failing to reconnect. Simultaneously, however, your computer never entirely gives up hope, ensuring that if the server ever comes back online, it’ll get through.
Christian and Griffiths argue that we should use Exponential Backoff to handle everything unreliable in our lives. This algorithm shows us how to invest less in the unreliable without ever completely giving up hope.
For example, the authors describe a friend of theirs trying to arrange a get-together with an old companion who keeps canceling at the last minute. They recommend that she apply Exponential Backoff to the time between invitations: scheduling a meetup a week in advance, then, if her friend cancels, two weeks in advance, then a month, and so on. This way, she invests less and less time and energy in someone who keeps proving unreliable, while still giving them a chance to change their behavior.
The Cost of Endless Hope
Arguably, this Exponential Backoff algorithm works better for computers than it does for people. The time and energy that computers waste by trying and failing to connect to an unreliable server are almost negligible. For human beings, on the other hand, repeated rejections can potentially have a severe emotional toll. Continually placing your hope in someone or something unreliable may cause you to suffer the same pain over and over again—sometimes, you may be better off giving up hope entirely.
For example, as the woman in the authors’ example repeatedly tries and fails to reconnect with her friend, she may blame herself, spiraling into regret for any time she offended her friend in the past. Even if she waits weeks or months before trying again, she’s likely to work up hope only for it to be shattered again, throwing her into the same cycle of despair. The best thing for her may be to let the friendship end and move on.
Here are a few signs that indicate it’s time to give up hope:
You’re holding onto the past. If you find yourself trying and failing to return to the way things used to be, it’s likely time to give up hope and move on. The one constant in life is change. By clinging to something that no longer exists, you prevent yourself from discovering better opportunities in the present.
You’re unhappy now because you want to be happy in the future. Another red flag is the belief that all you need to do is hold out hope a little longer until things get better. Unless you have a real reason to believe that your situation will change soon, the only thing your hope does is discourage you from doing something that will actually change things.
You’re avoiding an unpleasant task. Often, people cling to misguided hope to avoid doing something unpleasant. Check your gut: If you know, deep down, that you need to take action (for example, quitting your unfulfilling job or breaking off a toxic relationship) the real problem is likely your illusory hope that the problem will solve itself.
TCP also uses a form of Exponential Backoff to prevent network congestion. Christian and Griffiths explain that networks face a big problem: They can only handle a finite amount of data. If users try to force too much data through a single link, everyone’s connections slow to a crawl, and the network becomes unusable for everybody. If left unchecked, computers will continue sending data even though very little is successfully delivered, trapping the network in permanent gridlock.
(Shortform note: This characteristic of networks renders them vulnerable to something called a denial-of-service (“DoS”) attack. By causing overwhelming swarms of computer systems to connect to the same server at once, assailants can shut it down for everyone. DoS attacks are treated as a criminal offense around the world—however, the hacker group Anonymous once filed a White House petition to have DoS attacks recognized as a form of peaceful protest.)
To prevent gridlock, each sender and receiver must dynamically react to the current level of network congestion, transmitting less data when the connection is overloaded and transmitting more when the network can handle it.
Christian and Griffiths reveal that the strategy that’s most effective at accomplishing this goal is an algorithm called “Additive Increase Multiplicative Decrease” (AIMD). This algorithm has two parts: “Multiplicative Decrease” is essentially Exponential Backoff. When a computer doesn’t receive an ACK, indicating that the network is beginning to fail, it cuts the speed at which it’s sending data in half. If all senders and receivers do this simultaneously, network congestion is cleared up instantly as it occurs.
“Additive Increase” means that, after the computer cuts the data it’s sending by half, it will slowly, incrementally speed back up, testing how much it can push the network before causing it to fail and cutting speeds in half again. This pattern allows each computer to successfully transmit as much data as possible without causing systemic failure. They get as close as possible to the line without ever crossing it.
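Here’s a sketch of the AIMD update rule in Python, with illustrative numbers for the sending rate and the congestion signal:

```python
def aimd_update(rate, congestion_detected, increase=1.0, min_rate=1.0):
    """Additive Increase, Multiplicative Decrease: creep upward while the
    network is healthy, cut the sending rate in half the moment it isn't."""
    if congestion_detected:          # a missing ACK signals an overloaded network
        return max(rate / 2, min_rate)
    return rate + increase           # otherwise, gently probe for more capacity

rate = 10.0
for congested in [False, False, False, True, False, False]:
    rate = aimd_update(rate, congested)
    print(rate)                      # 11.0, 12.0, 13.0, 6.5, 7.5, 8.5: the classic sawtooth
```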
Christian and Griffiths argue that we should use this same strategy to maximize the returns on our resources in situations where overinvestment causes waste and failure. Often, we also want to get as close as possible to the line without crossing it.
People Trust Others Using AIMD
Christian and Griffiths note that in many cases, humans already follow this strategy: for instance, when determining how quickly to reply to texts. One important facet of human relationships based on AIMD that the authors don’t discuss is trust—it takes years to slowly build up intimate trust in someone, but if they ever do something that compromises your trust, it only takes seconds for you to pull away.
It makes sense that people would use AIMD in this context: You typically want to trust others as much as possible, but trusting the wrong person too much can result in a painful betrayal. For everyone you meet, you want to get as close to the line as possible without crossing it.
Because people build trust in you incrementally—and that trust can shatter in an instant if you “cross the line”—it’s difficult to deliberately earn someone’s trust. In Dare to Lead, Brené Brown offers tips to help you win the trust of others. According to Brown, all you need to do to earn someone’s trust is exhibit small, trustworthy behaviors over a long span of time:
Set boundaries and respect the boundaries of others: Make it clear what you’re willing to do for others and don’t hold it against anyone if they refuse to do something for you.
Be reliable: Follow through on your commitments.
Be accountable: Don’t blame anyone else for your mistakes.
Keep things confidential: Don’t spread information that isn’t yours to give away.
Act with integrity: Show that you’re willing to make sacrifices to do what’s right.
Be nonjudgmental: Don’t shame the ideas or emotions of others.
Assume the best in others: Counteract your negativity bias by giving others the benefit of the doubt when they do something wrong.
To apply this algorithm to real life, the authors use the example of the Peter Principle—the 1960s business management idea that in a hierarchy, people “rise to the level of their incompetence.” The Peter Principle states that since people earn promotions when they’re good at their job, they get stuck in jobs they’re bad at. As a result, many organizations are full of people doing their jobs poorly.
Christian and Griffiths argue that AIMD offers a solution to this problem: a hierarchy that is being constantly overturned. A business run according to AIMD would make sure that each employee is constantly being promoted until they hit a wall where their skills fail. Then, they’re severely demoted and the process starts again.
This way, frequent changes would ensure that the company is pushing each employee to their maximum potential and no further. An employee’s talents would never be wasted for long on an easy job of little consequence, nor one that’s too difficult for them.
How to Overcome the Peter Principle
Christian and Griffiths’s suggestion to combat the Peter Principle is likely too extreme for managers to use in most situations. Unless you’re restructuring a business from the ground up, you won’t be able to apply the authors’ advice of constantly promoting and demoting your employees. Here are some more practical tips for managers in typical organizations to help keep the Peter Principle from taking hold:
Promote based on skillset, not performance. Just because an employee does outstanding work doesn’t necessarily mean they’re suited for a higher role in the company. Different titles often require entirely different skillsets. The more subordinates an employee is in charge of, the more important “soft skills” like communication become—if your employee lacks these skills, they shouldn’t be promoted.
Provide formal training before promotions. Often, even talented employees need specific direction before they can conquer new responsibilities. Walk them through the new job or assign them a mentor to make sure the transition is successful.
Don’t force employees to take new responsibilities. If an employee has more passion for the job they’re working now than a higher-level position, think twice before promoting them.
Christian and Griffiths describe one recent fiasco in the field of computer science: an unexpected phenomenon called “bufferbloat.” If you have fast Internet that still suffers from inexplicable latency and unreliability, bufferbloat is likely the culprit.
Christian and Griffiths explain that your home modem has what’s called a buffer: a queue where incoming data waits until the modem gets around to processing it. This helps the modem smooth out intense bursts of data instead of dropping them.
However, if your modem’s buffer is too big—if its “to-do list” is too long and grows too fast to ever get finished—the ACKs your computer sends to web servers through the modem take an extremely long time to get delivered, causing your download speed to plummet. Essentially, bufferbloat makes your modem tell your computer that it can handle far more information than it really can, causing your computer to send too much data, clogging up the connection between itself and the Internet.
The authors assert that the fix for bufferbloat is rejection: instead of accepting and queuing everything, your modem needs to drop some of your computer’s data outright. The missing acknowledgments signal your computer to slow down, giving the modem a chance to clear its backlog of tasks.
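Here’s a toy Python illustration of the problem and the fix, with made-up rates: when data arrives faster than the modem can forward it, an unlimited buffer lets the backlog, and the delay every new packet faces, grow without bound, while a small buffer that drops the excess keeps the delay bounded:

```python
def queue_delay(arrival_rate=12, service_rate=10, buffer_limit=None, seconds=60):
    """Each second, packets arrive and the modem forwards what it can; whatever is
    left waits in the buffer. Returns how long a newly arriving packet must wait."""
    backlog = 0
    for _ in range(seconds):
        backlog += arrival_rate                   # new packets join the queue
        backlog = max(backlog - service_rate, 0)  # the modem forwards what it can
        if buffer_limit is not None:
            backlog = min(backlog, buffer_limit)  # a small buffer drops the excess
    return backlog / service_rate

print(queue_delay())                   # about 12 seconds of lag, and still climbing
print(queue_delay(buffer_limit=20))    # at most 2 seconds of lag
```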
Christian and Griffiths argue that many people suffer from a version of bufferbloat in real life, too. Recent communication technology has increased humans’ ability to buffer each other. Emails can pile up in a never-ending queue, as can DMs on Instagram and Twitter. Instead of simply allowing messages from others to slip through the cracks, we add them to a never-ending to-do list. Similarly, we frequently buffer news and entertainment—there’s no excuse for not staying up to date on the latest controversies and TV shows, as they’re always asynchronously available. It’s easy to become overwhelmed, trying and failing to catch up on everything you’ve saved for later.
Christian and Griffiths argue that you can use the same solution as home modems to protect yourself from bufferbloat—just say no. Shrink your buffer down to only the most essential tasks. It’s better to completely reject the unimportant than to drown in it, no longer able to devote the time and energy to what you really care about.
The Joy of Missing Out
Real-life bufferbloat is arguably one of the leading causes of “FOMO”—the “Fear Of Missing Out” on the exciting events we see happening on social media. Like their home modems, people who suffer from FOMO feel as if they have to do everything that’s expected of them and assume that there’s something wrong with rejecting a good opportunity. Often, they end up spending too much time on social media, as the rush of endless interaction creates the illusion that they’re living the maximum amount of life.
The healthier mindset is to try and cultivate “JOMO”—the “Joy Of Missing Out.” Following Christian and Griffiths’s advice and rejecting opportunities instead of buffering them doesn’t have to be a mere sacrifice for your health, like brushing your teeth. Instead, every missed opportunity has the potential to be a source of joy. Here are a few benefits of “missing out” that may convince you to start saying “no” more often:
By committing to fewer things, you can invest in them deeply. If you try to be friends with everyone, you’ll end up without a single true, deep friendship. In this way, many of the most meaningful things in life can only be gained by saying “no” to everything else. If you try to avoid missing out on anything, you’ll ironically end up missing out on the valuable life experiences that are impossible to post on Instagram. Staying home from a concert to spend time reading to your kids may seem like you’re trading the extraordinary for the ordinary, but many would argue that a deep, loving parent-child relationship built up over years is the most extraordinary experience possible.
Intentionally passing up an opportunity is empowering. The act of intentionally choosing how you want to spend your time is inherently satisfying, no matter what you end up doing. Saying “no” to an invitation reinforces the feeling that you’re in charge of your own destiny. Compare this to the alternative—going to a party because you feel like you have to. This is a recipe for feelings of helplessness and resentment.
Sympathetic, vicarious joy can be a great feeling. When you hear about an opportunity you’re missing out on, instead of feeling anxiety or pressure to get out and do more, try to feel empathetic joy for those out having a good time. Happiness isn’t a zero-sum game—if you’re missing a fun event for something you’d rather be doing instead, you and those at the event both win. Try to feel gratitude for the net positive of happiness in the world instead of worrying if others are happier than you.
Reflect on Algorithms to Live By as a whole. Self-help computer science is a bold, unusual idea—decide whether or not you think the authors were successful.
Recall the 11 algorithms we discussed. Which algorithm are you most eager to implement in your life, and why? How will it change the way you live?
After learning about their algorithms to live by, do you agree with Christian and Griffiths’s assertion that computer science can teach us how to live? Why or why not?