1-Page Summary

In The Age of Surveillance Capitalism, Shoshana Zuboff explores the concept and consequences of “surveillance capitalism”—a term she created to describe the invasive and controlling data collection practices that tech companies like Google, Microsoft, and Facebook have adopted to maximize their profits.

Zuboff uses scientific research and numerous real-life examples to support her insights into the development and advancement of surveillance capitalism. She also warns of the potentially disastrous outcomes for our society if we don’t advocate for change in this area.

The Age of Surveillance Capitalism is one of three books that Zuboff has written about different defining stages of technological development. It’s the culmination of years of research that she conducted while a professor at Harvard University.

In this guide, we’ll explore:

- What surveillance capitalism is and how it operates
- The conditions that led to its rise and Google’s role as its pioneer
- Why surveillance capitalism has managed to thrive
- Its ultimate goal and its consequences for our society

We’ll also expand on Zuboff’s ideas by suggesting counterarguments to some of her claims, adding context where possible to enhance understanding, providing updated examples of tech companies’ actions, and recommending concrete steps you can take to combat surveillance capitalism.

What Is Surveillance Capitalism?

According to Zuboff, surveillance capitalism is an emerging form of capitalism in which companies harvest data about our behavior, make predictions about our future behavior using that data, and sell those predictions for profit.

(Shortform note: Zuboff coined the term “surveillance capitalism” in 2014, using it in a paper exploring the future of Big Data and arguing that we have both the power and the responsibility to shape it as we see fit. The paper was the precursor to many of the ideas discussed in The Age of Surveillance Capitalism.)

Zuboff explains that although most of us are aware that big tech companies are doing something to us, we aren’t capable of understanding the situation’s complexities, implications, or magnitude because it’s entirely unprecedented—we have no other event in history to compare it to.

(Shortform note: As the first event of its kind, the rise of surveillance capitalism is one of many unprecedented events author Nassim Nicholas Taleb calls “Black Swans.” In his book, The Black Swan, Taleb explains that Black Swans are fundamentally unpredictable events that have a major impact on humanity. He argues that the only way we can prepare for these events is by accepting that they’re unpredictable. Other examples of Black Swans are World Wars I and II, 9/11, and the 2008 financial crisis.)

How Surveillance Capitalism Operates

Zuboff tells us that surveillance capitalism consists of four main components: an underlying philosophy, products, a means of production, and a marketplace. Let’s discuss each component in more detail.

Component #1: An Underlying Philosophy

Zuboff explains that although people tend to see surveillance capitalism as a type of advanced technology that’s capable of learning uncomfortably specific details about us, it’s actually a philosophy that guides how companies use technology.

The idea underlying surveillance capitalism is that serving people’s needs is less profitable and therefore less desirable than selling predictions about their future behavior. Through this lens, technology isn’t a means to make our lives better, but rather a means through which companies can better collect and control our data to maximize profits. In other words, in surveillance capitalism, the purpose of technology is to help companies collect more data, make more accurate predictions, and sell those predictions for more money.

(Shortform note: Zuboff says that the philosophy of surveillance capitalism is that selling behavioral predictions is more profitable than serving people’s needs. Why is this so concerning? In his book Basic Economics, Thomas Sowell explains that, in a free market economy, the promise of higher profits is supposed to incentivize businesses to produce goods and services that people want. If companies can earn higher profits without concerning themselves with the needs of consumers, the system breaks down and the people suffer.)

Component #2: Products

Zuboff says that in surveillance capitalism, the product that’s sold is predictions about your thoughts, actions, and emotions. Companies develop these predictions using data they collect about your behavior, both online and out in the real world. This includes everything from your online searches, text messages, and purchases to your facial expressions and attitudes.

(Shortform note: Zuboff makes it clear that large amounts of data make our behavior increasingly easy to anticipate, but just how predictable are humans in general? According to research, we’re extremely predictable. When researchers studied the way random cell phone users moved around, they found that users traveled in simple, regular patterns, regardless of age, gender, language, and other factors. These patterns were so regular that researchers could predict users' whereabouts within the next hour with 93% accuracy.)

Component #3: Means of Production

According to Zuboff, companies like Google develop their predictions through machine intelligence. Machine intelligence feeds on behavioral data, constantly learning from what it takes in. The more data it collects about how people behave, the more accurately it can predict how people will behave in the future, and the more profitable its predictions become.
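
Zuboff’s description is conceptual, but the underlying economics can be illustrated with a toy model. The sketch below is our own illustration (not Google’s actual system, whose models are vastly more complex): it predicts a user’s most likely next action from logged behavior, and its confidence grows as more behavior is observed.

```python
from collections import Counter, defaultdict

class BehaviorPredictor:
    """Toy predictor: counts which action followed each context and
    predicts the most frequent one, with a simple confidence score."""

    def __init__(self):
        self.observations = defaultdict(Counter)

    def observe(self, context, action):
        # Every logged behavior (a search, click, purchase...) updates the counts.
        self.observations[context][action] += 1

    def predict(self, context):
        # Return the most likely next action and the share of observations behind it.
        counts = self.observations[context]
        if not counts:
            return None, 0.0
        action, n = counts.most_common(1)[0]
        return action, n / sum(counts.values())

model = BehaviorPredictor()
for _ in range(9):
    model.observe("friday_evening", "orders_takeout")
model.observe("friday_evening", "cooks_at_home")
print(model.predict("friday_evening"))  # ('orders_takeout', 0.9)
```

The point of the toy: each additional observation sharpens the confidence score, which is exactly why, in Zuboff’s account, more behavioral data translates directly into more valuable predictions.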

(Shortform note: Although Google’s use of machine learning to predict human behavior and sell those predictions for profit is unprecedented, machine learning has been around since the 1950s. In 1952, Arthur Samuel of IBM wrote a computer learning program that could improve its ability to play checkers with every game by studying the winning strategies.)

Component #4: Marketplace

Zuboff says the surveillance capitalism marketplace is where companies—the customers in this form of capitalism—trade for predictions about people’s behavior. While the marketplace originally served advertisers, these days any company that wants to take advantage of information about our future behaviors can participate by purchasing behavioral predictions from companies that gather user data.

(Shortform note: In her book, Zuboff paints the expansion of this marketplace to sectors other than advertising as a negative development that typically causes harm to society. However, in some cases, behavioral predictions can be, and have been, used for good. For example, pet adoption agencies can use predictive analytics to determine which pets people are more likely to adopt. That way, they can focus their efforts on those pets that need more help being placed with families.)

The Conditions That Led to Surveillance Capitalism

Zuboff argues that surveillance capitalism was born out of a specific set of conditions—a perfect storm of factors that gave birth to a new technological philosophy. These conditions include the rise of neoliberal ideology and the intensification of surveillance following the 9/11 terrorist attacks in the United States. Let’s explore each condition separately:

Condition #1: The Rise of Neoliberal Ideology

Zuboff explains that the anti-regulation mindset of neoliberalism contributed significantly to the inception of surveillance capitalism. Following WWII and the height of the Cold War, the Western world—in particular the US and UK—was in the midst of an economic downturn. In addition, these countries were extremely wary of the governmental control underlying totalitarianism and communism in other parts of the world. As a result, the people of the US and UK began calling for more democratic participation and equal rights for marginalized groups.

As a solution to both the economic decline and the public’s demand for less governmental authority, neoliberal economists began to advocate for a radical free market based on Friedrich Hayek and Milton Friedman’s economic theories. They pushed for an unimpeded market in which competition would reign, and deregulation and privatization would replace government oversight, labor unions, and government-owned corporations.

While the US never adopted an entirely free market in practice, the essence of these neoliberal ideas took hold of the economy. Zuboff says that by the 1990s, the idea of self-regulation grew out of the absence of government regulation. Companies gained the ability to oversee themselves, which set up ideal conditions for Google to experiment with the data it had been collecting about people in whatever way it saw fit.

The Ties Between Economics and Politics

As Zuboff explains, neoliberal ideology grew in popularity because of how it seemed to address the public’s political concerns following WWII and the height of the Cold War. However, she doesn’t explain exactly why economists like Hayek and Friedman believed that neoliberalism was the answer to the public’s fears of governmental control and the lack of a political voice.

In Friedman’s book Capitalism and Freedom, which details his economic theories, he argues that economic freedom is essential to political freedom. This is because being able to choose which goods and services satisfy one’s individual needs puts significant power in the hands of the people, rather than the government. In other words, free markets give control of the economy to the public, thereby checking the power of government, maximizing individual freedoms, and preventing the oppression of individual rights.

However, one could argue that Friedman’s assertion that economic freedom gives control back to the public is misleading due to the existence of monopolies, which are a product of free, unimpeded markets. A monopoly arises when one person or organization becomes the only supplier of a product or service. Without competition, the monopolist can control prices and therefore the market itself. When monopolies have total control, it undermines the benefits of a free market outlined in Friedman’s argument, as the power is in the hands of a few corporations rather than distributed among the people. Therefore, perhaps neoliberalism wasn’t the answer to people’s fears that economists claimed it was.

Condition #2: Heightened Surveillance After 9/11

Another condition that helped pave the way for surveillance capitalism, continues Zuboff, is the US’s push for heightened surveillance after the September 11, 2001 terrorist attacks. This push influenced Google to collect more and more data about its users, and as already discussed, this data collection ultimately gave rise to the philosophy of surveillance capitalism.

Zuboff explains that before 9/11, the Federal Trade Commission (FTC) recommended that online privacy be regulated. Had it succeeded, many practices of surveillance capitalism would be illegal. However, after the attacks, the US government disregarded these recommendations and instead intensified its own surveillance in the name of fighting terrorism, as seen with the passing of the Patriot Act.

(Shortform note: According to a report issued by the FTC in 2000, its motivation for recommending government regulation was the discovery that only 20% of the busiest commercial sites at the time met the FTC’s basic standards for privacy protection while under self-regulation. This suggested that self-regulation alone was not enough to protect consumers’ privacy and that government action was required.)

Intelligence agencies became interested in working with Google and its search-engine capabilities to reliably predict and detect future threats by collecting internet data about anyone and everyone. These partnerships encouraged Google to invest in new technologies and participate in behaviors that would lead to surveillance capitalism.

The Patriot Act and Surveillance Capitalism

As Zuboff explains, the US government heightened surveillance after 9/11 in the name of fighting terrorism; for example, by passing the Patriot Act. However, Zuboff doesn’t explain the details of this act and how it may have influenced Google in the coming years.

The Patriot Act expanded existing surveillance laws to allow the government to access the phone, email, bank, and credit records, as well as the internet activity, of average Americans. But despite being advertised as a measure to combat terrorism, these investigations led to just one terrorism conviction from 2003 to 2006. In addition, the Patriot Act didn’t require the government to destroy the information it collected—even when it was from innocent Americans—and prohibited Americans from telling anyone that they were under investigation.

These secretive and invasive practices aimed at average citizens arguably set a dangerous precedent that would later play a major role in the development of surveillance capitalism.

Google’s Journey as the Pioneer of Surveillance Capitalism

According to Zuboff, Google was the pioneer of surveillance capitalism. It developed the foundations of surveillance capitalism in response to the threat of the dot-com bubble, which caused the stock prices of internet companies like Google to fall. This put these companies in a dire financial position.

(Shortform note: While Zuboff details the consequences of the dot-com bubble, she doesn’t explain what it was. In the 1990s, overly optimistic investments in technology companies led to a rapid rise in their stock valuations. These prices were far higher than the technology companies’ true value, creating a bubble that eventually burst in 2001.)

This bursting of the dot-com bubble prompted several phases in Google’s journey that ultimately led to the creation and expansion of surveillance capitalism. Let’s look at each phase in detail.

Phase #1: Google Starts Using Targeted Advertising

Zuboff explains that from its inception, Google’s founders Larry Page and Sergey Brin intended for Google to be a free search engine. They refused to charge people for using their service and committed to excluding ads from their site. However, this left them with few opportunities to earn revenue, which made them extremely vulnerable when investors were looking to pull out as a consequence of the dot-com bubble.

To create a consistent source of revenue, Page and Brin finally decided to surrender to the idea of adding advertisements to their site. They used the data they had collected from searches to match advertisements to specific users, making the ads more relevant and therefore more valuable to advertisers.

(Shortform note: Page and Brin were forced to compromise their vision for an ad-free site to please investors, which business experts say is a common problem. In fact, some experts recommend that start-ups forgo venture capital funding altogether to avoid this exact predicament. They claim that the less external funding that start-ups raise, the more successful they are.)

The History of Targeted Advertising

Although targeted advertising became the solution to Page and Brin’s revenue issue, they were far from the first to implement it. Advertisers began targeting certain demographics as early as the mid-1990s, when they realized they could reach certain groups of consumers depending on where they placed their ads on websites.

In the years to follow, targeted advertising became increasingly advanced with new tracking tools. By the time Google entered the scene, sponsored search—which gave advertisers the chance to bid for top search engine results related to particular terms—was already a popular advertising model.

Phase #2: Google Discovers the Potential of Predictions

Then, Zuboff says, in the early 2000s, a seemingly insignificant event caught Google’s attention. It noticed a large spike in searches across different time zones for the maiden name of Carol Brady (a popular character from the American sitcom The Brady Bunch) after the question aired on Who Wants to Be a Millionaire? In other words, the increase in searches followed a precise and predictable pattern that Google’s analysts could see within their data.

The Carol Brady incident made Google realize that they could use their search data to identify events and trends before the news media and predict with precision what users were looking for. With these predictions, they could then target users with more relevant advertisements. Zuboff argues that this marked the beginning of surveillance capitalism because Google recognized the value and power of its behavioral data for the first time.

Google Was Not the First Organization to Use TV for Mass Predictions

While it may have been a breakthrough moment for Google, the Carol Brady incident was not the first time a company had been able to predict people’s behavior in connection with televised events. Over a decade before the Carol Brady incident, the UK’s National Grid had successfully forecast electricity needs based on major televised events it predicted would cause energy surges.

How do these events relate to energy surges? In the same way that the Carol Brady question on Who Wants to Be a Millionaire? caused a surge in search queries, certain popular televised events—major soccer games, royal weddings, or even highly anticipated episodes from popular TV shows—cause energy surges when UK households collectively go to the kitchen and make a cup of tea during commercial breaks. The largest energy surge to date occurred during the 1990 World Cup semifinal, after England missed a penalty against West Germany.

Phase #3: Surveillance Capitalism Expands

Over time, continues Zuboff, Google progressed from collecting data from only its search pages to extracting it from sites across the internet to improve its predictions and sell them to advertisers at a higher value. Once other tech companies (and eventually, non-tech companies) realized how profitable Google’s model was, they followed suit by finding ways to extract their own behavioral data. As an example, Zuboff cites Microsoft, which launched a personal assistant called Cortana to capture users’ personal information. Cortana encourages users to share as much of their data as possible to improve its functionality.

(Shortform note: Zuboff uses Microsoft’s Cortana as an example of how other companies have created their own data extraction tools. But just how concerned should users be about Microsoft’s collection of their personal information? A privacy evaluation conducted by Common Sense Media gave Cortana an overall privacy rating of just 71%—a poor score in the context of online privacy. The report also rated Cortana as particularly poor at preventing the sale of data, prohibiting the exploitation of users’ decision-making process, and following student data privacy laws.)

Now, companies are continuously inventing new technologies capable of extracting more specific personal information from users, such as wearable technologies that capture people’s biometric data, surroundings, and even emotions.

(Shortform note: In addition to the wearable technologies that Zuboff mentions, researchers are now developing ways to implant consumer-driven monitors inside of our bodies that could extract specific medical data. These implants would be able to do things like screen patients before appointments or even monitor glucose levels by pairing with a mobile app.)

Why Surveillance Capitalism Has Managed to Thrive

How have Google (and, later, other companies) managed to keep mining user data despite showing such blatant disregard for privacy? Zuboff argues that there are a variety of factors that have contributed to surveillance capitalism’s ability to thrive. These factors fall into three categories: overcoming opposition, cornering the public, and mastering the art of disguise. Let’s discuss each of them further.

Surveillance Capitalism Has Overcome Opposition

Zuboff argues that Google and its competitors have learned to overcome any form of opposition, making it difficult for the public to demand—and lawmakers to enact—change.

Tech companies have refused to take accountability. These companies have never stopped to consider whether their actions are immoral or against public opinion and have proceeded unfazed by any and all attempts to raise concern.

(Shortform note: Do companies have moral responsibility, as Zuboff seems to suggest here? Some would argue that they don’t. According to the legal compliance view argued by Milton Friedman, corporations have no moral obligations outside of their legal obligations.)

Tech companies have developed strong defenses. Google (and, later, its competitors in a similar fashion) has defended itself from governmental threats by proving its value to political campaigns, investing in lobbying, and building close ties with Washington.

(Shortform note: Just how involved is Google in the political sphere? According to reports, in 2018, Google spent $21 million on federal lobbying, more than any other company in the US. In addition, as of 2019, its public policy division provided funding to 349 different organizations—including academic institutions, trade organizations, and advocacy groups—that work to defend Google and its practices.)

Surveillance Capitalism Has Cornered the Public

Zuboff maintains that Google and its competitors have cornered the public in such a way that they are effectively unable to resist the problematic practices of surveillance capitalism.

Surveillance capitalism has fostered dependency. By tying data extraction to free services that meet people’s needs, Google and other companies have forced customers to allow their invasive practices. These days, it’s difficult—if not impossible—to live without access to those resources.

(Shortform note: Is this dependency on Big Tech’s services as absolute as Zuboff claims? Our experience during the Covid-19 pandemic would suggest it is. According to a Pew Research survey conducted in 2021, 58% of Americans said their use of the internet and technology—like video calls—was essential to them during the pandemic, and a full 90% said it was at least important.)

Surveillance capitalism has prevented users from reclaiming their privacy. Because users are dependent on their services, Google and other companies have no incentive to prioritize user privacy. As a result, they either provide no alternative option where user privacy can be protected, or make the information regarding how to opt out of data collection extremely difficult to find.

(Shortform note: Zuboff argues that companies either provide no option to opt out of data collection or make the information about how to do so nearly impossible to find. Research conducted since the book’s publication supports this claim. In a 2020 study, over 50% of the 7,000 websites examined by researchers contained no option to opt out of data collection, and just over 11% provided only one opt-out hyperlink.)

Surveillance capitalism has exploited people’s desire for inclusion. Google and other companies—especially social media sites like Facebook—have taken advantage of the fact that people have a natural desire to feel included, which makes them highly likely to keep using their social services.

(Shortform note: How does this exploitation work? Psychologists say that social media platforms create a cycle of isolation and connection to keep us reliant on their apps. They isolate us by enticing us to connect with acquaintances and strangers rather than see our friends and family face-to-face. Then, when we’re feeling lonely, we’re influenced to look to their apps to feel like we’re connecting with our social networks, restarting the vicious cycle.)

Tech companies have leveraged their image of authority. Because of their innovative technologies, the public sees Google and its competitors as experts on the ways of the future. This means that people feel they can’t question them, and tech companies have taken advantage of that position to continue their data collection practices.

(Shortform note: Zuboff’s claim that people feel they can’t question tech companies may not hold up against recent data. According to a 2021 survey, trust in technology has dropped to a record low in the US and 17 other countries, including China, the UK, and Germany. Diminishing trust may indicate that people around the world are indeed questioning the authority of these companies.)

Surveillance Capitalism Has Mastered the Art of Disguise

Zuboff argues that Google and its competitors have mastered the art of disguise so that their intentions and practices are undetectable and therefore unstoppable.

Tech companies have masked their intentions. Google and other companies have learned to mask their intentions with innovative technologies like personalization and digital assistants. Because these technologies are undeniably useful, companies can distract users from the fact that they simultaneously harvest sensitive information.

(Shortform note: Perhaps companies’ intentions aren’t as well-masked as Zuboff implies. According to a survey, while 76% of Americans say they use smart assistants like Amazon’s Alexa and Apple’s Siri, 61% of those who use them are also worried that these devices are listening to their private conversations.)

Surveillance capitalism has operated in secret. Google and its competitors have worked hard to conceal the details of their data-mining practices and have actively opposed calls to reveal information.

(Shortform note: In some cases, companies are so secretive that employees themselves are out of the loop. For example, in 2018, Google employees requested that the company be more transparent about an ongoing project to develop a search engine for China. Employees cited concerns that they couldn’t make an ethically informed decision to continue working on the project without further information.)

Surveillance capitalism has progressed at lightning speed. Because Google and its competitors innovate so quickly, the public and government have been unable to process and confront these changes fast enough to raise concerns and enact regulatory policies.

(Shortform note: Technological progress is likely to become even faster (and even more difficult to regulate) as time goes on. Famously championed by Ray Kurzweil, the Law of Accelerating Returns states that technological progress occurs exponentially, not linearly—the rate at which technology transforms our world is constantly increasing. Kurzweil, currently in his 70s, predicts that technology will accelerate so quickly in the next few years that he’ll be able to live forever.)

The Ultimate Goal of Surveillance Capitalism

Zuboff argues that the ultimate goal of surveillance capitalism is to create a society in which our free will is replaced by behavioral conditioning that encourages predictable and machine-like patterns of behavior. This would eliminate human mistakes, accidents, and randomness. By guaranteeing specific human behavior, companies like Google can sell certainties instead of predictions and maximize their profits.

(Shortform note: Zuboff argues that the goal of surveillance capitalism is to replace human error with predictable, machine-like behavior. However, according to behavioral economics theory, humans are both irrational (and thus error-prone) and predictable. In his book Predictably Irrational, behavioral economist Dan Ariely argues that humans are systematically irrational, meaning that we tend to repeat the same mistakes in a predictable way without recognizing or correcting them. If this is true, then surveillance capitalism’s aim of behavioral conditioning may be misguided; to make behavior fully predictable, companies simply need to learn the patterns of our mistakes, rather than trying to eradicate mistakes entirely.)

Methods to Modify Behavior

Zuboff says that at present, tech companies use various methods to modify people’s behavior. One strategy they use is to provide subliminal cues that subtly influence people’s choices without them realizing it. For example, Airbnb displays how many other users are browsing for the same dates as you to create subconscious urgency to book a reservation.

Another method tech companies use to control users’ behavior is to reinforce actions that build a predictable routine—a routine that will reliably guarantee the outcomes companies want. For example, UberEats suggests ordering food at meal times, thereby reinforcing a routine of using the app on a regular schedule.
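
To make the routine-reinforcement idea concrete, here is a minimal sketch of how such a prompt might be timed. The logic is entirely our own hypothetical illustration (not UberEats’ actual system): it infers a user’s habitual ordering hour from past orders and fires a nudge just before it.

```python
from collections import Counter
from datetime import datetime

def habitual_hour(order_timestamps):
    # Infer the hour of day at which the user most often orders.
    hours = Counter(ts.hour for ts in order_timestamps)
    return hours.most_common(1)[0][0]

def should_prompt(now, order_timestamps):
    # Nudge the user one hour before their habitual ordering time,
    # reinforcing the very routine the prompt helped create.
    return now.hour == habitual_hour(order_timestamps) - 1

# A user who reliably orders around 7 pm gets a prompt at 6 pm.
orders = [datetime(2023, 5, day, 19, 5) for day in range(1, 8)]
print(should_prompt(datetime(2023, 5, 8, 18, 30), orders))  # True
```

Note the feedback loop this creates: every prompted order adds another data point at the same hour, making the inferred routine, and therefore the user’s behavior, ever more predictable.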

The Birth of Persuasive Technology

By describing these methods, Zuboff shows that companies have become remarkably good at modifying people’s behavior. How did they come to master this art of manipulation?

According to psychologist Richard Freed, the tech industry developed its powerful methods of persuasion by studying the behavioral research of B.J. Fogg. Fogg discovered that to modify behavior, you need to give your target motivation, ability, and triggers. In his book Tiny Habits, Fogg describes this model in detail, explaining that motivation is the desire to act, ability is the capacity to act, and triggers are the cues that prompt you to act. So, for example, Airbnb creates motivation to book a reservation by showing how many users are browsing for the same dates. Similarly, UberEats’ mealtime notifications act as triggers to keep you using the app on a regular schedule.
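
Fogg’s model is qualitative, but its logic can be sketched in a few lines. The numeric threshold form below is our own simplification, not Fogg’s formal notation: a behavior occurs only when a trigger fires while motivation and ability together clear an action threshold.

```python
def behavior_occurs(motivation, ability, trigger_present, threshold=0.5):
    """Toy numeric version of Fogg's model: a behavior happens when a
    trigger fires while motivation x ability clears an action threshold.
    (The numbers and threshold are our simplification of the model.)"""
    return trigger_present and (motivation * ability) >= threshold

# A dinnertime push notification (trigger) reaching a hungry user (motivation)
# with one-tap reordering (ability) produces the behavior; without the
# trigger, the same motivation and ability produce nothing.
print(behavior_occurs(0.9, 0.8, trigger_present=True))   # True
print(behavior_occurs(0.9, 0.8, trigger_present=False))  # False
```

This simplification also shows why apps invest on all three fronts at once: raising motivation (scarcity cues), raising ability (one-tap flows), and supplying triggers (notifications) each independently push the user over the threshold.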

Because Fogg taught classes at Stanford University, which is a hub for the tech industry, he was in close contact with many individuals who would go on to develop the technologies of surveillance capitalism, like Instagram. They learned Fogg’s research directly from him and went on to test and perfect it for their industry. The result is the methods of behavioral modification that Zuboff describes.

Creating a Fully Connected Society

Zuboff explains that to reach a point of total predictability, companies’ control over our behavior needs to be all-encompassing. To accomplish this, companies want to create a society in which people and devices are connected at all times.

(Shortform note: We may be closer to the existence of the connected society that Zuboff describes than you may think. Meta (previously Facebook) is currently designing the Metaverse. This is a type of cyberspace that uses technology like virtual and augmented reality to blend the physical with the digital world. Once created, the Metaverse could facilitate the type of connection and control that Zuboff describes.)

As an example of what this would look like, Zuboff cites a patent application by Microsoft for a device that would monitor human behavior to detect anything abnormal, such as excessive shouting. The device could then report those abnormalities to individuals like family members, doctors, or law enforcement.

(Shortform note: Since the book’s publication, Microsoft has filed for similar patents, such as one for a system of sensors that would monitor employees’ body language, facial expressions, speech patterns, and mobile devices to track a meeting’s overall quality in real time. Although they haven’t stated it as their intention, Microsoft could use such sensitive data to surveil and control employees in the manner that Zuboff describes.)

Social Principles of a Connected Society

Zuboff argues that in this type of connected society, relationships within the community would fundamentally change, and algorithms would replace familiar social functions—like supervision, negotiation, communication, and problem solving—that govern current civilization.

(Shortform note: While Zuboff focuses her discussion on the future role of algorithms, researchers say that the current use of algorithms is already negatively impacting our society. In particular, our reliance on algorithms has led to the persistence of bias, deepening social divides, and the rise of unemployment.)

Zuboff identifies several social principles that would underlie this new reality. First, in a connected society, we would prioritize the collective over the individual. Companies would justify total control over our behavior by arguing that it’s “for the common good.” Furthermore, valued concepts like privacy and individuality would cease to exist for the sake of total connection and harmony.

(Shortform note: From a philosophical standpoint, there are counterarguments to Zuboff’s warnings about prioritizing the common good and forgoing freedom. For example, some would say that the concept of “common good” doesn’t exist, because each individual has unique experiences. Therefore, there’s no single policy that could benefit everyone, and companies couldn’t aim for such an ideal. On the other hand, others would argue that the total transparency of a hyper-connected, data-driven world would be its own kind of freedom, as it would allow us to understand far more about the world around us.)

In addition, instead of relying on the negotiation and compromise of politics to make decisions for society, automated systems would quickly compute solutions deemed to serve the greater good.

Once these solutions were determined, companies would manipulate connections between people to drive change. In other words, they would influence your actions by exploiting your desire to “do what your friends are doing.”

Automation and the Power of Social Connection

Zuboff argues that in this connected society, automated decision making would replace politics and all of its relational components. But is this such an undesirable thing? According to a British poll conducted in 2018, one in four people would prefer robot politicians to human politicians, and research shows that AI is particularly good at understanding public issues.

That said, it’s important to consider the social implications of such innovation, such as whether people can connect with robots in the same way they do their human representatives. As Zuboff explains, social connection is a powerful tool—which is why she says that companies may be looking to manipulate it.

Consequences of Surveillance Capitalism

Zuboff stresses that surveillance capitalism has already caused a number of grave consequences for our society, and particularly for our democracy. Let’s discuss each consequence in more detail.

Consequence #1: Surveillance Capitalism Threatens Our Right to Privacy

Zuboff explains that companies often collect information without the knowledge or meaningful consent of their consumers. Additionally, they invade personal spaces—both physical and psychological—to do so.

For example, she says that even when we’re inside our homes, devices like TVs, thermostats, and even mattresses monitor what we say and do and deliver that information to company servers. In addition, companies can analyze metadata—like how often you change your profile picture—to infer extremely specific information that you never intentionally disclosed, such as whether or not you have depression.

How Far Does Privacy Invasion Go?

Here, Zuboff warns that tech companies are violating our privacy by invading our personal spaces and analyzing our metadata for extremely specific personal information. Arguably an even more severe threat to privacy is that these companies share this private information with law enforcement agencies without people's knowledge or consent and without the required warrants.

For example, according to one report, law enforcement demanded seven days of location information from a man’s cell phone provider in connection with a criminal investigation. Fortunately, however, when the case was taken to the US Supreme Court in 2018, the Court ruled that his location information was protected by his Fourth Amendment rights, meaning law enforcement needed a warrant to access it. That said, whether this ruling will serve as a precedent for future cases involving privacy and technology remains to be seen.

Consequence #2: Surveillance Capitalism Removes Individual Autonomy

Zuboff argues that because companies aim to control people’s behavior—often outside of our awareness—they have removed our right to individual autonomy. Not only do companies engage in practices like eliminating the option to opt out of privacy invasion and fostering the public’s dependency on their services (as we’ve discussed earlier), but they also interfere with our emotions and choices without our knowledge or consent.

(Shortform note: While Zuboff’s argument presupposes the existence of free will, some would say that there’s no such thing. They insist that because we’re always influenced by biological and environmental factors outside of our control—like the health conditions we’re born with or the family we’re raised by—we don’t have the autonomy over our lives we think we do. Following this logic, no one can take away our free will, because it didn’t exist to begin with.)

For example, Zuboff says that in 2010, Facebook ran an experiment to test whether it could mobilize people to vote. It found that it could influence whether a person went to the polls by showing them photos of friends and family who had already voted. While it may seem minor, this subtle manipulation essentially removed their autonomous decision about whether to vote.

(Shortform note: Social media has been used not only to influence whether people vote but also to influence whom they vote for. According to a report published by the US Senate in 2018, Russia tried to influence American voters through all major social media platforms prior to the 2016 election. This wide-scale attempted manipulation has concerning implications for both national security and democracy, which hinges on individuals’ right to have a voice.)

Consequence #3: Surveillance Capitalism Disregards Social Norms

According to Zuboff, surveillance capitalism’s goal of total predictability has influenced companies to disregard social norms in favor of machine automation. Because these social norms involve flexibility and risk—which are inherent in human-to-human interactions—they can’t facilitate the high level of behavioral control that companies seek.

For example, Zuboff says auto loan lenders install devices that deactivate a car’s engine if the borrower is late on payments. While this automatic process may help lenders avoid risk, it lacks the empathy for human struggles—like whether the person was short on money due to illness—that is essential to the social contracts of our current society.

Starter Interrupt Devices Disengage Morality

These starter interrupt devices and similar technologies of surveillance capitalism arguably encourage action that lacks empathy and regard for social norms because they depersonalize the action. Research shows that people disengage their sense of morality when they lose the sense that the people they are mistreating are unique individuals. Someone who presses a button to shut off another person’s engine might not do the same thing face-to-face with the car’s owner.

In addition to lacking empathy for human struggles—as Zuboff explains here—starter interrupt devices also unfairly target the poor. Dealers and lenders see them as a way to protect their assets from “risky” borrowers with poor credit scores and other financial challenges, so they most often install them for low-income borrowers who have no choice but to apply for subprime auto loans. What’s more, not only are the devices troubling for their lack of empathy and unfair targeting, but they can also be genuinely dangerous: Some borrowers have claimed that their cars were shut off while idling or even while driving on the freeway.

Consequence #4: Surveillance Capitalism Damages Our Mental Health

Zuboff says that some of the methods companies use to extract our data—particularly social media—damage our mental health. She argues that apps like Facebook and Instagram encourage extreme social comparison, which causes damaging psychological effects like low self-esteem and self-worth, increased body judgment, and more frequent depressive moods.

Zuboff elaborates that because social media puts our lives on constant display, we exaggerate reality to gain standing among our peers. This causes others to feel inferior in comparison and pushes them to keep up with an unrealistic standard. Ultimately, this traps everyone in a vicious cycle of comparison and posturing that leads to a downward spiral of worsening mental health.

What’s Behind Our Social Comparison?

While Zuboff describes the damaging cycle of social comparison, she doesn’t speak to the cultural roots of our need to compete with our peers in the first place. According to Brené Brown in Daring Greatly, we live in a flaw-focused culture that makes us feel as if we’re never enough. In response to these feelings of inadequacy, we try to compensate by showing how incredible we are.

In today’s society, one of the ways we do this is by posting on social media, where we receive external validation in the form of likes and follows. While psychologists say that some level of validation seeking is normal, social media has made us rely exclusively on validation from others, which has led to the negative psychological effects that Zuboff describes. To overcome this tendency, psychologists recommend identifying when you’re seeking external validation and instead taking actions to self-validate: for example, journaling about your improvements and successes and learning to encourage yourself.

Consequence #5: Surveillance Capitalism Causes a Loss of “Self”

Zuboff explains that the hyper-connected world that is a major consequence of surveillance capitalism has caused young people—who have not yet matured and developed a strong sense of identity—to lose their sense of “self.” While we all have a desire for connection, she says that adolescents in particular have become so dependent on their connections to others and so incapable of escaping public view that it has threatened their ability to develop an identity that is separate from others’.

As a result of this loss of self, adolescents have become less able to tolerate solitude and more vulnerable to peer pressure and manipulation. They also try to control other people because they see others as an extension of themselves.

Developing a Sense of Self in the Face of Hyperconnectivity

Why does hyperconnectivity threaten adolescents’ ability to develop a sense of self? According to psychologist Erik Erikson’s Stages of Development theory, adolescents need sufficient opportunities for personal exploration of their beliefs, ideals, and values to develop a secure sense of independence and identity. Spending an excessive amount of time connected to others online may limit those opportunities and prevent adolescents from developing their sense of self.

To help adolescents build a stronger sense of identity and avoid the negative effects of hyperconnectivity that Zuboff mentions, parents can encourage their teens to explore their interests, avoid pushing their own agenda on their children, and let their children learn from their own choices.

How Society Has Tried to Fight Back

Zuboff argues that although Google and other companies have been overwhelmingly successful at avoiding and preventing any regulation that would curb their surveillance capitalism practices, this hasn’t stopped people and governments from trying to fight back.

For example, in 2011, 90 Spanish citizens submitted claims demanding that Google remove their private information from its search results. Their reasons included staying hidden from abusive partners and moving past old arrests. The “right to be forgotten” became a fundamental principle of EU law in 2014.

Then, in 2018, the EU adopted the General Data Protection Regulation (GDPR), which requires companies to handle personal data according to strict rules. For example, companies are prohibited from making personal information public by default.

In addition, activists, artists, and inventors have created ways to evade the prying practices of surveillance capitalism. These include signal-blocking phone cases that help protesters hide their location by blocking all wireless communication.

(Shortform note: You, too, can work to combat the practices of surveillance capitalism thanks to an inventor who developed a way to make signal-blocking phone pouches at home. The design requires materials that can be purchased online, as well as some light sewing.)

Opposing Viewpoints: The EU and US’s Take on Surveillance Capitalism

Zuboff cites two examples of laws that the European Union has used to fight back against surveillance capitalism. But what action has been taken in the United States, where core companies like Google, Microsoft, and Amazon are based, and how does that impact the actions of the EU?

With regard to the “right to be forgotten,” the US opposes the EU. From the US perspective, the right to be forgotten violates the First Amendment right to free speech, because companies have a right to publish whatever information they want online, even if that information reveals undesirable truths about individuals. In addition, in 2018, the US Congress enacted the CLOUD Act, which undercut the GDPR before it could even be enforced.

With the US at the heart of surveillance capitalism, can the EU and other countries drive meaningful change without American support? Thus far, it seems unlikely. For example, as previously mentioned, the US CLOUD Act undercuts the GDPR: US-based companies can be compelled to give the US government access to their stored data, including data held on EU servers. In other words, the GDPR offers little protection against US demands—a likely outcome for any foreign legislation governing data held by American companies.

What We Can Do to Stop Surveillance Capitalism

Despite these efforts, we haven’t been able to drive change fundamental enough to end surveillance capitalism. Zuboff insists that to defeat it, society must undergo a series of mindset shifts:

Ultimately, Zuboff argues that, regardless of others’ past decisions, it’s each new generation’s responsibility to make things right.

What Can We Do to Stop Surveillance Capitalism?

Although Zuboff doesn’t offer any specific action steps individuals can take to stop the advance of surveillance capitalism, other writers offer tips on how to achieve this kind of change. Since Zuboff asserts that it’s every new generation’s responsibility to make things right, she would likely encourage everyone to take active steps like these:

To become more aware of what’s happening, intentionally research political action regarding big tech and surveillance capitalism. Keep close tabs on the members of Congress who represent you: Sign up for their newsletters, follow them on social media, and create Google News alerts for their names. Use GovTrack.us to stay informed about current congressional legislation as it develops.

To help society recognize surveillance capitalism as anti-democratic and inspire ourselves (and others) to fight for our right to privacy, find ways to spread Zuboff’s message. In Contagious, Jonah Berger explains that effectively spreading ideas is all about influencing others to spread them in everyday conversation. To do this, make your idea as visible as possible and engage your audience with an emotional story. For example, 2017’s #MeToo movement successfully spread awareness of sexual abuse and harassment by urging its audience to use a specific hashtag (increasing the movement’s visibility) and share their personal, emotionally charged stories (engaging the audience).

To aid collective social action, join an existing activist group. There are plenty of activist organizations currently fighting Big Tech for our right to privacy, including the Electronic Privacy Information Center, the Electronic Frontier Foundation, and Privacy International. Seek a job opportunity or volunteer at one of these organizations—or just donate.

Exercise: Audit Your Personal Data Privacy Practices

Reflect on your technology use and how you could improve your data privacy practices in a world dominated by surveillance capitalism.