“Merchants of doubt” (MODs) are people or organizations who create the impression that scientific findings that threaten their agenda or ideology are unsettled or flat-out wrong. The goal is to stave off regulation and keep markets free.
The tobacco industry pioneered techniques for manufacturing doubt when scientists discovered that smoking caused cancer, and other MODs have been attacking science using the same techniques ever since.
Most of us don’t really understand what science is. We think it’s iron-clad proof that something is true. In reality, scientific knowledge is expert consensus—a group of qualified scientists agrees that a claim is correct when there’s enough evidence to support it.
Expert consensus comes from peer review. No science is considered legitimate until it passes peer review. Typically, the process works like this:
Step #1: A scientist comes up with an idea, collects evidence to support it, writes a paper, and then submits it to a scientific journal for publication.
Step #2: The journal sends the paper to three other scientists to review. Reviewers must be subject matter experts and mustn’t have a close relationship with the paper’s author. They look for bad science and mistakes and provide comments. If the reviewers provide conflicting comments, the journal might send the paper to more scientists or the editor might provide notes.
Step #3: The journal sends the reviewers’ notes back to the paper’s author to address. The author may write multiple drafts as she implements and receives new feedback. If the author ultimately fails to revise well enough, the journal won’t publish the paper. The author has to start over or try a different (less prestigious) journal.
Even after a paper has passed peer review and been published, there will always be some uncertainty regarding the details—science is always evolving, so while we may know a lot, we’re never going to know everything about any topic. This is normal and important—it’s what drives scientists to keep discovering new things.
When presented with an uncertainty, the question shouldn’t be whether there’s any doubt, but whether there’s any reasonable doubt. If there’s reasonable doubt, the matter isn’t settled. If there’s doubt on the details, that’s normal.
By nature, science challenges the status quo and ruling class because its purpose is to change our understanding of the world around us. Science often reveals market failures and suggests regulation as a solution, so merchants of doubt tend to be defenders of the free market such as:
Four notable individual merchants of doubt—Fred Seitz, William Nierenberg, Robert Jastrow, and Fred Singer—were involved in doubt-mongering multiple scientific questions from the 1960s-2000s. All of them were media-savvy, well-connected, politically powerful, fiercely anti-Communist physicists (though none of them did much original science after 1970—they wrote reviews or editorials, but didn’t publish much in peer-reviewed journals).
There are 10 techniques that the merchants of doubt use to create debate around issues that are scientifically settled. This appearance of debate can influence public opinion and delay or even halt regulation.
MODs play up the natural uncertainty inherent in science to suggest that everything is uncertain.
Example #1: When scientists were studying the possible effects of a nuclear weapons exchange, the science was all based on projections because the only way to get evidence would have been to start a nuclear war and observe what happened. MODs claimed there was too much uncertainty around the possible consequences of an exchange to justify exposing the country to the dangers of disarmament.
Example #2: After scientists discovered that smoking caused cancer, there were still uncertainties, such as how exactly smoking causes cancer and why not all smokers get lung cancer (this is still unknown today). MODs tried to exaggerate these uncertainties into uncertainty about the whole question of whether smoking caused cancer.
MODs only cite or publicize data that support their position, and they ignore anything that doesn’t.
For example, in 1989, the Marshall Institute (run by Jastrow, Nierenberg, and Seitz) put out a report that claimed the steady increase in atmospheric carbon dioxide couldn’t be responsible for global warming because the Earth’s surface temperature hadn’t steadily increased—there was warming before 1940 (before significant emissions), cooling from 1940-1975, and then more warming.
To support this, the report included a diagram that plotted CO2 against temperature. The lines didn’t match up particularly well, but that was because the MODs had only included part of the diagram. The full diagram was multiple graphs that plotted the effects of CO2, the sun, and volcanoes against temperature. Because all these factors affect surface temperatures, all of them had to be considered, and when they were, the temperature did line up.
If MODs don’t like what mainstream science has discovered about a subject, they often spend large amounts of money funding research that might “debunk” mainstream science or show that uncertainty is greater than previously thought. Sometimes they funnel the money through front groups or think tanks to disguise that they’re the source of the funding.
For example, after scientists had determined that smoking caused cancer, the tobacco industry recruited Seitz to allocate $45 million worth of funding to scientists doing biomedical research on the leading causes of death in the U.S. The industry hoped for a discovery that would help them defend against the science that said smoking was unhealthy.
MODs recruit people with good credentials (including scientists) to challenge evidence and repeat non-consensus claims. They also put out publications that look like real scientific findings. This makes the public think that the scientific community as a whole is in debate.
Example #1: The tobacco industry recruited famous researcher Martin J. Cline to testify in its favor against a nonsmoking flight attendant who had gotten lung cancer from secondhand smoke in airline cabins.
Example #2: The CFC industry sent Richard Scorer, a professor of theoretical mechanics, on a tour of the U.S. to denounce a Climate Impact Assessment Program (CIAP) study. Scorer spread misinformation, claiming that human activities were too small to have any meaningful effect on the atmosphere and that the discussion of ozone destruction was fear-mongering.
MODs convince journalists that it’s only fair to present “both sides” of a story when there’s dissent. Some journalists don’t understand that dissenters already received fair consideration during the peer review process and feel that both views really do deserve to be aired. Other journalists allow themselves to be schmoozed, and still others don’t have enough time before their deadlines to dig deep into the research.
In fact, “balanced” media coverage of science actually results in more-biased coverage because minority voices end up with proportionally more time than consensus ones.
Equal attention for all views only makes sense when applied to opinions. For example, two political parties might have differing views on an issue, so it makes sense to hear both. However, there is no opinion in science—it’s either right, wrong, or unknown.
For example, tobacco industry MODs threatened journalists with the Fairness Doctrine—an FCC rule, in effect from 1949 to 1987, requiring TV journalists to give equal airtime to both sides of controversial issues. As a result, journalists gave the industry’s position the same amount of coverage as the mainstream scientific finding that smoking kills.
Most scientific research is first published in scientific journals, which the average person doesn’t read. Therefore, if MODs can get their “facts” and views into mainstream media such as newspapers and magazines, those are the facts and views the public will see.
Example #1: When Fred Singer wanted to spread doubt about global warming, he published a piece in a popular magazine called Cosmos.
Example #2: In 1990, MOD Dixy Lee Ray published Trashing the Planet, a trade book easily available to the public and reviewed by mass media.
MODs take public focus away from the real issue and direct it toward something else—often true or important, but irrelevant.
For example, when merchandising doubt about the ozone hole, Fred Singer argued that there are plenty of causes of skin cancer besides UV radiation (the hole allowed UV radiation through). This was true, but irrelevant to the point that UV radiation causes cancer.
MODs claim that the solutions to issues science has unveiled will create more problems than the original issue. Therefore, it’s better not to act.
This technique is supported by rational decision-theory analysis—if there are unknowns, the best course of action is usually to do nothing because acting has costs, and if you’re not sure you’re going to get any benefits, they’re not worth it. (Additionally, the costs are usually in the present and the benefits in the future.) This is part of why doubt-mongering works so well.
If a scientist discovers something an MOD doesn’t like, the MOD will often personally attack the scientist.
Example #1: MODs accused Rachel Carson, who revealed that the use of DDT (a pesticide) was damaging the environment, of being “hysterical.”
Example #2: MODs accused Benjamin Santer, who helped prove that climate change was caused by humans, of secretly editing an Intergovernmental Panel on Climate Change (IPCC) report and corrupting the peer review process. MODs and fact-fighters wrote that Santer admitted he’d doctored a report to make it fit political policy (he didn’t), tried to stop Santer from publishing a defense of himself, tried to get him fired, and contributed to the breakup of his marriage.
The final technique is to call science that doesn’t support your position bad or junk science. MODs might make claims that scientists massaged the numbers or rigged the experiment to further their agenda.
For example, MODs accused the Environmental Protection Agency (EPA) of doing bad science on secondhand smoke. They claimed the EPA wanted to regulate so badly that it manipulated the science to support regulation.
You can’t do the science and original research yourself—you don’t have the expertise in every single field you might be interested in. Therefore, you have to rely on the information that other people provide.
When you encounter a piece of information, keep in mind the following:
1. The information tends to be legitimate when it comes from a reputable source like:
Example #1: Benjamin Santer’s papers and presentations about climate change were legitimate because he was a climate modeler and part of the Intergovernmental Panel on Climate Change (IPCC).
Example #2: Fred Seitz was a scientist, but he was a physicist, not a medical professional, and he was funded by think tanks and the tobacco industry. Therefore, his input on tobacco was more likely to be doubt-mongering than real science.
2. Dissent can be doubt-mongering when the attacker is:
For example, MOD Steve Milloy regularly and dramatically attacked a variety of issues he didn’t agree with (among other things, he accused Rachel Carson of being a mass murderer). He worked with strongly pro-industry organizations.
“Merchants of doubt” (MODs) are people or organizations who create the impression that scientific findings that threaten their agenda or ideology are unsettled or flat-out wrong. The goal is to stave off regulation.
The tobacco industry pioneered techniques for manufacturing doubt when scientists discovered that smoking caused cancer, and other MODs have been attacking science using the same techniques ever since.
In this book, science historians Naomi Oreskes and Erik M. Conway cover how science and doubt-mongering work, and they look at doubt-mongering campaigns around the following scientific questions:
Finally, they offer tips for evaluating whether information is doubt-mongering or real science.
(Merchants of Doubt was originally published in 2010 and then updated in 2020.)
(Shortform note: We’ve reorganized the book’s material for concision and clarity. Much of the original book’s front and end matter, as well as general information about merchandising doubt, appears in our Chapter 1. We’ve also rearranged the chapters to place the two smoking chapters beside each other.)
Most of us don’t really understand what science is. We think it’s iron-clad proof that something is true. In reality, scientific knowledge is expert consensus—a group of qualified scientists agrees that a claim is correct when there’s enough evidence to support it.
How does expert consensus come about? The basic system has been in place since the 1600s. When scholars first became interested in learning new information (in medieval times, learning was about studying ancient texts and preserving what was already known), they realized that there had to be some sort of way to check new ideas for validity, or people could just make things up and be taken seriously. To prevent this, they built a system in which claims had to be backed up by evidence, and both the claim and evidence needed to pass the judgment of other scientists to be considered legitimate. If a claim failed the process, it was just a claim, not science.
Today, the process is called peer review, and no science is considered legitimate until it passes peer review. Typically, the process works like this:
Step #1: A scientist comes up with an idea, collects evidence to support it, writes a paper, and then submits it to a scientific journal for publication.
Step #2: The journal sends the paper to three other scientists to review. The reviewers aren’t paid—since everyone’s work has to be peer-reviewed to be considered science, someone else will return the favor later.
Reviewers must be subject matter experts and mustn’t have a close relationship with the paper’s author. They look for bad science and mistakes such as:
If the reviewers provide conflicting comments, the journal might send the paper to more scientists or the editor might provide notes.
Step #3: The journal sends the reviewers’ notes back to the paper’s author to address. The author may write multiple drafts as she implements and receives new feedback. If the author ultimately fails to revise well enough, the journal won’t publish the paper. The author has to start over or try a different (less prestigious) journal.
Papers that are presented at conferences don’t have to go through the same process and aren’t taken as seriously (they aren’t considered in tenure decisions).
Even after a paper has passed peer review and been published, there will always be some uncertainty regarding the details—science is always evolving, so while we may know a lot, we’re never going to know everything about any topic. This is normal and important—it’s what drives scientists to keep discovering new things.
When presented with an uncertainty, the question shouldn’t be whether there’s any doubt, but whether there’s any reasonable doubt. If there’s reasonable doubt, the matter isn’t settled. If there’s doubt on the details, that’s normal.
Science has gotten a lot done even though it was never designed to provide certainty—for example, we landed astronauts on the moon. Feats like this don’t prove that science is perfectly, objectively right, but they do suggest that when we act on the work of expert consensus, we can get a lot done.
Just as with science and scientific uncertainty, many of us don’t understand how causation works, or what scientists mean when they use the word “cause.” For example, we think that if smoking causes cancer, then anyone who smokes will definitely get cancer.
In reality, it’s more complicated than that. There are two types of causes:
Now that we know how science, uncertainty, and cause work (and we know that a lot of people don’t understand how they work), we can look at how merchants of doubt exploit this to cause widespread public confusion.
By nature, science challenges the status quo and ruling class because its purpose is to change our understanding of the world around us. Science often reveals market failures and suggests regulation as a solution, so merchants of doubt tend to be defenders of the free market such as:
MOD Type #1: Free market fundamentalists, who believe that free markets are the only economic system that allows for freedom. These MODs are opposed to regulation, particularly international treaties because global governance reduces an individual nation’s powers.
However, free market fundamentalism is unsound—there are plenty of examples of market failure throughout history, such as the Great Depression. It’s particularly fallible in the case of the environment because markets produce “negative externalities”—consequences and costs that aren’t reflected in prices, so the market has no incentive to solve them.
MOD Type #2: Cornucopians or technophiles, who believe that technology will fix every one of humanity’s problems as long as markets stay free (so inventors will be motivated to innovate by huge rewards).
This belief isn’t completely wrong—in many ways, modern life is better than medieval life, and there have been historical instances of governments oppressing their citizens with regulations. However, cornucopianism includes two potentially flawed assumptions:
Four notable merchants of doubt were involved in doubt-mongering multiple scientific questions from the 1960s-2000s. All of them were media-savvy, well-connected, politically powerful, fiercely anti-Communist physicists (though none of them did much original science after 1970—they wrote reviews or editorials, but didn’t publish much in peer-reviewed journals).
They were:
Fred Seitz was a solid-state physicist. He contributed to building the atomic bomb and throughout his career held various high positions such as NATO science advisor, president of the National Academy of Sciences, and president of Rockefeller University.
Some of the following likely contributed to his becoming a merchant of doubt:
1. He loved science and technology and thought they were the only way forward. He thought the public was anti-science and technology, and that environmentalists were anti-progress.
2. He disagreed with the mainstream scientific community:
3. He was fiercely anti-Communist and always took the side of private enterprise. He believed that capitalism and freedom were intertwined, so any regulation, even if supported by science, was a no-go.
4. He was a genetic determinist—he believed that health and behavior were determined by people’s genes, not their environment. He may have actually believed some of the doubt-mongering he helped spread—for example, that lung cancer was caused by genetics, not smoking.
5. He and the organizations he was part of received industry funding. For example, the tobacco industry funded a lot of research at Rockefeller University.
William Nierenberg was a physicist. Like Seitz, he contributed to building the atomic bomb and was involved with Cold War weapons programs. He held positions including director of the prestigious Scripps Institution of Oceanography, and he was a member of Reagan’s transition team.
Some of the following likely contributed to his becoming an MOD:
Robert Jastrow was an astrophysicist and popular science author. He was the director of the Goddard Institute for Space Studies (part of NASA).
Some of the following likely contributed to his becoming a merchant of doubt:
Fred Singer was a rocket scientist. He was a director of the National Weather Satellite Service and chief scientist for the Department of Transportation under Reagan. Interestingly, in the 1960s, he was an environmentalist. He believed that human activity was affecting the planet, and he urged both more research and action. By the 1980s, however, he’d changed his mind. He came to believe that environmental protection might come at too high a cost and that technological development driven by the free market would solve all of humanity’s problems.
Some of the following likely contributed to his becoming an MOD:
Now that we understand some of the merchants of doubt’s motivations, we’ll look at some of the techniques they use to create debate around issues that are already scientifically settled:
Merchants of doubt play up the natural uncertainty inherent in science to suggest that everything is uncertain, not just the details. If they can create a strong enough impression of uncertainty that people can’t even be sure there actually is a problem, they can delay policy and regulation.
MODs only cite or publicize data that support their position and ignore anything that doesn’t.
If MODs don’t like what mainstream science has discovered about a subject, they often spend large amounts of money funding research that might “debunk” mainstream science or show that uncertainty is greater than previously thought. Sometimes they funnel the money through front groups or think tanks to disguise the connection.
MODs recruit people with good credentials (including scientists) to challenge evidence and repeat non-consensus claims. They also put out publications that look like real scientific findings. This makes the public think that the scientific community as a whole is in debate.
MODs convince journalists that it’s only fair to present “both sides” of a story when there’s dissent. Some journalists don’t understand that dissenters already received fair consideration during the peer review process, and they feel both views deserve to be aired. Other journalists allow themselves to be schmoozed, and still others don’t have enough time before their deadlines to dig deep into the research.
In fact, “balanced” media coverage of science actually results in more-biased coverage because minority voices end up with proportionally more time than consensus ones.
Equal attention for all views only makes sense when applied to opinions. For example, two political parties might have differing views on an issue, so it makes sense to hear both. However, there is no opinion in science—it’s either right, wrong, or unknown.
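The disproportion here can be made concrete with a little arithmetic. The figures below are hypothetical, chosen only to illustrate the mechanism, not taken from the book:

```python
# Hypothetical illustration of why "balanced" coverage skews perception.
# The 97% consensus figure is an assumed number for illustration only.
consensus_share = 0.97            # assumed share of experts endorsing the consensus
dissent_share = 1 - consensus_share
dissent_airtime = 0.50            # "both sides" coverage: half the airtime

# How much louder the dissenting view sounds than its share of expert opinion
amplification = dissent_airtime / dissent_share
print(f"{amplification:.1f}x")    # roughly 16.7x over-representation
```

Under these assumed numbers, giving each "side" half the airtime amplifies the dissenting 3% to nearly 17 times its share of expert opinion, which is the sense in which balanced coverage is more biased, not less.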
Most scientific research is first published in scientific journals, which the average person doesn’t read. Therefore, if MODs can get their “facts” and views into mainstream media such as newspapers and magazines, those are the facts and views the public will see.
MODs take public focus away from the real issue and direct it toward something else—often true or important, but irrelevant.
MODs claim that the solutions to issues science has unveiled will create more problems than the original issue. Therefore, it’s better not to act.
This technique is supported by rational decision-theory analysis—if there are unknowns, the best course of action is usually to do nothing because acting has costs, and if you’re not sure you’re going to get any benefits, they’re not worth it. (Additionally, the costs are usually in the present and the benefits in the future.) This is part of why doubt-mongering works so well.
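The cost-benefit asymmetry described above can be sketched in a few lines. All numbers are hypothetical, chosen only to show how manufactured doubt tips a naive expected-value calculation toward inaction:

```python
# Hypothetical expected-value sketch: acting has a certain, immediate cost,
# while the benefit is uncertain and discounted because it arrives later.
cost_of_acting = 100.0       # paid now, with certainty (arbitrary units)
benefit_if_real = 300.0      # gained later, only if the problem is real
p_problem_is_real = 0.5      # doubt-mongering drives this estimate down
discount = 0.5               # future benefits count less than present costs

expected_value = p_problem_is_real * discount * benefit_if_real - cost_of_acting
print(expected_value)        # negative, so the "rational" choice is inaction
```

With these assumed numbers the calculation comes out negative (75 in discounted expected benefit against 100 in certain cost), so every downward nudge to the probability estimate makes inaction look more rational—which is exactly the leverage doubt-mongering exploits.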
If a scientist discovers something a merchant of doubt doesn’t like, the MOD will often personally attack the scientist.
The final technique is to call science that doesn’t support your position bad or junk science. MODs might make claims that scientists massaged the numbers or rigged the experiment to further their agenda.
There are two ironies related to the doubt-mongering techniques:
1. The means merchants of doubt use to justify their ends conflict with their free-market values. Interestingly, many of the doubt-mongering techniques were used by communists, or enemies of capitalism—for example, the Soviet Union erased history with historical cleansing, and in Orwell’s 1984, the totalitarian government made up fake histories and destroyed any information it didn’t like.
2. Delays create a greater need for regulation. The worse the problems get (and they worsen as they’re not addressed), the more aggressively the government will have to step in to fix them. The MODs have likely prompted more extreme regulation than would have been necessary had they not interfered.
Doubt-mongering techniques are powerful and widespread, and like everyone, you’ve likely been subjected to some of them.
Think of a time when you thought a scientific question about health or the environment was unresolved. What made you think there was uncertainty? Why?
Think of a time you heard about a scientist being attacked. Do you think the attacker’s claims were legitimate? Why or why not?
Think of a time there seemed to be two sides to a story. What made you think both sides were equally legitimate? Why?
In this chapter and the next, we’ll look at doubt-mongering around the science of tobacco use. The tobacco industry pioneered the manufacture of doubt with the help of Fred Seitz, and many of the techniques they came up with (and the people who implemented them) were used by future merchants of doubt. The doubt-mongering was so successful that it took over 50 years for the majority of the public to believe that smoking was dangerous to health.
Scientists have known that smoking is bad for you since the 1930s, when German scientists discovered that smoking cigarettes caused lung cancer. (Lung cancer was uncommon before smoking became widespread.) However, because German science had Nazi associations, most people ignored it.
In the 1950s, scientists at the Sloan-Kettering Institute rediscovered the cancer-causing effects of tobacco when they painted cigarette tar on mice and the mice developed terminal cancer. Their research was widely publicized, striking fear in the hearts of those involved with the tobacco industry.
The tobacco industry responded to the Sloan-Kettering Institute discovery by hiring PR firms to challenge the science. Four tobacco companies—American Tobacco, Benson and Hedges, Philip Morris, and U.S. Tobacco—and three PR firms launched a campaign that included:
Scientists continued studying tobacco and continued to find that it caused cancer:
Tobacco industry scientists were getting the same results. They also discovered that nicotine was addictive (the rest of the scientific community wouldn’t discover the effects of nicotine until the 1980s).
In 1962, U.S. Surgeon General Luther L. Terry created an Advisory Committee on Smoking and Health. To preempt accusations of bias, Terry excluded anyone who had previously expressed an opinion on smoking, allowed the tobacco industry to vet potential panelists, and invited panelist nominations from the industry.
In 1964, the committee produced a report that had considered over 7,000 studies and over 150 consultants’ testimonies. Despite the industry’s involvement with the committee and its report, the report found that smoking tobacco was very unhealthy:
Terry released the report on a Saturday (to avoid paralyzing the stock market) and in a locked auditorium (for security). The report was huge because it revealed:
There were still uncertainties, as is normal for science. For example, scientists didn’t know why not all smokers get lung cancer (and this is still unknown today). By this point, however, there was no reasonable doubt that smoking caused cancer.
The 1964 report dealt a serious blow to the tobacco industry, and some industry members suggested concessions such as:
For the most part, though, the tobacco industry didn’t give up. They:
Despite the tobacco industry’s continued work, the scientific evidence kept piling up that smoking was dangerous. Three conclusions were reinforced by 2,000 new scientific studies:
The scientific evidence was enough to spur change by 1969:
Despite the turn away from smoking, the tobacco industry was still doing well:
By the end of the 1960s, so much peer-reviewed research had concluded that smoking causes cancer that the industry shifted from trying to find evidence to prove their position to distraction and doubt-mongering. Notably, they:
1. Fought the ban on advertising. They did this by publicly supporting the ban (because the Fairness Doctrine gave antismoking groups free counter-ads, which were affecting public opinion) while privately approaching the liquor industry, warning that the FCC would eventually go after them too, in hopes of gaining an ally. (The FCC had stated that it had no intention of trying to control advertising of any other sensitive products.)
2. Funded research with the help of Dr. Frederick Seitz. In 1979, Seitz joined up with the tobacco industry to award $45 million worth of funding to scientists doing biomedical research on the leading causes of death in the U.S. The goals of the program were to:
With the help of two other scientists, Seitz would decide who got the funding. Everyone who ultimately received it was respected and worked for prestigious organizations, and all of the studies addressed legitimate topics.
Seitz was especially interested in two scientists:
This program had several results:
By the end of the 1980s, however, Seitz was connecting with people with extreme views and some thought he was becoming irrational. In 1989, a tobacco executive suggested the industry stop working with him because he was elderly and irrational.
Eventually, the truth mostly won out:
In the previous chapter, we looked at doubt-mongering around active smoking. Now, we’ll look at the doubt-mongering surrounding secondhand smoke (also called passive smoke or environmental tobacco smoke). Fred Singer helped the industry respond by again challenging the science. This time, though, the merchants of doubt didn’t just highlight uncertainty—they attacked the Environmental Protection Agency (EPA) for doing “bad science.”
In the 1970s, tobacco industry scientists discovered that secondhand smoke could cause cancer. In fact, they discovered that secondhand smoke was actually more toxic than actively inhaled smoke. This was partly because more toxic compounds form when a cigarette burns at a lower temperature (smoldering between puffs) than at a higher temperature (during active puffing).
The industry responded to this discovery by:
Mainstream science started to unearth the health problems associated with secondhand smoke in the 1980s:
The tobacco industry attacked both papers. They criticized the first on the grounds that while the study described the offices as “smoky,” the scientists hadn’t measured exactly how much smoke the workers were exposed to. (This is hard to measure because smoke quickly disperses in the air.)
To deal with the second paper, they hired consultants to disprove Hirayama. One consultant accused Hirayama of making a critical statistical error, and the Tobacco Institute advertised the critic’s work and got the press to cover it by calling for coverage of “both sides” again. (Industry internal memos acknowledged that Hirayama’s study was accurate, and he was an expert scientist.)
The industry received its equal media coverage, but the tactics didn’t fool the scientific community, and medical professionals and anti-tobacco activists pushed for regulation of smoking in public. States started restricting public smoking, the Civil Aeronautics Board considered banning smoking on flights, and Congress considered advertising controls.
In 1986, a Surgeon General’s report announced that secondhand smoke could cause health problems, including lung cancer, even in people who didn’t smoke. Additionally, secondhand smoke was especially dangerous to young children. The Executive Summary of the report strongly advised policy action. Smoking was no longer something that only affected the smoker—it could kill bystanders too.
These developments worried the industry because if people were no longer able to smoke in public places, they would smoke less. They responded by:
Tobacco company Philip Morris’s leaders came up with four goals for secondhand smoke in 1991:
In 1992, the U.S. Environmental Protection Agency (EPA) reviewed the existing scientific studies on secondhand smoke and released a report that concluded:
The EPA developed the report according to the “weight of evidence approach”—which considers all evidence and determines what it points most strongly toward. This is useful because all studies have limitations, but looking at all of them will give the most accurate picture.
Enter Fred Seitz. Seitz started working on secondhand smoke in 1989, and to address the EPA report, he suggested that instead of challenging the evidence itself, the industry should cast doubt on the EPA’s approach (weight-of-evidence). The industry would claim that examining all the evidence was a mistake and that it would be better to focus on the “best evidence.”
This wasn’t inherently unreasonable. Some studies are bad and don’t deserve to be considered. However, a “best evidence” approach could quickly turn into focusing only on the evidence that agrees with your position, and Seitz considered the “best evidence” to be studies with “ideal research designs.” Ideal research designs are impossible in medical studies of humans because the variables can’t be completely controlled. The tobacco industry didn’t think Seitz’s ideas would be effective, so it went to work with Fred Singer instead.
Singer’s approach was to promote science that agreed with the industry’s position as “sound science” and to dismiss everything else as garbage. To effectively do this, he and the industry would have to:
In March 1993, Singer wrote an article called “Junk Science at the EPA” that accused the EPA of doing bad science by:
In 1993, the tobacco industry put out a book called Bad Science aimed at fact-fighters that explained how to manufacture doubt. A large section was devoted to the 1992 EPA secondhand smoke report as a case study. The book stated that:
More generally, the book included:
In November 1993, the public relations firm APCO (on behalf of tobacco company Philip Morris) created The Advancement of Sound Science Coalition (TASSC) to promote “sound science” and attack what it branded “junk science.” TASSC was a hit in several cities and attracted a lot of media attention. Among other things, TASSC:
By the 1990s, more and more Americans were turning away from smoking, and attacking technical details like the EPA’s scientific confidence limits was too obscure to attract much public attention.
The industry also worked with other right-wing organizations, notably the Alexis de Tocqueville Institution, a think tank with the goal of promoting democracy. (Defending secondhand smoke fell under promoting democracy.) In 1994, the think tank released a report (written by Fred Singer and lawyer Kent Jeffreys) criticizing the EPA for a variety of issues, but most importantly, secondhand smoke. It claimed that:
1. The EPA had asked the federal government to ban smoking. (The EPA hadn’t, and there was no pending legislation about a ban).
2. The EPA had violated scientific standards by assuming that the dose-response was linear (linear means that the risk rises proportionally with exposure). Singer and Jeffreys argued that the EPA should have assumed a “threshold effect,” in which secondhand smoke only started harming people after they’d been exposed to a certain amount. This argument could have come from two places:
The EPA defended themselves by:
Here are the details of the peer review rounds:
The first time the reviewers looked over the EPA report, they agreed with the report’s conclusion that secondhand smoke should be considered a Class A carcinogen. They asked for the following revisions:
Revision #1: Make the conclusion—secondhand smoke had serious health consequences—explicit because the current draft’s conclusions were too weak. The reviewers noted that while the epidemiology data showed only modest health effects (modest because secondhand smoke quickly disperses in the air, so most people are exposed to only low amounts), scientists had already confirmed that active smoking causes lung cancer. Since inhaling passive smoke means breathing in the same toxic chemicals as active smoke, passive smoke must cause cancer too.
Revision #2: Include more discussion of how secondhand smoke affected children. The reviewers thought that the evidence showed stronger effects than the report stated, and they also thought that the effects on children were more serious than the lung cancer deaths.
The reviewers didn’t ask for changes to any of the things Singer and the tobacco industry had claimed were bad science:
Response to Critique #1: They didn’t suggest using a threshold effect. This was because standard scientific practice (and the EPA’s guidelines) is to assume a linear dose-response unless there’s a lot of evidence to prove something else should be used. This is because of:
In fact, the reviewers commented that because the dose-response model applied to active smoke, it would likely apply to secondhand smoke too.
Response to Critique #2: They didn’t suggest that a higher confidence level was needed. This was because the results were all consistent, and there was other evidence (such as the tar mice work) that suggested a connection was real and not just chance.
The EPA panel took five months to complete revisions based on the reviewers’ comments and then sent revisions back to the reviewers for a second look. The reviewers still thought the report was understated when it came to the risk to children.
The EPA’s position was that to protect people, the government had to step in. The EPA had determined secondhand smoke could kill people, and it was inappropriate to give people the freedom to kill others.
In the 1990s, Russell Seitz (Fred Seitz’s free-market-supporting cousin) defended secondhand smoke too. In a 1997 Forbes article, he wrote that:
(Shortform note: While the merchants of doubt continued to defend smoking, regulation increased over the years and today, over half of U.S. states have passed comprehensive smoke-free laws.)
In the last two chapters, we looked at how the tobacco industry pioneered and refined doubt-mongering around active and passive smoking. In this chapter and the next, we’ll look at doubt-mongering in another context: strategic defense and nuclear winter.
Unlike tobacco, where there was a lot of evidence that it caused health problems, the science around strategic defense and nuclear winter was all projection (the only way to get evidence would be to start a nuclear war).
In 1975, the CIA and other U.S. intelligence agencies published a National Intelligence Estimate (NIE) that concluded that the U.S. was stronger than the Soviets in three key areas:
Edward Teller, a hawkish physicist, was a member of the President's Foreign Intelligence Advisory Board, one of the agencies that reviewed the 1975 NIE. Teller didn’t agree with the CIA’s assessment of the Soviets and thought their capabilities were beside the point anyway—what the U.S. needed was to prepare for the worst-case scenario.
In 1976, Teller got his chance to push for a new (and hopefully more favorable) assessment when the CIA and Defense Intelligence Agency disagreed about Soviet military spending. Both agencies agreed on the amount of weaponry and soldiers the Soviets had, but they disagreed on how much it cost. The CIA thought it cost the Soviets around 7.5% of their gross national product (GNP), while the Defense Intelligence Agency thought it cost 15%. (The U.S. spent 6-8% of its GNP on military expenditures.)
The right-wing media also picked up on the discrepancy and cited the higher percentage to misleadingly imply that the Soviets were expanding their military and the U.S. was falling behind. (In reality, the Defense Intelligence’s number suggested that the Soviets were less efficient and weaker—they had to spend more money to get the same amount of weaponry.)
The combined pressure from all these opponents of detente (the easing of Cold War tensions) citing uncertainty prompted the CIA’s director, George H.W. Bush, to allow an “independent” analysis by an outside panel. All the panel members already thought the CIA had underestimated the Soviet threat, and as a result, they came to the following conclusions:
The panelists didn’t have much evidence to support these conclusions, so they exaggerated, selectively interpreted, or made up evidence.
Just before the 1976 election (Gerald Ford vs. Jimmy Carter), one of the panelists leaked the report to the media, and the panelists’ views went public. Shortly after Carter won the election, the Committee on the Present Danger (an organization that had been created during the “red scare” in the 1950s) was resurrected, and four panelists became members.
For the next four years, the panelists and other merchants of doubt kept sharing unsupported conclusions with the public and pushing far-right foreign policy. Some MODs advised Ronald Reagan during his presidential campaign in 1980. When he won, the MODs gained even more power and influence.
In 1981-82, the nuclear freeze movement took off. The movement asked both superpowers to agree to stop building and modifying nuclear weapons. (This would essentially be an agreement to disarm because nuclear weapons degrade over time, and if new ones weren’t built, eventually all the old ones would be useless.) The freeze was supported by the public, religious organizations, and state and local governments.
Plenty of physicists also held antiwar opinions. While many American physicists were involved with weaponry during World War II—and almost everyone who worked on the atomic bomb thought their work was justified—the arms race of the ‘50s and Vietnam War in the ‘60s had changed many physicists’ opinions on weaponry.
Freezing was exactly the opposite of what Reagan and the MODs wanted. In 1983, Reagan proposed the Strategic Defense Initiative (SDI), or Star Wars. The goal of the program was to create a shield in space that would destroy incoming missiles before they hit the U.S. Reagan argued that this would create world peace because it would make nuclear weapons pointless.
Star Wars had many opponents, including some of Reagan’s advisors and many of the scientists Reagan would need to build it. People thought the project was faulty for several reasons:
By 1986, 65,000 scientists pledged not to take funding for Star Wars research.
Much of the public opposed it too. Part of this was because one of the major opponents, astronomer Carl Sagan, believed in sharing science with the public and created a TV series, Cosmos, to this end. In the last episode of the series, he talked about how nuclear weapons could destroy the planet.
Enter Robert Jastrow, a physicist who, like Carl Sagan, shared science with the public—he published pop-science books, was an NBC co-host and had appeared on the Today show. Unlike Sagan, he opposed detente. (He’d read an article by Daniel Patrick Moynihan, who had imagined an alternate history in which the arms race hadn’t been limited and the U.S.S.R. had gone bankrupt trying to keep up with the U.S. The article finished by saying we’d missed the opportunity for world peace and would regret it.)
Jastrow published an article in Commentary, a neoconservative magazine, building on the doubt-mongering Moynihan had started:
While these claims were exaggerated or totally false (for example, the Soviets didn’t have a sophisticated defense system and didn’t consider a nuclear war “winnable”), the MODs argued so convincingly that Star Wars and successive nuclear projects were approved by Congress. Fred Seitz became chairman of the Strategic Defense Initiative (SDI) Advisory Board.
In the last chapter, we looked at the science and doubt-mongering around strategic defense. Now, we’ll look at a related issue that also inspired research and doubt-mongering: nuclear winter.
During Reagan’s term, astrophysicists at NASA Ames Research Center were working with computer models to assess the effects that dust in the atmosphere has on the surface of planets. Initially, they were trying to learn about Mars’s atmosphere, but then they realized the models could help them figure out what had happened to the dinosaurs and what might happen after a nuclear war.
When something massive strikes the Earth, whether an asteroid impact or a nuclear blast, fires start and huge amounts of dust fly into the atmosphere. Smoke and dust block the sun. Without sunlight, the surface temperature cools and plants, unable to photosynthesize, die.
The models suggested that even a small nuclear exchange (somewhere between 500-2,000 warheads) would create enough dust to lower the Earth’s surface temperature by 35 degrees Celsius, putting it below freezing even in the summer. For scale, each superpower had around 40,000 warheads.
The models relied on data that came from the Nagasaki and Hiroshima bombings, and above-ground testing in the 1950s. This data wouldn’t necessarily represent a modern nuclear exchange because:
However, while there was uncertainty around the details (the “second-order” effects), it was clear that a nuclear exchange would cause enough damage to seriously endanger everything living on the planet.
The astrophysicists started work on a paper that would be known as TTAPS for the last names of its authors. Notably, the “S” was for Carl Sagan.
In 1982, organizations worried about the effects of nuclear war approached Carl Sagan and three other scientists about holding a conference. The TTAPS paper hadn’t been published yet, so the scientists agreed to hold the conference only if the paper passed peer review during a workshop, and only after it was reviewed by prominent biologists. (Typically, peer review is done individually, but workshop review didn’t violate any scientific protocols.)
Shortly after Star Wars was announced, the TTAPS paper did pass peer review, and the biologists found it so compelling that they wrote their own paper, so the conference was green-lighted.
Three days before the conference, Sagan published an article in Parade, a Sunday magazine with more than 10 million readers, that summarized the nuclear winter hypothesis and emphasized the worst-case scenario while not discussing the caveats in detail. He also published an article in Foreign Affairs around the same time as the conference. He argued that policy should reduce the number of warheads below the threshold for causing nuclear winter.
The TTAPS paper and biologists’ paper were published in December 1983 in Science, the U.S.’s most esteemed scientific journal. The magazine’s publisher also published an editorial congratulating the scientists for addressing the ways science might be used for violence.
Other scientists quickly tackled some of the uncertainties. Climate modeler Curt Covey and two colleagues at the National Center for Atmospheric Research remodeled the entire hypothesis in 1984. Their model included the large-scale movement of air (atmospheric circulation) and their analysis found that, yes, the aftereffects of a nuclear exchange would cause the planet's surface temperature to drop, though not by as much as 35 degrees Celsius. Their calculation suggested the drop would be only between 10-20 degrees, more of a nuclear autumn than winter. However, this was still enough of a change to cause serious problems such as crop failure.
Merchants of doubt Fred Seitz, Robert Jastrow, and Edward Teller were unhappy with the direction the studies were headed in. They didn’t like Carl Sagan or some of the other scientists involved with the studies.
In 1984, Jastrow created the George C. Marshall Institute with Frederick Seitz and William Nierenberg. The Marshall Institute’s goal was to improve the public’s scientific literacy when it came to national security and other public concerns, which it would do by distributing materials, holding training seminars for journalists and congressional employees, and writing opinion pieces and articles. Jastrow thought people opposed Star Wars because they didn’t understand it.
1986 was a big year for the MODs:
1. The Union of Concerned Scientists (UCS), who opposed weapons research, put together a TV program about Star Wars that the Marshall Institute considered one-sided. The Institute wrote to all the networks that were planning to air it that the Fairness Doctrine would oblige them to show the Institute’s viewpoint as well. As a result, few networks ran the UCS program so that they wouldn’t be obliged to run anything by Jastrow.
2. In a 1986 letter to Nature, Kerry Emanuel, a hurricane scientist, claimed that everyone involved in the nuclear winter hypothesis lacked “scientific integrity.” He wrote that:
3. Jastrow wrote a fund-raising letter concluding that the change in temperature predictions—from winter to autumn—showed that the TTAPS scientists had been blowing everything out of proportion. He accused them of doing bad science by ignoring mitigating factors.
4. Jastrow hired Russell Seitz to keep pushing the accusation of scientific fraud, and Seitz wrote an article in The National Interest arguing that scientists shouldn’t be trusted at all. Seitz wrote that:
Seitz failed to mention a few important caveats:
Covey’s team criticized Emanuel for attacking their scientific integrity, but Covey agreed that Sagan had been out of line in publicizing the results early. He also thought that the TTAPS team should have done more to emphasize that 35 degrees of cooling was an older, less accurate number once Covey’s own work had superseded it.
However, while the debate about Sagan’s handling of the matter and how it was presented to the public was ongoing, the science was fairly clear. By 1988, there were enough papers published for nuclear winter to be considered respectable.
In 1990, the TTAPS scientists reviewed all the literature and concluded that a nuclear exchange would result in a cooling of the Earth’s temperature. On average, land under the smoke clouds would cool 10-20 degrees Celsius, continental interiors could cool 20-40 degrees Celsius, and the temperature could drop below 0 degrees Celsius even in the summer.
In the previous chapters, we looked at doubt-mongering surrounding public health and national security issues. From here, we’ll look at doubt-mongering related to the environment. Environmental issues are thorny because:
1. They often expose market failure. Markets don’t self-regulate when it comes to pollution or environmental damage because their effects don’t contribute to the cost of a service or good. (See Chapter 1 for more details.) When markets fail, governments often intervene.
Therefore, environmental issues are major targets for MODs, who support free-market economics and abhor government interference.
There are three major ways the government intervenes:
2. Science is never completely certain, so when it comes to informing policy, policymakers have to judge whether what is known is enough to outweigh what isn’t and justify action.
3. To best prevent damage, regulation often needs to happen before the damage actually occurs. This creates the following challenges:
In this chapter, we’ll first look at the history of U.S. environmentalism. Then, we’ll look at how this foundation informed the acid rain “debate,” and in later chapters, other “debates.”
In the first two-thirds of the 20th century, environmentalism was “preservationist”—focused on preserving natural places. This kind of environmentalism was popular and bipartisan—many people valued outdoor recreation and aesthetics and felt that preserving the environment was moral. (For example, when the U.S. Senate and House of Representatives voted on the 1964 Wilderness Act, it passed by a landslide.) Some environmentalists were interested in science, but people didn’t need proof to see the value in nature.
During Nixon’s term (1969-1974), scientists and the population began to realize that economic activity polluted the environment, often unintentionally. Environmentalism shifted toward pollution prevention. The government started to regulate industry when science showed a need. (For example, the government established regulatory bodies such as the Environmental Protection Agency and legislation such as the Clean Water Act.)
People also started to realize:
During Reagan’s term (1981-1989), the Republican Party shifted away from both preservation and regulation. This shift would often put the government in conflict with science.
Anthropogenic (human-caused) acid rain was first discovered in North America during the pollution prevention era of U.S. environmentalism, but by the time it was well-studied years later, the government had moved away from prevention and regulation, which made it hard for scientists to spur policy action.
People first discovered natural acid rain (caused by volcanoes) in the Renaissance, and scientists discovered anthropogenic acid rain in the 19th century. Acid rain appeared to be a local phenomenon—it fell close to the pollution sources that caused it, such as the factories and cities of central Germany. No one had seen it in North America before.
Then, in 1963, Gene Likens and three other scientists discovered acid rain in a remote, experimental forest in New Hampshire called Hubbard Brook. There were no nearby sources of pollution, but the rain had a pH of 4 or less. (pH is a measure of acidity; the lower the number, the more acidic the substance, and each whole-number step represents a tenfold change in acidity. Normal rain has a pH of approximately 5, so the Hubbard Brook forest’s rain was at least 10 times more acidic than regular rain. One sample even had a pH of 2.85, which is about as acidic as lemon juice.)
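(Shortform example: Because pH is a base-10 logarithmic scale, the relative acidity of two samples can be checked with one line of arithmetic. The Python sketch below is ours, not from the book; the function name is purely illustrative.)

```python
# pH is -log10 of the hydrogen-ion concentration, so each whole-number
# drop in pH means a tenfold increase in acidity.

def relative_acidity(ph_sample: float, ph_reference: float) -> float:
    """How many times more acidic the sample is than the reference."""
    return 10 ** (ph_reference - ph_sample)

# Hubbard Brook rain (pH 4) vs. normal rain (pH ~5): 10 times more acidic.
print(relative_acidity(4.0, 5.0))           # 10.0

# The pH 2.85 sample vs. normal rain: roughly 140 times more acidic.
print(round(relative_acidity(2.85, 5.0)))   # 141
```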
The scientists determined that the forest’s acid rain was created when sulfur and nitrogen oxides (released by burning coal and oil) mixed and dissolved in water in the atmosphere. The oxides wouldn’t necessarily fall as precipitation right away—they could travel through the atmosphere and fall far away from their source. The acid rain in the experimental forest had been falling for around 20 years and was linked to pollution from the Midwest.
People had been burning fossil fuels since the 1850s, but anthropogenic acid rain hadn’t been falling for that long (for example, Hubbard Brook had only been experiencing it since the mid-20th century). This was because, in the 1950s, plants and factories introduced two new technologies:
These technologies helped with particle and local pollution, but they also contributed to the creation of acid rain. The particles in emissions, while dangerous to people, neutralized acid. Once they were scrubbed out of the emissions, the pollution stayed acidic. Additionally, the combination of fewer heavy particles and higher smokestacks meant that the pollution could travel farther from its source.
In 1974, Gene Likens submitted a paper to Science that explained this discovery and concluded that acid precipitation was falling on a large part of the northeastern U.S. The potential effects of acid rain included:
However, it was too early to say if any of these effects were actually happening yet.
While Likens and his colleagues couldn’t prove that the effects of acid rain were actually a problem or already underway, other studies had also found some early warning signs. This science was mainly published in government reports (often written in the issuing government’s own language) and specialized journals, so most journalists, politicians, and the public would never read them.
In 1976, Likens summarized everything that was known so far—that acid rain was damaging, far-reaching, and could cross borders—in Chemical and Engineering News. Likens’s article clearly said that acid rain was a problem, but the magazine sowed doubt—it included a caption that suggested that the cause wasn’t entirely clear, and neither were the consequences.
Here’s what was clear at this point:
Here’s what was still unknown:
If acid rain was caused by naturally occurring sulfur, there would be no point in regulating human emissions of sulfur because they weren’t actually responsible for causing acid rain. Therefore, an important next step was figuring out if the sulfur was anthropogenic.
Scientists did this in two ways:
1. Mass balance. Meteorologist Bert Bolin and his colleagues considered the two major naturally occurring sources of sulfur—volcanoes and sea spray—and then compared how much these sources produced to how much sulfur was falling in acid rain in Sweden. There are no active volcanoes near Sweden, so volcanoes weren’t responsible for the sulfur. Sweden does have sea spray, but sea spray doesn’t travel very far, so it couldn’t be a source inland. Therefore, a lot of the sulfur had to come from pollution, because there was nothing else that could have produced it in such large quantities.
2. Isotopes. Isotopes are heavier or lighter versions of the same chemical element. Their relative abundance can be used like a “fingerprint”—for example, the sulfur isotope sulfur-34 appears in different quantities in different sources (volcanoes have a different amount of sulfur-34 than car exhaust). Measuring the amount of the isotope present can tell scientists where the sulfur in acid rain comes from.
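(Shortform example: The isotope “fingerprint” logic is a standard two-source mixing calculation. The Python sketch below is ours, not from the book, and every number in it is invented purely for illustration; real sulfur-34 measurements would differ.)

```python
# Two-source isotope mixing: if natural sulfur and pollution sulfur carry
# different sulfur-34 signatures, the signature measured in rain reveals
# the share contributed by each source via linear interpolation.
# All values below are hypothetical, chosen only to show the arithmetic.

def pollution_fraction(sample: float, natural: float, pollution: float) -> float:
    """Fraction of the sampled sulfur attributable to the pollution source."""
    return (sample - natural) / (pollution - natural)

# If natural sources measured 20.0, pollution measured 4.0, and the rain
# measured 8.0, then 75% of the sulfur came from pollution.
print(pollution_fraction(sample=8.0, natural=20.0, pollution=4.0))  # 0.75
```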
In 1979, geologist Noye Johnson published evidence of nutrient leaching at Hubbard Brook in scientific journals. Interestingly, though acid rain was falling on the forest, the streams weren’t acidic. Johnson and his team found that this was because of nutrient leaching: as the rain moved through the soil, the acid reacted with (and depleted) soil nutrients. These reactions neutralized the rain, so the water running into the streams had a normal pH.
In October 1979, after acid rain had been studied for almost a quarter-century, Likens and his colleagues wrote a piece for Scientific American, which is aimed at the public. Neither the article nor the magazine’s editors expressed any doubt about the science.
In November 1979, the UN Economic Commission for Europe adopted the Convention on Long-Range Transboundary Air Pollution, which made it illegal for a country’s pollution to affect another country. This meant that the signatories would have to address atmospheric emissions, including sulfur.
Also in 1979, the U.S. and Canada issued a Joint Statement of Intent to reduce emissions and therefore cross-border pollution and acid rain. Environment Canada had found that over half of the acid rain in Canada was created by pollution in the U.S.
In 1980, Carter signed the Acid Precipitation Act of 1980, which kicked off a 10-year research program on sulfur and nitrogen oxides. He also created a committee and started negotiations with Canada to work together scientifically and politically on acid rain. Canada and the U.S. signed a Memorandum of Intent—both countries would:
In 1980, Reagan was elected and his views—deregulation, a hands-off government, and prioritizing private enterprise—conflicted with the acid rain science. He and the merchants of doubt were concerned by the memorandum of intent and the U.S.-Canadian working groups.
In 1982, Reagan, via the White House Office of Science and Technology Policy (OSTP), called for a review and summary of the working group’s work (which wasn’t completed yet). This seemed unnecessary because the science around acid rain was well-studied by this point—the National Academy of Sciences (NAS) had already done a review of the existing literature in 1981, concluding that acid rain was dangerous and needed to be addressed by reducing emissions. An Environmental Protection Agency (EPA) report concurred.
The OSTP created the Acid Rain Peer Review Panel to do the review and summary and chose William Nierenberg to chair it (possibly because he’d just wrapped up a report about the effects of carbon dioxide on climate that found everything was fine and the only thing to do was more research).
Nierenberg’s panel included several respected scientists including Sherwood Rowland, who had shared a Nobel Prize in chemistry for his work on the ozone hole (see Chapter 7), and Gene Likens. It also included S. Fred Singer at the White House’s suggestion.
At the beginning of 1983, the panel determined that they would include everyone’s point of view, however dissenting, in the summary, and it would be jointly authored. There would be no appendices.
In 1983, the U.S. and Canadian working group’s reports found three critical pieces of information:
Their suggested fix was to use the technology that already existed to reduce emissions. If emissions weren’t reduced, the problems would get worse.
(The reports weren't as explicit about these three pieces of information as they could have been. Scientists, like MODs, tend to emphasize uncertainty, though for different reasons—focusing on uncertainty is what pushes science forward.)
Most of the text of the U.S and Canadian reports was the same—“agreed text”—but the U.S. report found greater uncertainty in the cause-and-effect relationship between pollution and acid rain. (The Canadians probably were more motivated than the Americans to take action on acid rain because much of their economy depended on forests and fish. The Americans were probably less keen because most of the pollution was coming from the U.S., so they would have to pay for the cleanup.)
It would take over a year for Nierenberg’s panel to release their summary of the working groups’ report. Most of the panel found that the working group’s work was thorough and that acid rain was understood well enough to justify action now. There were uncertainties, but they were details, and if we waited until those were all settled (potentially 50 years in the future), too much damage would have happened in the meantime.
Singer didn’t see it like this. As the panel reviewed the science and wrote their summary, Singer:
In March 1983, the summary was almost ready. John Robertson, who was compiling the report and putting together changes, sent the latest draft to the panelists. However, there were more revisions needed—the rest of the panelists didn’t agree with what Singer had written.
Singer’s chapter didn’t actually include any of the cost-benefit analysis he’d insisted was necessary. (He said the numbers were too hard to quantify. In fact, it is possible to assign value to nature—the White House Council on Environmental Quality had put a number on the air quality benefits stemming from the Clean Air Act—$21.4 billion a year.)
Instead, Singer’s chapter said:
Singer’s chapter ended with a question that implied that reducing emissions wouldn’t have a proportional reduction in acid rain.
The rest of the panel didn’t agree with his conclusions because they’d found:
In June 1983, the White House Office of Science and Technology Policy (OSTP) asked for an interim update. The panel put together the first draft of a press release which clearly stated that the U.S. and Canada collectively emit over 25 million tons of sulfur dioxide a year and that while scientists would prefer more certainty, there was enough known to start searching for solutions now. There were two particularly striking points: 1) the existing damage might take decades to repair, so it was fair to call it irreversible, and 2) soil damage could unbalance the food chain.
The OSTP sent back the press release with suggested revisions that weakened it. The two paragraphs about irreversible damage and soil damage were dropped, and the other paragraphs were reordered so that the press release no longer started with the emissions numbers. Instead, the release started by discussing what had already been done under the Clean Air Act and the uncertainties.
Robertson provided a new summary draft in July 1983. This version also didn’t become final—the panelists still didn’t agree. Singer suggested revisions that implied that acid rain wasn’t that bad and was too expensive to fix. For example, where a sentence in the report said that there was a need to understand the ecological consequences, Singer added the word “economic,” so it read “ecological and economic consequences.”
In September 1983, the panel’s vice chair shared interim conclusions with the House of Representatives Committee on Science and Technology. Singer wrote to the committee chair to complain that the vice chair’s claims were unsupported. Singer argued that there wasn’t enough information, brought up true but irrelevant information, and made it seem like the panel was divided when in reality, everyone agreed except Singer.
Another draft of the summary was ready in February 1984. This was the last draft most of the panelists saw, and they had different opinions on how to handle Singer’s chapter. Some wanted to demote it to an appendix signed by Singer (the rest of the report was jointly authored, as had been agreed on and was the norm); others would accept it as a chapter if Singer revised it. Robertson input changes in March and made three copies of the final report—one each for himself, Nierenberg, and the White House OSTP. The OSTP received the report in April.
The White House suppressed its release because they didn’t want to spur policymakers into action (in April, an important House subcommittee was considering acid rain legislation).
In May, the OSTP asked Nierenberg to:
1. Change Singer’s chapter to an appendix. If Singer was the sole author of an appendix, the rest of the panel wouldn’t have to sign off on it.
2. Revise the Executive Summary. The OSTP asked to start with a description of the panel’s formation instead of the realities of acid rain. The OSTP also wanted the sections about the dangers and causes of acid rain moved to the next-to-last paragraph.
It’s normal for the leader of a panel (in this case, Nierenberg) to meet with the organization that commissioned the panel’s report (in this case, the OSTP) and present the final report to government officials. What’s not normal is for an official to ask for changes. However, Nierenberg did ultimately make the changes without informing all the panelists.
The summary was finally released in August 1984. One of the panelists, atmospheric chemist Kenneth Rahn, realized that someone had made changes without running them by the panel first. He thought the changes were so substantial that the panel wouldn’t have approved them if they’d been consulted.
The Canadians, unimpressed that the summary was so much weaker than the working group’s actual reports, asked the Royal Society of Canada to do their own review. The Royal Society found that the U.S. and Canada’s reports agreed on most things, but the U.S. summary was quite different from the actual reports. While there were uncertainties, the reviewers thought what was known was enough to take action.
In 1984, Congress axed the joint pollution control program. No acid rain legislation would go into effect during Reagan’s term because the MODs continued to claim that the cause of acid rain was still unknown and fixing it would be too expensive.
Likens tried to get the truth out. He published articles in niche scientific magazines, but mass media, especially pro-business publications, promoted Reagan’s view and ignored or missed the real science. For example:
As a result of all this conflicting press, the public thought that acid rain wasn’t well understood and that the issue was still in debate. Even Naomi Oreskes, the co-author of Merchants of Doubt, taught the acid rain debate as part of an earth science class at Dartmouth and used Krug’s work as a reference.
Scientists had kept researching acid rain, and in 1989, the World Resources Institute’s Mohamed El-Ashry said that the last 10 years of research had just confirmed what was already known.
It wasn’t until 1990 that the Clean Air Act was amended to control sulfur emissions using a cap-and-trade system.
The cap-and-trade system was effective and inexpensive—between 1990 and 2007, sulfur dioxide emissions fell by 54% and the price of electricity went down (after adjusting for inflation). The Environmental Protection Agency (EPA) reported that from 1993-2003, pollution control cost $8-9 billion and the benefits were worth $101-119 billion. All of the economic disasters that the industry claimed would befall the world if the environment were protected—costs, job loss, electricity price increases—didn’t happen.
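The scale of that gap is easy to sanity-check. Using only the EPA cost and benefit ranges quoted above (the calculation below is our own illustration, not part of the EPA report), benefits exceeded costs by roughly an order of magnitude even in the least favorable pairing:

```python
# EPA-reported figures for 1993-2003, in dollars (from the text above)
cost_low, cost_high = 8e9, 9e9
benefit_low, benefit_high = 101e9, 119e9

# Least favorable case: lowest benefit estimate against highest cost estimate
ratio_min = benefit_low / cost_high    # ~11.2

# Most favorable case: highest benefit estimate against lowest cost estimate
ratio_max = benefit_high / cost_low    # ~14.9

print(f"benefits were {ratio_min:.1f}x to {ratio_max:.1f}x the costs")
```

Even under the most pessimistic reading of the EPA’s numbers, every dollar spent on pollution control returned more than eleven dollars in benefits.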
Cap-and-trade is widely viewed as a success, and it could be effective for other types of pollution, but scientists aren’t sure if it’s actually worked in the case of acid rain. The 1990 caps were too high (which might have been influenced by Singer and Reagan’s claims that since there was uncertainty, it would be smartest to only take modest steps) and acid rain still exists and may still be damaging the environment at the same rate.
Scientists, including Gene Likens, kept studying the experimental forest where they had discovered acid rain. In 1999, Likens wrote that acid rain still existed, and forests were even more susceptible to it because global warming was additionally stressing them. The forest no longer grew. A decade later, they wrote that the forest hadn’t grown since 1982 and had been significantly shrinking since 1997.
In the last chapter, we looked at the “debate” over acid rain, which the merchants of doubt successfully fueled. In this chapter, we’ll look at the “debate” over the ozone hole, which didn’t fall prey to doubt-mongering.
The discovery of the ozone hole was roundabout—scientists found it while studying something else.
In the late 1960s, the U.S. wanted to build supersonic transports (SSTs), commercial planes that could break the sound barrier. Unlike regular planes, the SST would travel in the stratosphere, and scientists were concerned the plane’s exhaust (composed of carbon dioxide, water vapor, and nitrogen oxides (NOX)) might damage the environment because:
In 1970, two notable studies tackled the issue:
1. MIT released a study (SCEP) about human environmental impact that, among other things, considered the effect of the SSTs on the stratosphere. Scientists found that an SST fleet would increase the concentration of water vapor in the stratosphere but probably wouldn’t change the planet’s temperature. The study also noted that NOX probably wouldn’t affect atmospheric ozone.
2. A Boeing Laboratories scientist found that 850 SSTs wouldn’t have a large effect on surface temperatures. He did find, however, that the emissions would cause a 2-4% reduction in the ozone column.
In March 1971, the Department of Transportation held a conference on stratospheric flight. One of the attendees, Harold Johnston, an atmospheric chemist, disagreed with the SCEP study’s conclusion that NOX wouldn’t significantly deplete ozone. Johnston handwrote a paper while at the conference that showed that NOX would have a huge effect—10-90% ozone depletion, depending on region. The higher-air-traffic regions, such as the North Atlantic, would see the greatest depletion.
None of the other scientists were very worried about NOX, but Johnston was so vehement that, at the end of the conference, they recommended doing more research on it. No one knew how much NOX was already in the atmosphere, so it was impossible to know whether the SST emissions would be a drop in the bucket or a huge addition.
Johnston revised his handwritten paper and tried to publish it in Science. The peer reviewers rejected it because Johnston hadn’t cited an important paper on NOX and because his tone was biased—he wrote that an SST fleet would deplete ozone concentration by 50% and cause blindness. Johnston went to work revising it, but an earlier draft leaked to the press. The PR department at the University of California released the leaked draft officially, which brought the issue of ozone depletion into public consciousness.
The House of Representatives ultimately voted to cancel the supersonic transport (SST) program because it was too expensive (not because of environmental concerns). However, other countries were looking at stratospheric flight too, and the Department of Transportation wanted to look at other types of SSTs, so in 1972, Congress funded a Climate Impact Assessment Program (CIAP). It was one of the first studies to preemptively assess a technology and it would take three years.
At the same time CIAP was going on, scientists started thinking beyond NOX and wondered what effect other emissions, notably chlorine, might have on ozone. Two notable studies addressed the issue:
1. NASA’s 1973 Environmental Impact Statement for their shuttle program. Two scientists at the University of Michigan studied the exhaust of rocket boosters that used a chlorine-based fuel and found that chlorine destroys ozone. One NASA office tried to bury the results and the two scientists were asked to keep the shuttle out of their findings. They wrote a paper that shared the ozone-destroying effects of chlorine but mentioned only volcanoes as a source of chlorine.
2. Rowland and Molina, 1974. These two scientists predicted that chlorofluorocarbons (CFCs)—industrial chemicals used in aerosols and fridges—would eventually make their way into the stratosphere and, due to UV radiation, break down into compounds that would destroy ozone. The reactions were complicated, but the proof was simple—if chlorine monoxide was present in the atmosphere, it had definitely come from chlorine breaking down ozone, because there was no other way it could have gotten there. However, it was hard to measure chlorine monoxide because it's so reactive that when it touches an instrument, it vanishes.
The press covered Rowland and Molina’s work.
The CFC industry responded by doing their own research. The Manufacturing Chemists’ Association gave out millions of dollars in research grants, mainly to universities, and established two PR organizations.
MODs also tampered with the Climate Impact Assessment Program (CIAP) results. In January 1975, CIAP’s scientists (almost 1,000 of them) concluded that 500 SSTs would likely cause severe depletion of the ozone layer above the North Atlantic and deplete the layer globally by 10-20%.
The Executive Summary, however, didn’t give any of this information. It was written by the Department of Transportation, and it discussed a yet-to-be-developed version of the SST that wouldn’t harm the ozone layer.
After the summary was released in a press conference, the press attacked the scientists for being alarmists and unscientific. The public concluded that SSTs wouldn’t hurt the ozone layer.
Some of the scientists tried to publish the real findings behind the misleading executive summary. Newspapers wouldn’t print them. One scientist finally got Science to publish a letter, which forced the Department of Transportation to acknowledge their meddling, but the exchange took place entirely in Science, which the general public doesn’t read.
In January 1975, the Ford administration put together a panel to consider the Inadvertent Modification of the Stratosphere (IMOS). A few months later, the panel recommended that unless new evidence cleared CFCs, they should no longer be used in any way that could release them to the atmosphere.
The IMOS panel asked the National Academy of Sciences (NAS) to look into CFC regulations. Normally, this job would go to a regulatory government agency, such as the Environmental Protection Agency (EPA), and the NAS’s leadership wasn’t happy with the assignment, but they did it. The president of the NAS put together two panels. Anyone who already had a position on CFCs wasn’t allowed to contribute to the reports.
The CFC industry signed up Richard Scorer, a professor of theoretical mechanics. He went on a tour of the U.S. to denounce the Climate Impact Assessment Program (CIAP) study and in-progress work. Scorer said that human activities were so small as to have a negligible effect on the atmosphere, and he called ozone destruction fear-mongering. His tour wasn’t a success—a reporter discovered that he was connected to the CFC industry and he lost his credibility.
Next, the CFC industry tried to blame atmospheric chlorine on volcanoes (magma contains dissolved chlorine). If volcanoes, not CFCs, were responsible for the presence of chlorine in the atmosphere, the atmosphere would naturally contain a lot of chlorine due to volcanic eruptions and the effect of CFCs would be negligible.
There had already been some research on volcanic chlorine, but there was uncertainty around the numbers. When volcanoes erupt, they fire far more than just chlorine into the atmosphere. Water vapor is erupted too, and chlorine easily dissolves in water. As a result, a lot of the volcanic chlorine falls back to the earth as rain rather than remaining in the atmosphere.
However, no one had measured how much chlorine rains out, so the CFC industry’s Council on Atmospheric Sciences tried to prove that most of the chlorine wouldn’t rain out and would instead make its way into the stratosphere. They studied a volcano in Alaska, but the results were deemed “inconclusive” and never released, so presumably, they weren’t favorable.
The CFC industry also tried to claim that there was no proof that chlorine destroyed ozone, or that CFCs broke down to make chlorine, or that CFCs even made it into the stratosphere.
Scientists discovered several important things from 1975-76:
1. Chlorine monoxide was present in the stratosphere. James Anderson built an instrument that could measure it, and he found it. This proved that the reactions Rowland and Molina predicted (CFCs broke down to release chlorine, which destroyed ozone and produced, among other things, chlorine monoxide) really were happening.
2. One of the model numbers was outdated. Sherry Rowland continued working on ozone and realized that the National Academy of Sciences (NAS) panels had been using a measurement in their models that was probably outdated. Chlorine nitrate breaks down into chlorine and nitrogen compounds in sunlight, and the rate of breakdown hadn’t been measured since the 1950s, when scientists found it broke down on the order of minutes. Rowland remeasured and discovered that it broke down more slowly, which meant that less free chlorine was available in the atmosphere than the NAS’s model had predicted. Even though Rowland supported a CFC ban, he released the information immediately.
3. If we kept releasing CFCs in the same quantities as in 1973, the ozone layer would steadily deplete. The NAS found that the depletion was approximately proportional to the CFCs released—if emissions halved, depletion would halve. Though proposing policy was beyond the scope of the NAS’s role, the panel concluded that despite uncertainty in the predictions, CFCs should be regulated within the next two years. Many organizations called for immediate regulation.
The CFC industry wasn’t expecting such a timeline. They tried to claim that the study said nothing should be regulated for at least the next two years when what the NAS actually said was that regulations should be in place within the next two years.
Regulations were announced in 1977, but interestingly, the public had already turned away from CFCs and chosen to use alternatives, such as roll-on instead of spray-on antiperspirants. CFC use had dropped 75%.
The CFC ban went into effect in 1979.
By 1985, the scientific community thought that the science of ozone was fairly well understood. Scientists understood the nitrogen and chlorine reactions, and a shuttle flight had confirmed that the chemicals predicted to exist in the stratosphere actually did.
Then, the British Antarctic Survey announced that they’d found a large ozone-depleted area over Antarctica. They’d actually found it back in 1981, but they didn’t believe their results because nothing currently known about ozone chemistry could explain them, so they studied for another four years before announcing their discovery.
Other scientists didn’t believe the results either. In addition to having no chemical explanation, there were two other reasons to doubt them:
However, when NASA’s Richard Stolarski rechecked the satellite data, he found that the instrument actually had picked up the depletion—but the concentrations were so low that the instrument’s software flagged the readings as errors, and the scientists working with it assumed the instrument was faulty. It wasn’t, and there really was an “ozone hole” (an area where the ozone concentration was below 150 Dobson units) over the entirety of Antarctica.
No one knew why or how the hole had appeared, but they did know that it was a problem—if it got much bigger, it would overlap with populated areas of Australia and South America, exposing people in these countries to UV radiation.
To find out more, NOAA and NASA sent two expeditions to Antarctica:
Almost every stratospheric scientist was involved with the study and by 1986, there were two theories for what had created the hole.
In 1987, the AAOE (Airborne Antarctic Ozone Experiment) found that the hole was created by both CFC breakdown and the Antarctic’s weather. There are two weather effects unique to the Antarctic:
While the U.S. had banned CFCs in 1979, Europe and the Soviet Union hadn’t, and global emissions were increasing. In 1987, the United Nations Environment Programme (UNEP) put together the Montreal Protocol, which required that nations cut CFC emissions in half over a long period of time. It also required nations to meet regularly to consider new evidence.
In 1987, Fred Singer argued that ozone depletion was natural and scientists were using “the ozone scare” to get more funding. He published an article in the Wall Street Journal in which he emphasized uncertainty, accused scientists of being self-serving and politically motivated, and tried to distract the public.
He wrote:
As Singer had pointed out, there were still uncertainties in ozone science. For example, the ground-based instruments didn’t agree with satellite-based ones, and no one knew which was right.
Robert Watson established the Ozone Trends Panel to address some of the uncertainty. In 1988, the panel found that it was the satellite’s numbers that were off but that they could be corrected. It also found that the models had underestimated the depletion.
Scientists also started studying the Arctic for two reasons:
In 1988, scientists found that there wasn’t a hole but there was depletion:
In 1988, Singer tried to publish a letter to explain his theory of what had caused the ozone hole—natural climate variability that resulted in warming of the Earth’s surface and cooling of the stratosphere. (Cooler temperatures speed up the reactions that destroy ozone.)
Since the variability was natural, it would eventually tip back the other way and return ozone levels to what they had been previously. Nothing had to change.
Singer cherry-picked sections of two papers that discussed the relationship between surface warming and stratospheric cooling to support his position. (If read in full, both these papers discussed why warming of the surface and cooling of the stratosphere weren’t natural.) But Singer just cited the parts that worked for him:
1. V. Ramanathan predicted that as more heat was trapped closer to Earth’s surface in the troposphere, less would make it into the stratosphere, so it would cool.
2. James E. Hansen made a graph of unnatural surface warming and Singer used the graph to suggest that it was natural.
Singer’s theory wasn’t impossible, but it discounted 10 years of evidence, and Science refused to publish Singer’s letter. He eventually published in EOS, the American Geophysical Union’s newsletter.
Watson of the Ozone Trends Panel organized the Airborne Arctic Stratospheric Expedition (AASE). A plane flew out of Norway throughout January and February 1989, and scientists found that the nitrogen and chlorine levels were similar to the Antarctic’s. The last flight found that chlorine monoxide concentrations were actually higher than in the south. However, the Arctic’s vortex was weaker and its stratosphere wasn’t as cold as Antarctica’s, which is why there was no hole.
In 1989, Singer wrote a doubt-mongering letter in National Review that:
It was now clear that CFCs were damaging the ozone layer, and regulations could be based on evidence rather than predictions. The CFC industry conceded the point and stopped fighting.
In June 1990, the Montreal Protocol was amended to ban the production of CFCs and other chlorine-releasing chemicals, with phase-out deadlines ranging from 2000 to 2040.
Scientists kept researching ozone depletion and the atmosphere, but mainly because they were concerned about CFC substitutes and their meteorological effects, not because there was any doubt that CFCs damage the stratosphere.
It was common in the early 1990s for capitalists to believe regulating the environment would lead to socialism, and Singer and other MODs didn’t give up. The Marshall Institute, Washington Times, National Review, and other anti-Communist or pro-business publications published Singer, and other scientists joined him to blame ozone depletion on volcanoes (even though scientists had cleared up the volcano question years earlier).
In 1993, Sherry Rowland started to worry about the spread of this inaccurate information and tried to clear things up in a presidential address by re-explaining the science of volcanoes and how chlorine reaches the stratosphere. It didn’t take—the media had spread Ray’s information to the general public.
From 1994-1995, Singer kept repeating his claims and in 1995 acted as a witness in congressional hearings on “scientific integrity.” Singer said that Robert Watson’s testimony was inaccurate and ozone depletion wasn’t a problem.
Then, Sherry Rowland and two other scientists won the Nobel Prize in Chemistry for their work on ozone in the stratosphere, which clearly demonstrated that their science was widely accepted. Singer responded by attacking the Nobel committee for making a political statement.
The Montreal Protocol held and CFCs stayed banned, but Singer’s work definitely had an effect—politicians still trusted him, and Ray and Singer’s claims that scientists were untrustworthy and crooked were hard to shake and would persist into the future.
In the last chapter, we looked at an example of science successfully informing policy despite the interference of the merchants of doubt. Now, we’ll look at the opposite case—the MODs discrediting science enough to stop policy on one of the more important issues of the day: global warming and climate change. Climate change is a hugely inflammatory issue for the MODs because addressing it would involve regulating energy use, which is the keystone of economic activity.
Research on atmospheric carbon dioxide (CO2) and its effect on climate started over 150 years ago:
In 1965, Roger Revelle summarized the possible impacts of the warming caused by CO2. There were still a lot of unknowns, so he focused on sea-level rise because it seemed the most certain. He estimated that by 2000, there could be notable changes in climate because there would be 25% more atmospheric CO2. As the planet warmed, seawater would expand (a material’s volume increases as it warms), raising sea levels and pushing water farther inland.
Lyndon Johnson read the report but had bigger problems (such as the Vietnam War) and didn’t do anything about it. The next president, Nixon, did consider atmospheric changes, but he was focused on the greenhouse gas effects of water (from supersonic transports (SSTs)), not CO2.
In the 1970s, there was drought in Asia and Africa that caused famine. The famines affected the whole world because the price of food increased globally. This brought some attention to the fragility of the global food supply, which would be affected by changes to weather patterns and global warming.
In 1977, the Department of Energy asked the Jasons, an independent, elite scientific group, to review the science on climate and CO2. The Jasons group:
Other scientists had already established all this, but the Jasons were highly regarded and their study attracted attention from the White House.
None of the Jasons were climate scientists, so President Carter’s science advisor asked the National Academy of Sciences (NAS) to review their study. The NAS put Jule Charney, an MIT professor and eminent meteorologist, in charge. He put together a panel to review the existing knowledge and additionally invited two climate modelers to share the results of their latest, state-of-the-art climate models. The models had similar results to the Jasons’. They found that doubling the concentration of CO2 in the atmosphere would likely increase the surface temperature by three degrees Celsius. (Exactly how much the surface temperature would change as a result of increased CO2 was one of the major uncertainties.)
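The “roughly three degrees per doubling” figure is often expressed today with a logarithmic forcing formula. The sketch below is illustrative only—it uses the later Myhre et al. (1998) approximation and an assumed sensitivity parameter chosen to reproduce the panel’s number, not the panel’s actual models:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing from CO2 in W/m^2, per the Myhre et al. (1998)
    logarithmic approximation. 280 ppm is the usual preindustrial baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

forcing_2x = co2_forcing(560.0)     # forcing from doubling preindustrial CO2
sensitivity = 0.81                  # assumed K per (W/m^2), picked to match ~3 K
warming = sensitivity * forcing_2x  # ~3.0 K per doubling

print(f"forcing: {forcing_2x:.2f} W/m^2, warming: {warming:.1f} K")
```

Because the forcing grows with the logarithm of concentration, each successive doubling of CO2 adds about the same increment of warming.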
The panel looked at natural processes that might slow down the warming—for example, if cloud cover increased, clouds would block some sunlight and less heat would make it into the atmosphere to be trapped by CO2. However, they didn’t think any of the natural processes would be enough to prevent warming.
Additionally, none of the complications the models couldn’t capture—like ocean circulation, clouds, or winds—would change the math much. They were factors that would affect the second decimal place of the temperature change, not whether or not the warming was happening at all.
Charney’s panel didn’t know how long it would take to see changes because the models didn’t account for the storage of heat in the oceans. It takes a long time to warm up an ocean, and the time required depends on a variety of factors including how much the oceans mix and how deeply the heat goes. The panel estimated that ocean mixing would hold the warming off for decades—while the atmosphere had already changed, the effects wouldn’t be clear for years.
The White House Office of Science and Technology asked the NAS to keep looking into climate change and answer some of the uncertainties Charney’s work hadn’t, particularly, how far off climate change was. The MODs got involved with both assessments.
In April 1980, a NAS committee chaired by economist Thomas Schelling that included Bill Nierenberg and Roger Revelle submitted a letter report. Schelling’s letter:
The NAS’s Carbon Dioxide Assessment Committee started on another assessment. Meteorologist John Perry had suggested that the committee review existing work (which was normally what the NAS did), but Bill Nierenberg wanted new research done.
Nierenberg won out and became head of the committee. Energy Department officials didn’t want to hear anything alarming; they just wanted “guidance,” so Nierenberg chose members who would agree with this position.
The report was published in 1983 and, unusually for the NAS, wasn’t jointly authored. Instead, it was composed of seven individually authored chapters because the committee couldn’t agree on a collective assessment.
Five of the chapters were written by natural scientists and covered the evidence for anthropogenic climate change. They all agreed that humans had increased atmospheric CO2 and that, unless something was done, the increase would continue and have biological and physical consequences. The natural scientists agreed that there was still uncertainty about the details, but we needed to act now.
The other two chapters were written by economists and covered emissions and impacts. The first, written by William Nordhaus, was about future CO2 emissions and energy use. Nordhaus agreed with the natural scientists that anthropogenic CO2 emissions were increasing due to burning fossil fuels, but instead of labeling this a problem, he focused on uncertainty about future effects and on economic and social consequences. Nordhaus wrote that a permanent carbon tax would be impractical, that there was little to be done about climate change, and that the most economical way to “handle” it was inaction.
The last chapter was written by Schelling and rejected the idea that focusing on CO2 was the solution. Schelling insisted that climate change in general (not anthropogenic climate changes specifically) was the issue. Therefore, anything that changed climate (such as natural variability) needed to be considered. He wrote that even if the problem was caused by CO2, the solution might be elsewhere, like adapting.
Schelling also assumed that climate change wouldn’t take effect for over a hundred years, after the policymakers of the day were dead and gone. Future leaders would be able to make better policies than today’s politicians because they would have a better idea of the future society’s needs. For example, today’s humans couldn’t know the energy needs or climate preferences of future humans.
The assessment’s Executive Summary focused on the economists’ chapters, and these chapters were placed first and last. The synthesis was clearly at odds with the rest of the report, but the report was published (one scientist said Academy review was lax at the time).
Some scientists criticized the report, notably physicist Alvin Weinberg, who had been one of the first in his field to realize how dangerous climate change might be. He wrote that there was no evidence to back up the recommended inaction and that the synthesis didn’t reflect what most of the report said. Most scientists, however, just ignored the report because they considered it junk science.
Economists had problems with the report too. In the late 1960s, some economists had concluded that the free market was by nature a danger to the environment because it focuses on growth. The earth doesn’t grow—it has a finite capacity to absorb pollution and a finite amount of resources.
The White House approved of the report and used it to refute other reports (such as two Environmental Protection Agency ones) that recommended action. It was exactly what they wanted—it appeared united, encouraged inaction, and said that future technology would handle anything that did come up.
The press prioritized the National Academy of Sciences (NAS) report.
In 1987 (not much happened on climate change while atmospheric scientists were busy with the ozone hole), climate modeler James E. Hansen testified in a climate hearing. The media ignored him. However, in 1988, there were high temperatures and drought in the U.S. and the public started to take global warming more seriously. Hansen re-testified in 1988 and announced that anthropogenic global warming had already started.
New research showed that over the past eight years, the planet was half a degree warmer than the 1950-1980 average, and there was only a 1% chance that this was due to natural events. Hansen and his colleagues also modeled some scenarios for the future: The surface temperatures could be higher than they’d been 120,000 years ago.
This time, Hansen got media attention. He was on the front page of the New York Times. Some scientists, who didn’t like the attention or were perhaps jealous, criticized him because there were still uncertainties and it seemed premature to announce warming was already happening.
Hansen created political pressure and presidential candidate George H.W. Bush promised to do something about climate change. Bush would propose funding research and send people to climate science meetings.
In 1988, Bob Watson and Bert Bolin put together the Intergovernmental Panel on Climate Change (IPCC). The panel was to involve over 300 scientists and complete an assessment by 1990.
Bolin divided the panel into three groups: one to report on climate science, one to look at the impacts, and one to figure out what to do about it.
In 1989, the MODs tried to blame global warming on the sun’s variability. The Marshall Institute (Jastrow, Nierenberg, and Seitz) put out a report that claimed that the warming didn’t line up with the increase in CO2. There was warming before 1940 (before significant emissions), cooling from 1940-1975, and then more warming. Atmospheric CO2, on the other hand, had continually increased. Since the trends didn’t match, CO2 couldn’t be the cause.
To support this, the report included one of Hansen’s diagrams, which plotted CO2 against temperature. The lines didn’t match up particularly well, but that was because the MODs had only included part of the diagram. The full diagram was multiple graphs that plotted the effects of CO2, the sun, and volcanoes against temperature. Because all these factors affect surface temperatures, all of them had to be considered, and when they were, the temperature did line up.
The paper claimed the “real” cause of temperature changes was the sun, which went through periods of higher and lower output on a 200-year cycle. The cycle was just about finished, so things would cool down soon. The Marshall Institute piece didn’t mention that the sun’s energy output hadn’t increased in the 1970s, so the later warming couldn’t have been caused by the sun (solar output had changed during the earlier, pre-1940 warming period).
There was another large flaw in this argument—the changes in solar output were small, so if they’d affected climate, the climate must be very sensitive. If it was very sensitive, small changes in atmospheric CO2 could cause large changes to surface temperatures too.
The Institute asked to present its work to the White House, was approved, and Nierenberg gave a briefing. Many politicians took it seriously. Bert Bolin either wasn’t invited or didn’t realize he had to ask.
The Intergovernmental Panel on Climate Change (IPCC) published its First Assessment in May 1990. It:
The merchants of doubt continued to plug their sun theory and attack the IPCC, but they started going after individual scientists too.
Roger Revelle was an important voice in the climate change discussion. He was also Al Gore’s mentor, so if the MODs could make it look like Revelle had changed his opinion on global warming, it would not only sow doubt, it would hurt Gore’s 1992 presidential campaign, which focused on the environment.
Singer attended one of Revelle’s talks and approached him about co-writing an article to be called “What To Do about Greenhouse Warming: Look Before You Leap.” Revelle agreed, but shortly after, he had a heart attack. An operation saved him, but there were further complications and his health deteriorated. He and Singer continued working on the article (though Revelle appeared to have lost enthusiasm), and due to his health, Revelle may not have been able to review changes closely.
While still working with Revelle, Singer wrote his own article with almost the same title (“What To Do about Greenhouse Warming”). This article reused the Marshall Institute’s work and focused on uncertainty.
In February 1991, Singer and Revelle met to discuss their paper. They argued about how sensitive the climate was to CO2. The current draft claimed that the warming would be “well below the normal year to year variation” and less than 1 degree Celsius. Revelle corrected the numbers in the margin. Singer ultimately cut the numbers altogether, so the next draft referred only to a “modest average warming” and still contained the phrase about the warming being below normal variation. The historical record doesn’t show whether Revelle ultimately agreed to this wording, but everyone who knew him believed he hadn’t. As a geologist, Revelle was well aware of natural variability (which is determined from paleoclimate data).
The article appeared in Cosmos (a non-scientific journal with low circulation) and because Revelle was listed as the second author, it looked like he had agreed that the warming wasn’t significant or outside of normal. One of Revelle’s colleagues stated that Revelle was embarrassed by the paper (Revelle died shortly after it came out).
While few scientists read Cosmos, the article made its way into public consciousness. In 1992, Al Gore’s critic Gregg Easterbrook attacked him for failing to mention that Revelle had ultimately concluded that there was too much uncertainty to act. The quote Easterbrook used to support this was from Singer’s solo 1990 paper with a similar title to the co-authored one. Other critics repeated the attacks.
People tried to set the record straight:
1. Revelle’s daughter published an op-ed.
2. Two of Revelle’s colleagues wrote to Cosmos. They wrote that Singer had written the paper, not Revelle, and that Singer had probably only included his name as a thank-you for advising and reviewing. Cosmos wouldn’t publish their letter, so they sent it to Oceanography (a scientific journal mainly read by scientists).
3. Justin Lancaster, a colleague and close friend of Revelle:
Revelle probably didn’t actually change his mind about global warming. In November 1990, he wrote an unpublished introduction for a meeting (Oreskes and Conway found it in his papers) stating that he thought climate change was serious and that we should act.
In June 1992, the U.N. held an Earth Summit to address what to do about climate change. The MODs’ techniques were working, and President Bush only decided to attend at the last minute. The conference produced a Framework Convention, which 192 countries signed, committing to preventing “dangerous anthropogenic interference” with the climate system. The Framework didn’t include any emissions limits, however; it was just a statement of principle.
By 1992, it was clear that global warming was real and happening now. The only major remaining task was to prove that it was caused by humans—if it was, then contrary to what the MODs claimed, there was definitely something we could do to slow or stop it.
To prove that humans were causing global warming, scientists looked for ways in which anthropogenic warming was distinct from other causes of warming. This approach was called “fingerprinting” or detection and attribution.
A possible fingerprint of greenhouse gases was the temperature of all the layers of the atmosphere. If the sun was the source of warming, since it was outside the atmosphere, the entire atmosphere should warm. If greenhouse gases were the source, however, only the lower atmosphere (where the gases trap heat) would warm up. The outer layers of the atmosphere would actually cool because less heat than usual was making its way through the inner layers to the outer. (In Chapter 7, Singer brought this up when doubt-mongering ozone depletion.)
Climate scientist Benjamin Santer began looking at the vertical structure of the atmosphere. He had worked on checking the accuracy of climate models as a student, and he learned about fingerprinting at one of his first jobs. In 1994, the Intergovernmental Panel on Climate Change (IPCC) asked Santer to be the convening lead author for Chapter 8 of its next (second) assessment. The IPCC wasn’t very prestigious at the time, but Santer agreed after some convincing.
Santer’s job would be to organize a chapter about fingerprinting based on available information—basically, coordinate the writing and put together everyone’s individual parts. His chapter had 36 authors and they met in August 1994 to start work.
In October and November, Santer went to the first drafting session, where all the lead authors of the entire report met. The first hiccup was whether to discuss the uncertainty of the model and observations in Chapter 8. Uncertainty was already covered in other chapters, but Santer wanted to include it in Chapter 8 as well because readers might not read the report in full, and Santer had no say in how it was discussed in other chapters. Santer got his way and his chapter would also include a discussion of uncertainty.
Next was the first round of Chapter 8’s peer review. Approximately 20 detection and attribution scientists, everyone who had participated in writing the chapter, and all the other chapters’ lead authors reviewed Santer’s chapter.
In March 1995, there was a second drafting session, and in May, the whole report and policymakers’ summaries went to reviewers selected by the governments who were members of the IPCC. Santer also sent his work to Nature around the same time for review.
In July 1995, there was a third drafting session. Santer hadn’t received the government reviewers’ notes yet (he had been chosen as the lead author later, so the other chapters were further along), so he presented his unreviewed results. He and his team had been working on the vertical structure of the atmosphere and might have proved that humans had changed the climate.
In September 1995, the whole report was leaked and Santer’s chapter made the news, though the New York Times didn’t quite get the story right. It reported that scientists had just now started to think human activity was likely a cause of global warming. In reality, scientists had thought it was likely for years and had just proven it.
Proof that the warming was anthropogenic was bad news for the MODs. In early November 1995, the Republican majority in Congress held hearings in which they questioned the science. One of the witnesses, ozone-depletion denier and climatologist Patrick J. Michaels, argued that the IPCC had gotten it wrong because the model predictions didn’t match the observed temperatures measured by NOAA weather satellites. He’d mentioned this in his peer review of the report, but the IPCC had ignored him.
Witness Jerry Mahlman, a NOAA scientist, explained that the model didn’t match the observed weather data because it wasn’t supposed to. The model deliberately ignored the effects of the sun or volcanic dust because the point was to isolate the effect of CO2. However, the satellites measured real life, which included the sun and volcanoes, so obviously the temperatures were different. The model and observations weren’t looking at the same thing, so it didn’t make sense to compare them.
The hearing didn’t get much media attention (everyone knew the Republicans opposed environmental action by now), but it did reinforce Republicans’ confidence in doing nothing.
In November 1995, there was a final IPCC plenary session. At this meeting, Santer received the government reviewers’ comments. When Santer presented his results, representatives of two governments (Saudi Arabia and Kuwait) immediately opposed his chapter, and one delegate (Kenya) thought the report shouldn’t cover fingerprinting at all.
The IPCC chairman put together a drafting group to revise the chapter and sort everything out. The drafting group included the lead authors, delegates from other countries, and the delegates who had opposed the chapter (except the Saudis didn’t send anyone). When Santer shared the revised chapter, the Saudis still didn’t like it.
The group put the discussion on hold to finish the Summary for Policymakers. There was disagreement here too—everyone had different opinions about the adjective used to describe the anthropogenic influence on climate. After trying almost 30 words, they ultimately settled on “discernible.”
Santer coordinated implementing the changes requested at the plenary. The major change was organizational—all of the other chapters in the report had summary statements only at the beginning, but Chapter 8 had statements at both the beginning and the end, so the chairman told Santer to take out the statement at the end of his chapter.
In February 1996, before the second IPCC assessment was released, Singer published a letter in Science that claimed:
Tom Wigley, one of the Chapter 8 authors, responded, debunking everything. Singer’s only response was to provide the missing citation.
In May 1996, before the IPCC report was published, Santer and Wigley gave a briefing on Chapter 8 on Capitol Hill. An industry lobbyist and representative of the American Petroleum Institute accused them of secretly editing the report. Santer did make changes after the plenary session to address the discussion, government comments, and structure, as requested. But the MODs accused him of cutting out sections about uncertainty and ignoring other scientists’ dissent.
The Global Climate Coalition (an industry group) attacked the IPCC and put out a report called “The IPCC: Institutionalized Scientific Cleansing.” Nierenberg received a copy and in an interview, repeated its claims even though they were impossible to verify (the report hadn’t been published yet, so there was no way to compare older drafts to the final one to see if anything really had been cut). Nierenberg also claimed the IPCC had cut out bits about uncertainty, which was a lie—Santer had insisted on including a discussion of uncertainty in his chapter back in the first drafting session.
Fred Seitz continued the attack—in June 1996 in the Wall Street Journal, Seitz accused Santer of making unauthorized changes and corrupting the peer review process. He suggested finding a more reputable authority than the IPCC because they couldn’t follow their own rules.
Santer and 40 cosigners wrote to the Wall Street Journal to explain what had really happened—the IPCC chairman told him to make changes and this was normal—it’s how peer review works. The only differences between the IPCC review and standard peer review were:
Santer also pointed out Seitz couldn’t back up his claims—he hadn’t had anything to do with the report and he wasn’t a climate scientist. The Journal initially refused to publish Santer’s explanation. It finally did, but only after editing it and cutting the names of the cosigners. The IPCC chairman and Bert Bolin also wrote the Journal, and it edited their letters too.
The University Corporation for Atmospheric Research and American Meteorological Society boards republished the unedited versions of the letters to show how the Journal had edited them. They called out the MODs for systematically merchandising doubt.
In July, Seitz, Singer, and ozone-hole-denier Hugh Ellsaesser responded in the Journal and denied the charges. Singer accused the IPCC of coming to a “feeble” conclusion but also claimed that it was scare-mongering.
Bolin and Santer responded, and Singer tried to attack them again, but the Journal wouldn’t publish him. Singer circulated an email instead with wild claims: There was no evidence for warming, the chapter had been based on “unpublished work,” and Patrick J. Michaels should have been a lead author because he was the only person to publish on climate fingerprinting before 1995. (All of this was verifiably untrue—for example, the first paper on fingerprinting was published in 1979, and Santer circulated his own email to explain.)
In November, Singer kept making claims that were easily fact-checked—for example, he claimed that the discussion of uncertainty in Chapter 8 had been cut, but the report had been out for months and anyone who looked at it could see that it included six pages about uncertainties. However, the mass media publicized his and other MODs’ views. (A 2004 study of media coverage from 1988-2002 found that over 50% of articles gave the two sides equal weight, and another 35% acknowledged that one position was correct but still included the deniers’ views.)
Even though the MODs’ claims were false, their work influenced Bush, members of Congress, and the millions of people who read the Wall Street Journal. The U.S. never ratified the Kyoto Protocol, the 1997 agreement committing countries to reducing greenhouse gas emissions.
In 2001, the IPCC’s latest assessment found more and stronger evidence of anthropogenic climate change. The 2007 assessment called the warming “unequivocal.”
Despite the strength of the science, the “debate” persisted. In 2006, only 56% of Americans thought that the average global temperature had increased. 85% thought warming was happening, but 64% thought there was still a lot of disagreement among scientists. In 2009, the percentage of people who believed there was evidence for global warming declined from 71% to 57%.
The original edition of Merchants of Doubt was published in 2010 when, despite the science, the MODs had so effectively merchandised doubt that there was little policy change on climate change. When Oreskes and Conway updated the book in 2020, the MODs were still at work on the subject.
Though doubt-mongering ultimately failed to stop U.S. regulation of tobacco, ozone-depleting chemicals, and acid rain, it did delay regulation by decades. The authors argue that we don’t have that kind of time when it comes to climate change—we have only around 10 years to change course before the earth is irreparably changed.
In this section, we’ll look at some of the highlights from the past decade when it comes to climate change.
(Shortform note: Parts of this section appeared in the original book’s Postscript.)
In 2006, the California Global Warming Solutions Act (AB 32) went into effect, requiring the reduction of greenhouse gas emissions to 1990 levels (about 15% below the levels of the time) by 2020.
The MODs claimed:
California made most of its reductions in the electrical utility sector and met the target in only 10 years, in 2016, four years ahead of the deadline. Regulating emissions didn’t cause any of the problems the MODs warned would come to pass. From 2006-2016, the population increased and the economy grew 41%. This showed that all of the MODs’ claims were unfounded.
Like California, lots of other states have good conditions for solar and wind power, so the MODs keep reprising the doubt-mongering arguments they tried in California.
There is one real problem with wind and solar energy in Texas: Sometimes there’s too much energy, which drives spot-market prices negative. California’s grid manager deals with this by curtailing production, but better (though currently more expensive) options already exist:
Moving from fossil-fuel-powered cars to electric ones is another way to reduce emissions, but EVs have been slow to catch on, even in California. In May 2019, less than 3% of passenger vehicles were electric.
Electric cars are less numerous for three reasons:
1. Misinformation. MODs spread disparaging information about EVs, often on social media where there are no gatekeepers such as journalists to refuse to publish them. Their major tactics are:
2. Retail infrastructure. Many people aren’t aware that electric cars perform well or are available because:
3. Charging infrastructure. Charging stations aren’t nearly as widespread as gas stations, which limits where people can drive and live (most people who have EVs charge them at home, and if you rent, you may need your landlord’s signoff to install a home charger).
The doubt-mongering campaign about climate change is still ongoing.
When and how did you first become aware of climate change?
What did you think the state of the science was? Why?
What climate-related doubt-mongering techniques were you subjected to? Did they work on you? Why or why not?
What do you think the state of the science is today? Why?
In the previous chapters, we looked at how merchants of doubt created debate around emerging and unregulated issues to halt or at least delay government interference. In the case of the pesticide DDT, the MODs manufactured doubt when the issue first came up, but they also reprised their doubt-mongering years later, after the matter had been settled and DDT had been banned for years in the U.S.
The goal of the second DDT doubt-mongering campaign was to:
They weren’t trying to get DDT unbanned—it wasn’t even an effective pesticide anymore—they were just trying to defend free-market forces.
The chemical DDT (dichlorodiphenyltrichloroethane) was first synthesized in 1873, but no one was particularly interested in it. When chemist Paul Müller resynthesized it in 1940, scientists realized that DDT was an effective pesticide that could kill disease-spreading insects, such as the mosquitoes that carry malaria. It seemed perfect: It was cheap, easy to use, and appeared to have no side effects on humans.
In 1947, mosquito control workers in Florida reported that the usual application of DDT was no longer working—the mosquitoes weren’t dying. This was because they’d become resistant. When an insect population is exposed to a pesticide, only the bugs that are naturally resistant survive. Therefore, these resistant bugs are the only ones that breed and pass on their resistant genes to their offspring. With each generation, more and more resistant mosquitoes are born. In just a few years, a whole population can be resistant. (It would take 7-10 years for DDT.)
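The selection dynamic described above can be sketched with a toy model. All of the numbers below (survival rates, starting resistant fraction) are illustrative assumptions, not measured DDT data:

```python
# Toy model of resistance spreading through a pesticide-exposed
# insect population. Survival rates and the starting resistant
# fraction are illustrative assumptions, not real DDT figures.

def next_generation(resistant_fraction,
                    resistant_survival=0.9,
                    susceptible_survival=0.05):
    """Return the resistant fraction among the survivors that breed."""
    resistant = resistant_fraction * resistant_survival
    susceptible = (1 - resistant_fraction) * susceptible_survival
    return resistant / (resistant + susceptible)

def generations_until(threshold=0.99, start=0.001):
    """Count generations until the population is mostly resistant."""
    fraction, generations = start, 0
    while fraction < threshold:
        fraction = next_generation(fraction)
        generations += 1
    return generations

# Under these assumptions, a population that starts 0.1% resistant
# becomes over 99% resistant within a handful of generations.
```

Because only the survivors breed, the resistant share compounds every generation, which is why a whole population can become resistant within a few years.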
DDT also had mixed results when it was used as part of the Global Malaria Eradication Campaign. From 1955-1969, the World Health Assembly and the U.S. sprayed indoor surfaces and walls with DDT and other pesticides. This spraying helped end malaria in Australia and Europe and reduced it in parts of Latin America and India, but wasn’t very effective in less developed areas.
Spraying failed for three reasons:
1. Spraying wasn’t the only factor that eliminated malaria. Health care, education, nutrition, and reducing mosquito breeding grounds were also important, and the less developed countries didn’t always have these supporting factors.
2. For spraying to work, the pesticide needed to remain on the walls, so people couldn’t wash them or refinish them. Many people didn’t understand or like this—it conflicted with other public health instructions and made their homes feel dirty.
3. As they had in Florida, mosquitoes were becoming resistant. Indoor spraying doesn’t produce resistance quickly because only a small percentage of the mosquito population (the ones that get into buildings) are exposed to DDT. However, in many of the regions that used indoor spraying, DDT was also used outdoors for agriculture. (It was less toxic than other pesticides in use at the time and could be sprayed from airplanes, which was much more economical than other pest-control methods like clearing brush or draining swamps.) This widespread outdoor spraying exposed most of the mosquito population, so it quickly developed resistance.
Besides creating resistant insects and ceasing to work, DDT had another problem: It was harmful to humans and the environment.
The first people to realize that DDT was damaging the environment (besides possibly government scientists whose wartime work was classified) were scientists at the U.S. Fish and Wildlife Service, notably Rachel Carson. Carson, a biologist, found accounts of damage to fish and birds from DDT in government reports and obscure journals. She also found circumstantial evidence that pesticides might be dangerous to people as well. (Carson thought DDT likely caused cancer, but there had been little study of effects on humans, so she made no quantitative claims.)
Carson wrote about the dangers of pesticides in a 1962 book called Silent Spring. She covered several pesticides and spent a lot of time on DDT because it bioaccumulated. Bioaccumulation is when a substance that takes a long time to break down travels through the food chain and concentrates at the top as organisms eat other organisms that have been exposed to it. (Shortform example: A small fish might eat one unit of DDT. A medium fish might eat three of the smaller fish, ingesting three units of DDT. A large fish might eat five medium fish, ingesting 15 total units.)
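The Shortform example above can be written out as simple arithmetic. The units of DDT and the numbers of prey eaten are the summary’s illustrative figures, not real ecological measurements:

```python
# Bioaccumulation arithmetic: a predator's DDT load is the sum of
# the loads of everything it eats. All figures are illustrative.

def ddt_load(prey_loads):
    """Total DDT ingested by eating the given prey."""
    return sum(prey_loads)

small_fish = 1                             # 1 unit of DDT
medium_fish = ddt_load([small_fish] * 3)   # eats 3 small fish: 3 units
large_fish = ddt_load([medium_fish] * 5)   # eats 5 medium fish: 15 units
```

Because DDT breaks down slowly, these loads persist in each organism, so the concentration climbs with every step up the food chain.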
In addition to bioaccumulation, Silent Spring discussed other problems with pesticides:
The year 1962 was around the time U.S. environmentalism shifted from preservation to regulation, and Carson was a major player in this reorientation: She pointed out that setting aside land wasn’t much use if all the animals were dead and no birds sang in springtime.
The pesticide industry attacked Carson. They accused her of being hysterical, claimed her science was uncertain and bad, and they threatened to sue her publisher.
Other scientists opposed the book as well. Food scientists, who knew the agricultural benefits of DDT, and chemists, who liked to believe pesticides weren’t dangerous if applied properly, didn’t agree with her conclusions. Former university chancellor Emil Mrak testified that Carson’s work didn’t match the scientific consensus.
In this case, the public didn’t respond well to the doubt-mongering. The controversy increased the sales of Carson’s book, and accusing a woman writer of being “hysterical” offended feminists.
President Kennedy asked the President’s Science Advisory Committee (PSAC) to look into DDT in 1962. There was plenty of uncertainty—there wasn’t much available research on DDT’s effect on the environment, and most of the discussion around pesticides in the U.S. was concerned with efficiency and contamination of food.
The PSAC found that even when pesticides were applied properly, they were damaging the environment. This environmental damage would ultimately hurt humans—for example, DDT killed beneficial insects. The PSAC didn’t claim that the dangers were well-understood and called for more research.
The PSAC also called for policy action—while the exact dangers were unknown, they were potentially substantial. The PSAC called on those who wanted to use pesticides to bear the burden of proving them safe beyond reasonable doubt, similar to how the 1938 Food, Drug, and Cosmetic Act required manufacturers to prove their products were safe before selling them.
There were at least three more federal studies.
In 1972, the Environmental Protection Agency (EPA) banned the use of DDT in the U.S. because it harmed the environment (not because it harmed humans—it was still unknown whether it was a carcinogen). The ban was supported by the public and both political parties, and it included exceptions for public health emergencies.
Scientists kept studying how pesticides harm humans, and two important studies looked at the effects of DDT on humans:
1. Dr. Barbara A. Cohn found that women who had been exposed to DDT had a higher risk of breast cancer. She discovered this by studying women who had participated in a pregnancy study in the 1960s, which meant they had been children or teenagers during the previous two decades, when DDT was used widely. Their blood samples still existed, and Dr. Cohn reanalyzed them to check DDT levels and then looked for a link with breast cancer rates. The women with high levels of DDT or its metabolites had a five times higher risk of breast cancer. (Most other studies of DDT and breast cancer were conducted after DDT use had fallen off, so the subjects probably didn’t have high exposure.)
2. A paper in the Lancet found that DDT contributes to child and infant mortality. Exposure to DDT can cause premature births and birth defects, and DDT in breast milk can cause early weaning and shorter lactation (both of which increase child and infant mortality). The study concluded that even if DDT had been used and saved more people from malaria, it would have killed others through developmental interference.
In 2007, decades after things were settled, the merchants of doubt started accusing Carson of being wrong about DDT. The MODs claimed there wasn’t any good science to support the ban and that DDT was the only good way to kill mosquitoes.
Some of the notable players were:
All of this was untrue:
Even though the MODs’ claims were false, mainstream newspapers published them. For example, the New York Times’s John Tierney claimed that Carson’s science was bad and that an agricultural bacteriology professor named I.L. Baldwin had gotten it right but had been ignored because he was calm, unlike the “hysterical” Carson. (Baldwin hadn’t written a paper—he’d written a book review of Silent Spring. He complained that Carson’s book wasn’t balanced and wished it had instead discussed how science and pesticides were improving our lives and could fix world hunger.)
In the previous chapters, we learned the techniques merchants of doubt use to manufacture doubt, and we looked at several examples of how they successfully employed them. The merchants of doubt continue their work today, and in this chapter, we’ll look at some of the issues they’re tackling and how to protect ourselves from being taken in.
The original edition of Merchants of Doubt was published in 2010 and updated in 2020. Despite the MODs’ victories in the climate change arena pre-2010, Oreskes and Conway thought things might be improving because:
All of this progress stalled when Donald Trump was elected:
Additionally, it’s even easier today for MODs to broadcast their opinions, no matter how ludicrous or dangerous, because of the Internet. Anything posted online is shareable and permanent, and there are no gatekeepers (for example, journals that can refuse to print an MOD’s letter).
Oreskes and Conway believe the world is in a bad place, and they see this book as a wake-up call. Scientists are often told not to be negative because they’ll depress people, which leads to them giving up rather than acting. Oreskes and Conway agree that negativity can be paralyzing, but reassurance that things aren’t so bad when they really are creates the same effect—inaction.
Doubt-mongering is still alive and well—so how do you avoid being taken in?
First, don’t count on scientists to set the record straight for you. Other than defending Ben Santer, the scientific community as a whole hasn’t done much to counter the MODs. This is likely for some of the following reasons:
However, you also can’t do the science and original research yourself—you don’t have the expertise in every single field you might be interested in. Therefore, you have to rely on information that’s given to you.
When you encounter a piece of information, keep in mind the following:
1. The information tends to be legitimate when it comes from a reputable source like:
Example #1: Benjamin Santer’s papers and presentations about climate change were legitimate because he was a climate modeler and part of the IPCC.
Example #2: Fred Seitz was a scientist, but he was a physicist, not a medical professional, and he was funded by think tanks and the tobacco industry. Therefore, his input on tobacco was more likely to be doubt-mongering than real science.
2. Dissent can be doubt-mongering when the attacker is:
For example, MOD Steve Milloy regularly and dramatically attacked a variety of issues he didn’t agree with (among other things, he accused Rachel Carson of being a mass murderer). He worked with strongly pro-industry organizations.
Doubt-mongering is a powerful tool, but since it doesn’t have science (expert consensus) on its side, it is possible to see through it.
Think of a contentious scientific question that has the potential to change policy. (For example, you might consider plastic pollution.) Who supports policy changes and who opposes them?
What are the credentials of the parties involved? (For example, are they individual practicing scientists, institutions, spokespeople, and so on?)
What research have the parties done in the past, and what are they currently working on? Are they experienced in the same field as the question?
Where do the parties publish their findings? (For example, are they publishing in peer-reviewed journals, media outlets that are notoriously left- or right-wing, and so on?)
Where do the parties get their funding?