Flaws & blemishes of thinking scientifically

Joshua Rothman, "How Does Science Really Work?," The New Yorker, 28 September 2020

Science is objective. Scientists are not. Can an “iron rule” explain how they’ve changed the world anyway?

When I was a kid, I’d sometimes spend the day with my dad in his lab, at the National Institutes of Health. For a few hours I’d read, while eating vending-machine crackers and drinking Diet Coke. I’d spend the rest of the time at a lab bench, pipetting—using a long glass eyedropper to draw water out of one set of test tubes and drip it, carefully, into another.

I was seven, eight, maybe nine years old. Still, the lab was an interesting place for me. I understood, loosely, that my dad was investigating addiction in the brain. He believed that it depended on the way certain chemicals bind to certain receptors. To study this, the scientists in his lab performed experiments on rats, then killed them and analyzed their brains. On one of my visits, a lab tech named Victor reached into a centrifuge and removed a large container filled with foamy pink liquid. “Brain juice!” he said, pretending to drink it.

Illustration by Alexander Glandien: Michael Strevens argues that "shallow explanation" can be singularly powerful.

Often, though, we were there on weekends and were the only ones in the lab. The corridors were dim and quiet, the rooms mostly dark and deserted; the metal and linoleum surfaces were beige, gray, white, and green, relieved, occasionally, by a knob or button made of vivid red or blue plastic. Hulking machines stood on the counters—ugly but, according to my dad, incredibly expensive. Chemical showers and eyewash stations loomed; sometimes, in a distant room, a dot-matrix printer burred. In the sci-fi novels I devoured, labs were gleaming and futuristic. But my dad’s seemed worn-in, workaday, more “Alien” than “2001.” I knew that the experiments done there took years and could come to nothing. As I pipetted, I watched my dad in his office, poring over statistical printouts—a miner in the mountains of knowledge.

Later, in college and afterward, I got to see the glamorous side of science. Some researchers had offices with sweeping views, and schedules coördinated by multiple assistants. They wore tailored clothes, spoke to large audiences, and debated ideas in fancy restaurants. Their rivalries, as they described them, evoked titanic struggles from the history of science—Darwin versus Owen, Galileo versus the Pope—in which rationalist grit overpowered bias and folly. Science, in this world, was a form of exploratory combat, in which flexible minds stretched to encompass the truth, pushing against the limits of what was known and thought. It was an enterprise that demanded total human engagement. Even aesthetics mattered. “You live and breathe paradox and contradiction, but you can no more see the beauty of them than the fish can see the beauty of the water,” Niels Bohr tells Werner Heisenberg, in Michael Frayn’s quantum-physics play, “Copenhagen.”

Reading, seeing, learning all of this, I wanted to be a scientist. So why did I find the actual work of science so boring? In college science courses, I had occasional bursts of mind-expanding insight. For the most part, though, I was tortured by drudgery. In my senior year, I bonded with my biology professor during fieldwork and in the lab, but found the writing of lab reports so dreary that, after consulting the grading rubric on the syllabus, I decided not to do them. I performed well enough on the exams to get a D—the minimum grade that would allow me to graduate.

Recorded history is five thousand years old. Modern science, which has been with us for just four centuries, has remade its trajectory. We are no smarter individually than our medieval ancestors, but we benefit, as a civilization, from antibiotics and electronics, vitamins and vaccines, synthetic materials and weather forecasts; we comprehend our place in the universe with an exactness that was once unimaginable. I’d found that science was two-faced: simultaneously thrilling and tedious, all-encompassing and narrow. And yet this was clearly an asset, not a flaw. Something about that combination had changed the world completely.

In “The Knowledge Machine: How Irrationality Created Modern Science” (Liveright), Michael Strevens, a philosopher at New York University, aims to identify that special something. Strevens is a philosopher of science—a scholar charged with analyzing how scientific knowledge is generated. Philosophers of science tend to irritate practicing scientists, to whom science already makes complete sense. It doesn’t make sense to Strevens. “Science is an alien thought form,” he writes; that’s why so many civilizations rose and fell before it was invented. In his view, we downplay its weirdness, perhaps because its success is so fundamental to our continued existence. He promises to serve as “the P. T. Barnum of the laboratory, unveiling the monstrosity that lies at the heart of modern science.”

In school, one learns about “the scientific method”—usually a straightforward set of steps, along the lines of “ask a question, propose a hypothesis, perform an experiment, analyze the results.” That method works in the classroom, where students are basically told what questions to pursue. But real scientists must come up with their own questions, finding new routes through a much vaster landscape.

Since science began, there has been disagreement about how those routes are charted. Two twentieth-century philosophers of science, Karl Popper and Thomas Kuhn, are widely held to have offered the best accounts of this process. Popper maintained that scientists proceed by “falsifying” scientific claims—by trying to prove theories wrong. Kuhn, on the other hand, believed that scientists work to prove theories right, exploring and extending them until further progress becomes impossible. These two accounts rest on divergent visions of the scientific temperament. For Popper, Strevens writes, “scientific inquiry is essentially a process of disproof, and scientists are the disprovers, the debunkers, the destroyers.” Kuhn’s scientists, by contrast, are faddish true believers who promulgate received wisdom until they are forced to attempt a “paradigm shift”—a painful rethinking of their basic assumptions.

Working scientists tend to prefer Popper to Kuhn. But Strevens thinks that both theorists failed to capture what makes science historically distinctive and singularly effective. To illustrate, he tells the story of Roger Guillemin and Andrew Schally, two “rival endocrinologists” who shared a Nobel Prize in 1977 for discovering the molecular structure of TRH—a hormone, produced in the hypothalamus, that helps regulate the release of other hormones and so shapes many aspects of our lives. Mapping the hormone’s structure, Strevens explains, was an “epic slog” that lasted more than a decade, during which “literally tons of brain tissue, obtained from sheep or pigs, had to be mashed up and processed.” Guillemin and Schally, who were racing each other to analyze TRH—they crossed the finish line simultaneously—weren’t weirdos who loved animal brains. They gritted their teeth through the work. “Nobody before had to process millions of hypothalami,” Schally said. “The key factor is not the money, it’s the will . . . the brutal force of putting in sixty hours a week for a year to get one million fragments.”

Looking back on the project, Schally attributed their success to their outsider status. “Guillemin and I, we are immigrants, obscure little doctors, we fought our way to the top,” he said. But Strevens points out that “many important scientific studies have required of their practitioners a degree of single-mindedness that is quite inhuman.” It’s not just brain juice that demands such commitment. Scientists have dedicated entire careers to the painstaking refinement of delicate instruments, to the digging up of bone fragments, to the gathering of statistics about variations in the beaks of finches. Uncertain of success, they toil in an obscurity that will deepen into futility if their work doesn’t pan out.

“Science is boring,” Strevens writes. “Readers of popular science see the 1 percent: the intriguing phenomena, the provocative theories, the dramatic experimental refutations or verifications.” But, he says,

behind these achievements . . . are long hours, days, months of tedious laboratory labor. The single greatest obstacle to successful science is the difficulty of persuading brilliant minds to give up the intellectual pleasures of continual speculation and debate, theorizing and arguing, and to turn instead to a life consisting almost entirely of the production of experimental data.

The allocation of vast human resources to the measurement of possibly inconsequential minutiae is what makes science truly unprecedented in history. Why do scientists agree to this scheme? Why do some of the world’s most intelligent people sign on for a lifetime of pipetting?

Strevens thinks that they do it because they have no choice. They are constrained by a central regulation that governs science, which he calls the “iron rule of explanation.” The rule is simple: it tells scientists that, “if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with”; from there, they must “conduct all disputes with reference to empirical evidence alone.” Compared with the theories proposed by Popper and Kuhn, Strevens’s rule can feel obvious and underpowered. That’s because it isn’t intellectual but procedural. “The iron rule is focused not on what scientists think,” he writes, “but on what arguments they can make in their official communications.” Still, he maintains, it is “the key to science’s success,” because it “channels hope, anger, envy, ambition, resentment—all the fires fuming in the human heart—to one end: the production of empirical evidence.”

Without the iron rule, Strevens writes, physicists confronted with a theory as strange and counterintuitive as quantum mechanics would have found themselves at an impasse. They would have argued endlessly about quantum metaphysics. Following the iron rule, they can make progress empirically even though they are uncertain conceptually. Individual researchers still passionately disagree about what quantum theory means. But that hasn't stopped them from using it for practical purposes—computer chips, MRI machines, G.P.S. networks, and other technologies rely on quantum physics. It hasn't prevented universities and governments from spending billions of dollars on huge machines that further explore the quantum world. Even as we wait to understand the theory, we can refine it, one decimal place at a time.

Compared with other stories about the invention and success of science, “The Knowledge Machine” is unusually parsimonious. Other theorists have explained science by charting a sweeping revolution in the human mind; inevitably, they’ve become mired in a long-running debate about how objective scientists really are. One group of theorists, the rationalists, has argued that science is a new way of thinking, and that the scientist is a new kind of thinker—dispassionate to an uncommon degree. As evidence against this view, another group, the subjectivists, points out that scientists are as hopelessly biased as the rest of us. To this group, the aloofness of science is a smokescreen behind which the inevitable emotions and ideologies hide.

Strevens offers a more modest story. The iron rule—“a kind of speech code”—simply created a new way of communicating, and it’s this new way of communicating that created science. The subjectivists are right, he admits, inasmuch as scientists are regular people with a “need to win” and a “determination to come out on top.” But they are wrong to think that subjectivity compromises the scientific enterprise. On the contrary, once subjectivity is channelled by the iron rule, it becomes a vital component of the knowledge machine. It’s this redirected subjectivity—to come out on top, you must follow the iron rule!—that solves science’s “problem of motivation,” giving scientists no choice but “to pursue a single experiment relentlessly, to the last measurable digit, when that digit might be quite meaningless.”

On one level, it’s ironic to find a philosopher—a professional talker—arguing that science was born when philosophical talk was exiled to the pub. On another, it makes sense that a philosopher would be attuned to the power of how we talk and argue. If it really was a speech code that instigated “the extraordinary attention to process and detail that makes science the supreme discriminator and destroyer of false ideas,” then the peculiar rigidity of scientific writing—Strevens describes it as “sterilized”—isn’t a symptom of the scientific mind-set but its cause. Etiquette is what has created the modern world.

Does Strevens’s story have implications outside of science? Today, we think a lot about speech—about its power to frame, normalize, empower, and harm. In our political discourse, we value unfiltered authenticity; from our journalism, we demand moral clarity. Often, we bring our whole selves into what we say. And yet we may be missing something important about how speech drives behavior. At least in science, Strevens tells us, “the appearance of objectivity” has turned out to be “as important as the real thing.” Perhaps speech codes can be building materials for knowledge machines. In that case, our conversations can still be fiery and wide-ranging. But we should write those lab reports, too.