
THE WORLD IS A STAGE


THE NECESSITY OF BEING WRONG

MATEO KRUPIN

September 10, 2025



An essay tracing the half-life of knowledge across medicine, psychology, nutrition, economics, physics, and history, “The Necessity of Being Wrong” argues that error is not an accident, but the very engine of discovery—indeed, that wrongness has always been the true condition of thought.

We are trained in school to think of knowledge as cumulative. Each discovery is a stone added to a wall, each experiment another course laid, the edifice rising higher and higher until—someday, we are told—it will be complete. The textbook, with its calm declarative sentences, conspires in this fiction. “It is known that…” the page begins, and a child might imagine permanence, a fortress of fact. The truth is less reassuring. Knowledge is scaffolding. It is erected, used, dismantled, left to rust, and sometimes built again. The philosopher Karl Popper, in his book The Logic of Scientific Discovery (1934), insisted that scientific theories can never be proved true; they can only survive repeated attempts to be proved false. Truth, in this view, is not arrival but endurance. A claim is not knowledge because it is right, but because it has not yet been killed.

 

This is not the romantic picture of science as a march toward certainty. It is more austere: knowledge as provisional, fragile, living under constant threat. The unsettling corollary is that much of what we call knowledge at any given moment is wrong. Sometimes extravagantly wrong, sometimes quietly so. Wrong in a way that bleeds patients, or misguides economies, or deceives generations of students. Each field has its own rate of decay, its epistemic half-life. In medicine, half of accepted practices are overturned within a decade. In psychology, more than half of celebrated findings fail to replicate. In nutrition, over 80 percent of early claims are later contradicted. Economists forecast with the accuracy of coin flips. Physics looks sturdier only because its scaffolding is centuries old; Newton still works until he doesn’t. Even history rewrites itself every generation: Columbus the hero becomes Columbus the villain, the Crusades turn from holy to imperial, the Cold War from defense of liberty to mutual imperialism. Knowledge spoils quickly. The fortress is rebuilt so often, it might be more accurate to say it never stands at all.

 


The sky isn’t falling—until it is.

 

Yet wrongness is not simply an embarrassment to be minimized. It is the lifeblood of inquiry. Popper saw falsification as the engine of discovery: Error is not failure, but progress. Friedrich Nietzsche, writing with different intent, argued that “truths are illusions which we have forgotten are illusions.” Knowledge always overreaches, always gets things wrong, but precisely through wrongness it advances. Paul Feyerabend later pressed the point in Against Method (1975): There is no universal scientific method, no straight path of progress. What history shows is chaos, contradiction, heresy—“anything goes.” The miracle is not that knowledge is wrong, but that it thrives on being wrong.

 

Medicine offers the most visceral illustration. In December 1799, George Washington awoke at Mount Vernon with a sore throat. His physicians arrived promptly and bled him four times in the space of a single day, removing nearly 40 percent of his blood.1 Bloodletting had been the universal cure since Hippocrates; Galen codified it in the second century, and for two millennia it was the cornerstone of practice. Washington’s doctors were not charlatans. They were applying the best available science of their day. They killed him. Two thousand years of certainty collapsed within decades, undone by statistical studies showing that patients tended to do better without intervention.

 

The cruelty of error lies not only in what we got wrong, but in how long we stayed wrong. Consider the lobotomy. António Egas Moniz, a Portuguese neurologist, won the Nobel Prize in 1949 for inventing the prefrontal lobotomy, a procedure that involved drilling into the skull and severing brain connections to “cure” schizophrenia, depression, and anxiety. At its peak, tens of thousands of patients—many of them women, many of them children—were lobotomized in the United States alone. For twenty years, it was hailed as modern medicine. Today it stands as barbarism disguised as knowledge.

 


Nothing says progress like drilling holes in your patients’ heads and calling it therapy.

 

Or thalidomide. Marketed in the late 1950s as a safe sedative and prescribed to pregnant women for morning sickness, it caused severe limb deformities in thousands of children. The drug was withdrawn in 1961, too late for many. The tragedy is often told as scandal, but it was also epistemology in motion: medicine learned through catastrophe, a reminder that wrongness is not incidental but structural.

 

The rhythm repeats. In 1847, Ignaz Semmelweis noticed that women in the maternity ward run by doctors at Vienna’s General Hospital were dying of puerperal fever at three times the rate of those in the ward run by midwives.2 Doctors also performed autopsies; Semmelweis concluded they were carrying infection on their hands. He mandated washing with chlorinated lime. Mortality plummeted. His colleagues mocked him. The reigning model was still miasma—disease as bad air. Semmelweis was dismissed and later confined to an asylum, where he died after a beating. Only decades later, vindicated by Louis Pasteur’s germ theory, did handwashing enter medical orthodoxy.

 

In 1982, Barry Marshall and Robin Warren proposed that stomach ulcers were caused not by stress but by a bacterium, Helicobacter pylori.3 Their claim was dismissed as implausible. To prove his point, Marshall drank a beaker of the bacteria, developed gastritis, and cured himself with antibiotics. In 2005, he and Warren received the Nobel Prize. The consensus had been wrong.

 


Every decade, a new food villain: margarine, lard, glucose—today’s monster, tomorrow’s miracle cure.

 

These are not isolated embarrassments. They are the metabolism of medicine. In 2005, John Ioannidis published his landmark essay, “Why Most Published Research Findings Are False,” in PLoS Medicine, arguing that bias, small samples, and selective reporting make much of the medical literature unreliable. A 2013 study of more than 1,300 clinical practices in the New England Journal of Medicine found that 40 percent were later contradicted or significantly revised.4 The half-life of medical knowledge is estimated at five to seven years. In oncology, one study found guidelines updated or overturned every eight years.5
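(A quick gloss on the metaphor, mine rather than anything the studies themselves compute: if “half-life” is read literally as exponential decay, the fraction of today’s accepted practices still unrevised after t years, given a half-life of T years, is

\[
f(t) = \left(\tfrac{1}{2}\right)^{t/T},
\]

so a seven-year half-life would leave about half of current practice intact after seven years and roughly a quarter after fourteen. The percentages quoted in this essay are empirical estimates, not outputs of such a model.)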

 

The philosophy here is stark. Without wrongness, medicine would be theology. What makes medicine science is not that it is correct, but that it is corrigible. Popper’s line holds: Wrongness is not pathology but method. Hans-Georg Gadamer, in The Enigma of Health (1996), went further: Medicine is not a science in the laboratory sense but a hermeneutic art, an interpretive dialogue with uncertainty. Its strength is its fallibility.

 

Psychology has an even shorter half-life. For much of the twentieth century, psychoanalysis was psychiatry. Freud’s case studies—Dora storming out on him, the Rat Man’s obsessions—were treated as data. Freud himself described psychoanalysis as a “science of the unconscious.” Today, psychiatry regards it as untestable. What remains is literature.

 

By mid-century, new paradigms promised rigor. Stanley Milgram’s obedience experiments of 1961 asked volunteers to deliver what they thought were electric shocks to strangers. The majority complied under authority. The results were disturbing—and, critics argue, theatrical. Philip Zimbardo’s Stanford Prison Experiment of 1971, long taught as proof of situational corruption, was later revealed to have been partly staged: guards were coached, outcomes nudged.6 The Stanford Marshmallow Test’s celebrated finding, that delayed gratification in children predicted lifelong success, shrank to statistical insignificance once socioeconomic status was controlled for.

 


Economists have successfully predicted nine of the last five recessions.

 

By the 2010s, whispers of fragility had grown into a crisis. In 2015, the Open Science Collaboration attempted to replicate one hundred classic psychology experiments. Only 36 percent succeeded.7 In 2018, Colin Camerer and colleagues repeated twenty-one social science experiments published in Nature and Science. Thirteen held, eight failed.8 Entire subfields dissolved. Priming effects—subjects walking more slowly after reading words like “elderly”—vanished. Ego depletion, the idea that willpower is a finite resource, melted away. Mirror neurons, once hailed as the biological basis of empathy, receded into caution. Learning styles—visual, auditory, kinesthetic—turned out to be pedagogical folklore.

 

Popper again would have been satisfied. Theories that cannot be broken are dogmas. Psychology’s vulnerability is its saving grace. Thomas Kuhn, in The Structure of Scientific Revolutions (1962), called such upheavals paradigm shifts: the frameworks that guide research are not eternal but temporary. Psychology’s short half-life is not failure but proof that it is alive. Nietzsche might have smiled: What we call truths are only metaphors that hold for a while.

 

Nutrition, for its part, demonstrates wrongness with almost comic regularity. The French Academy of Sciences once declared potatoes poisonous, associating them with leprosy and animal fodder. By the late eighteenth century they were hailed as a staple and a savior against famine.9 In the twentieth century, fat was demonized; then sugar was. Eggs were condemned for cholesterol, later acquitted. Butter was replaced by margarine, until trans fats were revealed to be deadly. In the 1990s, red wine was celebrated for the “French paradox,” only to be quietly demoted. Kale was a superfood until it wasn’t. Coconut oil was a miracle until it was just saturated fat.

 

A 2013 review in the American Journal of Clinical Nutrition found that over 80 percent of early nutritional claims were later contradicted.10 Dietary advice has the half-life of fashion. The epistemological lesson is Nietzschean: What masquerades as truth is often ideology in disguise. The sugar industry in the 1960s funded research that shifted blame from sugar to fat. “Facts” were subsidized illusions. Feyerabend would have enjoyed the spectacle: Science, in nutrition, looks like cultural fashion, and yet precisely through error it learns.

 

Economics is little better. In 1637, tulip bulbs in the Netherlands sold for the price of houses. The fortunes built on them collapsed overnight. Theories to explain the mania have been proposed and abandoned ever since. In 1929, economists predicted recovery even as the Great Depression deepened. Classical theory assumed that markets self-corrected; unemployment reached 25 percent. John Maynard Keynes’s General Theory (1936) rewrote the field. In 2007, Federal Reserve chair Ben Bernanke declared that the subprime crisis was “likely to be contained.”11 By 2008, the global economy was in freefall.

 


Once upon a time, the unconscious explained everything. Today it explains why we read Freud as literature.

 

Philip E. Tetlock, beginning in the 1980s, tracked expert forecasts in economics and political science. His results, published in Expert Political Judgment (2005), were bleak: Expert predictions were about 50 percent accurate, scarcely better than chance. In Superforecasting (2015), he and Dan Gardner showed that small groups of statistically minded amateurs consistently outperformed professionals. Tetlock invoked Isaiah Berlin’s “foxes” and “hedgehogs”: Foxes, who know many little things, proved better forecasters than hedgehogs, who know one big thing. The philosophy here is modesty. Economics fails not by accident but by necessity: Knowledge collapses under complexity. Wrongness is the rule.

 

Physics looks more stable, but only because its half-life is slower. For 1,400 years, Ptolemy’s geocentric universe reigned. In 1543, Copernicus published De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres). He had hesitated for years, fearing ridicule and censure, and the book appeared only in the year of his death. The sun displaced the Earth. Isaac Newton’s Principia Mathematica (1687) reigned for two centuries, describing motion and gravity so accurately that the equations still carry spacecraft to Mars. Yet between 1905 and 1915, Albert Einstein displaced Newton. Space and time curved. Gravity was geometry. Einstein, in turn, refused to accept quantum mechanics as complete, deriding its entanglement as “spooky action at a distance.” He lost. Quantum mechanics has been confirmed experimentally to extraordinary precision, yet remains irreconcilable with general relativity. Each paradigm indispensable, each doomed. Imre Lakatos later argued that science advances not by falsification alone but by “research programs” that degrade until replaced. Physics confirms the point: Wrongness is grandeur in slow motion.

 

Even history, lacking laboratories, obeys the law of provisionality. In 1772, the French Academy of Sciences declared that stones could not fall from the sky. Reports of “thunderstones” were dismissed as folklore. In 1803, a shower of meteorites fell on the town of L’Aigle in Normandy, witnessed by thousands. Farmers gathered them by the basketful. Only then did the Academy reverse itself.12 Christopher Columbus, once the heroic discoverer, is now remembered as the vanguard of conquest and genocide. The Crusades, long celebrated as pious wars, are now taught as campaigns of brutality and greed. The Cold War, told by Arthur Schlesinger Jr. as a noble defense of liberty, is recounted by Howard Zinn as imperial rivalry. History’s half-life is a generation: the time it takes for one set of anxieties to be replaced by another. Nietzsche, in On the Use and Abuse of History for Life (1874), warned that history is never objective, always functional. Each age produces the history it needs. Wrongness is not an error but a necessity.

 


For 1,400 years the Earth sat proudly at the center. Then Copernicus rotated the furniture.

 

The ledger is blunt. Medicine: 40 to 50 percent of practices overturned in ten years. Psychology: 60 percent of studies fail replication. Nutrition: 80 percent of claims later contradicted. Economics: forecasts accurate half the time. Physics: paradigms displaced every few centuries. History: rewritten every thirty years. Knowledge does not accumulate. It decays.

 

Why, then, is wrongness necessary? Popper gave one answer: Only a claim that can be falsified belongs to science. Nietzsche gave another: Truths are illusions that we have forgotten are illusions. Feyerabend pressed further: Progress has always come from heresy, not rule following. Kuhn described revolutions in which the old paradigm collapses and a new one arises.13 Lakatos suggested that research programs die not from single refutations, but from attrition.14 Berlin counseled pluralism: Foxes, not hedgehogs, survive. The moral across them is the same. Wrongness is not something to be eliminated. It is the very condition of thought.

 

The churn of error—the half-lives, the reversals, the contradictions—is not evidence that knowledge fails, but proof that it works. Washington’s bleeding, Semmelweis’s ruin, Marshall’s bacteria, Milgram’s shocks, Keynes’s interventions, Einstein’s equations, meteorites in Normandy—all are moments in the metabolism of knowledge. To demand knowledge without wrongness is to demand knowledge without knowledge at all: doctrine, not science; mythology, not history.

 

The paradox is simple, and not particularly comforting. The only permanent knowledge is that most knowledge is temporary. It comes stamped with an expiration date, like milk. We drink it anyway.

 

 

Mateo Krupin (b. 1958 in Bishkek, Kyrgyzstan) studied geology in Leningrad but never finished, abandoning the field after realizing he was more interested in errors than in stones. He lives in Thessaloniki, publishing essays that blur philosophy, anecdote, doubt, and failed experiments. Known for footnotes that wander away from their subjects, he has been described—unhelpfully—as both a “historian of mistakes” and a “collector of epistemic fossils.”

 

 

Notes

1. Joseph Ellis, His Excellency: George Washington (2004), 274–76.

2. K. Codell Carter, Childbed Fever: A Scientific Biography of Ignaz Semmelweis (1992).

3. Barry Marshall and Robin Warren, “Unidentified Curved Bacilli in the Stomach of Patients with Gastritis and Peptic Ulceration,” The Lancet (1984).

4. Vinay Prasad, Adam Cifu, and John Ioannidis, “Reversals of Established Medical Practices,” New England Journal of Medicine 368 (2013).

5. Silvio Garattini et al., “How Long Does It Take for Oncology Guidelines to Become Outdated?,” The Oncologist 8 (2003).

6. Ben Blum, “The Lifespan of a Lie,” Medium, June 7, 2018.

7. Open Science Collaboration, “Estimating the Reproducibility of Psychological Science,” Science 349 (2015).

8. Colin Camerer et al., “Evaluating the Replicability of Social Science Experiments,” Nature Human Behaviour 2 (2018).

9. Rebecca Earle, Feeding the People: The Politics of the Potato (2020).

10. David Schoenfeld and John Ioannidis, “Is Everything We Eat Associated with Cancer? A Systematic Cookbook Review,” American Journal of Clinical Nutrition 97 (2013).

11. Ben Bernanke, “Housing and Monetary Policy,” speech at Jackson Hole, Wyoming, 2007.

12. Ursula Marvin, Meteorites in History (1996), 105–12.

13. See Kuhn, The Structure of Scientific Revolutions.

14. Imre Lakatos, The Methodology of Scientific Research Programmes (1978).


Cover image: When reason fails, drain wildly—that’s one way to rationalize removing half the juice.
