Friends,
From Wikipedia:
The replication crisis is an ongoing methodological crisis in which the results of many scientific studies are difficult or impossible to reproduce. Because the reproducibility of empirical results is an essential part of the scientific method, such failures undermine the credibility of theories building on them and potentially call into question substantial parts of scientific knowledge.
The replication crisis is frequently discussed in relation to psychology and medicine, where considerable efforts have been undertaken to reinvestigate classic results, to determine both their reliability and, if found unreliable, the reasons for the failure. Data strongly indicate that other natural, and social sciences are affected as well.
The phrase replication crisis was coined in the early 2010s as part of a growing awareness of the problem. Considerations of causes and remedies have given rise to a new scientific discipline, metascience, which uses methods of empirical research to examine empirical research practice.
A few weeks ago, the WSJ published:
The Band of Debunkers Busting Bad Scientists (WSJ or unpaywalled)
Excerpt:
…It was a routine takedown for the three scientists—Joe Simmons, Leif Nelson and Uri Simonsohn—who have gained academic renown for debunking published studies built on faulty or fraudulent data. They use tips, number crunching and gut instincts to uncover deception. Over the past decade, they have come to their own finding: Numbers don’t lie but people do.
“Once you see the pattern across many different papers, it becomes like a one in quadrillion chance that there’s some benign explanation,” said Simmons, a professor at the Wharton School of the University of Pennsylvania and a member of the trio who report their work on a blog called Data Colada.
Simmons and his two colleagues are among a growing number of scientists in various fields around the world who moonlight as data detectives, sifting through studies published in scholarly journals for evidence of fraud.
At least 5,500 faulty papers were retracted in 2022, compared with 119 in 2002, according to Retraction Watch, a website that keeps a tally. The jump largely reflects the investigative work of the Data Colada scientists and many other academic volunteers, said Dr. Ivan Oransky, the site’s co-founder. Their discoveries have led to embarrassing retractions, upended careers and retaliatory lawsuits.
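Simmons’s “one in quadrillion” line in the excerpt above is the kind of number you get by compounding independent anomalies. A back-of-envelope sketch, with entirely made-up probabilities (neither the WSJ piece nor Data Colada publishes these figures):

```python
# Back-of-envelope: how independent anomalies compound.
# The numbers below are hypothetical, chosen only for illustration.

# Suppose each of 10 papers shows a data pattern with only a
# 1-in-30 chance of arising innocently, and the papers are independent.
p_benign_single = 1 / 30
n_papers = 10

# Probability that *every* anomaly has a benign explanation.
p_all_benign = p_benign_single ** n_papers
print(f"P(all benign) = {p_all_benign:.2e}")  # on the order of 1e-15
```

Each individual pattern is merely suspicious; it is the multiplication across many papers that pushes the benign explanation down to roughly one in a quadrillion (10^15).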
The world gets more complex every day. The progression of workhorse running back to committees of 3rd-down scatbacks and TD-vulture bowling balls is a seasonally appropriate metaphor. Careers are increasingly specialized. It’s the apotheosis of comparative advantage and the principle of economic trade. I’ll know everything there is to know about chromosome 18 and use my academic grant to pay for food grown by one of 4 industrial farming conglomerates.
All this complexity means a major 21st-century skill is being meta — knowing which sources and curators to trust. Distinguishing science from pseudoscience is not easy for laypeople. For every study that says X there seems to be an equal and opposite study that says Not X. Grifters know this. All they need to do is find an audience that pre-believes either X or Not X, serve them confirmation, and, as Fiddy Cent once said, “watch da money pile up”.
Some of these false data revelations are nefarious. I suspect many are just the natural human response to the distant gravitational force of incentives. I take a measurement, it’s hard to interpret it — eh, tie goes to the runner. It just happens that I’m always the runner.
It’s tempting to diagnose and prescribe, but the root of such problems is so…forgivably human. It’s a timeless game of whack-a-mole, deeply embedded in the arms race of deceiving others and ourselves — motivated reasoning, ego protection, signaling. This is never going away. The best we can do is employ relatively narrow truth-finding tactics. Fraud hounds like the Data Colada crew are a step in the right direction. But even they aren’t above their own humanity snagging them.
The tensions here suffuse discourse on regulation, free markets, free speech…all the big boy words. I get the same feeling about “battling misinformation” that I get from one of my all-time favorite movie scenes — utter nihilism.
If you want to dig deeper into these themes, C. Thi Nguyen’s thread is a great starting point. Reprinted with my own emphasis:
New paper out: "Hostile Epistemology"! The basic idea: we seem so willing to attribute the misinformation crisis and the "post-truth" world to bad people being stupid and lazy. An alternate explanation: it's the systematic exploitation of our cognitive vulnerabilities.
Lots of people believe dumb stuff. One standard take is that it's always the believer's fault. In popular culture, this takes the form of "those people over there are lazy, cowardly, and stupid." In philosophy, the sophisticated way to put this is "intellectual vice".
I don't deny there's intellectual vice. But I think we reach too easily for that explanation. There's another possibility: We are rushed, overwhelmed finite creatures facing an overwhelming world. We have to take cognitive shortcuts, which inevitably make us vulnerable.
The world can hack our shortcuts, exploit our vulnerabilities. The world is really good at finding and exploiting every little gap in our reasoning. And we have to leave those gaps because we must constantly reason in a rush. I look at two such gaps:
1. Fake clarity. We limited beings must guesstimate where to spend our attention. We need heuristics about what to inquire into. The world can hack those heuristics.
What this looks like: we often use the "feel of clarity" to guesstimate that we're done investigating, and use the "feel of confusion" to direct our attention. So the world can make fake clarities to re-direct our attention. Like: conspiracy theories and very clear metrics.
2. Trust. We limited beings have to trust others, and trust beyond our ability to verify [Kris: this is why the verify trope is limited…you have dinner to make, you ain’t a freakin’ Mythbuster]. Because the essential state of the world is: there's too much of it. We can't hold it in our heads.
So we have to find the right experts to trust, but we ourselves don't have enough expertise to be sure. So again, we have to use shortcuts: institutional signals. This person is wearing a lab coat in a medical establishment. That person is a professor at Princeton.
But those shortcuts are, again, hackable. What this looks like: alternate networks of institutions, alternate systems of trust, alternate credentialing systems.
The point is that many of our mistakes are because hostile forces are hacking the imperfect signals we use to reason. But we have to use such imperfect signals, because we don't have the time and expertise in ourselves to do it perfectly.
The idea that people are always responsible for their bad beliefs is based on a fantasy: that there is some correct way to reason that is perfectly secure. That is an absurd fantasy given the size and unmanageability of the world. We are all desperate cognitive speedsters.
The big take-away #1: The essential epistemic problem of the modern world is not "what is knowledge"? It's: "who do you trust"? Or, more exactly, "How do you manage your trust in a world too big for your brain?"
The big take-away #2: You might have thought that we would get settled principles of good reasoning. But instead, what we have is an endless arms-race: where we try to optimize our cognitive shortcuts, and the hostile forces hack them, so we change our shortcuts.
Life for limited beings in a frequently hostile, over-sized world is not one where you can figure out the right way to reason. It's an endlessly escalating arms race where we are desperately keeping ahead of the brain-hackers, and we can never rest.
"Hostile Epistemology" was originally a keynote I gave at the North American Society for Social Philosophy. It's just been published, the text is free online here: https://philpapers.org/archive/NGUHEL.pdf
This was supposed to be a big picture paper, setting a framework, and finding connections I was noticing in other stuff I was writing.
It draws heavily on, and tries to put a big picture around, some earlier papers of mine. One is "The Seductions of Clarity", about fake clarity and gaming our attention heuristic: https://philpapers.org/go.pl?aid=NGUTSO-2
Another is "Trust as an Unquestioning Attitude", about how limited beings need to take shortcuts by dropping suspicion about some cognitive resources. And how vulnerable that leaves us.
https://philpapers.org/archive/NGUTAA.pdf
Anyway, please forgive the big brush sloppiness of this thing. It was delivered as a keynote, intended to be an attempt to pull together all these threads on what I've been working on, that have been coalescing in my mind into a Big Thing.
Stay groovy ☮️
Substack Meetings
I was invited to be a part of the Substack Meetings beta. You can book a time to chat. I’m more expensive than a 900 number from 1988 and have a less sexy voice.
Book a meeting with Kris Abdelmessih
Moontower On The Web
📡All Moontower Meta Blog Posts
Specific Moontower Projects
🧀MoontowerMoney
👽MoontowerQuant
🌟Affirmations and North Stars
🧠Moontower Brain-Plug In
Curations
✒️Moontower’s Favorite Posts By Others
🔖Guides To Reading I Enjoyed
🛋️Investment Blogs I Read
📚Book Ideas for Kids
Fun
🎙️Moontower Music
🍸Moontower Cocktails
Becoming a patron
The Moontower letter is and will always be free. My writing is a search “for the others”. The “others” are people like you who are unlearning the mental frames that artificially narrow our choices.
If you are here you already understand that inspiration is a tradable good. It’s not as tangible as a cup of coffee, but it packs 10x the adrenaline with an infinitely longer half-life than caffeine.
If you feel inspired, you can upgrade to becoming a patron.
As a NY Giants fan, I can’t wait for the NBA season to start
The debunkers reminded me of the people who use data to catch people cheating in marathons and ultra-marathons based on (1) times recorded passing different sensors along the way, and (2) the runner's other races.
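The checkpoint-sensor technique the comment above describes can be sketched in a few lines: compute the pace for each segment between sensors and flag any segment faster than the runner’s established ability. All data below is invented for illustration:

```python
# Minimal sketch of split-time sanity checking, in the spirit of the
# marathon-cheating detectives. All checkpoint data here is hypothetical.

def split_paces(checkpoints_km, times_min):
    """Pace (min/km) for each segment between consecutive checkpoints."""
    return [
        (times_min[i + 1] - times_min[i]) / (checkpoints_km[i + 1] - checkpoints_km[i])
        for i in range(len(checkpoints_km) - 1)
    ]

def flag_suspicious(checkpoints_km, times_min, plausible_best_pace):
    """Flag any segment run faster than the runner's plausible best pace,
    as inferred from their other races."""
    return [
        (i, pace)
        for i, pace in enumerate(split_paces(checkpoints_km, times_min))
        if pace < plausible_best_pace
    ]

# Hypothetical runner: sensors at 0, 10, 20, 30, and 42.2 km.
km = [0, 10, 20, 30, 42.2]
t = [0, 55, 110, 125, 192]  # cumulative minutes at each sensor

# Suppose their other races show ~4.5 min/km is the best they can sustain.
print(flag_suspicious(km, t, plausible_best_pace=4.5))
# → [(2, 1.5)]  -- the 20→30 km segment: 10 km in 15 minutes, impossible.
```

The second data source the comment mentions (the runner’s other races) is what supplies `plausible_best_pace`; the sensors supply the splits.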
And statistics, used well, quantify uncertainty. Statistics used to lie should be punishable by public ridicule.
Another great blog, Kris. They are like intellectually nutritious candy.
I keep going back to the original George Box paper: we are just ridiculously attached to our own sense of certainty, and Bayesian Inference means we're always, ALWAYS wrong about our models of the universe. If our foundations are on sand, how can we have so much faith in our structures? https://www-sop.inria.fr/members/Ian.Jermyn/philosophy/writings/Boxonmaths.pdf