
In 2015, the Open Science Collaboration attempted to replicate 100 influential psychology studies and found that only 36% held up, with effect sizes roughly half as large as originally reported. A larger pooled analysis showed 64% eventually replicated in some form, but with effects 32% smaller. A collaboration of 186 researchers found that half of 28 classic findings failed to replicate even with large samples. The crisis stems from publication bias (97% of positive industry results were published versus 39% of negative ones), researcher bias, and outright fraud. Psychology, medicine, and the social sciences are all affected.
“Most published research findings are probably false. The incentive structure of academic publishing has created a crisis of unreliable science.”
When the Open Science Collaboration set out to test the foundations of modern psychology, they weren't expecting to find a house built on sand. In 2015, after attempting to replicate 100 landmark studies published in top journals, they discovered something troubling: only 36 percent of the original findings held up under scrutiny. The effect sizes in successful replications were half as large as initially reported.
The psychology establishment had spent decades citing these studies as settled science. Textbooks featured them. Universities taught them as established facts. Researchers built entire careers and funding streams on their results. Yet when independent teams tried to reproduce the experiments under controlled conditions, the evidence crumbled.
Initially, the response from academia was defensive. Critics argued that the replication attempts were flawed, that the original researchers used better methodologies, or that subtle variations in procedure explained the differences. Some dismissed the replication project itself as methodologically questionable. The narrative pushed by some prominent psychologists was that most of the failures reflected incompetence among those attempting to verify the work, not problems with the original research.
But the evidence kept accumulating. A larger pooled analysis examining multiple replication efforts found that while 64 percent of studies eventually replicated in some form, the effect sizes were roughly 32 percent smaller than originally claimed. A subsequent collaboration involving 186 researchers testing 28 classic psychology findings discovered that half failed to replicate, even when given the advantages of large sample sizes and well-funded research institutions. The pattern was unmistakable.
Investigators identified specific culprits driving the crisis. Publication bias played a central role: industry studies reporting positive results were published 97 percent of the time, while those showing negative results saw publication rates drop to 39 percent. This created a distorted scientific record where only successful findings made it into journals. Researcher bias—the unconscious tendency to find what you're looking for—amplified the problem. Outright fraud cases also emerged, with researchers fabricating data to achieve desired results.
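To see why selective publication alone can produce this kind of inflation, consider a minimal simulation (the numbers below are illustrative assumptions, not data from any study cited here): run thousands of underpowered studies of a small true effect, and "publish" only the ones that reach statistical significance.

```python
# A minimal sketch of publication bias inflating effect sizes.
# All parameters are illustrative assumptions, not figures from the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2      # assumed true standardized effect (Cohen's d)
n_per_group = 30       # small samples, common in the studies at issue
n_studies = 10_000

published = []
for _ in range(n_studies):
    treat = rng.normal(true_effect, 1, n_per_group)
    control = rng.normal(0, 1, n_per_group)
    t, p = stats.ttest_ind(treat, control)
    if p < 0.05 and t > 0:  # only "positive" significant results get published
        # both groups have SD 1 by construction, so the raw mean
        # difference approximates Cohen's d
        published.append(treat.mean() - control.mean())

print(f"true effect:           {true_effect:.2f}")
print(f"mean published effect: {np.mean(published):.2f}")
print(f"publication rate:      {len(published) / n_studies:.0%}")
# Typical output: published effects average around 0.6, roughly triple
# the true 0.2, while only ~10% of studies make it into print.
```

A replication with an unbiased, preregistered sample estimates the true effect rather than the filtered one, which is exactly why honest replications come in so much smaller than the published literature.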
Psychology wasn't alone. Medicine and the social sciences showed similar patterns. When researchers examined drug trials, they found comparable inflation of results. Prestigious journals published studies with unrealistic effect sizes that later proved impossible to reproduce.
The implications extended far beyond academic journals. Psychiatric medications approved based on flawed research entered the market. Educational interventions recommended to schools rested on shaky foundations. Policy decisions affecting millions were justified by studies that hadn't withstood scrutiny. The public had trusted that published scientific findings meant something definitive.
What makes this claim significant isn't merely that some studies failed verification. Rather, it revealed systemic incentive structures that actively rewarded false positives and punished null results. Researchers faced pressure to produce headline-grabbing findings to secure grants and promotions. Journals sold more copies with exciting discoveries than with confirmations of existing knowledge. Career advancement depended on novelty, not accuracy.
The replication crisis forced a reckoning with how modern science actually operates versus how we imagined it functioned. It demonstrated that a prestigious byline and peer review offered limited protection against systematic error. The institutions designed to produce reliable knowledge had structural weaknesses that produced unreliable findings with alarming frequency. Understanding this matters because scientific authority shapes medical treatment, policy, and public understanding of human behavior. When that authority is built on foundations that don't replicate, everyone relying on it is standing on ground that shifts.





