
NHTSA investigations found 467 crashes involving Tesla's Autopilot, resulting in 54 injuries and 14 deaths. During the investigation, Tesla told NHTSA that internal "data and labeling limitations" prevented the company from uniformly identifying crashes in which FSD was engaged, effectively an admission of possible under-reporting. NHTSA also found Autopilot's design insufficient to maintain driver engagement. As of March 2026, NHTSA had escalated its investigation to cover 3.2 million vehicles over Full Self-Driving crashes in reduced-visibility conditions.
“Tesla's Autopilot is causing fatal accidents and the company is not being transparent about the real crash data. The system gives drivers false confidence.”
From “crazy” to confirmed
The Claim Is Made
This is the moment they called it crazy.
When a company tells regulators it cannot reliably track whether its own safety system was engaged during crashes, that admission raises a question far larger than any single accident: How can we trust the data we're using to evaluate whether technology is safe enough for public roads?
Tesla's Autopilot system has been linked to 467 crashes that resulted in 54 injuries and 14 deaths, according to investigations by the National Highway Traffic Safety Administration. That number alone would justify scrutiny. But what makes this case particularly revealing is not just the crashes themselves: it's what happened when NHTSA tried to understand them.
The company initially downplayed concerns about Autopilot's safety record, suggesting that crash statistics were being misinterpreted and that Autopilot actually reduced overall accident rates. This became a familiar refrain: the technology is misunderstood, users are the problem, and the data critics cite is incomplete or taken out of context. Tesla argued that comparing Autopilot crashes to human driving statistics was an apples-to-oranges exercise that painted its technology in an unfairly negative light.
Then NHTSA started digging deeper. What investigators discovered was telling: Tesla informed them that "data and labeling limitations" made it impossible for the company to uniformly identify which crashes occurred with Autopilot or Full Self-Driving engaged. In other words, Tesla's own internal systems couldn't reliably track when their autonomous features were active—the most basic prerequisite for understanding whether those features actually caused the accidents being investigated.
This wasn't a minor technical hiccup. It was an admission that the company had been operating without adequate safeguards to even know when their safety system was in use. It also meant that the numbers regulators were working with could be dramatically undercounting the real problem. If Tesla couldn't tell regulators which crashes involved Autopilot, how many others slipped through the cracks?
The investigation also found that Autopilot's design was fundamentally insufficient to maintain driver engagement, essentially confirming what skeptics had warned about for years. The system could not reliably keep drivers alert and ready to take over when needed, yet it was marketed as capable of handling extended periods of autonomous driving.
By March 2026, NHTSA escalated its investigation to cover 3.2 million Tesla vehicles and specifically focused on Full Self-Driving crashes occurring in reduced visibility conditions. The scope kept expanding because each layer of investigation revealed the previous one was incomplete.
This case matters because it exposes a critical vulnerability in how we regulate emerging technology. Companies control the data. They decide what gets tracked, how it gets labeled, and what gets reported. When those same companies have financial incentives to minimize safety concerns, the regulatory system depends entirely on whether companies choose to maintain honest records. Tesla's admission revealed they hadn't even attempted to do that.
The broader lesson transcends Tesla. It demonstrates that companies' testimony about their own safety systems cannot be taken at face value. Regulators need independent verification capabilities built into the process from the beginning, not gaps uncovered years later during accident investigations. Until oversight mechanisms can operate independently of corporate recordkeeping, claims that autonomous vehicle technology is safe remain fundamentally unverifiable.
Beat the odds
This had a 0.4% chance of leaking, but someone talked anyway.
Conspirators (network): ~100
Secret kept: 10 years
Time to 95% exposure: 500+ years