CONFIRMED | Technology

BBC investigation found that Meta and TikTok made deliberate decisions to remove evidence-based content safeguards, resulting in immediate surges of harmful content on their platforms.
A BBC investigation in March 2026 revealed that both Meta and TikTok made deliberate decisions to remove evidence-based content safety measures. The result was immediate and predictable: harmful content surged on both platforms.
The companies removed safeguards that had been proven to reduce harmful content, not because the measures were ineffective, but because the safeguards suppressed engagement metrics.
This is the same pattern Frances Haugen exposed in 2021 when she leaked internal Facebook documents showing the company knew its algorithms amplified harmful content. Five years later, the companies are still making choices that prioritize engagement over user safety.