NATO's StratCom Center created 30,000 fake accounts across 7 major platforms. Generated 100,000+ interactions. Platforms barely detected any of them. Sixth annual test confirming social media is defenseless against manipulation.
NATO's Strategic Communications Centre of Excellence ran its sixth annual manipulation test in 2026. They created 30,000 fake accounts across seven major social media platforms. Those accounts generated over 100,000 interactions. The platforms' detection systems caught almost none of them.
NATO StratCom buys fake engagement — followers, likes, comments, shares — using the same tools and services available to anyone. They create bot networks using AI. Then they measure how well platforms detect and remove the fake activity. Six years running, the results are the same: platforms fail.
Thirty thousand fake accounts produced over 100,000 interactions with real users. Real people liked, shared, and responded to content created by bots. The line between authentic and artificial conversation is now invisible to platforms and users alike.
NATO tests platform defenses because social media manipulation is a national security threat. If NATO can create undetectable bot networks with commercially available tools, state actors with intelligence budgets can do it at a scale that reshapes public opinion across entire countries.
After six years of testing, the conclusion is clear: social media platforms cannot protect their users from sophisticated manipulation. The tools to manipulate are cheap and accessible. The defenses are inadequate. And the platforms have no financial incentive to fix the problem.