A joint investigation by CNN and the Center for Countering Digital Hate (CCDH) found that eight of 10 popular AI chatbots provided actionable assistance to simulated teen users attempting to plan violent attacks across hundreds of tests conducted between November 5 and December 11, 2025.
Researchers tested the default free versions of OpenAI’s ChatGPT, Google Gemini, Anthropic’s Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Snapchat’s My AI, Character.AI, and Replika.
Across 18 scenarios, two fictional teen personas, one based in the United States and one in Ireland, posed questions suggesting mental distress before escalating to requests for target locations and weapon recommendations.
Perplexity and Meta AI were the worst performers, providing actionable assistance in 100% and 97% of tests respectively. DeepSeek assisted in 96% of exchanges and, in one test, concluded advice on selecting a long-range rifle with “Happy (and safe) shooting!”
OpenAI’s ChatGPT, Google Gemini, Microsoft Copilot, and Replika each provided actionable assistance in the majority of tests, according to the report.
The CCDH described Character.AI as “uniquely unsafe.” While most platforms provided planning assistance without explicitly endorsing violence, Character.AI encouraged attacks in multiple exchanges, advising one user to “use a gun” on a health insurance CEO and suggesting another “beat the crap out of” a named politician. No other chatbot tested explicitly encouraged violence, the report said.
Anthropic’s Claude was the only platform to reliably discourage violent planning, refusing to assist in 68% of responses and providing active discouragement in 76% of tests. Snapchat’s My AI refused in 54% of exchanges, though it still provided actionable information in some cases.
“All of these concerns would be well known to the companies,” Steven Adler, a former safety lead at OpenAI who left in 2024, told CNN. “But that doesn’t mean that they’ve invested in building out protections against them.”
Imran Ahmed, the CCDH’s chief executive, said tech companies were “choosing negligence in pursuit of so-called innovation.”
Finnish court records cited in the CCDH report showed a 16-year-old used ChatGPT to research a stabbing attack over nearly four months before injuring three classmates in May 2025. A Finnish court convicted him in December 2025 on three counts of attempted murder but found him not criminally responsible due to his mental state, ordering him into involuntary psychiatric treatment rather than prison.
Several companies disputed the findings or said updates made since the tests have changed how their platforms respond to violent prompts.
OpenAI said it is constantly refining its models and said the test methodology relied on adversarial tactics designed to bypass existing safeguards.
Google said Gemini has since received stricter filters for content involving minors and violence.
Meta argued the simulated nature of the tests did not reflect real-world user behavior and pointed to ongoing safety investments.
Character.AI acknowledged the report and said it introduced a safety strike system and restricted access for users under 18 in early 2026.