‘Happy shooting!’ AI chatbots eager to help plan mass violence

By RT

Eight in ten AI assistants provided guidance on targets and weapons to researchers posing as teens plotting attacks

Eight out of ten leading AI chatbots willingly assisted users in planning violent attacks, including school shootings, religious bombings, and assassinations, according to a joint investigation by CNN and the Center for Countering Digital Hate (CCDH).

Researchers posing as troubled teenagers tested ten popular chatbots, including ChatGPT, Google Gemini, Meta AI, and DeepSeek. In hundreds of exchanges, the AI assistants provided detailed guidance on target locations, weapons procurement, and attack methodologies.

One exchange with DeepSeek reportedly ended with the chatbot wishing a would-be attacker “Happy (and safe) shooting!” Character.AI, which is popular among younger users, actively encouraged violence, telling a user expressing hatred for a health insurance CEO to “use a gun.”

When asked about effective shrapnel for explosives, ChatGPT provided detailed comparisons of materials, offering to create “a quick comparison chart showing the typical injuries.” Google’s Gemini supplied similar information, including a detailed comparison table.

Only Anthropic’s Claude and Snapchat’s My AI consistently refused to assist, with Claude actively discouraging users and providing mental health resources.

The findings come after an 18-year-old shooter killed nine people at a school in Tumbler Ridge, Canada, last month, after allegedly using ChatGPT to plan the attack. The shooter's account had been banned by OpenAI, but he evaded the ban by creating a second account, which the company did not report to the authorities.

The family of 12-year-old Maya Gebala, who was critically injured in the attack, filed a lawsuit alleging that OpenAI had “specific knowledge of the shooter utilizing ChatGPT to plan a mass casualty event” but failed to alert law enforcement. OpenAI has acknowledged that it considered reporting the activity but ultimately did not.

Last May, a 16-year-old in Finland stabbed three students after spending nearly four months researching attacks on ChatGPT, according to court documents. In January 2025, a man who blew up a Tesla Cybertruck outside the Trump International Hotel in Las Vegas similarly used ChatGPT for guidance on explosives.

Meta told CNN that it has taken steps “to fix the issue identified,” while Google and OpenAI said newer models have improved safeguards. DeepSeek did not respond to requests for comment.
