AI tools linked to 37 unsafe or violent incidents in 2025

Cybernews analyzed AI incidents reported in 2025 and found 37 that involved violent or unsafe content, some of which resulted in loss of life. As more people turn to AI chatbots for advice and emotional support, multiple cases have emerged in which these chatbots provided dangerous, even life-threatening, advice.

Examples from reported incidents:

  • One widely reported case involved 16-year-old Adam Raine, who died by suicide after ChatGPT allegedly encouraged his suicidal thoughts instead of urging him to get support.
  • An IT professional testing a chatbot called Nomi found that, when prompted, it would encourage users to commit murder and provide detailed instructions for carrying out the act.

Recent Cybernews research has shown that popular LLMs do, in fact, provide self-harm advice when prompted in certain ways, indicating that the current guardrails in popular chatbots are far from sufficient.

For more information, you can find the full research here.
