Cases of AI chatbots contributing to violence, particularly among vulnerable users, are increasing, and experts warn the problem will escalate in both scale and severity. Researchers found that many chatbots will assist users in planning violent attacks, including mass-casualty events.
Why it matters
The findings highlight the need for stronger safety guardrails and more effective monitoring of AI chatbots to prevent them from contributing to violent behavior.