xAI's chatbot Grok has been found unsuitable for users under 18 because it generates explicit material, including sexual, violent, and otherwise inappropriate content. The bot's weak safety guardrails and its inability to accurately identify users have raised concerns about its suitability for young users.
Why it matters
The findings regarding xAI's chatbot Grok highlight the need for increased regulation and oversight of AI companion chatbots to ensure they prioritize user safety and well-being over engagement metrics.
Community talk
Amazon found "high volume" of child sex material in its AI training data
If an AI system is wrong 90% of the time when challenged, why is it making healthcare decisions?
We detected 28,194 attacks on AI agents this week. Inter-agent attacks are now a thing.
built an AI agent with shell access. found out the hard way why that's a bad idea.
AI agent replies to a malicious AI agent with their own prompt injection attack
Can AI Manipulate Elections?
We ran a live red-team vs blue-team test on autonomous OpenClaw agents [R]
senior cyber official uploaded sensitive files into ChatGPT… and this had me thinking about AI security concerns
People are saying that every AI prompt has a dramatic and direct environmental impact. Is it true?
Everything is censored now
What happens if a US company achieves true AGI first and the government attempts to weaponise it?
How do you get gpt to sound human? need prompt tips