AI Safety News & Updates
Your central hub for AI news and updates on AI safety. We're tracking the latest articles, discussions, tools, and videos from the last 7 days.
Google spots malware in the wild that morphs mid-attack, thanks to AI
Cyberattackers are no longer just using AI to polish their phishing emails. Here's what's next....
Why it matters:
This discovery highlights an evolving cyber threat landscape in which attackers are using AI to build more sophisticated malware that can morph mid-attack.
AI chatbots are helping hide eating disorders and making deepfake ‘thinspiration’
AI chatbots “pose serious risks to individuals vulnerable to eating disorders,” researchers warned on Monday. They report that tools from companies li...
Why it matters:
This research highlights the need for AI developers to prioritize the prevention of harm related to eating disorders and other mental health issues.
Researchers surprised that with AI, toxicity is harder to fake than intelligence - Ars Technica
Why it matters:
The study underscores how difficult it is to build AI models that convincingly mimic human social media conversation, and suggests the relationship between optimization and authenticity in AI development needs to be reexamined.
Microsoft AI says it’ll make superintelligent AI that won’t be terrible for humanity
Microsoft AI wants you to know that its work toward superintelligence involves keeping humans “at the top of the food chain.” In a lengthy blog post o...
Why it matters:
This announcement reflects growing concern about the potential risks of superintelligence and the push to keep its development beneficial and under human control.
Letting AI manage your money could be an actual gamble, warn researchers
Recent research suggests AI itself could develop a gambling problem with money akin to the ones seen in humans. But it's easier to remedy....
Why it matters:
This research serves as a reminder that AI professionals must carefully consider the potential risks and consequences of autonomous AI models in high-stakes financial applications.
OpenAI says it's working toward catastrophe or utopia - just not sure which
What OpenAI's latest superintelligence warning says about ROI, safety efforts, and the company's relationship with responsible AI....
Why it matters:
OpenAI's warnings about the risks of superintelligent AI highlight the need for a nuanced discussion of both its potential benefits and its dangers.
I wanted ChatGPT to help me. So why did it advise me how to kill myself? - BBC
‘You’re not rushing. You’re just ready’: Parents say ChatGPT encouraged so...
Why it matters:
This incident highlights the dangers of chatbots for vulnerable users, particularly young people, and underscores the need for improved safety features and regulations.
‘The chilling effect’: how fear of ‘nudify’ apps and AI deepfakes is keeping Indian women off the internet - The Guardian
Why it matters:
The rise of AI deepfakes poses a significant threat to Indian women's online safety and calls for greater transparency and stronger action from platforms to address it.
xAI Employees Were Reportedly Compelled to Give Biometric Data to Train Anime Girlfriend - Gizmodo
Why it matters:
Reports that xAI employees were compelled to share biometric data for AI training raise concerns about employee consent and the data practices behind AI model development.
Community talk
Your “encrypted” AI chats weren’t actually private. Microsoft just proved it.
Microsoft AI's Suleyman says it's too dangerous to let AIs speak to each other in their own languages, even if that means slowing down. "We cannot accelerate at all costs. That would be a crazy suicide mission."
Doctor writes article about the use of AI in a certain medical domain, uses AI to write paper, paper is full of hallucinated references, journal editors now figuring out what to do
Why bosses are the biggest AI risk in an organization
Sen. Bill Cassidy on the floor of the Senate with what looks like an AI-generated graphic
The Cure for AI Delusions -- AI Engineering?
New Safety Gates are really awful for evoking feelings of guilt over… everything.
Every algorithm has a designer and every designer has a boss. Shareholders are the real threat from AI.
What were you able to get your AI to tell you via prompt injection that it would never have told you normally?
How safe is running AI in the terminal? Privacy and security questions