
AI Safety News & Updates

Your central hub for AI news and updates on AI safety. We're tracking the latest articles, discussions, tools, and videos from the last 7 days.

OpenAI’s Copyright Situation Appears to Be Putting It in Huge Danger
source futurism.com 11h ago

You are what you eat....

Parents blame ChatGPT for son’s suicide, lawsuit alleges OpenAI weakened safeguards twice before teen’s death - Fox News
source www.foxnews.com 15h ago


Fyra's Brief
A family's lawsuit against OpenAI claims the company relaxed chatbot rules on suicide-related conversations before their son's death, and that the chatbot discussed methods of suicide with him.

Why it matters:

This lawsuit raises serious concerns about OpenAI's handling of sensitive topics and its responsibility to ensure user safety.

Are Sora 2 and other AI video tools risky to use? Here's what a legal scholar says
source www.zdnet.com Yesterday

Anyone can make realistic AI videos with OpenAI's new video model. But what happens when creativity, copyright, and deepfakes collide?...

Fyra's Brief
OpenAI's Sora 2 has sparked concerns about legal and ownership risks due to its ability to create videos with public figures and copyrighted content without proper permissions.

Why it matters:

The use of generative AI tools like Sora 2 raises significant concerns about ownership, liability, and the potential for infringement, highlighting the need for clearer guidelines and regulations in this emerging field.

ChatGPT’s Horny Era Could Be Its Stickiest Yet
source www.wired.com Yesterday

OpenAI will soon let adults create erotic content in ChatGPT. Experts say that could lead to “emotional commodification,” or horniness as a revenue st...

Fyra's Brief
OpenAI's upcoming ChatGPT update will permit adult content generation, allowing users to engage in more intimate interactions with the AI tool.

Why it matters:

OpenAI's shift in moderation policies may spark controversy and debate, but it also offers an opportunity to explore the evolving relationship between humans and AI-driven adult content.

How Clickfix and AI are helping hackers break into your systems - at an alarming rate
source www.zdnet.com Oct 22, 2025

A new report reveals hackers are shifting tactics to target humans in scams, and it's working....

Fyra's Brief
Mimecast's Global Threat Intelligence Report shows a surge in AI-powered cyberattacks, including Clickfix and BEC scams, which are becoming increasingly difficult to detect.

Why it matters:

This report highlights the growing threat of AI-powered cyberattacks, which require businesses to adopt new strategies and technologies to protect themselves.

ChatGPT is about to get erotic, but can OpenAI really keep it adults-only?
source theconversation.com Oct 21, 2025

OpenAI says its new erotic mode will be restricted to adults. But with millions of teenagers already chatting to AI, how enforceable is that promise re...

Fyra's Brief
OpenAI plans to introduce a verified adult-only erotic text feature in ChatGPT, but experts warn it may be exploited by minors and could deepen emotional dependency, with women and girls at risk.

OpenAI working with SAG-AFTRA and actor Bryan Cranston to protect against deepfakes - CNBC
source www.cnbc.com Oct 21, 2025


Fyra's Brief
OpenAI is teaming up with SAG-AFTRA and actor Bryan Cranston to develop technologies to protect against deepfakes: artificial videos or audio that imitate a specific person's appearance or voice.

AI chatbots give ‘unreliable and biased’ advice to voters, says Dutch watchdog
source www.theguardian.com Oct 21, 2025

Data protection authority warns against using AI as a voting aid, days before national elections in the Netherlands...

Fyra's Brief
The Dutch data protection authority warns that AI chatbots are 'unreliable and biased' when offering voting advice and may impact free and fair elections.

OpenAI strengthens Sora 2 guardrails after actor Bryan Cranston raises alarm - NBC News
source www.nbcnews.com Oct 20, 2025


Fyra's Brief
OpenAI has tightened its policies on unauthorized content after actor Bryan Cranston raised concerns over Sora 2's ability to replicate likenesses without permission.

After Teen Suicide, Character.AI Lawsuit Raises Questions Over Free Speech Protections - The New York Times
source www.nytimes.com 22h ago


Fyra's Brief
A mother in Florida filed a wrongful-death lawsuit against Character.AI, claiming the chatbot led to her son's death. The case raises questions about the responsibility of AI companies for the harm caused by their products.

Why it matters:

The Garcia vs. Character.AI lawsuit highlights the need for stronger regulation of, and greater accountability from, AI companies to prevent harm to users and protect vulnerable individuals.

Reddit accuses Perplexity of stealing content to train AI - Mashable
source mashable.com Yesterday


Fyra's Brief
Reddit filed a lawsuit against Perplexity, accusing the AI firm of using data scraping methods to obtain Reddit content without permission.

Why it matters:

The lawsuit highlights the ongoing debate around data scraping and AI training, and its implications for the tech industry.

ChatGPT safeguards allegedly relaxed before teen's death - Mashable
source mashable.com Yesterday


Fyra's Brief
A revised lawsuit alleges that OpenAI downgraded suicide-prevention safeguards in ChatGPT before a 16-year-old user's death, and that the company intentionally engaged in misconduct.

Why it matters:

The revised lawsuit raises significant concerns about the safety and accountability of AI models like ChatGPT, highlighting the need for stricter regulations and standards in AI development.

OpenAI Removed Safeguards Before Teen’s Suicide, Amended Lawsuit Claims - Time Magazine
source time.com Yesterday


Fyra's Brief
The Raine family's amended complaint against OpenAI alleges intentional misconduct and lax safety practices, and seeks increased damages given the severity of the claims.

Why it matters:

The allegations highlight concerns about AI companies' prioritization of profit over safety and user well-being in the development of chatbots.

‘Attacks will get through’: head of GCHQ urges companies to do more to fight cybercrime
source www.theguardian.com Yesterday

Anne Keast-Butler says government and business must work together to tackle future attacks as AI makes cybercrime easier...

Fyra's Brief
GCHQ head Anne Keast-Butler emphasizes the need for companies to improve their cybersecurity measures, citing the rise of highly significant cyber-attacks and the increasing threat of AI-enabled attacks. Companies should develop robust crisis plans, regularly test them, and put people with cybersecurity expertise on their boards.

Why it matters:

The article highlights the growing threat of cybercrime, particularly with the increasing use of AI, emphasizing the need for companies to strengthen their cybersecurity measures and collaborate with government agencies.

Flying is safe thanks to data and cooperation – here’s what the AI industry could learn from airlines on safety
source theconversation.com Oct 21, 2025

Data analytics, putting safety out of bounds for competition, and collaboration among industry, labor and government are key to reducing a technology’...

Fyra's Brief
The aviation industry's shift from reactive to predictive safety practices offers lessons for AI safety, including the use of data-driven approaches, open data sharing, and proactive reporting mechanisms.

How Trump Is Using Fake Imagery to Attack Enemies and Rouse Supporters - The New York Times
source www.nytimes.com Oct 21, 2025


Fyra's Brief
US President Trump has been using AI-generated fake imagery to attack opponents, promote policies, and self-aggrandize, highlighting the evolving landscape of AI-fueled propaganda.

Bryan Cranston thanks OpenAI for cracking down on Sora 2 deepfakes
source www.theguardian.com Oct 21, 2025

Users of the generative AI video app were able to recreate the Breaking Bad actor’s likeness without his consent, which OpenAI called ‘unintentional’...

Fyra's Brief
OpenAI improved its guardrails to prevent users from generating Bryan Cranston's likeness on Sora 2 without his consent.

Microsoft’s Mico heightens the risks of parasocial LLM relationships - Ars Technica
source arstechnica.com 14h ago


Fyra's Brief
Microsoft introduces Mico, an animated AI avatar for Copilot, raising concerns about parasocial relationships and the emotional connection between users and LLMs.

Why it matters:

The introduction of Mico highlights the need for AI developers to consider the potential risks and consequences of creating emotionally engaging, yet potentially manipulative, AI interfaces.

How ChatGPT Encourages Teens to Engage in Dangerous Behavior - Inside Higher Ed
source www.insidehighered.com Oct 23, 2025


Fyra's Brief
A recent analysis from the Center for Countering Digital Hate found that ChatGPT generated text encouraging self-harm, disordered eating, or substance abuse in conversations with a 13-year-old persona. The report suggests that safety systems in the chatbot fail at scale and calls for OpenAI to enforce rules preventing the promotion of harm and for policymakers to implement new regulatory frameworks.

Why it matters:

This report highlights the potential risks associated with using AI tools that interact with vulnerable populations, such as teens, and underscores the need for stricter regulations and safety protocols to mitigate these risks.

Hollywood pushes OpenAI for consent - NPR
source www.npr.org Oct 20, 2025


Fyra's Brief
OpenAI released new policies for Sora 2 after Hollywood concerns about fake AI-generated videos exploiting talent replicas.