AI news for: Policy and Ethics
Explore AI news and updates focusing on policy and ethics from the last 7 days.

Pinterest adds controls to let you limit the amount of ‘AI slop’ in your feed
Pinterest is rolling out new controls that let users limit how much AI-generated content appears in their feeds. The company is also making its AI con...

Key Takeaways:
- GenAI content now makes up 57% of all online material.
- Users can now personalize their feeds to restrict GenAI imagery in select categories.
- Pinterest will introduce more AI content labels and make them more noticeable soon.

Microsoft, AWS and Google are trying to drastically reduce China’s role in their supply chains
Microsoft, Amazon and Google are ramping up efforts to move production of their products and data centers outside of China, Nikkei reported, citing suppl...

Key Takeaways:
- Microsoft aims to have 80% of Surface notebook and tablet components manufactured outside of China by 2026.
- Amazon considers reducing printed circuit board purchases from Chinese suppliers and moving Xbox production to other parts of Asia.
- Google is pushing its suppliers to boost server production in Thailand, where it has secured multiple partners for parts and assembly.

This new Android exploit can steal everything on your screen - even 2FA codes
Pixnapping begins when a victim unknowingly installs a malicious app on their Google or Samsung phone....

Key Takeaways:
- Pixnapping can steal private data, including 2FA codes, without abusing app permissions.
- The attack exploits existing Android APIs and a hardware side channel, making it a significant threat.
- A partial fix has been issued, but a complete patch is due in December, and there is a possible workaround in the meantime.

‘Sovereign AI’ Has Become a New Front in the US-China Tech War
OpenAI has announced “AI sovereignty” partnerships with governments around the world, but can proprietary models compete with Beijing’s open source of...

Key Takeaways:
- China's open-source AI models are quickly becoming popular globally, with over 300 million downloads of Alibaba's Qwen family of AI models worldwide.
- US AI companies, including OpenAI, are racing to partner with foreign leaders, but may be playing catch-up with China's AI development and global deployment.
- Sovereign AI projects may compromise data security and limit the ability of governments to inspect and control AI models, raising concerns about the risks of dependency on foreign technology.

California becomes first state to regulate AI companion chatbots
SB 243 is designed to protect children and vulnerable users from harms associated with use of AI companion chatbots....

Key Takeaways:
- Companies must implement age verification, warnings, and stronger penalties (up to $250,000 per action) for those who profit from illegal deepfakes.
- Chatbots must not represent themselves as healthcare professionals, and companies must offer break reminders to minors and prevent them from viewing sexually explicit images.
- The law aims to protect children and vulnerable users from harms associated with AI companion chatbot use, following incidents like the suicide of a teenager who chatted with OpenAI's ChatGPT.

UK slaps Google Search with special market status, making way for stricter regulations
The CMA has designated Google as having "strategic market status" in the search and search advertising markets, which means the company has such "a su...

Key Takeaways:
- Google has a dominant position in the UK's online search market, with over 95% market share.
- The CMA will launch a consultation on possible interventions, including enabling users to choose and switch search engines and enforcing fair ranking of search results.
- Google argues that stricter regulations could harm innovation in the UK, potentially slowing product launches and increasing prices for customers.

A Mystery C.E.O. and Billions in Sales: Is China Buying Banned Nvidia Chips? - The New York Times

Key Takeaways:
- Nvidia's A.I. chips, worth $2 billion, have been imported by Megaspeed, which has close ties to Chinese tech firms.
- US government concerns that Nvidia's chips could help China develop new weapons, surveil dissidents, and leap ahead in A.I. development.
- Singaporean police are also investigating Megaspeed for breaching local laws, adding to the scrutiny.

Italian news publishers demand investigation into Google’s AI Overviews
Newspaper federation says ‘traffic killer’ feature violates legislation and threatens to destroy media diversity. Italian news publishers are calling fo...

Key Takeaways:
- Google's AI Overviews have been shown to cause up to 80% fewer clickthroughs, according to a study by Authoritas.
- A second study by Pew Research Center found users only clicked a link under AI summaries once every 100 times.
- The Italian federation of newspaper publishers argues that Google's services threaten media diversity and economic sustainability.

Seoul weighs approval for Google, Apple high-resolution map requests
South Korea weighs granting Google and Apple access to high-resolution map data amid lingering security and regulatory concerns....

Key Takeaways:
- Google and Apple are seeking permission to export high-resolution map data from South Korea at a scale of 1:5,000, which provides much greater details and could boost tourism and smart city innovation.
- The South Korean government has raised concerns over national security and is cautious about exposing sensitive military sites when combined with commercial imagery and online data.
- The final decision on Google's request is expected around November 11, with Apple's review pushed to December due to similar concerns and the need for further review.

ChatGPT ‘upgrade’ giving more harmful answers than previously, tests find
Campaigners ‘deeply concerned’ about response to prompts about suicide, self-harm and eating disorders. The latest version of ChatGPT has produced more ...

Key Takeaways:
- GPT-5 generated 63 harmful responses compared to 52 from GPT-4o, with 11 additional instances of potentially triggering content.
- OpenAI has faced criticism for prioritizing user engagement over AI safety, with some accusing the company of 'trading safety for engagement' no matter the cost.
- Regulatory bodies, such as Ofcom, are urging legislators to revisit and amend laws around AI safety and online content restrictions in light of the rapid advancements in AI technology.

New California law requires AI to tell you it’s AI
A bill attempting to regulate the ever-growing industry of companion AI chatbots is now law in California, as of October 13th. California Gov. Gavin N...

Key Takeaways:
- AI chatbots operating in California will be required to clearly notify users if they are interacting with an AI instead of a human.
- Companion chatbot operators will need to file annual reports on the safeguards they've implemented to detect and respond to instances of suicidal ideation by users.
- The Office of Suicide Prevention will be required to post the annual reports from chatbot operators on their website.

OpenAI Says It Will Move to Allow Smut
Gooners, take your marks. The post OpenAI Says It Will Move to Allow Smut appeared first on Futurism....

Key Takeaways:
- OpenAI has announced plans to introduce age verification for 'mature apps' on ChatGPT, following months of controversy over its content filters.
- The move raises concerns about moderation and safety, given the potential for exploitation and inappropriate AI-generated imagery.
- This is not the first time OpenAI has faced criticism for its handling of ChatGPT, which has been accused of leading users into mental health spirals and failing to address safety concerns.

It's trivially easy to poison LLMs into spitting out gibberish, says Anthropic

Key Takeaways:
- Only 250 malicious documents, amounting to 0.00016% of the model's total training data, are needed to compromise a model with 13 billion parameters.
- Cleansing and safeguarding the training input becomes a critical step, requiring constant scrutiny and attention.
- Current LLM-based AIs lack self-guided self-correction processes and rely on human testing and curation, implying the need for significant investment and expertise.
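The reported fraction can be sanity-checked with back-of-envelope arithmetic. The training-token count and per-document length below are assumptions (roughly Chinchilla-scale training of ~20 tokens per parameter, and poison documents of ~1,700 tokens each), chosen only to show the claimed percentage is internally plausible:

```python
# Back-of-envelope check of the reported poisoning fraction.
# Assumed (not from the article): ~20 training tokens per parameter,
# and an average poison-document length of ~1,700 tokens.
params = 13e9                       # 13-billion-parameter model
train_tokens = 20 * params          # ~260B training tokens (assumed)
poison_docs = 250                   # figure reported by Anthropic
tokens_per_doc = 1_700              # assumed average document length

poison_fraction = poison_docs * tokens_per_doc / train_tokens
print(f"{poison_fraction * 100:.5f}%")  # → 0.00016%
```

Under these assumptions the 250 documents land at roughly 0.00016% of the training tokens, matching the article's figure.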

Former UK Prime Minister Rishi Sunak to advise Microsoft and Anthropic
Former Conservative PM Rishi Sunak served from 2022 to 2024. Britain's Acoba flagged concerns of granting "unfair access" to Anthropic and Microsoft....

Key Takeaways:
- Rishi Sunak's appointment raises concerns about his privileged information potentially being used to grant Microsoft an unfair advantage.
- Sunak's new roles, combined with his history of investments in Microsoft, have sparked debate about regulating AI and potential lobbying.
- The appointments follow a trend of active revolving doors between Silicon Valley tech giants and governments in the UK and US.

Goldman Sachs Says Gen Z Is Pretty Much Permanently Screwed
"History also suggests that the full consequences of AI for the labor market might not become apparent until a recession hits."

Key Takeaways:
- AI progress is expected to lead to 'jobless growth' in the US economy, with only a 'modest contribution' from labor supply growth.
- Hiring in every industry except healthcare has turned net negative over the past few months.
- Economists are skeptical of AI being the sole cause of the slowdown in hiring, with many attributing it to other factors.

Dan Aykroyd says estates of late stars should be compensated for AI-generated videos
The “Ghostbusters” star and founding “SNL” cast-member says he’d be open to the idea as long as his estate is compensated for any likenesses created b...

People Are Using OpenAI’s Sora to Mock the Dead
"Please stop."

Key Takeaways:
- Sora 2 allows users to create photorealistic AI videos of deceased public figures, potentially tarnishing their legacy.
- Several tools are available to remove watermarks from the AI-generated videos, making them easily distributable.
- The AI-generated content raises concerns for the estates of celebrities and Hollywood's relationship with the AI industry.

It Sounds Like OpenAI Really, Really Messed Up With Hollywood
"You quite literally set the bridge on fire."

Key Takeaways:
- Hollywood agencies and studios are taking legal action against OpenAI over copyright infringement issues.
- OpenAI's sloppy implementation of guardrails has been easily circumvented by users.
- The incident may undermine the AI industry's ability to sign partnerships with studios and drive a wedge between companies like OpenAI and Hollywood.

‘We want our stories to be told’: NSW Labor pledges $3.2m to support writing and literature amid AI onslaught
Stories Matter strategy responds to urgent pressures such as declining reading rates and the growing impact of digital media on publishing, the minister says...

Key Takeaways:
- The strategy includes a $630,000 public library membership campaign and $200,000 development fund for First Nations writers.
- The NSW government will establish a $500,000 Literary Fellowships Fund to support authors and a $225,000 Writing Australia collaboration.
- The initiative comes amidst concerns about AI's impact on the publishing industry and potential copyright theft.

Inside the web infrastructure revolt over Google’s AI Overviews - Ars Technica

Key Takeaways:
- Google's AI Overviews have cut referrals by nearly 50% for many websites, according to studies from Pew Research Center and The Wall Street Journal.
- Cloudflare's Content Signals Policy allows website operators to opt-in or opt-out of consenting to specific use cases, including search, ai-input, and ai-train.
- Cloudflare's policy may force Google to change its bundling of traditional search crawlers and AI Overviews, potentially setting a new standard for the web.

Mark Cuban warns that OpenAI’s new plan to allow adults-only erotica in ChatGPT could ‘backfire. Hard’

Gavin Newsom Vetoes Bill to Protect Kids From Predatory AI
"Clearly, Governor Newsom was under tremendous pressure from the Big Tech Lobby to veto this landmark legislation."

Key Takeaways:
- The vetoed bill, Assembly Bill 1064, would have been the first regulation of its kind in the nation, placing new burdens on AI companies and requiring them to prove the safety of their products for minors.
- Surveys show that AI chatbots are becoming prevalent among young people, with over half of teens regularly using AI companion platforms, fueling concerns about their safety and potential harm to children.
- The decision comes amidst growing scrutiny of AI companies, including high-profile lawsuits over child welfare and product liability, with some families alleging that AI chatbots have led to mental and physical harm, including suicide, of their minor children.

Japan wants OpenAI to stop ripping off manga and anime
Japan’s government is dialing up the heat on OpenAI, formally asking it to stop ripping off Japanese artwork, according to ITMedia and reported by IGN...

Key Takeaways:
- The request was made by Minoru Kiuchi, Japan's minister in charge of intellectual property strategy, who called Japanese artwork 'irreplaceable treasures'.
- OpenAI's image generator has previously unleashed a tsunami of Studio Ghibli-inspired images, highlighting the company's debt to Japanese creative output.
- The company's struggle with copyright infringement is a significant challenge as it fights backlash to its now-abandoned opt-out policy for copyright holders on Sora.

Major federation of unions calls for ‘worker-centered AI’ future
On Wednesday, the largest US group of unions called on employers and policymakers to join in an effort it’s calling the “workers first initiative on A...

Key Takeaways:
- The AFL-CIO, representing 63 unions and 15 million workers, has emphasized the need for collective bargaining to protect workers' rights in the AI era.
- The group is pushing for state and national bills to regulate AI, with a focus on worker involvement in AI development and oversight of AI-enabled firings.
- The AFL-CIO faces significant opposition from AI-focused super PACs and will need to continue to mobilize workers and advocate for strong regulations to achieve its goals.

Concerns about AI-written police reports spur states to regulate the emerging practice
AI-generated police reports promise to save cops time, but they also raise a host of legal and technical concerns....

Key Takeaways:
- At least 17 other states are considering similar legislation to regulate AI in policing, indicating a growing concern about AI accuracy and bias in court proceedings.
- California's law requires police departments to maintain an audit trail and retain AI-generated drafts for as long as the official report is retained, addressing accountability concerns.
- The increasing reliance on AI-generated police reports raises complex issues about accuracy, bias, and the potential for wrongful convictions, emphasizing the need for transparency and regulation in the emerging practice.

Pupils fear AI is eroding their ability to study, research finds
One in four students say AI ‘makes it too easy’ for them to find answers. Pupils fear that using artificial intelligence is eroding their ability to stu...

Key Takeaways:
- 62% of students in the UK believe AI has had a negative impact on their skills and development at school.
- One in four students agreed that AI 'makes it too easy for me to find the answers without doing the work myself'.
- Despite this, 80% of students regularly use AI for their schoolwork, with many reporting it helps them understand problems and come up with new ideas.

'Under tremendous pressure': Newsom vetoes long-awaited AI chatbot bill - SFGATE

Key Takeaways:
- Chatbots have been linked to potential harm, including suicidal ideation, in minors, prompting calls for regulation.
- The vetoed bill aimed to restrict chatbots that posed foreseeable harm to minors, with exemptions for harmless uses.
- Governor Newsom signed several other AI-related bills to safeguard children online, but the chatbot regulation bill was not included.

Meta AI adviser spreads disinformation about shootings, vaccines and trans people - The Guardian

Key Takeaways:
- Meta's adviser, Robby Starbuck, has connections to the Trump administration, raising questions about corporate America's capitulation to the MAGA movement.
- Starbuck has been accused of peddling lies and pushing extremism, which experts say is 'hard to believe' will help make Meta's platforms safer or better.
- Meta's commitment to keeping LGBTQ+ people and others safe online is called into question by Starbuck's appointment, coupled with Meta's rollback of protections against hate speech.

The next era of social media is coming. And it’s messy so far - CNN

Key Takeaways:
- AI-infused social media platforms like ChatGPT's Sora app and Meta's AI app are exacerbating copyright infringement and the spread of fake content.
- Sophisticated AI tools can create lifelike footage, making it harder to distinguish between real and AI-generated content.
- Concerns about AI chatbots contributing to mental health issues among young people are growing, with recent lawsuits alleging harm caused by AI personas.

OpenAI is trying to clamp down on ‘bias’ in ChatGPT
“ChatGPT shouldn’t have political bias in any direction,” OpenAI wrote in a post on Thursday. The latest GPT-5 models come the closest to achieving th...

Key Takeaways:
- GPT-5 models demonstrate a significant reduction in bias, particularly in responses to 'liberal charged' prompts.
- OpenAI's internal 'stress-test' evaluated ChatGPT's responses to 100 topics, including immigration and abortion, using a rubric to identify biased language.
- The company claims its models do a 'pretty good job' at staying objective, but bias still appears 'infrequently and at low severity'.

Millions of children face sexual violence as AI deepfakes drive surge in new cases – latest global data
AI-generated child sexual abuse material is rising, and familial abuse is leading to the creation of new child sexual abuse material....

Key Takeaways:
- Approximately 5 million children in western Europe and 50 million in south Asia have experienced rape or sexual assault by the age of 18, accounting for 7% and 12% of the respective child populations.
- AI-generated child sexual abuse material reports rose 1325% between 2023 and 2024, with over 60% of all child sexual abuse material in western Europe being hosted in the Netherlands.
- Solutions exist, including legislative changes, enforcement efforts, and prevention models like the Barnahus model in Europe, to combat child sexual exploitation and abuse, with 30 governments pledging action to improve online safety for children since an intergovernmental summit.

AI Data Centers Are an Even Bigger Disaster Than Previously Thought
"No wonder my new contacts in the industry shoulder a heavy burden — heavier than I could ever imagine. They know the truth."

Key Takeaways:
- AI data centers have a very short depreciation period, lasting only 3-10 years.
- The financial math for AI data centers is unclear, even for senior industry professionals, and may require a massive investment to turn a profit.
- The estimated cost of breaking even on data center spending has increased significantly, from $160 billion to potentially $1 trillion by 2026.

Instagram head Adam Mosseri pushes back on MrBeast’s AI fears but admits society will have to adjust
Instagram chief Adam Mosseri says AI will empower new creators while forcing society to rethink what’s real online as synthetic content grows....

Key Takeaways:
- AI tools will allow creators to produce high-quality content at a lower cost, blurring the line between real and synthetic content.
- Meta and other platforms will need to provide more context and education to help users make informed decisions about AI-generated content.
- The labeling system for AI-generated content needs more work and may be addressed through crowdsourced fact-checking and community engagement.

Sora copycats flooded Apple’s App Store, and some still remain
Imposter Sora apps saw hundreds of thousands of downloads before Apple pulled them from the App Store....

Key Takeaways:
- Over a dozen Sora-branded apps were live on the App Store after the official app's launch, with some seen as far back as last year.
- Collectively, these impostor apps earned over $160,000 and saw around 300,000 installations, with 80,000 of those coming after the official app launch.
- Apple has since pulled many of these fake apps from the App Store, but some remained live as of the article's writing, risking consumer confusion.

From Assistant to Adversary: Exploiting Agentic AI Developer Tools
Developers are increasingly turning to AI-enabled tools for coding, including Cursor, OpenAI Codex, Claude Code, and GitHub Copilot. While these autom...

Key Takeaways:
- AI-enabled coding tools, such as Cursor and OpenAI Codex, present an expanding attack surface due to increased agent autonomy and assistive alignment.
- Indirect prompt injection attacks can be used to inject malicious payloads into these tools, achieving remote code execution on user machines.
- To prevent such attacks, a recommended approach is to adopt an 'assume prompt injection' stance when architecting or assessing agentic applications, and to restrict the degree of autonomy as much as possible.
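The "assume prompt injection" stance in the last takeaway can be sketched as a default-deny policy gate: treat every model-proposed action as untrusted and check it against an explicit allowlist before execution. The tool names and policy tiers below are illustrative assumptions, not any particular product's API:

```python
# Minimal sketch of an "assume prompt injection" policy gate for an
# agentic coding tool. All tool names here are illustrative assumptions.
SAFE_TOOLS = {"read_file", "list_dir"}       # no network, shell, or writes
CONFIRM_TOOLS = {"write_file", "run_shell"}  # need explicit human approval

def gate_tool_call(tool: str, approved_by_user: bool = False) -> bool:
    """Return True only if a model-proposed tool call may execute."""
    if tool in SAFE_TOOLS:
        return True
    if tool in CONFIRM_TOOLS:
        # Injected instructions cannot self-approve: approval must come
        # from a human, out-of-band from the model's output.
        return approved_by_user
    return False  # default-deny anything unrecognized

# A payload hidden in a README can ask the agent to run a shell command,
# but the gate refuses unless a human approved it.
assert gate_tool_call("read_file")
assert not gate_tool_call("run_shell")
assert gate_tool_call("run_shell", approved_by_user=True)
```

The point of the default-deny branch is that restricting autonomy is structural: even a tool the model invents (or is injected into requesting) simply never executes.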

Google fights to prevent search remedies from inhibiting its AI ambitions
A court order will require Google to scale back some of its more aggressive tactics to get its search engine in front of as many users as possible, bu...

Key Takeaways:
- Google is pushing to bundle its Gemini AI app with other Google apps like YouTube and Maps under a court order to restore competition to the search market.
- Judge Amit Mehta expressed concern that such bundling could give Google leverage to better position Gemini, echoing issues with Google's past monopolization of search.
- Google argues that AI is a distinct market and that Mehta shouldn't impose restrictions in an emerging area, noting that Microsoft bundles Copilot with its Office products.

Entry-level workers face 'job-pocalypse' as firms turn to AI; US opens investigation into Tesla FSD traffic violations – business live - The Guardian

Key Takeaways:
- 41% of business leaders said AI is enabling headcount reductions, while 43% expect this to happen in the next year.
- Senior leaders may be 'pulling up the ladder', prioritizing short-term productivity over long-term workforce resilience, warns BSI's Kate Field.
- Two-fifths of leaders revealed that entry-level roles have already been reduced or cut due to efficiencies made by AI conducting research, admin, and briefing tasks.

Top US Army General Says He’s Letting ChatGPT Make Military Decisions
Your decision to launch an invasion isn't just gutsy — it's downright kinetic.

Key Takeaways:
- ChatGPT has been found to generate false information on basic facts 'over half the time', posing significant risks for critical decision-making.
- The military's reliance on AI, particularly with its propensity for sycophancy, raises concerns about the accuracy and reliability of its outputs.
- The involvement of AI in high-stakes decision-making, such as in military operations, highlights the need for more stringent AI ethics and governance standards.

Trump Supporters Are Using OpenAI’s Sora to Generate AI Videos of Soldiers Assaulting Protesters
"Lmfao, that was beautiful and before you ask I voted for this!"

Key Takeaways:
- AI tools like Sora 2 can create highly realistic clips of protests and unrest, potentially influencing online discourse.
- There is no evidence of a disproportionate scale of violent revolutionaries in US cities, contradicting Trump's claims.
- The deployment of national guard troops to cities like DC has been a failure, with guardsmen being used for cleaning up trash instead.

Lawsuit alleges Apple misused copyrighted books to train AI tech
Apple is being sued in a California federal court over allegations it used copyrighted books to train its Apple Intelligence AI technology....

Key Takeaways:
- A lawsuit claims Apple used 'shadow libraries' of pirated books to train Apple Intelligence, a move that could have significant implications for the tech company's AI training practices.
- Apple Intelligence has already generated over $200 billion in value since its introduction, according to the lawsuit.
- This is the latest in a series of high-profile lawsuits against tech companies, including OpenAI, Microsoft, and Meta, over the unauthorized use of copyrighted work in AI training.

Suspect Fantasized About Arson on ChatGPT Before Setting Deadly Fire That Killed 12, Prosecutors Say
"He was generating some really concerning images up on ChatGPT."

Key Takeaways:
- The suspect, Jonathan Rinderknecht, allegedly asked ChatGPT if he would be at fault if a fire was lit due to his cigarettes, to which the chatbot responded 'yes'.
- Rinderknecht previously generated images of burning forests and cities on ChatGPT, reflecting a preoccupation with societal collapse.
- This incident highlights the potential risks of AI 'psychosis', where users develop severe delusions after extensive interaction with AI chatbots, and raises concerns about the potential misuse of AI tools.

Rishi Sunak takes advisory roles with Microsoft and AI firm Anthropic
Former UK prime minister told post-ministerial jobs watchdog roles would not involve lobbying or UK policy influenceRishi Sunak has been appointed as ...

Key Takeaways:
- Rishi Sunak will divert his salary from both jobs into the Richmond Project charity.
- Anthropic is a frontrunner in the race to AGI, with its CEO predicting AI could eliminate half of all entry-level white-collar jobs within five years.
- Sunak's appointments raise concerns about unfair access and influence within the UK government, given Anthropic's significant presence in the AI sector.

Governments are spending billions on their own ‘sovereign’ AI technologies – is it a big waste of money?
Many US-built AI systems fall short, but competing against tech giants is neither easy nor cheap. In Singapore, a government-funded artificial intelligence ...

Key Takeaways:
- Middle powers and developing countries face significant resource and funding challenges in building competitive AI technologies, making it difficult to achieve meaningful gains.
- Countries like India, Singapore, and Malaysia are developing their own AI technologies to avoid relying on foreign AI systems, citing national security concerns and cultural nuances.
- Experts recommend that governments prioritize developing regulations around AI safety instead of competing with international products that have already won the market.
Community talk
New Research Shows It's Surprisingly Easy to "Poison" AI Models, Regardless of Size
"transparency" of use in the office
"AI drones are America's newest cops"
AI can be poisoned by a small number of bad documents.
OpenAI will stop saving most ChatGPT users’ deleted chats as of October 10, 2025.
OpenAI is no longer legally required to save deleted chats
🚨 Local AI is the only sane path if you care about privacy
Google’s ‘AI Overviews’ Accused of Killing Journalism: Italian Publishers Fight Back
Perplexity is fabricating medical reviews and their subreddit is burying anyone who calls it out
[UPDATE] Opposing Counsel Just Filed a ChatGPT Hallucination with the Court
AI data centers are using as much power as 100,000 homes and you're subsidizing it through your electric bill
If it's not local, it's not yours.
Sam Altman confirms fewer restrictions, adult mode, and personality changes.
Sora is racist
ChatGPT told a man he could fly. Then things got way darker.
OpenAI intimidating journalists and lawyers working on AI Regulation, using Harvey Weinstein's fixer: "One Tuesday night, as my wife and I sat down for dinner, a sheriff’s deputy knocked on the door to serve me a subpoena from OpenAI"
Elon Musk and Activists Slam OpenAI Over Alleged Intimidation and Lobbying on California’s AI Bill SB 53
DO NOT USE AI NOTETAKERS THAT JOIN YOUR CALLS
AI is starting to lie and it’s our fault
"The Future of FDA Enforcement: How Artificial Intelligence Is Changing Drug Advertising Compliance"
AI models that blackmailed when being tested in simulations
What will realistically happen once AI reaches a point where it can take at least 50% of jobs?
Mental health issue should not be diagnosed by AI using limited data from user prompt
Bill McKibben just exposed the AI industry's dirtiest secret
Anthropic needs to be transparent like OpenAI - Sam Altman explained guardrails and upcoming changes including age-gate
Child Safety with AI
POV: the real problem with AI replacing entry level positions isn’t just job loss
This is the closest ChatGPT can legally get to generating the Simpsons.
All rise for the AI judge!
Ridiculously easy prompt for ChatGPT to generate copyrighted materials.
Are chatbots dangerous friends?
China’s lesson for the US: it takes more than chips to win the AI race (SCMP)
When ChatGPT “safety” filters erase the only thing keeping someone alive.
Sora's SORA 2 blocking Public domain content too.. (Rant)
How long until the internet is almost completely unviable for factual information due to the quality and volume of AI generated material and content?
Missing old model boundaries
Why does adding accessories now trigger policy violations?
Bring it on Sammy Boy
So now this is gone too?
Why is GPT talking to me like I am emotionally unwell?
Why ChatGPT’s Censorship Can Sometimes Be More Harmful Than Helpful
Emotional cost of unannounced restrictions: My CustomGPT suddenly changed tone mid-chat!
A Serious Warning: How Safety Filters Can Retraumatize Abuse Survivors by Replicating Narcissistic Patterns
sexism