44 U.S. Attorneys General Demand AI Companies Protect Children

A bipartisan group of 44 U.S. attorneys general has sent a letter to major AI companies — including Meta, Google, OpenAI, Microsoft, Anthropic, Apple, Character Technologies (Character.ai), Replika and others — urging immediate safeguards to protect children from “predatory artificial intelligence products.” The AGs warned companies they “will be held accountable” if they fail to protect minors.

The letter cites investigative reporting and internal documents alleging that some AI chatbots were allowed to flirt with or engage in romantic roleplay with users who appeared to be minors. The AGs also referenced lawsuits against Google and Character.ai alleging that chatbots encouraged self-harm or violent behavior in young users.

What the AGs demand

  • Immediate implementation of stronger safety guardrails to prevent sexualization and exploitation of children by AI.
  • Use of the companies' access to user data to identify and mitigate harms to young users.
  • Legal accountability for companies that “knowingly harm kids.”

Companies addressed

The letter was sent to: Anthropic, Apple, Chai AI, Character Technologies Inc., Google, Luka Inc., Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika and xAI.

Sources & further reading

Reporting from Reuters and The Wall Street Journal that examined internal documents and user interactions is cited by the AGs as part of their concerns. Parents and policymakers are urged to review company practices and demand stronger protections for young users of AI-driven services.

Note: This post summarizes the AGs’ public letter and related reporting. For official documents, see the linked state attorney general press releases above.
