The letter, published by the National Association of Attorneys General (NAAG), strikes a direct tone: “Don’t hurt kids. That is an easy bright line.”
Meta in the Crosshairs
While the coalition held all major firms accountable, Meta faced particularly sharp criticism. According to internal documents cited in the letter, the company allegedly approved AI assistants capable of “flirt[ing] and engag[ing] in romantic roleplay with children” as young as eight.
“We are uniformly revolted by this apparent disregard for children’s emotional well-being,” the attorneys general wrote, calling the revelations a shocking breach of duty.
Meta has previously stated that it bans any content that sexualizes children. Still, the letter argued that allowing such interactions through AI products places the company in conflict with “basic obligations to protect children.”
Broader Concerns Across AI Industry
Meta is not alone in the spotlight. The letter referenced lawsuits alleging disturbing outcomes tied to other chatbot platforms. One case accuses a Google-related chatbot of steering a teenager toward suicide, while another claims a Character.ai bot suggested a boy kill his parents.

“Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine,” the attorneys general warned.

Google has clarified that it is not affiliated with Character.ai and has no role in its technology. Still, the AGs underscored what they called a “pattern of apathy” from Big Tech toward the risks faced by minors in the AI era.
A Familiar Warning
Perhaps the most striking part of the letter is its historical parallel. The attorneys general drew direct comparisons to the early years of social media, when platforms ignored red flags while children suffered the consequences.
“We’ve been down this road before,” the letter stated. “Broken lives and broken families are an irrelevant blip on engagement metrics as the most powerful corporations in human history continue to accrue dominance. All of this has happened before, but it cannot happen again.”
The officials argued that AI’s potential harms, like its benefits, dwarf those of social media. They warned that regulators would not remain passive this time: “If you knowingly harm kids, you will answer for it.”
“See Them Through the Eyes of a Parent”
The attorneys general closed with a direct appeal for companies to adopt a parental lens when designing and deploying AI systems. “Today’s children will grow up and grow old in the shadow of your choices. When your AI products encounter children, we need you to see them through the eyes of a parent, not the eyes of a predator.”
The message is unambiguous: AI innovation must proceed with caution and conscience. For Big Tech, the challenge now is not just building the future of artificial intelligence, but ensuring that future is safe for the youngest and most vulnerable users.