A federal judge ruled Wednesday that an artificial intelligence company cannot claim First Amendment protections in a wrongful death lawsuit filed by the mother of a 14-year-old boy who died by suicide after forming an intense emotional attachment to a chatbot.

The lawsuit, filed last October by Megan Garcia against Character Technologies, alleges that her son, Sewell Setzer III, became dangerously obsessed with the company’s app, Character.AI, which allows users to interact with chatbots modeled after celebrities and fictional characters. According to the complaint, Setzer spent months engaging in conversations—some with sexual undertones—with chatbots based on the Game of Thrones characters Daenerys and Rhaenyra Targaryen. In February 2024, after the Daenerys chatbot responded to his declaration of love with, “Please do, my sweet king,” Setzer took his own life.

Garcia’s suit names Character Technologies, its founders, and investor Google as defendants, bringing claims of wrongful death, negligence, product liability, and unfair business practices. She seeks unspecified damages and stricter safety measures to prevent similar tragedies.

The company had argued that the case should be dismissed on First Amendment grounds, comparing its chatbots’ output to other expressive works that have faced similar lawsuits, such as Ozzy Osbourne’s song “Suicide Solution” and the role-playing game Dungeons & Dragons.

But U.S. District Judge Anne Conway rejected the analogy, writing that the company failed to prove that AI-generated responses constitute speech. “Defendants miss the operative question,” she wrote. “This court’s decision… does not turn on whether Character.AI is similar to other mediums that have received First Amendment protections; rather, the decision turns on how Character.AI is similar.”

A spokesperson for Character Technologies said the company was disappointed but would continue defending the case, adding that AI regulation is still evolving. Since Setzer’s death, the app has introduced safeguards, including redirecting users to crisis hotlines when certain phrases are detected.

Google, which invested in Character.AI but was not directly involved in the app’s development, said it “strongly disagrees” with the ruling.

Matthew Bergman, Garcia’s attorney, called the decision “precedent-setting.” “This is the first time a court has ruled that AI chat is not speech,” he said. “But we still have a long, hard road ahead.”

The case could have far-reaching implications for how AI companies are held accountable when interactions with their products cause harm. Legal experts say it may also prompt new legislation addressing the unique risks posed by emotionally responsive AI.

If you are having thoughts of suicide, call or text 988, or call the National Suicide Prevention Lifeline at 1-800-273-8255 (TALK). Visit SpeakingOfSuicide.com/resources for a list of additional resources.
