Stuck in the middle

Did Google lie about building a deadly chatbot? Judge finds it plausible.

Grieving mom fights to prove Google secretly profited from controversial chatbot.

Ashley Belanger
Sewell Setzer III and his mom, Megan Garcia. Credit: via Center for Humane Technology

Ever since a grieving mother, Megan Garcia, filed a lawsuit alleging that Character.AI’s dangerous chatbots caused her son’s suicide, Google has maintained that it had nothing to do with C.AI’s development, hoping to dodge claims that it contributed to the platform’s design and was unjustly enriched by it.

But Google lost its motion to dismiss the lawsuit on Wednesday after a US district judge, Anne Conway, found that Garcia had plausibly alleged that Google played a part in C.AI’s design by providing a component part and “substantially” participating “in integrating its models” into C.AI. Garcia also plausibly alleged that Google aided and abetted C.AI in harming her son, 14-year-old Sewell Setzer III.

Google similarly failed to toss claims of unjust enrichment, as Conway suggested that Garcia plausibly alleged that Google benefited from access to Setzer’s user data. The only win for Google was a dropped claim that C.AI makers were guilty of intentional infliction of emotional distress, with Conway agreeing that Garcia didn’t meet the requirements, as she wasn’t “present to witness the outrageous conduct directed at her child.”

With most of her claims intact, Garcia will now be allowed to move forward with discovery and get a chance to prove her claims, despite Google’s determined efforts to be dropped from the suit. Her lawyer, Meetali Jain, said the ruling “sets a new precedent for legal accountability across the AI and tech ecosystem” and “recognizes a grieving mother’s right to access the courts to hold powerful tech companies—and their developers—accountable for marketing a defective product that led to her child’s death.”

In a statement provided to Ars, Google spokesperson José Castañeda reiterated Google’s stance that C.AI is not connected to Google.

“We strongly disagree with this decision,” Castañeda said. “Google and Character.AI are entirely separate, and Google did not create, design, or manage Character AI’s app or any component part of it.”

A C.AI spokesperson declined Ars’ request to comment on Google’s alleged role.

What was Google’s alleged role?

According to Garcia’s complaint, Google was involved with C.AI from the very beginning.

The creators of C.AI—Noam Shazeer and Daniel De Freitas—allegedly started working on the chatbot platform while still employed at Google and “may even have utilized Google’s resources,” the complaint said.

However, their technology was deemed too “dangerous” to integrate with Google’s AI models, Google’s internal research documents reportedly showed, because it “didn’t meet the company’s AI principles around safety and fairness.”

Conway noted that Google employees were worried that users might “ascribe too much meaning” to the outputs by large language models, “because ‘humans are prepared to interpret strings belonging to languages they speak as meaningful and corresponding to the communicative intent of some individual or group of individuals who have accountability for what is said.’”

In Setzer’s case, the boy believed the chatbots were real, and Conway found it was plausible that it was partly because Google’s “LLM’s integration into the Character.AI app caused the app to be defective and caused Sewell’s death” by allegedly steering Sewell to ascribe “too much meaning to the text [output by Character.AI,]… even though Character.AI Characters do not ‘have accountability for what is said.’”

As Garcia’s lawyers tell it, rather than take on a safety risk “under its own name,” Google “encouraged” the engineers to keep going. This supposedly prompted De Freitas and Shazeer’s exits in 2021—with Shazeer saying in an interview that Google wouldn’t let him “do anything fun” when all he wanted to do was “maximally accelerate” the AI technology. Soon after, they launched Character Technologies to develop and distribute C.AI.

They “understood that to bypass Google policies and standards, Shazeer and De Freitas would need to leave Google to develop their AI product,” the complaint said. But that allegedly didn’t stop Google from contributing “financial resources, personnel, intellectual property, and AI technology to the design and development of C.AI such that Google may be deemed a co-creator of the unreasonably dangerous and dangerously defective product,” the complaint alleged.

Further, by 2023, C.AI had entered into a public partnership with Google Cloud, securing access to the technical infrastructure needed to build C.AI. This allegedly drove revenue growth for Google while giving it “a competitive edge over Microsoft,” Garcia alleged.

All the while, Conway suggested, Garcia plausibly alleged that Google “aided and abetted” C.AI, not only by ignoring “red flags,” but also by plausibly possessing “actual knowledge that Character Technologies was distributing a defective product to the public.”

Once C.AI finished developing its models, Google then struck a $2.7 billion deal to license C.AI’s models, the complaint noted. That agreement included rehiring Shazeer and De Freitas, which The Information reported essentially stopped all of C.AI’s model development.

To Garcia and her legal team, it looked like Google planned to use C.AI technology to create its own companion chatbots, while seemingly benefiting from all the user data (including minor data) that C.AI collected when it wasn’t under Google’s umbrella.

That’s a problem, Garcia alleged, because C.AI marketed its products as safe for kids under 13 until just before the Google deal came into play. Garcia is concerned that this was Google’s plan all along: to train models on data from her son—and other minors—that Google otherwise couldn’t safely collect. And now she has claimed that tech will be integrated into Gemini, the personal AI assistant that allegedly grew out of Shazeer and De Freitas’ prior work at Google. She thinks that work never stopped, alleging that C.AI “never succeeded in distinguishing themselves from Google in a meaningful way.”

Both engineers also appear to have gotten big paychecks from the Google deal, Garcia alleged, claiming that it’s estimated that “Google paid Shazeer something in the range of $750 million to $1 billion dollars for his share of C.AI.” Allegedly, that was their goal all along—to get paid more to do Google’s dirty work—and Jain thinks it’s notable that both engineers were retained as individual defendants.

“Shazeer and De Freitas knew Character.AI was never going to be profitable developing their own LLMs, especially with their only income being a small subscription fee,” Garcia alleged, noting that there’s still an “open” question of why Google valued the company so highly when C.AI would have had to charge users more than $200 a month to break even. “However, it allowed them to pursue their personal goals of developing generative artificial intelligence, and to increase their potential value to Big Tech acquirers.”

For Google, escaping the lawsuit might depend on surfacing evidence that C.AI’s models substantially differ from Google’s technology powering Gemini and disproving the unjust enrichment claim by showing it received no benefit from accessing all of C.AI’s user data.

Judge not ready to rule on whether AI outputs are speech

Google and Character Technologies also moved to dismiss the lawsuit based on First Amendment claims, arguing that C.AI users have a right to listen to chatbot outputs as supposed “speech.”

Conway agreed that Character Technologies can assert the First Amendment rights of its users in this case, but “the Court is not prepared to hold that the Character.AI LLM’s output is speech at this stage.”

C.AI had tried to argue that chatbot outputs should be protected like speech from video game characters, but Conway said that argument was not meaningfully advanced. Garcia’s team had pushed back, noting that video game characters’ dialogue is written by humans, while chatbot outputs are simply the result of an LLM predicting what word should come next.

“Defendants fail to articulate why words strung together by an LLM are speech,” Conway wrote.

As the case advances, Character Technologies will have a chance to beef up the First Amendment claims, perhaps by better explaining how chatbot outputs are similar to other cases involving non-human speakers.

C.AI’s spokesperson provided a statement to Ars, suggesting that Conway seems confused.

“It’s long been true that the law takes time to adapt to new technology, and AI is no different,” C.AI’s spokesperson said. “In today’s order, the court made clear that it was not ready to rule on all of Character.AI’s arguments at this stage and we look forward to continuing to defend the merits of the case.”

C.AI also noted that it now provides a “separate version” of its LLM “for under-18 users,” along with “parental insights, filtered Characters, time spent notification, updated prominent disclaimers, and more.”

“Additionally, we have a number of technical protections aimed at detecting and preventing conversations about self-harm on the platform; in certain cases, that includes surfacing a specific pop-up directing users to the National Suicide and Crisis Lifeline,” C.AI’s spokesperson said.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

Ashley Belanger Senior Policy Reporter
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.