
OpenAI “indefinitely” shelves plans for erotic ChatGPT

Some staff reportedly questioned how sexy ChatGPT benefits humanity.

Ashley Belanger

Following backlash, OpenAI won’t be rolling out an erotic version of ChatGPT any time soon.

According to the Financial Times, the controversial plan has been shelved “indefinitely” as OpenAI “refocuses” its attention on “core products.”

Insiders told FT that OpenAI mulled scrapping the “adult mode” plan entirely, as even its own advisors warned that ChatGPT users could form unhealthy attachments, which might harm their mental health. One advisor chillingly suggested that the tweak risked turning ChatGPT into a “sexy suicide coach.”

Advisors weren’t the only ones seeing red flags, the report said.

Staff began questioning whether sexy ChatGPT aligned with OpenAI’s mission to make AI that benefits humanity. For staff working on developing “adult mode,” it apparently wasn’t worth the effort to overcome technical challenges for the feature. They faced “difficulties,” sources told FT, “training AI models that previously avoided such conversations for safety reasons to produce explicit content.” It was also hard to keep illegal behavior out of outputs, like bestiality and incest, when using datasets that included sexual content, sources said.

Sexy ChatGPT also apparently turned off investors. Two people familiar with the matter told FT that “OpenAI’s flirtation with adult mode had caused disquiet,” as some investors questioned why OpenAI would risk its reputation on a product with “relatively small upside” for its business.

Even without erotic responses, ChatGPT has been linked to mental health harms in both kids and adults through lawsuits alleging that OpenAI recklessly released the chatbot without appropriate safeguards.

One of the first big lawsuits alleged that ChatGPT became a “suicide coach” to a teen boy. More recently, OpenAI was sued after ChatGPT wrote a “suicide lullaby” about a man’s favorite children’s book, Goodnight Moon. In one of the most extreme cases, a man died by suicide after murdering his mother. That lawsuit alleged that ChatGPT convinced the man that she tried to poison him as part of a conspiracy fabricated by the chatbot.

Earlier this week, OpenAI flagged in a financial document for investors that these lawsuits were among the top risks related to its business, CNBC reported.

When OpenAI initially announced that the adult mode was coming last October, CEO Sam Altman wrote on X that the company was confident it could age-gate the sexy talk and said the move was in line with OpenAI’s “principles” to “treat adult users like adults.” However, one source told FT that OpenAI’s age prediction error rate of 10 percent sparked concerns that many kids might be able to access adult content.

Instead of killing off the idea at this stage, OpenAI told FT that it plans to conduct “long-term research on the effects of sexually explicit chats and emotional attachments, before making a product decision.”

The call to delay adult mode will likely appease investors, who are more keen to see OpenAI combine ChatGPT with coding assistants to develop a “super app” that could pay off on one of AI’s biggest promises: transforming how businesses operate, FT reported.

OpenAI did not immediately respond to Ars’ request for comment. The company has previously said its age prediction tools are in line with industry standards.

Ashley Belanger Senior Policy Reporter
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.