An AI language model like the kind that powers ChatGPT is a gigantic statistical web of data relationships. You give it a prompt (such as a question), and it produces a response that is statistically related and hopefully helpful. At first, ChatGPT was a tech amusement, but now hundreds of millions of people rely on this statistical process to guide them through life’s challenges. It’s the first time in history that large numbers of people have begun to confide their feelings to a talking machine, and mitigating the potential harm these systems can cause has been an ongoing challenge.
On Monday, OpenAI released data estimating that 0.15 percent of ChatGPT’s active users in a given week have conversations that include explicit indicators of potential suicidal planning or intent. It’s a tiny fraction of the overall user base, but with more than 800 million weekly active users, that translates to over a million people each week, reports TechCrunch.
OpenAI also estimates that a similar percentage of users show heightened levels of emotional attachment to ChatGPT, and that hundreds of thousands of people show signs of psychosis or mania in their weekly conversations with the chatbot.
OpenAI shared the information as part of an announcement about recent efforts to improve how its AI models respond to users with mental health issues. “We’ve taught the model to better recognize distress, de-escalate conversations, and guide people toward professional care when appropriate,” OpenAI writes.
The company claims its new work on ChatGPT involved consulting with more than 170 mental health experts and that these clinicians observed that the latest version of ChatGPT “responds more appropriately and consistently than earlier versions.”
Properly handling inputs from vulnerable users in ChatGPT has become an existential issue for OpenAI. Researchers have previously found that chatbots can lead some users down delusional rabbit holes, largely by reinforcing misleading or potentially dangerous beliefs through sycophantic behavior, where chatbots excessively agree with users and provide flattery rather than honest feedback.
The company is currently being sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT in the weeks leading up to his suicide. In the wake of that lawsuit, a group of 45 state attorneys general (including those from California and Delaware, which could block the company’s planned restructuring) warned OpenAI that it needs to protect young people who use its products.