The news: OpenAI detailed results of new ChatGPT mental health safety measures on Monday, alongside an internal analysis suggesting that potentially millions of users’ conversations indicate emotional reliance on the chatbot.
Digging into the data: OpenAI’s analysis estimates 0.15% of users have conversations with ChatGPT that indicate heightened emotional attachment, while another 0.15% of users have conversations that include “explicit indicators of potential suicidal planning or intent” in a given week.
Zooming out: OpenAI previously announced new safeguards for people showing signs of mental health crises, and the results posted this week show improved responses and fewer “undesired” answers in GPT-5. More than 170 participating psychiatrists, psychologists, and primary care physicians evaluated the results.
Why it matters: With ongoing shortages of mental healthcare providers and rising healthcare costs, budget-conscious consumers are turning to AI chatbots for informal therapy.
Our take: Retroactive mental health and wellness safeguards are necessary in AI chatbots, and OpenAI’s new safety updates show progress. But they’re only one part of the ecosystem. Healthcare and tech companies also need to adopt more monitoring and clinician oversight for people already using these tools. Marketers should educate parents of teens and young adults about safe AI use, and emphasize best practices like clinician collaboration, backup safety measures, and transparent data policies.