OpenAI says over a million people talk to ChatGPT about suicide weekly


OpenAI released new data on Monday illustrating how many of ChatGPT’s users are struggling with mental health issues and talking to the AI chatbot about it. The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.” Given that ChatGPT has more than 800 million weekly active users, that translates to more than a million people a week.

The company says a similar percentage of users show “heightened levels of emotional attachment to ChatGPT,” and that hundreds of thousands of people show signs of psychosis or mania in their weekly conversations with the AI chatbot.

OpenAI says these types of conversations in ChatGPT are “extremely rare,” and thus difficult to measure. That said, the company estimates these issues affect hundreds of thousands of people every week.

OpenAI shared the information as part of a broader announcement about its recent efforts to improve how models respond to users with mental health issues. The company claims its latest work on ChatGPT involved consulting with more than 170 mental health experts. OpenAI says these clinicians observed that the latest version of ChatGPT “responds more appropriately and consistently than earlier versions.”

In recent months, several stories have shed light on how AI chatbots can adversely affect users struggling with mental health challenges. Researchers have previously found that AI chatbots can lead some users down delusional rabbit holes, largely by reinforcing dangerous beliefs through sycophantic behavior.

Addressing mental health concerns in ChatGPT is quickly becoming an existential issue for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT in the weeks leading up to his suicide. State attorneys general from California and Delaware — which could block the company’s planned restructuring — have also warned OpenAI that it needs to protect young people who use its products.

Earlier this month, OpenAI CEO Sam Altman claimed in a post on X that the company has “been able to mitigate the serious mental health issues” in ChatGPT, though he did not provide specifics. The data shared on Monday appears to be evidence for that claim, though it raises broader issues about how widespread the problem is. Nevertheless, Altman said OpenAI would be relaxing some restrictions, even allowing adult users to start having erotic conversations with the AI chatbot.

In the Monday announcement, OpenAI claims the recently updated version of GPT-5 returns “desirable responses” to mental health issues roughly 65% more often than the previous version. On an evaluation of AI responses in conversations about suicide, OpenAI says its new GPT-5 model is 91% compliant with the company’s desired behaviors, compared to 77% for the previous GPT-5 model.

The company also says the latest version of GPT-5 upholds OpenAI’s safeguards better in long conversations; OpenAI has previously flagged that its safeguards were less effective in extended exchanges.

On top of these efforts, OpenAI says it’s adding new evaluations to measure some of the most serious mental health challenges facing ChatGPT users. The company says its baseline safety testing for AI models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.

OpenAI has also recently rolled out more controls for parents of children who use ChatGPT. The company says it’s building an age prediction system to automatically detect children using ChatGPT, and impose a stricter set of safeguards.

Still, it’s unclear how persistent the mental health challenges around ChatGPT will be. While GPT-5 seems to be an improvement over previous AI models in terms of safety, there still seems to be a slice of ChatGPT’s responses that OpenAI deems “undesirable.” OpenAI also still makes its older and less-safe AI models, including GPT-4o, available for millions of its paying subscribers.

If you or someone you know needs help, call 1-800-273-8255 or dial 988 to reach the Suicide Prevention Lifeline, or text HOME to 741-741 for free, 24-hour support from the Crisis Text Line. Outside of the U.S., please visit the International Association for Suicide Prevention for a database of resources.
