Sam Altman flags privacy gaps as more users turn to ChatGPT as a digital therapist

As growing numbers of young users seek emotional support from ChatGPT, the OpenAI CEO warns that the platform lacks legal confidentiality, and a Stanford study raises concerns over chatbot responses to mental health crises

BestMediaInfo Bureau


New Delhi: OpenAI CEO Sam Altman has cautioned that conversations with ChatGPT are not protected by legal confidentiality in the way that sessions with therapists, doctors, or lawyers are. The warning comes amid rising anecdotal evidence that users, particularly younger people, are turning to the chatbot for emotional support and personal advice.

“People talk about the most personal sh*t in their lives to ChatGPT,” Altman said in a podcast conversation with comedian Theo Von. “People use it, young people especially use it, as a therapist, a life coach; having these relationship problems and [asking] what should I do?” he added.

Unlike traditional therapy or legal consultations, conversations with AI tools like ChatGPT do not enjoy protections such as doctor–patient or attorney–client privilege. Altman acknowledged that the legal framework for these types of interactions is yet to be defined. “Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it… And we haven’t figured that out yet for when you talk to ChatGPT,” he said.

Altman also confirmed that deleted chats may still be accessible for legal and security reasons, further underlining the limits of privacy in these AI interactions.

His comments follow a study by Stanford University researchers, which found that AI-powered therapy chatbots are not currently fit to function as reliable mental health advisers. The yet-to-be-peer-reviewed paper warned that such models often respond inappropriately to users describing mental health conditions, potentially reinforcing harmful stereotypes and failing to detect moments of crisis.

“We find that these chatbots respond inappropriately to various mental health conditions, encouraging delusions and failing to recognise crises,” the authors stated. The study also found that the underlying Large Language Models (LLMs) exhibited bias, showing stigma towards users describing conditions such as schizophrenia or alcohol dependence, while being more neutral or supportive when responding to cases of depression.

“These issues fly in the face of best clinical practice,” the paper noted, adding that while human therapists are expected to treat all patients with equal care and sensitivity, the chatbots did not consistently uphold the same standard.
