New Delhi: OpenAI CEO Sam Altman has cautioned that conversations with ChatGPT are not protected by legal confidentiality in the way that sessions with therapists, doctors, or lawyers are. The warning comes amid rising anecdotal evidence that users, particularly younger individuals, are turning to the chatbot for emotional support and personal advice, as per the report.
“People talk about the most personal sh*t in their lives to ChatGPT,” Altman said in a podcast conversation with comedian Theo Von. “People use it, young people especially use it, as a therapist, a life coach; having these relationship problems and [asking] what should I do?” he added.
Sam Altman tells Theo Von about how people use ChatGPT as a therapist and there needs to be new laws on chat history privacy: "If you go talk to ChatGPT about your most sensitive stuff and then there's a lawsuit, we could be required to produce that."
— Bearly AI (@bearlyai), July 27, 2025
Unlike traditional therapy or legal consultations, conversations with AI tools like ChatGPT do not enjoy protections such as doctor–patient or attorney–client privilege. Altman acknowledged that the legal framework for these types of interactions is yet to be defined. “Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it… And we haven’t figured that out yet for when you talk to ChatGPT,” he said.
Altman also confirmed that deleted chats may still be accessible for legal and security reasons, further underlining the limits of privacy in these AI interactions.
According to the report, his comments follow a study by Stanford University researchers, which found that AI-powered therapy chatbots are not currently fit to function as reliable mental health advisers. The yet-to-be-peer-reviewed paper warned that such models often respond inappropriately to users describing mental health conditions, potentially reinforcing harmful stereotypes and failing to detect moments of crisis.
“We find that these chatbots respond inappropriately to various mental health conditions, encouraging delusions and failing to recognise crises,” the authors stated. The study also found that the underlying Large Language Models (LLMs) exhibited bias, showing stigma towards users describing conditions such as schizophrenia or alcohol dependence, while being more neutral or supportive when responding to cases of depression.
“These issues fly in the face of best clinical practice,” the paper noted, adding that while human therapists are expected to treat all patients with equal care and sensitivity, the chatbots did not consistently uphold the same standard.