ChatGPT users seeking legal or emotional advice may be exposing themselves to legal risk, according to OpenAI CEO Sam Altman.
In a recent episode of This Past Weekend w/ Theo Von, Altman confirmed that chats with the AI tool are not protected by legal confidentiality and can be subpoenaed.
OpenAI retains chat data for up to 30 days, or longer if required for legal or security reasons.
“If you talk to a therapist or a lawyer, there’s legal privilege,” Altman said. “We haven’t figured that out yet for when you talk to ChatGPT.”
“I think we should have the same concept of privacy for your conversations with AI that we do with a therapist,” Altman added.
Jessee Bundy, attorney at Creative Counsel Law, said using ChatGPT for legal matters creates “discoverable” evidence.
“If that information ever becomes relevant in a lawsuit, it’s discoverable. That’s not theoretical—it comes straight from OpenAI’s CEO,” she wrote in a LinkedIn post.
“When people use ChatGPT like a lawyer, they’re not just skipping expertise. They’re skipping confidentiality. No ethical duty. No attorney-client privilege. No protection. No one in your corner,” Bundy added.
Bundy’s warning comes as OpenAI faces growing legal scrutiny. A federal judge in Manhattan recently ordered the company to retain nearly all ChatGPT user conversations indefinitely, including those users believed had been deleted.
The ruling stems from a copyright lawsuit filed by The New York Times, which alleges OpenAI used millions of its articles without permission to train its AI models, including ChatGPT. The Times argues that preserving user interactions is key to proving the scope of the alleged infringement and assessing whether the models reproduce protected content.
OpenAI has objected to the ruling, arguing that it undermines user privacy.