ChatGPT has been reported to leak private conversations containing sensitive personal details, including the login credentials of unrelated users. The issue came to public attention through screenshots submitted by an Ars reader, revealing what appeared to be a serious privacy breach. Among the leaked data were multiple username-and-password pairs tied to the support system of a pharmacy prescription drug portal, raising questions about how the service handles confidential information.
This is not an isolated incident; similar episodes have underscored the importance of stripping personal details from interactions with AI services like ChatGPT. In the past, OpenAI temporarily took ChatGPT offline after a bug exposed one user's chat history to other, unrelated users. A research paper published in November further showed that ChatGPT could be prompted into divulging private information, such as email addresses and phone numbers, that had been swept into the material used to train the underlying large language model.
The implications of such breaches are significant, and they have prompted companies, including Apple, to limit or outright ban their employees' use of ChatGPT and similar services. These measures reflect growing concern that proprietary or sensitive data could leak through interactions with AI systems.