If you thought ChatGPT was your personal digital diary, prepare yourself for a cold shower. A court order has just turned every conversation you have with AI into potential trial evidence. OpenAI must now store everything: from your midnight crises to your business ideas, from your existential doubts to your relationship problems.
The reason? The New York Times sued the company for copyright infringement, and now the courts can (and will) comb through your chats for evidence. Altman frets, but the reality is that ChatGPT's privacy promise was just marketing. And now millions of users are discovering that they have been talking not to a confidant, but to an always-on tape recorder.
ChatGPT Privacy as a Bargaining Commodity
Judge Ona T. Wang issued an order on May 13 that will make anyone who has ever shared anything personal with ChatGPT tremble. OpenAI must “preserve and segregate all output data that would otherwise be deleted,” including conversations that users have deliberately deleted. The irony is bitter: the company promised permanent deletion within 30 days, and instead everything will now be retained indefinitely.
The case arises from the legal battle between the New York Times and OpenAI, in which the newspaper accuses the company of having used millions of copyrighted articles to train its models. But the real collateral victims will be the users, transformed from customers into involuntary suppliers of trial evidence. Every conversation becomes potential legal ammunition.
Brad Lightcap, COO of OpenAI, protested that this request “fundamentally conflicts with the privacy commitments we have made to our users at ChatGPT”. Words that ring hollow when the reality is that those commitments melted like snow in the sun at the first court order.

The Illusion of the Digital Confidant
Sam Altman responded with a proposal as ambitious as it was belated: the concept of “AI privilege.” On X he wrote: “In my opinion, talking to an AI should be like talking to a lawyer or a doctor”. The idea of professional secrecy applied to artificial intelligence is fascinating, and I agree with it, but it arrives after the damage is already done.
Millions of people have already poured their mental health problems, relationship crises, entrepreneurial projects and personal fragilities into ChatGPT chats. As I pointed out some time ago, chatbots are becoming our new digital confessors, capable of offering 24/7 support without (apparent) judgment.
The problem is that this confessor has the memory of an elephant and the character of a village gossip. Everything you tell it ends up recorded, analyzed and potentially used against you when you least expect it.
ChatGPT Privacy: the Paradox of the Connected Future
This raises profound questions about the future of our relationship with artificial intelligence. If AIs truly become our personal assistants, virtual psychologists, and trusted advisors, what guarantees do we have about privacy?
The case has sparked concerns far beyond the confines of copyright. We are witnessing the birth of a dangerous precedent: every time someone sues an AI company, our private conversations could end up under judicial seizure. The right to be forgotten, which we thought we had acquired, evaporates the moment someone else needs our words as evidence.
Between Marketing and Reality
OpenAI has always leveraged the promise of privacy to attract users. Its policies talk about automatic deletion, user control over data, transparency. But when the courts come knocking, these promises prove to be as fragile as soap bubbles.
People, of course, are not naive, and this does not come out of the blue. Surveys show that 73% of consumers already worry about privacy when interacting with chatbots. This case will give them concrete reasons to be even more wary.
The truth is uncomfortable but clear: today's AIs are not confessors but digital "chatterboxes" that will ultimately spill everything they know when it suits someone. Every secret shared, every vulnerability exposed, every idea whispered in the dim light of a late-night chat can resurface at the most inopportune moment.
Towards what future?
Altman’s proposed “AI privilege” could represent a first step toward a more mature relationship with artificial intelligence. But it will take time to develop legal frameworks that truly protect our digital privacy. In the meantime, the lesson is brutal but necessary: every word spoken to a chatbot is potentially a deferred public confession.
The best time to rethink our relationship with these digital assistants was yesterday. The second best time is today. They are not infallible confessors, but powerful tools that require the same caution we would use with any other technology that records and stores our lives.
Privacy in the AI era is not an acquired right, but a privilege to be earned every day. And the battle, my dears, has only just begun.