AI privilege and ChatGPT encryption
Technical protections of chat confidentiality should strengthen the case for legal protections
OpenAI’s Sam Altman has been publicly floating the idea of “AI privilege,” a form of legal confidentiality to protect interactions between users and AIs. According to Altman: “We should apply as much protection as when you talk to ... your human doctor or your human lawyer, as you do when you talk to your AI doctor or AI lawyer.” For OpenAI, their legal battle with the New York Times over disclosure of user conversations with ChatGPT illustrates why this is needed (see the new blog post from OpenAI’s CISO Dane Stuckey). Whether some form of AI privilege should be legally recognised raises a myriad of fascinating jurisprudential questions, which I plan to explore in more depth later. But one thing in Stuckey’s text struck me in particular. OpenAI’s CISO wrote:
“Our long-term roadmap includes advanced security features designed to keep your data private, including client-side encryption for your messages with ChatGPT. We believe these features will help keep your private conversations private and inaccessible to anyone else, even OpenAI.”
This seems to me a rather significant declaration, also from a business perspective. Without access to user conversations, OpenAI may be limited in its ability to use them to train new models or to provide certain features (e.g. memory, some forms of advertising). I wrote about some technological measures for AI privacy in Trustworthy privacy for AI: Apple’s and Meta’s TEEs. Of course, such limitations can be overcome, but there would likely be additional friction and a need for direct user involvement. So it’s a costly direction to pursue.
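To make the idea more concrete, here is a minimal, purely illustrative Python sketch of what “client-side encryption” could mean in practice (using the `cryptography` package). It assumes a design in which the encryption key is generated and kept on the user’s device, so the provider only ever stores ciphertext; the function names and the overall flow are my own placeholders, not a description of OpenAI’s unpublished architecture.

```python
# Purely illustrative: a toy model of client-side encryption, NOT OpenAI's design.
# Assumption: the key is generated and kept on the user's device, so the
# provider stores only ciphertext it cannot decrypt.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Key lives on the client (e.g. in the OS keystore) and is never uploaded.
device_key = AESGCM.generate_key(bit_length=256)

def encrypt_on_device(plaintext: str, key: bytes) -> tuple[bytes, bytes]:
    """Encrypt a chat message locally before it is sent for storage."""
    nonce = os.urandom(12)  # must be unique per message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext.encode(), None)
    return nonce, ciphertext

def decrypt_on_device(nonce: bytes, ciphertext: bytes, key: bytes) -> str:
    """Decrypt a stored message locally; only the key holder can do this."""
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

# The provider stores only (nonce, ciphertext). Without device_key it cannot
# read the conversation, train on it, or disclose it in litigation.
nonce, blob = encrypt_on_device("a highly sensitive question", device_key)
assert decrypt_on_device(nonce, blob, device_key) == "a highly sensitive question"
```

Note that a sketch like this only covers stored conversation history: the model still has to process plaintext to generate a reply, which is where trusted execution environments of the kind I discussed in the Apple and Meta piece come in.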
How does this relate to AI privilege? To some extent, technical protections are substitutes for legal protections: if OpenAI has no access to some data, it cannot be legally forced to disclose it. However, an AI provider could also be legally forced to undermine the technical protections for future conversations. Merely adopting technical protections doesn’t guarantee their longevity.
I have two rough thoughts on the relationship between legal protections like AI privilege and technical protections.
First, to be sustainable, technical protections will likely need legal protections. The argument that you can’t currently access user conversations may not carry the day against a legal order to implement such functionality, unless some legal bar applies.
Second, technical protections are an argument for legal protections.
By technically restricting its own access to user chats, an AI provider signals that it takes confidentiality seriously. In turn, this is likely to affect how users approach the service. Given the expectation of confidentiality, users will be even more likely to have conversations on topics that are highly sensitive to them. Users may (and some probably already do) treat AI chats not as ordinary services, but as extensions of their own minds, having conversations of the kind they otherwise would only have in their own thoughts. This may also be important for the success of Seb Krier’s idea of AI agents outlined in his Coasean Bargaining at Scale.
If anyone accessed such chats, users would legitimately see that as a serious violation. Hence, by affecting user habits and expectations, the existence of technical protections would strongly support legal protections of confidentiality, or at the very least protections against undermining the technical safeguards.
What will likely complicate this, at least in the public debate, is the question of monetization (see Eric Seufert’s analysis of the likely directions of development in this space: Affiliate links, personalized ads, and chatbot revenue optimization).
Both personalised advertising and affiliate links could be perceived as lowering the technical protections, e.g. because they might involve tool calls beyond the encrypted environment. Even though this is solvable technically, the perception of AI providers monetizing chats may still weigh heavily against legal protections in the public debate. This could be partly due to simplistic analogies with professional confidentiality. Some may fail to notice that their medical or legal conversations can be used, perfectly reasonably, by the professional, e.g. to “upsell” customers on other services.
Of course, what is important in the medical and legal context is that the service provider acts in the best interest of the client (even when upselling). So, to make the case for legal protections, it may be necessary for AI providers to show that they also act in the client’s best interest.
OpenAI’s CISO also wrote that the AI provider plans to deploy “fully automated systems to detect safety issues,” adding that:
Only serious misuse and critical risks—such as threats to someone’s life, plans to harm others, or cybersecurity threats—may ever be escalated to a small, highly vetted team of human reviewers.
This will naturally raise questions about whether such automated detection will take place on device or at least within a trusted execution environment. As I discussed in Trustworthy privacy for AI: Apple’s and Meta’s TEEs, to build trust in such technical protections, businesses should be very transparent about the technical details and enable external auditing.
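To illustrate the architectural question, here is a hedged sketch of what on-device (or TEE-hosted) safety screening could look like, with only critical findings ever leaving the protected environment. The labels, threshold, and keyword heuristic are entirely my own placeholders, not anything OpenAI has described.

```python
# Purely illustrative: one possible shape of local safety screening, not any
# provider's actual system. Assumption: classification runs on device or inside
# a TEE, and only critical, high-confidence findings are escalated to humans.
from dataclasses import dataclass

CRITICAL_LABELS = {"threat_to_life", "harm_to_others", "cybersecurity_threat"}

@dataclass
class ScreeningResult:
    label: str
    score: float

def screen_locally(message: str) -> ScreeningResult:
    # Stand-in for a local or TEE-hosted classifier; a real system would use a model.
    if "attack plan" in message.lower():
        return ScreeningResult("harm_to_others", 0.99)
    return ScreeningResult("benign", 0.99)

def should_escalate(message: str) -> bool:
    """Escalate only serious misuse; everything else stays in the device/TEE."""
    result = screen_locally(message)
    return result.label in CRITICAL_LABELS and result.score > 0.95
```

Whether such a design would actually earn user trust depends on exactly the transparency and external auditing mentioned above.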
In general, I think that the technical steps OpenAI plans to adopt to protect chat confidentiality should strengthen the case for legal protections, including some form of AI privilege.

