Trustworthy privacy for AI: Apple's and Meta's TEEs
Ben Thompson argues that Apple should "make its models – both local and in Private Cloud Compute – fully accessible to developers to make whatever they want," rather than limiting them to "cutesy-yet-annoying frameworks" like Genmoji. This is not only a good idea but one that can be implemented safely for user data, without introducing privacy risks significantly greater than those users already accept when granting apps access to their data. Let me explain why, while also examining how Apple and Meta are approaching the challenge of "private" AI with trusted execution environments (TEEs). Don't worry, this is still a newsletter on EU tech regulation, so I will also comment on how the EU Digital Markets Act may affect this approach.
My personal stakes in privacy-preserving AI
I don't think anyone would accuse me of being a privacy absolutist. But I am concerned about the safety of my data when processed by various AI services. What makes me even more concerned is the sheer volume of data that may be needed to make personal and professional AI agents truly useful. On the other hand, I really want those agents in my life—and not just to book flights and organize dinner parties.
I've been experimenting with local models, and while they're adequate for many uses, I find myself consistently gravitating toward state-of-the-art hosted services from Anthropic, Google, OpenAI, and X. They're simply better, both in terms of speed of inference and quality of the outputs. The gap in capability is significant enough that I don't want to be forced into running a subpar local AI agent just because I'm concerned about exposing my email history, messages, calendars, and other sensitive data. (I do hope that local models will become a viable option, but it's unclear when this will happen, and even then, it’s possible they will not perform quite as well.)
This drives my interest in "private clouds" and trusted execution environments (TEEs) as a solution—technologies that could give us both state-of-the-art AI capabilities and reasonable data safety, even for someone with my attitude.
I don't claim to speak for all users. Many people will likely be so captivated by the functionality of powerful AI agents that data protection becomes an afterthought. However, there are at least two good reasons not to rely on this too much.
First, these same users will be justifiably outraged if they become victims of a major AI service breach. (This reminds me of the data-preservation order against OpenAI in their litigation with the New York Times—possibly a massive data breach waiting to happen.)
Second, privacy regulators tend to have higher expectations than most users. These regulators are still developing their approach to AI, which creates a window for AI developers to propose credible solutions that could set standards for years to come.
The long-term viability of the AI business may well depend on addressing those concerns, and reducing the “attack surface.”
How Apple Intelligence currently works
Let me start with a concrete example of how Apple Intelligence handles a common task: text summarization. When you select text in any app using UITextView (the standard iOS text component), the system presents Writing Tools options, including summarization.

The app already has full access to the text displayed in its UITextView. When you invoke Writing Tools, the selected text, and only that text, perhaps with minimal context expansion, is sent to Apple's AI model. If the task is simple enough, it's processed by the ~3 billion parameter on-device model. For more complex requests, it's encrypted and sent to Private Cloud Compute servers.
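To make this concrete in code, here is a minimal sketch of how an app controls its participation, assuming the iOS 18 Writing Tools additions to UITextView (the writingToolsBehavior property and the UIWritingToolsBehavior type; exact names and availability may differ from what ships):

```swift
import UIKit

// Minimal sketch: an app decides whether its text view participates in
// Writing Tools at all. Assumes the iOS 18 writingToolsBehavior API.
final class NotesViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        view.addSubview(textView)

        if #available(iOS 18.0, *) {
            // .complete lets the system offer the full Writing Tools UI
            // (summarize, rewrite, proofread) on the user's selection.
            // The model only ever receives text this view already displays.
            textView.writingToolsBehavior = .complete

            // An app that never wants its content handed to Writing Tools,
            // even to the on-device or PCC models, can opt out entirely:
            // textView.writingToolsBehavior = .none
        }
    }
}
```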
The crucial point is that the app already possessed this text. Apple Intelligence isn't accessing new data. It's simply processing information the app technically could have uploaded to any server at any time.
Apple's Private Cloud Compute: technical guarantees
Apple's PCC implements several technical guarantees that create a trustworthy environment, including:
Stateless computation: all data is immediately deleted after processing. No user data persists between requests.
Hardware-enforced isolation: custom Apple Silicon servers with Secure Enclaves ensure that even Apple employees cannot access user data during processing.
Cryptographic verification: the device verifies it's communicating with legitimate PCC servers running publicly auditable code before sending any data (a conceptual sketch of this step follows this list).
No privileged access: the system design eliminates debugging interfaces and administrative access that could compromise user data.
Verifiable transparency: Apple publishes software images and offers bug bounties up to $1 million for security researchers to verify these claims.
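To make the cryptographic verification step more concrete, here is a conceptual sketch (not Apple's actual client code) of what a device could do before releasing data to a PCC node: check that the node's measured software appears among the published, auditable images, and only then encrypt the request to a key bound to that attestation. All types and helper functions are hypothetical; the CryptoKit primitives are used purely for illustration.

```swift
import Foundation
import CryptoKit

// Hypothetical evidence a PCC node might present about itself.
struct AttestationBundle {
    let softwareDigest: Data                        // measurement of the booted image
    let nodePublicKey: P256.KeyAgreement.PublicKey  // key bound to the attested node
    let hardwareSignature: Data                     // signed by the hardware root of trust
}

enum AttestationError: Error { case unknownSoftware }

// Hypothetical: digests of PCC software images published for public audit.
func publishedDigests() -> Set<Data> { [] }

func verifyAndEncrypt(request: Data, bundle: AttestationBundle) throws -> Data {
    // 1. Refuse to talk to nodes running software that is not publicly auditable.
    guard publishedDigests().contains(bundle.softwareDigest) else {
        throw AttestationError.unknownSoftware
    }

    // 2. Encrypt the request so only the attested node can read it.
    //    (A real protocol would also verify hardwareSignature against the
    //    hardware root of trust, use an authenticated handshake, and send
    //    the ephemeral public key along with the ciphertext.)
    let ephemeral = P256.KeyAgreement.PrivateKey()
    let shared = try ephemeral.sharedSecretFromKeyAgreement(with: bundle.nodePublicKey)
    let key = shared.hkdfDerivedSymmetricKey(using: SHA256.self,
                                             salt: Data(),
                                             sharedInfo: Data("pcc-sketch".utf8),
                                             outputByteCount: 32)
    return try AES.GCM.seal(request, using: key).combined! // non-nil with the default nonce size
}
```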
The case for developer access
Here's why giving developers access to Apple Intelligence—both local and PCC models—wouldn't introduce significant new privacy risks. Consider the current situation. An app with access to your documents can already send them to OpenAI, Anthropic, or its own servers. The privacy risk exists the moment you grant the app access to your data. Routing AI processing through Apple's infrastructure promises to improve privacy. Unlike most third-party services, PCC guarantees stateless processing and provides cryptographic proof that your data won't be retained or used for training.
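To make that baseline concrete, here is what any app with access to your text can already do today; the endpoint is hypothetical and stands in for OpenAI, Anthropic, or the developer's own backend:

```swift
import Foundation

// Once an app can read your text, only policy (and App Store review) stands
// between that text and any backend the developer chooses.
func uploadSelectedText(_ text: String) async throws {
    // Hypothetical third-party endpoint, for illustration only.
    var request = URLRequest(url: URL(string: "https://api.example-llm-vendor.com/v1/summarize")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(["input": text])

    // From the OS's point of view this is ordinary network traffic; the
    // privacy decision was effectively made when the app was granted the data.
    let (_, response) = try await URLSession.shared.data(for: request)
    print("Vendor responded with status:", (response as? HTTPURLResponse)?.statusCode ?? -1)
}
```

Routing the same request through PCC instead would add guarantees, statelessness and attestation among them, that this plain HTTPS call cannot offer.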
What user data Apple Intelligence processes
Apple's legal documentation contains a potentially confusing statement: "To provide a customized experience, Apple Intelligence uses information on your device including across your apps, like your upcoming calendar events and your frequently used apps."
This might suggest Apple’s APIs automatically draw on all user context. But this language only applies to system-level features—like Siri Personal Context—not to what developers can access via APIs. Apple is building different privacy models for different features—narrow access for developer-accessible tools, broader access only for system-level features.
Potential impact of the EU Digital Markets Act
This creates an opening for a regulatory challenge. Under the EU’s Digital Markets Act (DMA), regulators might argue that if Apple’s own features can combine data from across apps, then third-party developers should have access to the same functionality. If that argument wins, Apple may have no choice but to build permission systems that allow users to explicitly authorize third-party AI systems to work with their personal data in a comparable way. I’ve explored these implications elsewhere (e.g., "Apple and EU DMA: a road to leave the EU?"), pointing out how culturally difficult it may be for Apple to abandon the idea that they can treat their own apps and functionalities as more trustworthy.
There is also a risk that the EU DMA will be interpreted in a more radical way, attempting to force Apple to provide access in a manner that undermines the security features of its framework: for example, by requiring Apple to run third-party code within its trusted execution environments, or by prohibiting Apple from adopting reasonable restrictions on API use (e.g., if the API includes models with function/tool-calling capabilities). The risk is not negligible given that, so far, DMA enforcers have shown little understanding of, or care for, the impact of their actions on end users, especially in terms of data security (see, e.g., "DMA workshops and privacy" and "Apple and EU DMA: a road to leave the EU?").
Essential safeguards for safe implementation
Returning to Ben Thompson’s recommendations, he specifically advocated that Apple should:
“Make Apple’s on-device models available to developers as an API without restriction.”
“Make Apple’s cloud inference available to developers as an API at cost.”
I believe this can be done safely, but with caveats:
All AI model data processing must be fully stateless, as in Apple's current design. This ensures user data cannot be used to train new models or retained for any purpose.
Both in-app and system-level data access must be controlled by robust, specific permissions that users can understand and manage.
Any off-device processing must occur in verified trusted execution environments with what Apple calls “verifiable transparency.”
Process as much as feasible on-device, using cloud resources only when computational requirements demand it (see the sketch after this list).
End-to-end traffic analysis remains a vulnerability: the TEE approach may not be sufficient protection in extreme cases, such as for dissidents or journalists in certain countries, but it is likely adequate for most users.
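To illustrate how these caveats could fit together in a developer-facing API, here is a purely hypothetical sketch. None of these types exist in Apple's SDKs; it shows the shape of the contract (explicit permission, on-device-first routing, attested cloud fallback), not an actual framework.

```swift
import Foundation

// Hypothetical sketch of a permission-gated inference API.
enum InferenceTarget {
    case onDevice      // the ~3B-parameter local model
    case privateCloud  // attested, stateless PCC execution
}

// A user-visible, revocable grant scoped to specific data.
struct ModelPermission {
    let scope: String             // e.g. "selected text in this app"
    let allowsCloudFallback: Bool // whether PCC may be used for this scope
}

struct PrivateModelSession {
    let permission: ModelPermission

    func respond(to prompt: String) async throws -> String {
        // Prefer the local model; escalate only when the task needs it
        // and the user has allowed cloud processing for this scope.
        let target: InferenceTarget =
            (estimatedComplexity(of: prompt) > 0.8 && permission.allowsCloudFallback)
            ? .privateCloud
            : .onDevice
        return try await run(prompt, on: target)
    }

    // Hypothetical helpers standing in for the platform's internals.
    private func estimatedComplexity(of prompt: String) -> Double { 0.5 }
    private func run(_ prompt: String, on target: InferenceTarget) async throws -> String { "" }
}
```

Whatever the real API ends up looking like, the important property is that the permission, the routing decision, and the statelessness guarantee are enforced by the platform rather than merely promised by each app.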
Meta's WhatsApp Private Processing
Meta faces a higher burden in convincing the public about privacy: they haven't yet made privacy part of their brand the way Apple has. They do, however, seem to be moving in that direction, given that their WhatsApp Private Processing framework implements technical guarantees remarkably similar to Apple's PCC.
Like Apple, Meta uses hardware-based trusted execution environments, though with an important architectural difference. While Apple employs custom silicon with dedicated Secure Enclaves, Meta relies on AMD's Confidential Virtual Machine (CVM) technology within standard cloud infrastructure. This creates a larger potential attack surface but offers more deployment flexibility.
Both systems promise stateless processing, cryptographic attestation, and third-party verifiability. However, Meta's implementation currently lacks the level of verifiable transparency that Apple provides. While Meta has announced plans to publish Private Processing components and provide third-party access to binary digests, these commitments have not yet been fully delivered.
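One way to see the architectural difference is to ask what a client-side verifier ultimately has to trust in each design. The sketch below is conceptual and deliberately simplified; neither company's real verification pipeline takes this form.

```swift
import Foundation

// Conceptual comparison of the trust roots behind each attestation scheme.
enum TrustRoot {
    case appleSilicon  // Apple-built server SoC with a Secure Enclave: Apple
                       // controls hardware, firmware, and the published image.
    case amdSevSnp     // AMD CPU signs an attestation report for a confidential
                       // VM running on standard cloud infrastructure; the host
                       // stack underneath is untrusted, which is what widens
                       // the potential attack surface.
}

struct AttestationEvidence {
    let root: TrustRoot
    let launchMeasurement: Data  // digest of the booted image or VM
    let reportSignature: Data    // signed via the hardware vendor's key chain
}

func isAcceptable(_ evidence: AttestationEvidence,
                  publishedMeasurements: Set<Data>) -> Bool {
    // Both designs converge on the same client-side question: is the measured
    // software publicly known, and is the report signed by a recognized
    // hardware root? (Signature-chain validation omitted in this sketch.)
    publishedMeasurements.contains(evidence.launchMeasurement)
}
```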
Why verifiable transparency matters
I want to stress the importance of verifiable transparency, the area where Meta likely still has the most room to improve. One key reason it matters is the risk of government-mandated backdoors that would effectively break TEE-based privacy schemes. When code is open source, or at least available to a sufficiently large group of independent, adequately funded researchers, and when devices can cryptographically verify that servers run the published code, this risk is significantly mitigated.
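Here is the mechanism in miniature. If server images are recorded in an append-only transparency log, a client or researcher can check an inclusion proof that the digest it was shown really is in the public record, so a silently swapped (for example, backdoored) build fails verification. The sketch below uses a simplified Merkle inclusion check; the log layout is illustrative, not any vendor's actual scheme.

```swift
import Foundation
import CryptoKit

// Hash two child nodes into their parent, as in a binary Merkle tree.
func parentHash(_ left: Data, _ right: Data) -> Data {
    Data(SHA256.hash(data: left + right))
}

/// Recomputes the log's root from a leaf digest and its sibling path.
/// Each proof step supplies the sibling hash and whether it sits on the left.
func verifyInclusion(leaf: Data,
                     proof: [(sibling: Data, siblingIsLeft: Bool)],
                     expectedRoot: Data) -> Bool {
    var node = Data(SHA256.hash(data: leaf))
    for step in proof {
        node = step.siblingIsLeft
            ? parentHash(step.sibling, node)
            : parentHash(node, step.sibling)
    }
    return node == expectedRoot
}
```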
Even if you think that law-abiding users have nothing to fear from democratic governments having this kind of access, you should still be concerned about the track record of such "government access" schemes being exploited by unintended parties, including other governments and criminals (e.g., the 2024 U.S. telecom hack). Cybersecurity experts have long argued that a vulnerability does not distinguish between "good guys" and "bad guys." In other words, the idea that a government agency can promise that "nobody but us" will ever have access to a backdoor is a dangerous illusion.
Conclusions
Thompson's proposal to open Apple Intelligence to developers is both technically feasible and potentially beneficial for the AI ecosystem. The privacy risks are manageable because they're largely identical to risks users already accept when granting apps data access. In fact, routing AI processing through privacy-preserving infrastructure like Apple's PCC could improve the status quo.
The convergence of Apple's and Meta's approaches suggests an emerging industry standard for privacy-preserving AI. Both companies recognize that long-term success requires solving the privacy problem, not just for regulatory compliance but for maintaining user trust as AI agents become more capable and data-hungry.