<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[How EU Law Influences Tech]]></title><description><![CDATA[Commentary on how EU law affects technology businesses, with a focus on privacy, AI, and big tech. Bridging the gap between technology and law for social media, online search, advertising, and financial markets.]]></description><link>https://eutechreg.com</link><image><url>https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png</url><title>How EU Law Influences Tech</title><link>https://eutechreg.com</link></image><generator>Substack</generator><lastBuildDate>Sat, 16 May 2026 08:20:45 GMT</lastBuildDate><atom:link href="https://eutechreg.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Mikołaj Barczentewicz]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[barczentewicz@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[barczentewicz@substack.com]]></itunes:email><itunes:name><![CDATA[Mikołaj Barczentewicz]]></itunes:name></itunes:owner><itunes:author><![CDATA[Mikołaj Barczentewicz]]></itunes:author><googleplay:owner><![CDATA[barczentewicz@substack.com]]></googleplay:owner><googleplay:email><![CDATA[barczentewicz@substack.com]]></googleplay:email><googleplay:author><![CDATA[Mikołaj Barczentewicz]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Meta AI's Incognito Chat: a new bar for confidential AI]]></title><description><![CDATA[Meta has launched Incognito Chat with Meta AI, a way to talk to an AI assistant on WhatsApp and the Meta AI app under privacy guarantees that, as Mark Zuckerberg puts it, are &#8220;similar to how end-to-end encryption means no one can read your conversations, even Meta or WhatsApp.&#8221; This is server-side AI inference running inside a hardware-isolated environment &#8212; a &#8220;trusted execution environment&#8221; or TEE &#8212; that Meta itself cannot inspect, paired with stateless processing, anonymous routing, and (still partial) verifiable transparency.]]></description><link>https://eutechreg.com/p/meta-ais-incognito-chat-a-new-bar</link><guid isPermaLink="false">https://eutechreg.com/p/meta-ais-incognito-chat-a-new-bar</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Thu, 14 May 2026 12:19:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Meta has <a href="https://about.fb.com/news/2026/05/incognito-chat-whatsapp-meta-ai/">launched</a> Incognito Chat with Meta AI, a way to talk to an AI assistant on WhatsApp and the Meta AI app under privacy guarantees that, as Mark Zuckerberg <a href="https://www.threads.com/@zuck/post/DYSAIo_FL77">puts it</a>, are &#8220;similar to how end-to-end encryption means no one can read your conversations, even Meta or WhatsApp.&#8221; This is server-side AI inference running inside a 
hardware-isolated environment &#8212; a &#8220;trusted execution environment&#8221; or TEE &#8212; that Meta itself cannot inspect, paired with stateless processing, anonymous routing, and (still partial) verifiable transparency.</p><p>I also want to flag two design trade-offs that Meta has, in my view, made the right calls on: personalised advertising becomes harder, and so does any &#8220;trust-and-safety&#8221; approach that relies on human review of flagged conversations. Both will get pushback &#8212; inside the company and from outside &#8212; and both are worth defending now.</p><p>I have argued previously (&#8220;<a href="https://eutechreg.com/p/trustworthy-privacy-for-ai-apples">Trustworthy privacy for AI: Apple&#8217;s and Meta&#8217;s TEEs</a>&#8221;) that hardware-backed confidential computing is the most credible path to combining frontier-grade AI capabilities with reasonable data safety, and (&#8220;<a href="https://eutechreg.com/p/ai-privilege-and-chatgpt-encryption">AI privilege and ChatGPT encryption</a>&#8221;) that voluntary technical self-restraint by AI providers strengthens the case for legal protections of AI conversations. Incognito Chat is the first major consumer AI product that, on the face of it, satisfies both halves of that argument. So my main reaction is: more of this, please &#8212; and from every provider.</p><h2><strong>What does &#8220;Incognito&#8221; actually mean here?</strong></h2><p>The privacy claim is meaningful only if you understand, at least in outline, how this works. End-to-end encryption secures a message between two endpoints. The computer serving the AI model is itself an endpoint that has to see the plaintext to answer. Without further engineering, &#8220;AI in WhatsApp&#8221; simply means a copy of your message landing on Meta&#8217;s servers as cleartext. The trick is to make the AI endpoint into something that Meta cannot read either.</p><p>That trick is the TEE. In the implementation Meta <a href="https://engineering.fb.com/2025/04/29/security/whatsapp-private-processing-ai-tools/">describes</a> in its Private Processing white paper, the AI workload runs inside an AMD confidential virtual machine &#8212; paired with NVIDIA confidential computing GPUs &#8212; whose memory is hardware-encrypted and whose contents cannot be inspected by the host operating system, the hypervisor, system administrators, or Meta employees. The important guarantees include that:</p><ul><li><p>The code running on the confidential virtual machine is <strong>attested</strong>: before your device sends anything, it cryptographically verifies that the server is running the specific, publicly logged software image Meta says it is running (and nothing else). A minimal sketch of this gate follows the list.</p></li><li><p>Processing is <strong>stateless</strong>: the request is decrypted, used, and discarded; no key-value cache or persistent log of your messages remains.</p></li><li><p>Routing is <strong>oblivious</strong>: requests pass through a third-party relay (Fastly) that hides the user&#8217;s IP from Meta, with anonymous credentials preventing Meta from associating a request with an identified user.</p></li><li><p>Critical artifacts are committed to a third-party append-only transparency log hosted by Cloudflare, so that <strong>altered code cannot be silently deployed</strong> without leaving a public trace.</p></li></ul>
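<p>To make the first and last of those guarantees concrete, here is a deliberately simplified sketch of the device-side gate. Every name in it is a hypothetical stand-in for illustration (the real protocol involves signed hardware attestation reports and certificate chains, not a bare hash lookup), but the ordering is the point: verification happens before any plaintext leaves the device.</p><pre><code class="language-python">import hashlib
from dataclasses import dataclass

# Toy model of the attestation gate: the client refuses to send plaintext
# to any server whose software measurement is absent from the public log.
# All names here are illustrative stand-ins, not Meta's actual interfaces.

TRANSPARENCY_LOG = set()  # stand-in for the third-party append-only log

def publish_release(image: bytes) -&gt; str:
    """The provider logs a hash ('measurement') of every VM image it deploys."""
    measurement = hashlib.sha256(image).hexdigest()
    TRANSPARENCY_LOG.add(measurement)  # append-only in the real system
    return measurement

@dataclass
class AttestationQuote:
    measurement: str  # in reality, signed by the CPU's hardware root of trust

def send_prompt(quote: AttestationQuote, prompt: str) -&gt; str:
    """Device-side check that runs before any plaintext leaves the phone."""
    if quote.measurement not in TRANSPARENCY_LOG:
        raise PermissionError("server runs code that was never publicly logged")
    return "prompt encrypted to the attested TEE: " + prompt

logged = publish_release(b"audited-vm-image-v1")
print(send_prompt(AttestationQuote(logged), "what does this lab result mean?"))

tampered = hashlib.sha256(b"silently-altered-image").hexdigest()
# send_prompt(AttestationQuote(tampered), "...") would raise PermissionError.
</code></pre><p>In other words, &#8220;trust us&#8221; is replaced by &#8220;check us&#8221;: a server image that never appeared in the public log simply cannot receive a prompt.</p>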
<p>This is not perfect. NVLink connections between GPUs are not yet encrypted on the deployed hardware, GPU RAM is not encrypted (Meta acknowledges both), AMD does not warrant against certain side-channel attacks within its threat model, and a number of the verifiable-transparency commitments &#8212; particularly third-party researcher access to auditable artifacts &#8212; remain partially unfulfilled. I flagged the transparency gap in my <a href="https://eutechreg.com/p/trustworthy-privacy-for-ai-apples">earlier piece</a> and I am still flagging it. But the architecture is right, and the residual risks are the right residual risks: vulnerabilities that require sophisticated, scaled attacks rather than a casual subpoena or an insider with the right database credentials.</p><p>For the user, the practical upshot is that no one, not even Meta, can read these conversations. For now, Incognito Chat is text-only, conversations are not saved by default, and messages disappear unless the user chooses otherwise.</p><h2><strong>Why this matters &#8212; especially for health</strong></h2><p>Users ask ChatGPT, Claude, and Meta AI about their lab results, their medications, their mental health, their relationships, their finances, and their legal exposures. There are several reasons to take this seriously rather than wave it off as the user&#8217;s responsibility.</p><p>First, this is precisely the data that is least appropriate to leak &#8212; in part because it is sensitive in itself, in part because under the GDPR most of it would qualify as &#8220;special categories&#8221; of personal data under Article 9, and in part because a substantial breach (or a sufficiently aggressive data-preservation order &#8212; recall the <a href="https://cases.justia.com/federal/district-courts/new-york/nysdce/1%3A2023cv11195/612697/551/0.pdf">one issued</a> against OpenAI in the New York Times litigation) could expose intimate, decontextualised fragments of conversations between an AI system and millions of people.</p><p>Second, the costs of self-restraint are real: a person who does not feel safe asking an AI assistant about a worrying symptom may simply not ask, which forecloses a category of benefit &#8212; better access to information, better access to expression &#8212; that European fundamental-rights doctrine treats as having significant weight.</p><p>This connects to the argument I made about &#8220;<a href="https://eutechreg.com/p/ai-privilege-and-chatgpt-encryption">AI privilege</a>.&#8221; Sam Altman has been talking about extending professional-secrecy-style protections to conversations with AI, and I am sympathetic to that direction. But the strongest version of the argument depends on a credible technical floor: providers cannot just promise discretion; they have to show that they could not betray it even if they wanted to (or were compelled to). Meta&#8217;s Incognito Chat is that demonstration in the WhatsApp setting. In other words, it establishes a technical baseline that legal protections can sensibly map onto &#8212; and, crucially, that regulators and courts can reasonably expect a privacy-by-design AI deployment to approach rather than dismiss as exotic. If we want courts to treat AI conversations as something closer to private reflection than ordinary cloud data, the industry has to make those conversations look, technically, more like private reflection. This is how.</p><h2><strong>The trade-offs Meta is accepting</strong></h2><p>I&#8217;d like to flag two trade-offs, which are real costs Meta is bearing. 
I expect they will produce pushback (internal and external). But I also think that both are reasons to give Meta credit.</p><h3><strong>Personalized advertising becomes harder</strong></h3><p>Inference inside a TEE with stateless processing and no exfiltration channel means Meta cannot read what users are asking Meta AI, and therefore cannot train ad-targeting models on those conversations or feed them into real-time auctions. For a company whose business model rests substantially on behavioral advertising, this is not a small concession &#8212; and it is precisely the sort of concession that, in my experience, tends to generate internal friction long before it ships.</p><p>That said, this is not a permanent ban on monetization, and it should not be. One can imagine designs that route ads in but tightly constrain what comes out &#8212; for example, ad creatives delivered into the TEE and matched against conversation content there, with only deterministic, pre-determined, aggregated, and otherwise constrained signals (such as bucketed impression counts) flowing back out. The constraints would have to be enforced by the attested code, published in the transparency log, and ideally subject to differential-privacy-style noise so that the exit channel cannot be turned into a covert exfiltration vector.</p><p>There are non-trivial design questions here (what to do about click-through, for instance, when the click leaves the TEE) and there is plenty of room for ad-tech advocates to overreach. But the right starting point is the one Meta has chosen: no targeting from conversation content unless and until a privacy-preserving mechanism is built, attested, and made transparently auditable. That is the inverse of at least one kind of RTB-style architecture, in which user data is broadcast first and rules about it are written second.</p>
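<p>For illustration only, here is a minimal sketch of what such a constrained exit channel could look like: the attested code releases nothing but an impression count that has been noised and coarsened, useful for billing yet useless for reconstructing any individual conversation. The parameters and names are my assumptions, not anything Meta has announced.</p><pre><code class="language-python">import math
import random

# Toy version of a differential-privacy-style exit channel: the attested code
# releases only noisy, bucketed aggregates. Parameters are illustrative.

EPSILON = 1.0      # privacy budget: smaller means noisier releases
BUCKET = 100       # counts are coarsened to multiples of this before release

def laplace_noise(scale: float) -&gt; float:
    # Standard inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def released_impression_count(true_count: int) -&gt; int:
    """The single value permitted out of the TEE for one ad campaign."""
    noisy = true_count + laplace_noise(1.0 / EPSILON)  # sensitivity 1 per user
    return max(0, BUCKET * round(noisy / BUCKET))

# Inside the TEE: ads matched against conversations, a counter accumulates.
# Outside the TEE: observers only ever see something like this.
print(released_impression_count(1234))  # typically 1200, never the raw 1234
</code></pre><p>The crucial property is that the release rule is pre-determined and baked into the attested image, so the exit channel cannot quietly widen into per-conversation exfiltration.</p>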
<h3><strong>&#8220;Safety&#8221; becomes harder &#8212; and this is the more interesting trade-off</strong></h3><p>The framing from Meta&#8217;s representative in the Reuters <a href="https://www.reuters.com/legal/litigation/meta-launch-incognito-chat-private-ai-conversations-whatsapp-2026-05-13/">briefing</a> is notable for what it does not say. He told reporters that &#8220;the AI will also have built-in safety guardrails, refusing to answer problematic questions or steering conversations in different directions.&#8221; Read carefully, that describes <em>model-level</em> guardrails &#8212; refusals trained into, or prompted into, the model running in the TEE &#8212; and nothing else. There is no mention of human review, no mention of flagging &#8220;problematic&#8221; content for downstream moderation, no mention of an out-of-band classifier reporting on user prompts. The whitepaper&#8217;s &#8220;non-observability&#8221; principle is consistent with this: classification can happen inside the TEE, but even the size of the traffic between any classifier and the orchestrator must not allow outside inference about its output.</p><p>To appreciate why this is significant, it helps to compare with what the major web-UI chatbots typically do. In a standard ChatGPT, Claude, or Gemini deployment, a user&#8217;s prompt is processed in cleartext on the provider&#8217;s servers, with an array of trust-and-safety stages running alongside the model:</p><ul><li><p>classifier models that score prompts and outputs for categories like self-harm, CSAM, weapons, or fraud;</p></li><li><p>rule-based denials and topic blocklists;</p></li><li><p>refusal behaviors trained in via RLHF;</p></li><li><p>and &#8212; critically &#8212; escalation paths that flag a subset of conversations for human review and, in narrow cases, referral to authorities.</p></li></ul><p>OpenAI&#8217;s recent statement about its <a href="https://eutechreg.com/p/ai-privilege-and-chatgpt-encryption">planned client-side encryption</a> made this stack explicit: &#8220;fully automated systems to detect safety issues,&#8221; with human review reserved for &#8220;serious misuse and critical risks &#8212; such as threats to someone&#8217;s life, plans to harm others, or cybersecurity threats.&#8221; That escalation step is the part Meta does not appear to be reproducing inside Incognito Chat. Doing so would require an exit channel from the TEE &#8212; and the no-access promise would no longer hold.</p><p>I think Meta&#8217;s call here is the right one. Yes, model-level guardrails are imperfect, and motivated bad actors can sometimes work around them &#8212; just as they can work around classifier stacks. But the <em>honesty</em> of the model-level approach is its strength: a refusal that happens inside the TEE creates nothing that leaves the TEE. The moment a flagging mechanism is bolted onto this architecture, the user faces a pair of questions they cannot answer for themselves: which kinds of content trigger a flag, and what happens to that content once it crosses the boundary of the confidential environment? Whatever is exfiltrated to a human-review queue is, by definition, data that Meta now holds in cleartext outside the TEE, which puts it squarely back inside the threat model the TEE was built to neutralise: subpoenas, broad preservation orders, other forms of compelled disclosure, breach by an external attacker, and misuse by an insider. The &#8220;no one &#8212; not even Meta &#8212; can read your conversations&#8221; promise quietly becomes &#8220;no one can read most of your conversations, but we can&#8217;t even tell you in advance which ones we will read.&#8221;</p><p>If at some point this becomes politically untenable &#8212; if a regulator concludes that purely model-level safety is inadequate for a consumer product at WhatsApp&#8217;s scale &#8212; there is a fallback worth considering, but it is a worse design. One could run a second LLM safety classifier <em>inside</em> the TEE that watches conversations and, in narrow cases, exfiltrates flagged messages for human review. Whether such a system is attestable in a meaningful sense is itself debatable. Even if it is, two problems remain.</p><p>First, motivated actors will game the classifier as readily as they game model-level guardrails &#8212; that is what red teams routinely demonstrate.</p><p>Second, using an LLM as the gate makes the user-uncertainty problem essentially unsolvable. There is no stable rule book that even Meta could publish, because the classifier is a probabilistic system whose outputs shift with phrasing, surrounding context, and silent model updates. 
The honest user who came to the system to think out loud about a worrying symptom or a medication interaction has no way to know &#8212; ex ante &#8212; whether their words will be classified as &#8220;self-harm-adjacent&#8221; or &#8220;drug-related&#8221; and routed for review, and therefore no way to know whether those words will end up in a queue subject to all the discovery and breach risks just described. The privacy promise, in practice, collapses into &#8220;we promise to be reasonable about what we exfiltrate,&#8221; which is the promise that Incognito Chat is, refreshingly, trying not to make.</p><h2><strong>What I hope happens next</strong></h2><p>The first thing I hope is that Meta closes the <a href="https://eutechreg.com/p/trustworthy-privacy-for-ai-apples">verifiable transparency</a> gap quickly. Publishing the binaries corresponding to entries in the Cloudflare transparency log, broadening eligibility for third-party researcher access to auditable artifacts, and shortening coordinated-disclosure timelines where it can are the obvious next steps. The whitepaper says all the right things; the product needs to ship the rest of them.</p><p>The second is that the other major AI providers &#8212; OpenAI, Anthropic, Google, and the cloud platforms hosting them &#8212; converge on this design, with appropriate variations. OpenAI&#8217;s <a href="https://eutechreg.com/p/ai-privilege-and-chatgpt-encryption">client-side encryption announcement</a> is a promising signal but, as I argued at the time, the implementation details (in particular, whether safety detection runs on-device or in a TEE, and how transparently) will determine whether it is a comparable confidentiality story or a softer one. Anthropic, which has built much of its public identity around AI safety, has an obvious incentive to demonstrate that &#8220;safety&#8221; and &#8220;the provider can read every conversation&#8221; are separable propositions.</p><p>The third is that EU policymakers treat Incognito Chat as a benchmark rather than a problem. There is a real risk that a TEE-based architecture will be received, in some quarters, as inconvenient. I have written before (&#8220;<a href="https://eutechreg.com/p/apple-and-eu-dma-a-road-to-leave">Apple and EU DMA: a road to leave the EU?</a>&#8221;, &#8220;<a href="https://eutechreg.com/p/dma-workshops-and-privacy">DMA workshops and privacy</a>&#8221;) about the tendency to treat strong privacy architectures as obstacles rather than goals. The right reading is the opposite: this is a fundamental-rights-respecting design that other providers should be encouraged &#8212; and, where appropriate, required &#8212; to match.</p><p>The fourth, and most speculative, is that we develop a legal vocabulary for AI privilege that takes technical architecture seriously. Privilege in the legal sense has always rested on a combination of professional duty and practical constraint &#8212; the lawyer cannot be made to disclose what the lawyer never knew. Confidential computing offers a similar combination in the AI setting: a duty (the provider&#8217;s privacy commitment), a constraint (the provider cannot, in fact, read the data), and an external audit trail (verifiable transparency). 
If we want this technology deployed at scale for the most sensitive use cases &#8212; health, mental health, legal, financial &#8212; then the law arguably should reward providers who adopt this architecture, not penalise them for the resulting blind spot.</p>]]></content:encoded></item><item><title><![CDATA[The Digital Omnibus Won’t Work Without Enforcement Reform]]></title><description><![CDATA[Now that I&#8217;m emerging from a slower period of paternity leave, you can expect more regular programming.]]></description><link>https://eutechreg.com/p/the-digital-omnibus-wont-work-without</link><guid isPermaLink="false">https://eutechreg.com/p/the-digital-omnibus-wont-work-without</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Fri, 13 Mar 2026 13:43:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!aCLA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96952022-046a-429a-a472-423b22344469_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Now that I&#8217;m emerging from a slower period of paternity leave, you can expect more regular programming. Today, I&#8217;m sharing an edited excerpt from my latest comments on the planned GDPR reform.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aCLA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96952022-046a-429a-a472-423b22344469_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aCLA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96952022-046a-429a-a472-423b22344469_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!aCLA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96952022-046a-429a-a472-423b22344469_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!aCLA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96952022-046a-429a-a472-423b22344469_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!aCLA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96952022-046a-429a-a472-423b22344469_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aCLA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96952022-046a-429a-a472-423b22344469_1408x768.jpeg" width="1408" height="768" 
class="sizing-normal" alt=""></picture></div></a><figcaption class="image-caption">by Gemini / Nano Banana 2</figcaption></figure></div><p>The European Commission&#8217;s <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2025:837:FIN">Digital Omnibus proposal</a> is, on its face, the most significant attempt to fix the GDPR since the regulation entered into force. 
It would clarify the definition of personal data and confirm that legitimate interest can support AI training (while <a href="https://eutechreg.com/p/europe-is-not-so-back-why-cookie">failing</a> to do anything about the cookie consent mess, alas). Today, ICLE submitted <a href="https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14855-Simplification-digital-package-and-omnibus/F33380698_en">formal comments</a>, in which I argue that the co-legislators should adopt most of the changes. If you are interested in my responses to the EDPB/EDPS Joint Opinion or the NOYB report, you will find them in the full comments.</p><p>Here, I want to focus on the argument that most commentators - and the Commission itself - are avoiding. The changes proposed in the Digital Omnibus are welcome, but they are far from sufficient. The same authorities that produced the current dysfunction will interpret the new rules. And unless the enforcement architecture changes, textual clarification will deliver only a fraction of the promised benefit. The Commission <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2025:837:FIN">acknowledges</a> that &#8220;more consistent and harmonised interpretation and enforcement across Member States&#8221; is needed. However, it provides no mechanism to achieve it.</p><h2><strong>Privacy myopia</strong></h2><p>Data protection authorities across the EU are, structurally, single-issue campaigners. This is not a criticism of the individuals who staff them - they are, for the most part, competent professionals who believe (correctly) that data protection matters. The problem lies in institutional design. DPAs have immense power to impose fines of hundreds of millions of euros. But they face <a href="https://eutechreg.com/p/a-serious-target-for-improving-eu">no robustly enforced obligation</a> to consider the consequences of their actions for interests other than privacy.</p><p>The result is what I have called &#8220;<a href="https://eutechreg.com/p/the-edpbs-ai-opinion-shows-the-need">privacy myopia</a>&#8221;: a regulatory culture in which maximising data protection is treated as the sole objective, and any consideration of competing interests - innovation, economic security, freedom of expression, public health - is treated as someone else&#8217;s problem. No countervailing institutional voice represents those interests within the enforcement process. Competition authorities occasionally intervene, but their mandate is narrower than the full range of what is at stake. Political authorities, whose responsibilities span economic security and public welfare, are excluded because their involvement would supposedly violate the independence requirement of Article 52 GDPR.</p><p>This has consequences: consider, for instance, the EDPB&#8217;s <a href="https://www.edpb.europa.eu/our-work-tools/our-documents/opinion-board-art-64/opinion-282024-certain-data-protection-aspects_en">Opinion 28/2024 on AI models</a>. It provides a lengthy list of compliance measures while offering no guarantee that following them will satisfy enforcement authorities. It grants regulators near-limitless discretion while providing minimal practical guidance to those trying to innovate responsibly. Businesses that invest in GDPR compliance for AI training have no assurance their efforts will be judged sufficient in a year or two. 
As I <a href="https://eutechreg.com/p/the-edpbs-ai-opinion-shows-the-need">argued at the time</a>, the uncertainty is as chilling as an outright prohibition - and for any entrepreneur aware of this risk (to be fair, many aren&#8217;t) and with a choice of jurisdiction, the rational response is to avoid the EU entirely.</p><h2><strong>The EDPB lacks accountability</strong></h2><p>In theory, the EDPB issues non-binding guidelines and opinions. In practice, these instruments function as the law in action: they shape DPA enforcement priorities, drive market behaviour, and are cited by national courts as authoritative. Controllers that deviate from EDPB guidance face the realistic prospect of enforcement action, regardless of whether the guidance accurately reflects the GDPR&#8217;s text or the CJEU&#8217;s case law.</p><p>The EDPB&#8217;s <a href="https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-22023-technical-scope-art-53-eprivacy-directive_en">guidelines on Article 5(3) of the ePrivacy Directive</a> illustrate the mechanism. The guidelines dramatically expanded what requires prior consent - treating generic URL parameters and standard browser transmissions as &#8220;access to information stored on the terminal equipment.&#8221; This reversed the narrower interpretation previously adopted by several national authorities (including the German <a href="https://www.datenschutzkonferenz-online.de/media/oh/20211220_oh_telemedien.pdf">Datenschutzkonferenz</a>). The expansion occurred through guidelines, not legislation. It was not subject to impact assessment, parliamentary oversight, or robust stakeholder consultation of the kind required for legislative or even delegated acts (the consultation that took place served as a window-dressing exercise). The result was a reshaping of the regulatory environment for every website operator in the EU - through a nominally non-binding instrument that no one can effectively challenge in court. (I explored the consequences in &#8220;<a href="https://truthonthemarket.com/2026/01/26/why-europe-cant-kill-the-cookie-banner/">Why Europe Can&#8217;t Kill the Cookie Banner</a>.&#8221;)</p><p>The EDPB/EDPS <a href="https://www.edpb.europa.eu/our-work-tools/our-documents/edpbedps-joint-opinion/edpbedps-joint-opinion-22026-proposal_en">Joint Opinion 2/2026</a> on the Digital Omnibus reveals the same institutional logic. The EDPB argues that its own forthcoming guidance on pseudonymisation is a &#8220;better vehicle&#8221; than the Commission&#8217;s proposed Article 41a implementing acts. The pattern is consistent: wherever the Commission proposes implementing acts or legislative clarification, the EDPB argues it should have exclusive or primary control instead. In other words, the enforcer wants to keep setting the scope of the law it enforces.</p><h2><strong>Can the courts fix this?</strong></h2><p>Three recent sets of proceedings involving Meta and WhatsApp expose why the answer is: not yet, and not without help.</p><p>First, in <em>WhatsApp Ireland v EDPB</em> (Case C-97/23 P), the CJEU Grand Chamber <a href="https://curia.europa.eu/juris/liste.jsf?num=C-97/23">held</a> in February 2026 that EDPB binding decisions under Article 65 GDPR are &#8220;acts open to challenge&#8221; under Article 263 TFEU. This was a significant ruling. But the case was filed in 2021 and the admissibility ruling took five years. The merits remain pending. 
In this case, the EDPB had directed the Irish DPC to increase its proposed fine against WhatsApp from approximately EUR 30 million to EUR 225 million - a 7.5-fold increase. Even with access to the courts, meaningful review takes five or more years.</p><p>Second, in <em>Meta v EDPB</em> (Case T-319/24), the General Court <a href="https://curia.europa.eu/juris/liste.jsf?num=T-319/24">held</a> that the EDPB&#8217;s Opinion 8/2024 on &#8220;consent or pay&#8221; models &#8220;does not produce binding legal effects vis-&#224;-vis third parties&#8221; and was therefore not an act open to challenge. The Court acknowledged the opinion used mandatory language (&#8220;should,&#8221; &#8220;should not&#8221;) but characterised this as &#8220;calling for an in-depth consideration&#8221; rather than imposing obligations. Meta&#8217;s appeal is pending. Even if Meta prevails, the ruling would address only Article 64(2) opinions - not the broader category of EDPB guidelines and recommendations that constitute the bulk of its quasi-legislative output. (I discussed the implications in &#8220;<a href="https://eutechreg.com/p/meta-v-edpb-and-what-really-needs">Meta v EDPB and What Really Needs to Change in the GDPR</a>.&#8221;)</p><p>Third, in <em>DPC v EDPB</em> (Joined Cases T-578/21, T-579/21 and T-580/21), the General Court upheld the EDPB&#8217;s power to direct national authorities to broaden their investigations into areas the national authority had not originally examined - applying no meaningful scrutiny of whether the EDPB&#8217;s substantive positions were balanced.</p><p>The three cases together expose the accountability gap. The EDPB&#8217;s most influential outputs - guidelines, opinions, recommendations - are the instruments most shielded from judicial review. Its binding decisions are now challengeable, but only after years of litigation and without guidance on what standard of substantive scrutiny courts should apply. The courts have treated the EDPB as a neutral authority that considers the general public interest. Whether that characterisation is accurate is, at best, contestable.</p><h2><strong>What the Joint Opinion does not say</strong></h2><p>The EDPB/EDPS Joint Opinion on the Digital Omnibus runs to 47 pages. It contains no reflection on whether the enforcement architecture is adequate. It mentions DPA resource constraints only in the context of requesting more resources. It never acknowledges the structural problems that lead to inconsistent enforcement. It never discusses the EDPB&#8217;s own accountability or the quasi-legislative nature of its opinions and guidelines. It never engages with the argument that separating investigation from adjudication could improve the quality and legitimacy of enforcement decisions.</p><p>This silence is telling. A regulatory body invited to comment on a comprehensive reform of its own legal framework devotes no space to asking whether its own institutional design might contribute to the problems the reform is designed to solve. I am not suggesting the EDPB is acting in bad faith. The more plausible inference is that the institutional incentives run against self-examination. As <a href="https://digidata.substack.com/p/the-entrenchment-move">Mark Leiser puts it</a>, the Joint Opinion is &#8220;a regulatory counteroffensive&#8221; that &#8220;deploys the language of rights not primarily to protect individuals, but to reassert institutional centrality, supervisory reach, and interpretive authority.&#8221; The observation is sharp. 
But the incentive structure produces the behaviour, whether or not any individual regulator intends it.</p><h2><strong>Is proportionate enforcement possible?</strong></h2><p>The answer is yes - we have examples.</p><p>The CNIL has demonstrated that pragmatic enforcement is achievable within the existing framework. Its <a href="https://www.cnil.fr/fr/ia-et-rgpd-la-cnil-publie-ses-nouvelles-recommandations-pour-accompagner-une-innovation-responsable">AI guidance</a> acknowledges that GDPR compliance for AI training is possible and provides workable compliance pathways - a marked contrast with the EDPB&#8217;s strategically ambiguous Opinion 28/2024. But one national authority cannot shift EU-wide practice. The CNIL&#8217;s approach faces opposition from more absolutist privacy officials and activists, and there is no prospect that privacy enforcers will voluntarily adopt it across the EU. The structural incentives run the other way: DPAs that interpret the law broadly expand their own regulatory reach and receive institutional validation from the EDPB&#8217;s consistency mechanism. A DPA that interprets proportionately risks being overridden by the EDPB - as the Irish DPC&#8217;s experience demonstrates.</p><p>The UK provides an even more instructive example. The Information Commissioner&#8217;s Office <a href="https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2025/07/ico-opens-door-to-privacy-first-advertising-models-with-proposed-new-enforcement-approach/">announced in July 2025</a> a risk-based enforcement approach for online advertising, identifying six advertising capability categories where low-risk activities could proceed without consent. The ICO is doing something EU DPAs seem structurally incapable of (at least when acting as the EDPB): exercising enforcement discretion to achieve proportionate outcomes, distinguishing between extensive behavioural profiling (which requires consent) and low-risk activities like frequency capping and ad delivery (which may not).</p><p>Moreover, this is not just a single enforcement initiative. Section 91 of the <a href="https://www.legislation.gov.uk/ukpga/2025/27/contents">Data (Use and Access) Act 2025</a> rewrites the Information Commissioner&#8217;s statutory duties. It inserts a principal objective requiring the Commissioner to secure an appropriate level of protection for personal data &#8220;having regard to the interests of data subjects, controllers and others and matters of general public interest.&#8221; It imposes explicit duties to consider innovation, competition, crime prevention, and public security. Section 120C requires a strategy framed accordingly, and section 120D requires consultation with other regulators, backed by annual reporting to Parliament.</p><p>This shows that data-protection enforcement need not be structured as a one-dimensional privacy mandate. A legislature can preserve regulatory independence while imposing explicit duties of balance, strategy, consultation, and public accountability.</p><h2><strong>What should the EU do?</strong></h2><p>I have <a href="https://eutechreg.com/p/a-serious-target-for-improving-eu">proposed</a> a five-part structural reform. It does not question the importance of data protection. 
It targets the institutional design that converts competent professionals into single-issue enforcers.</p><p><strong>First, separate investigation from adjudication.</strong> DPAs would retain their investigative powers - the resources, technical expertise, and local knowledge they have built over years. But for consequential cross-border cases, they would present their findings and enforcement recommendations to an independent decision-making body, rather than serving as both prosecutor and judge.</p><p><strong>Second, create an independent multidisciplinary EU tribunal.</strong> The tribunal would issue binding decisions, including fines, for consequential cross-border enforcement actions. Critically, it would not be composed exclusively of data protection specialists. A diverse panel - including economists, generalist judges, and experts in the regulated sectors - would weigh data protection against other fundamental rights and Europe&#8217;s broader public-interest objectives.</p><p><strong>Third, mandate formal balancing of all interests.</strong> The tribunal&#8217;s legal framework would require it to articulate, explicitly, in each decision, how it balances data protection against other fundamental rights and the economic realities of the regulated activity. The GDPR already requires this balancing (Article 6(1)(f), Recital 4). The current enforcement architecture provides no institutional mechanism to ensure it occurs.</p><p><strong>Fourth, appoint advocates general for non-data-protection interests.</strong> Drawing on the CJEU model, the tribunal would include judicial officers specifically tasked with highlighting and defending non-data-protection interests - innovation, freedom of expression, economic security, public health - that are affected by enforcement decisions but have no institutional voice under the current system.</p><p><strong>Fifth, require tribunal review of EDPB outputs.</strong> The tribunal would review EDPB guidelines, opinions, and recommendations before they become final. If the EDPB produces guidance that fails the proportionality standard - as in its Article 5(3) ePrivacy guidelines, which effectively require website operators to obtain consent from would-be fraudsters before deploying anti-fraud measures - the tribunal could refuse to approve it.</p><p>This does not require Treaty change. Article 257 TFEU already empowers the European Parliament and Council to establish &#8220;specialised courts attached to the General Court.&#8221; <a href="https://constitutionofinnovation.eu/">Garicano, Holmstr&#246;m, and Petit&#8217;s </a><em><a href="https://constitutionofinnovation.eu/">Constitution of Innovation</a></em> proposes using exactly this Treaty basis to create specialised commercial courts for internal market enforcement - regional tribunals with fast-track procedures, English-language proceedings, and EU-wide injunctive powers. Their courts are designed for mutual-recognition and trade-barrier disputes, but the institutional architecture is directly transferable. A specialised EU regulatory tribunal under Article 257 TFEU could house both the commercial jurisdiction they envision and the data protection enforcement function I propose. 
It would also embed data protection enforcement within a broader framework of EU regulatory adjudication, ensuring that privacy is weighed alongside the other interests that the EU&#8217;s regulatory instruments affect.</p><p>The Unified Patent Court already provides a working precedent for specialised EU judicial bodies in complex technical and economic matters. The real issue is whether the political will exists to apply the same institutional logic to data protection - where the current enforcement architecture serves the institutional interests of the bodies that would need to cede authority.</p><h2><strong>Who else sees the problem?</strong></h2><p>I am not alone. <a href="https://zenodo.org/records/18598414">Luca Bolognini and Enrico Capparelli</a> propose that supervisory authorities should be required to carry out a prior analytical assessment of the effect that their decisions would have on innovation, competitiveness, and other fundamental rights - and that this assessment should be notified to the addressee with the provisional decision. They also propose mandatory participatory prior-consultation procedures for EDPB guidelines, with public availability of consultation results.</p><p><a href="https://www.linkedin.com/pulse/making-gdpr-realistic-authorities-only-want-part-peter-craddock-pktxe/">Peter Craddock observes</a> that DPA opposition to the entity-relative definition of personal data cannot be treated as neutral, because when information ceases to be perceived as personal data, it falls outside the authorities&#8217; remit: &#8220;one must be careful not to see their position as neutral.&#8221;</p><h2><strong>What happens if nothing changes?</strong></h2><p>If the enforcement architecture remains unchanged, the substantive reforms in the Digital Omnibus will be interpreted by the same authorities, using the same maximalist lens, that produced the problems the reforms are designed to solve. The entity-relative definition of personal data will be narrowed through guidance and enforcement practice. The legitimate interest framework for AI training will be hedged with conditions that recreate the uncertainty it was meant to resolve. The ePrivacy exemptions will be read as narrowly as the exemptions they replace.</p><p>The Commission should know this. Its own Explanatory Memorandum acknowledges the enforcement consistency problem. But the Digital Omnibus contains no mechanism to address it. The co-legislators have an opportunity - arguably their best opportunity in a decade - to fix not just the text but the institutions that apply it. 
The question is whether they will take it, or whether they will settle for a reform that looks good on paper and changes less than it promises.</p>]]></content:encoded></item><item><title><![CDATA[Comments on the interplay of the EU DMA and the GDPR]]></title><description><![CDATA[The European Commission and the European Data Protection Board (EDPB) are preparing guidelines on the interplay of the Digital Markets Act with the EU&#8217;s main privacy law - the General Data Protection Regulation (the GDPR).]]></description><link>https://eutechreg.com/p/comments-on-the-interplay-of-the</link><guid isPermaLink="false">https://eutechreg.com/p/comments-on-the-interplay-of-the</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Fri, 05 Dec 2025 13:20:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The European Commission and the European Data Protection Board (EDPB) are preparing <a href="https://digital-markets-act.ec.europa.eu/document/download/8ba0913f-2778-4a6d-9c58-10f8c7ead009_en?filename=Joint_COM-EDPB_GLS_interplay_DMA_GDPR_for_public_consultation.pdf">guidelines</a> on the interplay of the Digital Markets Act with the EU&#8217;s main privacy law - the General Data Protection Regulation (the GDPR). This is a topic I covered here extensively, most recently in <a href="https://eutechreg.com/p/apples-new-dma-criticism-eu-only">Apple&#8217;s new DMA criticism, EU-only feature delays, and what&#8217;s next</a>, <a href="https://eutechreg.com/p/debating-the-eu-pay-or-consent-decision">Debating the EU &#8220;pay or consent&#8221; decision against Meta</a>, and <a href="https://eutechreg.com/p/eu-dma-workshops-google-amazon-apple">EU DMA workshops: Google, Amazon, Apple, Meta, and Microsoft</a>. While preparing the guidelines, the authorities asked for comments and I prepared a response to that call, which the International Center for Law &amp; Economics <a href="https://laweconcenter.org/resources/icle-comments-on-the-interplay-between-dma-and-gdpr/">submitted</a> yesterday. Below, you&#8217;ll find the contents of my comments.</p><div><hr></div><h1>Executive Summary</h1><p>The approach adopted in the Draft Joint Guidelines will lead to a new proliferation of &#8220;consent popups&#8221; at a time when the Commission is seeking to address similar past failures of EU law regarding &#8220;cookie banners&#8221;. Users of digital services regulated under the DMA are likely to see further declines in the quality of their experience, and this could further damage the reputation of EU law. The DMA&#8217;s goals do not necessitate this approach, and we encourage both the EDPB and the Commission&#8217;s DMA Team to address the issue of consent in a more user-friendly way.</p><p>The guidelines disproportionately contradict the aims of EU data protection and cybersecurity laws by requiring that gatekeepers do not warn users about clear and realistic risks that attend data portability (especially real-time and continuous portability) and by preventing gatekeepers from excluding known bad actors as recipients of user data. 
Effectively, the Draft Joint Guidelines would even mandate that gatekeepers support and enable campaigns by criminal or foreign-enemy-state actors to collect EU personal data, so long as those actors inform the gatekeeper&#8212;even falsely&#8212;that they&#8217;re EU organisations and don&#8217;t plan to transfer personal data outside the European Economic Area (EEA).</p><p>More generally, the guidelines depart from previously stated Commission policy, supported directly by the DMA, that gatekeepers have a duty to ensure DMA-implementation measures comply with other laws, including those on data protection and cybersecurity. Instead, the Draft Joint Guidelines propose to prohibit gatekeepers from implementing the most effective measures to achieve such compliance.</p><p>The guidelines interpret the DMA and the GDPR inconsistently. Without appropriate justification, some serious data-protection risks (<em>e.g.</em>, those related to data portability) are not addressed as robustly as others (<em>e.g.</em>, the sharing of search data).</p><p>The Draft Joint Guidelines also omit interoperability mandates under Article 6(7) DMA, incorrectly suggesting that such obligations pose no serious privacy issues. In fact, such issues include questions regarding the interplay between the DMA and the GDPR, as well as the ePrivacy Directive.</p><h2><strong>I. PROLIFERATION OF CONSENT REQUESTS (ARTICLES 5(2), 6(10) DMA)</strong></h2><p>The European Commission last month <a href="https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-regulation-proposal">proposed legislation</a> to address the problem of &#8220;cookie consent fatigue&#8221; by changing rules that are &#8220;outdated and inadequate for contemporary privacy and data needs&#8221;. It is therefore surprising to see the EDPB and the Commission&#8217;s DMA team pursue <a href="https://digital-markets-act.ec.europa.eu/document/download/8ba0913f-2778-4a6d-9c58-10f8c7ead009_en?filename=Joint_COM-EDPB_GLS_interplay_DMA_GDPR_for_public_consultation.pdf">guidelines</a> that would vastly increase the proliferation of consent requests with which EU users of digital services are bombarded.</p><p>The excessive reliance on ever-more consent requests, without any investigation of ways to reduce the need for them in accordance with the law, flies in the face of the current research on data protection. Even vocal critics of the ways that digital services use personal data <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4333743">recognize</a> that the consent-centric approach has been a failure.</p><p>We should expect more from public authorities than the unthinking application of the consent paradigm, while failing to address the actual problems of data protection and security.</p><p>The Draft Joint Guidelines contribute to the disproportionate proliferation of consent requests in the following ways:</p><ul><li><p>Consent requests for the use of third-party data for online advertising, combining and cross-using personal data (Article 5(2) DMA) are meant to be further divided into various &#8220;purposes&#8221; like &#8220;personalisation of content, personalisation of advertisements, and service development&#8221; (at [31]). 
While the guidelines direct gatekeepers to combine DMA and GDPR consent requests, avoiding the need to further double many consent requests, this doesn&#8217;t address the increased consent fatigue that users will experience (at [42]).</p></li><li><p>The Draft Joint Guidelines&#8217; section on &#8220;ensuring user-friendly choices and consent designs&#8221; is a stark example of what has been dubbed the &#8220;nerd harder&#8221; approach (at [40]-[45]), without providing much actionable guidance on consent design. In its preoccupation with presenting choices in a &#8220;neutral&#8221; manner, the section fails to propose even the contours of a positive guiding example (<em>e.g.</em>, must &#8220;yes&#8221; and &#8220;no&#8221; buttons have the same colour and contrast even if it&#8217;s a basic user-experience practice to give them different colours, thus enabling rather than stifling choice?). Moreover, the section presents vague goals regarding user comprehension as a benchmark, merely paraphrasing the legal texts (at [45]). In doing so, it fails to adopt a realistic assessment of users&#8217; level of interest and familiarity with legal frameworks like the DMA and GDPR.</p></li><li><p>The guidelines are unclear regarding separate consent requests for processing special category data under Article 5(2). They may be read as imposing a new disproportionate requirement to seek such consent, together with Article 5(2) DMA consent, even where there is no requirement to ask for it under the GDPR (<em>e.g.</em>, when the gatekeeper has already obtained consent) (at [37]).</p></li><li><p>The Draft Joint Guidelines acknowledge that users could be overwhelmed by a large number of requests from businesses for user-data portability under Article 6(10) DMA (&#8220;in particular where requests are repetitive or disruptive of the end user&#8217;s experience&#8221;) (at [172]). Even with regard to obvious cases of abuse, however, the guidelines fail to state clearly that gatekeepers can protect users from &#8220;repetitive or disruptive&#8221; requests. Instead, they offer a vague statement about &#8220;layered and intuitive consent interfaces&#8221;, which does not address whether gatekeepers can refuse to convey some consent requests.</p></li></ul><h2><strong>II. ANTI-DATA PROTECTION AND SECURITY REQUIREMENTS (ARTICLES 6(9) AND 6(10) DMA)</strong></h2><p>The DMA&#8217;s provisions on data portability, in Article 6(9) and 6(10), indisputably create data-protection and security risks. This is true not only for end users of DMA-regulated services, but also for any other person whose data happens to be in the user&#8217;s service account or&#8212;on the guidelines&#8217; broad interpretation&#8212;even on the user&#8217;s device. These include <em>new </em>risks that users would neither expect nor understand. Consider the following example.</p><p>A user, currently logged into a major social-networking service (the Gatekeeper), encounters a viral third-party application promising a &#8220;Digital Nostalgia&#8221; service. The application claims to use artificial intelligence (AI) to scan the user&#8217;s history and generate a sentimental video montage of their friendships. To initiate this process, the user is forwarded from the third-party website to the Gatekeeper&#8217;s authorization screen.</p><p>Because the user is already authenticated on the platform, they do not need to enter credentials; they are immediately presented with a standard consent popup. 
This popup bears the social network&#8217;s familiar and trusted branding. It lists the permissions the third-party app requires, which include access to historical photos&#8212;including images shared only with close friends or romantic partners&#8212;private message archives, and contact lists.</p><p>Crucially, the user sees only the Gatekeeper&#8217;s familiar interface and instinctively applies the trust they have in that established platform to the unknown third party. The Gatekeeper&#8217;s consent screen effectively launders the legitimacy of an unvetted requestor. Conditioned by years of &#8220;consent fatigue&#8221; from cookie banners and terms-of-service updates, the user performs a cursory scan of the permission list&#8212;indistinguishable from dozens of benign requests they have approved before&#8212;and clicks &#8220;Allow&#8221; to access the promised feature.</p><p>In that single instant, the nefarious application triggers the data-portability interfaces mandated by the DMA. This grants the third-party actor immediate access to a massive trove of sensitive historical data, allowing them to exfiltrate years of private correspondence and intimate media without further interaction from the user. The application need not even deliver the promised nostalgia video, although this may be helpful to attract more victims. The Gatekeeper&#8217;s trusted UI effectively cloaks the risk of the unknown third-party requestor, creating a &#8220;Trojan Horse&#8221; effect where the ease of portability is weaponised against the user&#8217;s privacy.</p><p>Under the requirement for &#8220;continuous and real-time&#8221; access, the threat extends beyond the user&#8217;s historical data. By granting this permission, the user has inadvertently authorised a persistent data stream&#8212;effectively a &#8220;live wire&#8221; connected to their account. Even after the user closes the &#8220;Digital Nostalgia&#8221; tab and forgets the application exists&#8212;not having received any ongoing notification that data sharing continues (at least for months)&#8212;the third-party service retains a valid access token or webhook subscription. This allows the nefarious actor to instantaneously receive copies of every new private message sent, every new photo uploaded, and every location tag created. The user is under active surveillance, with their ongoing digital life being mirrored to a malicious server.</p><p>This asymmetry compounds the harm: a single click grants access, but revocation&#8212;if the user even remembers the application exists&#8212;can only break the link for future data. Any data already exfiltrated is, of course, already in the attacker&#8217;s possession. The harvested information opens multiple vectors for exploitation: intimate images may be leveraged for sextortion; private messages mined for credentials, security questions, or blackmail material; and complete social graphs may be sold to data brokers, stalkers, or abusive ex-partners. All this stems from a single, momentary click on a consent form that took less time to approve than to read.</p>
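<p>To see why revocation arrives too late, consider a toy model of the grant that a single click creates. The shape is purely illustrative (no real gatekeeper API is being quoted): the token outlives the user&#8217;s attention, and cancelling it stops only future mirroring.</p><pre><code class="language-python">from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative model of the "live wire": one consent click mints a long-lived
# grant, and every new item the user creates is pushed to the third party.

@dataclass
class PortabilityGrant:
    third_party: str
    scopes: list            # e.g. ["messages", "photos", "contacts"]
    webhook_url: str        # destination for the continuous, real-time stream
    expires: datetime
    revoked: bool = False

def deliver(url: str, event: dict) -&gt; None:
    print("POST", url, event["type"])   # stand-in for the real-time push

def on_new_user_event(grant: PortabilityGrant, event: dict) -&gt; None:
    """Runs for every new message, photo, or location tag the user creates."""
    if grant.revoked or datetime.now(timezone.utc) &gt;= grant.expires:
        return  # revocation stops future mirroring only;
                # everything already exfiltrated stays exfiltrated
    deliver(grant.webhook_url, event)

grant = PortabilityGrant(
    third_party="Digital Nostalgia",
    scopes=["messages", "photos", "contacts"],
    webhook_url="https://nostalgia.example/ingest",
    expires=datetime.now(timezone.utc) + timedelta(days=180),
)
on_new_user_event(grant, {"type": "private_message"})   # mirrored silently
</code></pre><p>Nothing in this loop ever prompts the user again, which is precisely the asymmetry just described.</p>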
<p>But much less information was available to Facebook apps like &#8220;This Is Your Digital Life&#8221; in 2014-15 than would be under the DMA; importantly, there was no risk to private photos or messages. This is what is <em>new </em>about the risk: at a single click, an entire account and its most private contents could be sent to a third party (not to mention setting up ongoing surveillance).</p><p>Neither the European Commission nor any other EU body is currently conducting a public-information campaign to address this issue. This is despite the obvious fact that users of at least some DMA-regulated services have grown accustomed, with good reason, to trusting that the manufacturer/service provider would not allow users to harm themselves so easily. But this is now <a href="https://truthonthemarket.com/2025/02/04/the-dmas-challenge-to-user-safety-lessons-from-apples-porn-app-controversy">possible under the DMA</a>.</p><p>Indeed, the Draft Joint Guidelines are generally dismissive of DMA-created risks to data protection and data security, which is surprising given the participation of the EDPB. Instead, they are preoccupied with interpreting the DMA&#8217;s explicit security and privacy provisions as narrowly as possible. The example detailed above illustrates two key problems with the guidelines:</p><ol><li><p>They require gatekeepers <em>not to warn</em> users about clear and realistic risks of data portability; and</p></li><li><p>They prevent gatekeepers from excluding known bad actors as recipients of user data.</p></li></ol><p>The guidelines state that &#8220;portability options and the wording used to describe them should be provided in a neutral and objective manner and should not nudge end users towards a specific choice&#8221; (at [126]). In other words, they require gatekeepers to act as if every choice to port data&#8212;even a choice that would expose an entire service account (with private messages, etc.) or all data on a user device&#8212;is equally neutral and safe, which is obviously not the case.</p><p>Moreover, the guidelines explicitly forbid gatekeepers from restricting third parties&#8217; receipt of user data&#8212;even when, <em>e.g.</em>, the third party has previously been fined or convicted for GDPR violations or is known to engage in practices like &#8220;consent phishing&#8221;. The Draft Joint Guidelines state (at [131]):</p><blockquote><p>Gatekeepers should also not gather information pertaining to the authorised third party&#8217;s compliance measures under the GDPR, including potential administrative or judicial proceedings the third party has undergone in relation to compliance with the GDPR, or whether the third party has suffered breaches of data security in the past.</p></blockquote><p>Gatekeepers would only be permitted to request &#8220;third parties&#8217; identity details and information on whether, and to what extent, the data to be ported&#8221; will involve transfers outside the EEA (to a country without an adequacy decision) (at [130]). The guidelines do not mention the possibility of gatekeepers being allowed to verify whether such minimum information is even provided truthfully. In effect, gatekeepers are left entirely disarmed, unable to protect users even where they know the user is about to become a victim of consent phishing or another kind of attack.</p><p>The shortsightedness of this approach is especially staggering given the current geopolitical situation and well-known cyber operations of unfriendly states against the EU. 
The answer that the Draft Joint Guidelines provide to the issue of potential violations by third-party DMA beneficiaries is that such third parties are subject to the GDPR and potential GDPR enforcement (see the next section). Obviously, data-protection authorities will deter no criminal or state actor, and it is trivially easy to concoct front businesses putatively based in the EU, especially when gatekeepers are barred from verifying any information.</p><h2><strong>III. PROHIBITING GATEKEEPERS FROM IMPLEMENTING THE MOST EFFECTIVE MEASURES FOR DATA PROTECTION AND CYBERSECURITY</strong></h2><p>As noted above, according to the Draft Joint Guidelines, any privacy or security violations by DMA-beneficiary third parties can only be policed by actors other than the gatekeepers. The key problem with this approach is that gatekeepers are, in at least some cases, best placed to perform this oversight. Indeed, prevention is likely to be the only intervention that matters, especially in cases like data exfiltration through &#8220;consent phishing&#8221;.</p><p>Effective approaches are needed, because data protection and security are exceedingly difficult to police in practice&#8212;both in terms of deterrence and in terms of enforcement against violators. It simply cannot be credibly maintained that the mere fact that the DMA&#8217;s beneficiaries are theoretically subject to the GDPR <em>solves</em> the issue of GDPR compliance. This is especially true for non-EU actors who aim to benefit from the DMA, either through legitimate but economically insignificant EU establishments, or by simply lying to gatekeepers about their identities.</p><p>In adopting this approach, the guidelines depart from previously stated Commission policy, supported directly by the DMA, that gatekeepers have a duty to ensure DMA implementation measures comply with other laws, including those on data protection and cybersecurity. For instance, Commissioner Margrethe Vestager <a href="https://multimedia.europarl.europa.eu/en/webstreaming/event_20240403-0900-COMMITTEE-IMCO?start=240403071120&amp;end=240403094524&amp;audio=en">stated</a> in the European Parliament that:</p><blockquote><p>It is for the companies to decide how will they present their services, their operating system, how will they make them safe for you and comply with the DMA.</p></blockquote><p>This aligns with Article 8(1) DMA, which states:</p><blockquote><p>The gatekeeper shall ensure that the implementation of those measures complies with applicable law, in particular Regulation (EU) 2016/679, Directive 2002/58/EC, legislation on cyber security, consumer protection, product safety, as well as with the accessibility requirements.</p></blockquote><p>The guidelines take note of Article 8(1) (at [88]) but add, in the context of Article 6(4):</p><blockquote><p>At the same time, gatekeepers should not seek to instrumentalize their compliance with other applicable laws with a view to make their compliance with Article 6(4) DMA less effective. When selecting among several possible appropriate measures to comply with obligations stemming from other applicable laws, gatekeepers should select the measures that less adversely affect the pursuit of the objectives of Article 6(4) DMA, provided that they remain effective in ensuring compliance with those other applicable laws.</p></blockquote><p>The legal interpretation implicit in this paragraph is highly questionable, as it appears to assume that DMA obligations take precedence over obligations from other EU legislation. 
One should ask whether the Draft Joint Guidelines apply the same logic to their interpretation of the DMA. In other words, when there are several ways to interpret the DMA and pursue its goals, do the guidelines choose those that less adversely affect pursuing other EU law objectives, including minimising restrictions of the rights protected by the EU Charter? Little in the Draft Joint Guidelines suggests that such balancing has been conducted, or that the guidelines are defensible from this perspective.</p><p>Setting this aside, Article 8(1) is entirely absent from the guidelines&#8217; section on the data-portability mandate under Article 6(10) DMA. The section on Article 6(9) references Article 8(1) in its discussion on authorised third parties, mentioning the GDPR&#8217;s principle of integrity and confidentiality (Article 5(1)(f) GDPR) and the requirement to ensure the security of personal-data processing (Article 32 GDPR) (at [129]). Notably, the principle of data minimization (Article 5(1)(c) GDPR) is not mentioned.</p><p>In that discussion, the Draft Joint Guidelines make the already-quoted point that gatekeepers should neither gather information on GDPR compliance by third parties wanting to benefit from the DMA, nor act on any information about their noncompliance. That point is followed by this sentence (at [131]):</p><blockquote><p>Such information would not necessarily be an indicator of future compliance or the security of an application or related processing, and as such is not strictly necessary to comply with the gatekeeper&#8217;s own responsibility under the GDPR.</p></blockquote><p>This reveals two important assumptions implicitly made in the guidelines, detailed in the next two subsections.</p><h3>A. Gatekeepers Only Responsible for Own GDPR Compliance</h3><p>The first assumption is that Article 8(1) DMA concerns only &#8220;the gatekeeper&#8217;s own responsibility under the GDPR&#8221;. This is not the only possible reading of Article 8(1), and the guidelines provide no argument for why this reading should be adopted. On an alternative reading, which aligns with previous Commission policy as stated by Commissioner Vestager, gatekeepers are meant to ensure that DMA implementation measures comply with other laws, full stop.</p><p>In other words, it is the gatekeepers&#8217; responsibility to ensure&#8212;to the extent they can&#8212;that implementation measures do not lead to noncompliance with those other laws. It is true that gatekeepers cannot fully control, for instance, what third parties do with ported user data. But this does not mean that they should not at least implement technical and organisational measures (<em>e.g.</em>, contractual measures) to safeguard compliance to the feasible extent. They are, after all, best placed to adopt such safeguards.</p><p>If gatekeepers do not do so, then DMA-created risks like the &#8220;consent phishing&#8221; example from the previous section will meet with no effective prevention. In other words, gatekeepers are the lowest-cost avoiders of harm. It is hard to see how preventing them from performing a necessary service that they are best placed to provide could be attributed to a rational choice by the EU legislature.</p><p>What is most puzzling is that the Draft Joint Guidelines do recognise this rather obvious point, but only in the section on search-engine data sharing under Article 6(11) DMA. 
There, the guidelines explicitly contemplate an implementing act that (at [189]):</p><blockquote><p>&#8230;may include an obligation for gatekeepers to contractually impose measures, where appropriate, on eligible third-party undertakings as a condition to access the data. Contractual requirements may, among others, limit onward sharing of the data received by eligible third-party undertakings. The implementing act may also impose specific monitoring obligations on the gatekeeper and also specify appropriate measures that gatekeepers must take in case of an established violation by third-party undertakings providing online search engines of requirements set out in the contract. Such measures may include requiring the gatekeeper to notify the competent data protection supervisory authority in case of an alleged breach of the GDPR, cease sharing data with the third-party undertaking providing an online search engine, and providing it with the contractual right to order the third party to delete any data it received from the gatekeeper.</p></blockquote><p>We will return in the next section to this inconsistency in how the guidelines treat Article 6(11) and other provisions.</p><h3>B. GDPR Applies Only to the Extent &#8216;Strictly Necessary&#8217;</h3><p>The other revealed assumption in the Draft Joint Guidelines is that gatekeepers&#8217; responsibility under the GDPR should be interpreted narrowly, placing the DMA&#8217;s goals above the GDPR&#8217;s goals. The sentence from the guidelines cited above restricts the GDPR&#8217;s application in a way that has no basis in the DMA: Gatekeepers are only allowed to comply with the GDPR to the extent this is &#8220;strictly necessary&#8221; (at [131]). And not just &#8220;strictly necessary&#8221; as this is typically understood, but in some qualified, extreme sense.</p><p>In fact, it is hard to see the limiting principle of this restriction, given how facially unreasonable the provided example is. In effect, the guidelines reason that, because it <em>is not always the case</em> (&#8220;would not necessarily be an indicator&#8221;) that third parties&#8217; previous GDPR violations are predictive of future GDPR violations, <em>there can never be </em>a situation where prior violations indicate a strong likelihood of future violations. More precisely, there can never be a situation where knowledge of prior violations would permit a gatekeeper to better comply with the GDPR by refusing to transfer user personal data (likely including special categories of data) to such third parties. This is extremely surprising, given the EDPB&#8217;s co-authorship of the guidelines.</p><p>As far as the DMA is concerned, it states that it applies &#8220;without prejudice&#8221; to the GDPR (see, e.g., recital 12). 
The DMA contains no general qualification that laws with respect to which it remains &#8220;without prejudice&#8221; apply only to the &#8220;strictly necessary&#8221; extent, much less in the extreme version suggested by the cited sentence.</p><p><em>To read the final two sections on the inconsistent interpretation of the DMA and the GDPR and on the omission of interoperability mandates (Article 6(7) DMA) <a href="https://laweconcenter.org/resources/icle-comments-on-the-interplay-between-dma-and-gdpr/">go to the ICLE website</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[Podcast with Eric Seufert on the EU’s GDPR/AI reform package]]></title><description><![CDATA[I joined Eric Seufert again on the Mobile Dev Memo podcast to talk about the European Commission&#8217;s newly unveiled &#8220;digital omnibus&#8221; package and what it means for the GDPR, AI, cookies, and the broader competitiveness debate in Europe.]]></description><link>https://eutechreg.com/p/podcast-with-eric-seufert-on-the</link><guid isPermaLink="false">https://eutechreg.com/p/podcast-with-eric-seufert-on-the</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Wed, 26 Nov 2025 16:54:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I joined <a href="https://mobiledevmemo.com/podcast-can-the-gdpr-be-reformed-with-mikolaj-barczentewicz/">Eric Seufert</a> again on the Mobile Dev Memo podcast to talk about the European Commission&#8217;s newly unveiled &#8220;digital omnibus&#8221; package and what it means for the GDPR, AI, cookies, and the broader competitiveness debate in Europe. Our conversation ranged from Mario Draghi&#8217;s competitiveness report to the politics of GDPR reform, browser-level consent signals, and the divergence between the EU and the UK on Pay&#8209;or&#8209;OK models.</p><p>You can listen to the full episode on <a href="https://mobiledevmemo.com/podcast-can-the-gdpr-be-reformed-with-mikolaj-barczentewicz/">Mobile Dev Memo</a>, as well as <a href="https://podcasts.apple.com/us/podcast/mobile-dev-memo-podcast/id1423753783">Apple Podcasts</a> and <a href="https://open.spotify.com/show/0RCsc3OW1p9jH5YC0qQTp0?si=jgSk98sURta7aWBpjkTpXg">Spotify</a>. Below, you will find my extended notes on that conversation.</p><h2><strong>1. From the GDPR (2018) to the Draghi report (2024)</strong></h2><p>To provide some background for the current reform plans, Eric asked me to draw a line from the GDPR&#8217;s entry into force in 2018 to <a href="https://commission.europa.eu/topics/competitiveness/draghi-report_en?utm_source=chatgpt.com">Mario Draghi&#8217;s 2024 report on EU competitiveness</a>, which criticises the GDPR&#8217;s impact on growth.</p><p>My framing:</p><ul><li><p>The GDPR was not a complete break with the past. 
EU data protection has a long pre&#8209;GDPR history.</p></li><li><p>What <em>did</em> change with the GDPR was its economic weight:</p><ul><li><p>Fines up to 4% of global annual turnover.</p></li><li><p>Application to nearly all digital services processing &#8220;personal data.&#8221;</p></li><li><p>Very broad and expansive interpretations by regulators.</p></li></ul></li></ul><p>A relatively small &#8220;true believer&#8221; community of privacy professionals suddenly found itself with a powerful, economy-shaping legal instrument. Many in that community have a strong, almost absolutist view of data protection, as well as little regard for trade&#8209;offs with anything outside privacy (including innovation or competitiveness).</p><p>That small community developed into a compliance industry&#8212;consultants, auditors, trainers&#8212;whose business model depends on maximising perceived GDPR risk.</p><p>Draghi&#8217;s report picks up one visible part of this:</p><ul><li><p>Gold&#8209;plating and fragmentation: member states adding extra requirements on top of the GDPR.</p></li><li><p>Compliance and red&#8209;tape costs: especially damaging for SMEs that cannot amortise costs the way large tech firms can.</p></li></ul><p>But in our discussion I stressed what Draghi&#8217;s accounting&#8209;style analysis cannot easily quantify: the opportunity cost.</p><ul><li><p>Products never launched because founders are told &#8220;the GDPR makes this too risky.&#8221;</p></li><li><p>SMEs that never expand into other member states because of fear of divergent national rules.</p></li><li><p>Internal data projects killed pre&#8209;emptively by over&#8209;cautious legal advice.</p></li></ul><p>That is the backdrop for the Commission&#8217;s omnibus effort: there is finally political pressure to &#8220;do something&#8221; about Europe&#8217;s economic malaise, and digital regulation is impossible to ignore in that context.</p><h2><strong>2. What the &#8220;digital omnibus&#8221; is trying to do</strong></h2><p>At a high level, the &#8220;digital omnibus&#8221; package is one chapter in a broader legislative response to Draghi and to mounting political anxiety over Europe&#8217;s weak growth and weak tech sector. 
I already covered this in <a href="https://eutechreg.com/p/europe-is-not-so-back-why-cookie">Europe is not &#8220;so back:&#8221; why cookie banners are here to stay (despite the reform) and the hard route not taken</a> and <a href="https://eutechreg.com/p/leaked-gdpr-reform-some-good-ideas">Leaked GDPR reform: some good ideas but what about enforcement?</a>.</p><p>On data protection specifically, I highlighted three main elements of the proposal:</p><ol><li><p>AI &amp; the GDPR: clarifying when AI training and use can proceed without consent.</p></li><li><p>Cookies &amp; signals: shifting some cookie/ePrivacy issues into the GDPR, and introducing mandatory browser/OS&#8209;level &#8220;signals&#8221; for user choice.</p></li><li><p>Scope of the GDPR: codifying a <em>narrower</em> understanding of what counts as &#8220;personal data.&#8221;</p></li></ol><p>I cautioned that the proposed legislation, even if adopted, may not achieve much:</p><ul><li><p>The text of the omnibus is less important than the enforcers (national data protection authorities and their EU-level body, the EDPB).</p></li><li><p>Nothing in the proposal reforms the enforcement machinery.</p></li><li><p>Even the best textual clarifications can be neutralised later by restrictive interpretation.</p></li></ul><p>That is why <a href="https://eutechreg.com/p/europe-is-not-so-back-why-cookie">I am sceptical</a> that these changes, as currently drafted, will materially reduce the compliance burden or uncertainty.</p><h2><strong>3. AI under the GDPR: is real&#8209;world AI development illegal?</strong></h2><p>We then moved to AI, where there is now a clear standoff between two camps:</p><ol><li><p>The pragmatic camp (e.g. France&#8217;s data protection authority, the CNIL):</p><ul><li><p>real&#8209;world AI training and deployment (Mistral, OpenAI, etc.) <em>can</em> be made compatible with the GDPR;</p></li><li><p>guidance that at least gestures towards workable compliance paths.</p></li></ul></li><li><p>The prohibitionist camp (Max Schrems/NOYB and like&#8209;minded data protection officials):</p><ul><li><p>real&#8209;world state&#8209;of&#8209;the&#8209;art LLMs are essentially incompatible with the GDPR;</p></li><li><p>use ambiguity strategically: never quite declaring AI &#8220;illegal&#8221; (in theory), but keeping maximum enforcement discretion to be able to deem any actual AI illegal.</p></li></ul></li></ol><h3><em>The activist legal theory against LLMs</em></h3><p>I summarised two key lines of argument used by the prohibitionists:</p><ol><li><p>Reasonable expectations</p><ul><li><p>Europeans did not reasonably expect publicly available data (e.g. website content) to be used for LLM training.</p></li><li><p>Hence: training requires prior consent from every affected individual.</p></li><li><p>Practical result: impossible to obtain &#8594; therefore, large&#8209;scale LLM training is illegal.</p></li></ul></li><li><p>Sensitive data (Article 9 GDPR)</p><ul><li><p>LLM training corpora inevitably contain &#8220;special category&#8221; data: health, politics, religion, etc. (at least on the current expansive reading of this concept).</p></li><li><p>Even if inclusion is incidental and unintentional, activists argue this <em>triggers Article 9</em>, which would likely demand explicit consent from each individual.</p></li><li><p>Again, impossible in practice &#8594; LLMs are unlawful.</p></li></ul></li></ol><p>I think this is not the best reading of the GDPR. 
But it is a very real reading embraced by powerful actors (including some data protection authorities), and <a href="https://eutechreg.com/p/the-edpbs-ai-opinion-shows-the-need">the EDPB&#8217;s opinion on AI</a> was drafted to preserve that enforcement stance.</p><h3><em>What the digital omnibus does for AI</em></h3><p>The Commission&#8217;s <a href="https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-regulation-proposal?utm_source=chatgpt.com">digital omnibus</a> proposal tries, modestly, to push back:</p><ul><li><p>It would clarify in the GDPR that consent is <em>not</em> required for certain AI training and use cases.</p></li><li><p>It is aimed at undermining the &#8220;everything needs prior consent in practice&#8221; theories pushed by NOYB and others.</p></li></ul><p>I see this as a good move in principle, but again:</p><ul><li><p>It will be interpreted and applied by the same authorities behind the EDPB&#8217;s strategically vague AI opinion.</p></li><li><p>They can still declare specific AI practices &#8220;non&#8209;compliant&#8221; using other GDPR hooks (e.g. their reading of &#8220;reasonable expectations&#8221;).</p></li></ul><p>Eric asked whether the anti&#8209;AI crowd is just staking out an extreme negotiating position, or whether they genuinely want LLMs effectively banned in Europe. I answered that many of them are sincere:</p><ul><li><p>They doubt AI&#8217;s value.</p></li><li><p>They place little weight on long&#8209;run economic consequences.</p></li><li><p>They&#8217;re effectively promoting a de&#8209;growth, anti&#8209;tech vision of Europe.</p></li></ul><h2><strong>4. Cookies, ePrivacy, and why the banners may not go away</strong></h2><p>The other big &#8220;user&#8209;facing&#8221; topic we discussed was the Commission&#8217;s attempt to reduce cookie consent banners.</p><h3><em>The starting point: EDPB on Article 5(3) ePrivacy</em></h3><p>We reminded listeners that:</p><ul><li><p>Article 5(3) of the ePrivacy Directive (the &#8220;cookie law&#8221;) applies to any information stored in or accessed from a user&#8217;s terminal equipment, whether or not it is personal data.</p></li><li><p><a href="https://eutechreg.com/p/consent-for-everything-edpb-guidelines">The EDPB issued guidance</a> that dramatically expands what needs prior consent, including:</p><ul><li><p>Email tracking pixels,</p></li><li><p>IP&#8209;based tracking,</p></li><li><p>Even UTM parameters in URLs.</p></li></ul></li></ul><p>If that interpretation is accepted, then in practice almost all meaningful analytics and ad&#8209;related communication between browser/app and server becomes subject to prior consent.</p><h3><em>What the omnibus proposes</em></h3><p>The Commission&#8217;s idea is to:</p><ol><li><p>Split the regime:</p><ul><li><p><strong>Non&#8209;personal data</strong> remains fully under ePrivacy Article 5(3) &#8594; status quo (consent by default) &#8594; no change for cookie banners.</p></li><li><p><strong>Personal&#8209;data&#8209;related operations</strong> are shifted into the GDPR, where:</p><ul><li><p>consent is still the default, <em>but</em></p></li><li><p>new exemptions would be added.</p></li></ul></li></ul></li><li><p>The two new GDPR exemptions are roughly:</p><ul><li><p>Aggregated audience measurement by the controller for its own use.</p></li><li><p>Security of the service or the terminal equipment.</p></li></ul></li></ol><p>On paper, that sounds like a step forward.</p>
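<p>To see how narrow the first exemption is, consider a sketch of the kind of strictly aggregated, first&#8209;party counting that might plausibly fit it. This is my illustrative reading, not an official test:</p><pre><code>from collections import Counter

# Sketch of "aggregated audience measurement by the controller for its
# own use" (an illustrative reading only): first-party counting that
# keeps no user-level records at all.
page_views = ["/home", "/pricing", "/home", "/blog/post-1", "/home"]

daily_stats = Counter(page_views)   # aggregate counts, nothing per-user
print(daily_stats.most_common(2))   # [('/home', 3), ('/pricing', 1)]

# What would likely NOT qualify, on the restrictive reading I expect:
#  - per-user event streams (user_id -> [events, ...]) kept for analysis
#  - a third-party analytics provider processing the data for its own
#    cross-site purposes
#  - anything that enables singling out individual visitors</code></pre>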
<p>In practice I see three problems:</p><ul><li><p><strong>Non&#8209;personal data is untouched</strong><br>UTMs and a lot of non&#8209;personal device&#8209;origin data stay under the strict ePrivacy regime. If DPAs continue to treat things like UTMs as requiring consent, banners and other consent requests remain unavoidable in many flows.</p></li><li><p><strong>The analytics exemption is very narrow</strong></p><ul><li><p>Only for <strong>aggregated</strong> information.</p></li><li><p>Only when generated by the <strong>controller for its own use</strong>.</p></li><li><p>Real&#8209;world analytics stacks often involve third&#8209;party providers, cross&#8209;site analysis, and user&#8209;level tracking. Those would still trigger consent.</p></li></ul></li><li><p><strong>The security exemption won&#8217;t cover adtech reality</strong></p><ul><li><p>I fully expect DPAs to say that ad fraud detection and many ad&#8209;ops activities do <em>not</em> fall under &#8220;security of the service&#8221; (or under &#8220;aggregated&#8221; analytics).</p></li><li><p>So fraud prevention and billing&#8209;grade measurement would still require consent, unlike in the UK where the ICO is trying to create more realistic exemptions.</p></li></ul></li></ul><p>Net effect: I do not think this package, as drafted, will significantly reduce cookie consent friction. It is too timid, too complex, and leaves the most expansive EDPB interpretations intact.</p><h2><strong>5. Browser / OS&#8209;level signals: a one&#8209;way ratchet?</strong></h2><p>The omnibus also introduces the idea of &#8220;automated and machine&#8209;readable indications of user choice&#8221;&#8212;browser or OS&#8209;level signals that:</p><ul><li><p>Must be implemented by web browsers within a couple of years.</p></li><li><p>Must be respected by online services when:</p><ul><li><p>Consent is required, or</p></li><li><p>Users exercise their right to object (e.g. to direct marketing based on legitimate interests).</p></li></ul></li></ul><p>The legal text is high&#8209;level and foresees a later technical standard. But based on existing GDPR practice, I expect the following pattern:</p><ul><li><p><strong>Negative signals (opt&#8209;out)</strong></p><ul><li><p>Will be interpreted broadly: a single &#8220;no tracking&#8221; browser setting will be treated as a global refusal of:</p><ul><li><p>All consent&#8209;based tracking, and</p></li><li><p>All direct&#8209;marketing processing based on legitimate interests.</p></li></ul></li></ul></li><li><p><strong>Positive signals (opt&#8209;in)</strong></p><ul><li><p>Will likely be declared insufficient for valid consent in virtually all practical cases:</p><ul><li><p>DPAs will say consent must be &#8220;specific&#8221; and &#8220;informed&#8221; at the service level.</p></li><li><p>A generic browser&#8209;level &#8220;yes to tracking&#8221; will be considered too coarse.</p></li></ul></li></ul></li></ul><p>So the browser/OS signal risks becoming a one&#8209;way ratchet:</p><ul><li><p>Easy to decline everything, everywhere, with one toggle.</p></li><li><p>Hard or impossible to use that same mechanism to give valid consent.</p></li></ul>
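<p>Honouring the negative half of this ratchet server&#8209;side would be technically trivial, which is one reason to expect it to bite immediately. Here is a minimal sketch, assuming a GPC&#8209;style opt&#8209;out header; the omnibus leaves the actual signal to a future technical standard, so the header name is a placeholder:</p><pre><code>from flask import Flask, request

app = Flask(__name__)


def has_service_level_consent() -> bool:
    # Placeholder: look up a specific, informed consent recorded for
    # this user and this service (e.g. in a consent-management log).
    return False


def tracking_allowed() -> bool:
    """Apply the expected one-way ratchet to a single request.

    Assumes a GPC-like header ("Sec-GPC: 1") as the negative signal;
    the real standard is yet to be written.
    """
    if request.headers.get("Sec-GPC") == "1":
        # A negative signal is honoured globally: no consent-based
        # tracking and no legitimate-interest direct marketing.
        return False
    # The absence of a signal -- or even a generic browser-level
    # "yes" -- would likely NOT count as valid consent; specific,
    # informed, service-level consent would still be required.
    return has_service_level_consent()


@app.get("/")
def home():
    if tracking_allowed():
        return "personalised experience"
    return "experience without consent-based tracking"</code></pre>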
<p>The proposal also mentions an exemption for &#8220;media services&#8221; (likely reflecting lobbying by large publishers), but that term is undefined. Depending on how it is interpreted, this could become:</p><ul><li><p>Yet another privileged carve&#8209;out for certain incumbent publishers.</p></li><li><p>Another structural blow to the open web, especially smaller services that cannot claim &#8220;media&#8221; status.</p></li></ul><h2><strong>6. &#8220;Personal data:&#8221; can we tame the &#8220;law of everything&#8221;?</strong></h2><p>We then discussed perhaps the most legally important&#8212;but politically under&#8209;appreciated&#8212;piece: the definition of personal data.</p><p>Today, the GDPR is often treated as the &#8220;law of everything&#8221; in digital markets, because:</p><ul><li><p>Regulators and activists insist on extremely broad readings:</p><ul><li><p>Any pseudonymous identifier that can single out a user is treated as personal data.</p></li><li><p>Even if the controller has no realistic way to discover who the person actually is.</p></li></ul></li></ul><p>A recent Court of Justice <a href="https://curia.europa.eu/juris/document/document.jsf?text=&amp;docid=303863&amp;pageIndex=0&amp;doclang=EN&amp;mode=lst&amp;dir=&amp;occ=first&amp;part=1&amp;cid=16531016">judgment</a> pushed back slightly against this trend, and the Commission is now proposing to codify a narrower, more <em>controller&#8209;centric</em> test.</p><p>The core idea:</p><ul><li><p>Whether data is &#8220;personal&#8221; for you depends on whether <em>you</em> are reasonably likely to identify the individual.</p></li><li><p>If you only hold pseudonymous IDs that cannot, in your setup, be linked back to real people, the dataset might not be &#8220;personal data&#8221; for you.</p></li><li><p>The GDPR would then not apply directly to that dataset.</p></li></ul><p>In our conversation I argued that this could create real incentives for data minimisation and structural separation (see the sketch at the end of this section):</p><ul><li><p>Many organisations would strive to process only data they cannot link back to individuals.</p></li><li><p>A smaller set of specialised entities would handle truly identifying data under full GDPR scrutiny.</p></li></ul><p>Activist organisations like NOYB strongly oppose this, portraying it as a &#8220;loophole.&#8221; But the test is objective, not subjective:</p><ul><li><p>It is <em>not</em> enough for a controller to say &#8220;we promise we can&#8217;t re&#8209;identify.&#8221;</p></li><li><p>Authorities can and will inspect whether the controller really lacks the means to identify individuals.</p></li></ul><p>If applied in good faith, this could remove a very large class of low&#8209;risk activities from the GDPR&#8217;s full compliance burden.</p><p>However, my concern is that the same actors who pushed &#8220;everything is personal data&#8221; will work to re&#8209;interpret the new text so that nothing actually changes in practice.</p>
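<p>To illustrate that structural&#8209;separation incentive: below is a sketch, with made&#8209;up names and no claim to legal sufficiency, of an analytics function that only ever sees keyed pseudonyms while the key sits with a separate identity service. Under the controller&#8209;centric test, the analytics side would arguably not be &#8220;reasonably likely&#8221; to identify anyone:</p><pre><code>import hmac
import hashlib


class IdentityService:
    """The only party holding the key that links pseudonyms to people."""

    def __init__(self, key: bytes):
        self._key = key

    def pseudonymise(self, user_id: str) -> str:
        # Keyed hash: without self._key, the pseudonym cannot be
        # reversed or recomputed from a known identifier.
        return hmac.new(self._key, user_id.encode(), hashlib.sha256).hexdigest()


class AnalyticsService:
    """Never receives the key or raw identifiers -- only pseudonyms."""

    def __init__(self):
        self.events: list[tuple[str, str]] = []

    def record(self, pseudonym: str, event: str) -> None:
        self.events.append((pseudonym, event))


identity = IdentityService(key=b"held-by-a-separate-specialised-entity")
analytics = AnalyticsService()

# Translation happens once, at the organisational boundary:
analytics.record(identity.pseudonymise("alice@example.com"), "page_view")

# Whether analytics.events is "personal data" for the analytics entity
# would turn on whether *it* is reasonably likely to identify anyone --
# which, without the key, it arguably is not. The test stays objective:
# a regulator can inspect whether the separation is real.</code></pre>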
<h2><strong>7. The politics: who hates the omnibus, and why that matters</strong></h2><p>Eric asked about the reaction to the leaked proposal. The answer is: ferocious opposition from the privacy establishment.</p><p>Key elements:</p><ul><li><p>Left&#8209;of&#8209;centre political groups in the European Parliament publicly attacked the package.</p></li><li><p>A coalition of NGOs and trade unions issued an <a href="https://noyb.eu/sites/default/files/2025-11/The%20EU%20must%20uphold%20hard-won%20protections%20for%20digital%20human%20rights.pdf">open letter</a> describing the package as:</p><ul><li><p>&#8220;The biggest rollback of digital fundamental rights in EU history&#8221; and</p></li><li><p>An &#8220;attempt to covertly dismantle&#8221; Europe&#8217;s &#8220;gold standard&#8221; GDPR regime.</p></li></ul></li></ul><p>Several aspects stood out to me:</p><ul><li><p>These groups now feel compelled to argue that the omnibus will harm competitiveness and innovation&#8212;a very strange posture, and one in which they lack any credibility.</p></li><li><p>One civil&#8209;liberties organisation even pointed to China&#8217;s digital regulation as a model Europe should emulate, which I find remarkable given its stated mission.</p></li></ul><p>By contrast, support for reform comes from:</p><ul><li><p>Some national leaders (e.g. calling to &#8220;stop the clock&#8221; on the AI Act).</p></li><li><p>A small but vocal &#8220;<a href="https://x.com/euacchq/status/1993029205068857756">EU accelerationist</a>&#8221; community of investors and operators.</p></li></ul><p>I noted a paradox:</p><ul><li><p>The Commission is clearly willing to spend political capital&#8212;this package goes further than I would have predicted a few weeks ago.</p></li><li><p>Technically and institutionally, it still falls well short of the changes needed to really alter enforcement dynamics.</p></li></ul><h2><strong>8. &#8220;Pay or consent&#8221; and the EU&#8211;UK divergence</strong></h2><p>Although the focus of this episode was the omnibus, Eric also asked for an update on &#8220;pay or consent&#8221; and the DMA, including the UK angle. In June, Eric and I devoted <a href="https://eutechreg.com/p/discussing-the-european-commissions">an entire podcast to this issue</a>. 
I also recommend <a href="https://eutechreg.com/p/debating-the-eu-pay-or-consent-decision?utm_source=chatgpt.com">my debate with competition law scholars</a> from September.</p><p>So what&#8217;s happening?</p><h3>In the EU (the DMA + the GDPR)</h3><ul><li><p>In April 2025, the Commission found Meta&#8217;s original &#8220;pay or consent&#8221; model (subscription for no ads vs free with personalised ads) non&#8209;compliant with the DMA&#8217;s Article 5(2) &#8220;specific choice&#8221; requirement.</p></li><li><p>Meta had already moved away from that binary model in November 2024, introducing:</p><ul><li><p>A triple choice:</p><ol><li><p>Paid, no ads.</p></li><li><p>Free with personalised ads.</p></li><li><p>Free with &#8220;less personalised ads&#8221; (with non&#8209;skippable ad breaks).</p></li></ol></li><li><p>Lower subscription prices.</p></li></ul></li><li><p>The April decision only covers the earlier, binary model; we still do not know whether the Commission will accept Meta&#8217;s current triple&#8209;option design.</p></li></ul><h3>In the UK (the GDPR only, no DMA)</h3><ul><li><p>As I wrote in <a href="https://eutechreg.com/p/why-is-meta-offering-cheaper-and">Why is Meta offering cheaper and simpler &#8220;pay or consent&#8221; in the UK?</a>, Meta rolled out a subscription-for-no-ads model in the UK after extensive discussions with the ICO.</p></li><li><p>The ICO&#8217;s public statement is cautious, but clearly more accommodating:</p><ul><li><p>It does <em>not</em> insist on a third, free &#8220;less personalised&#8221; option.</p></li><li><p>It focuses on whether Meta&#8217;s consent flow and information are adequate under the (UK) GDPR.</p></li></ul></li></ul><p>I see the ICO as charting a pragmatic path that the EU <em>could</em> emulate&#8212;but so far the EU shows no real sign of doing so.</p><h2><strong>9. The &#8220;Washington effect&#8221; and Brussels&#8217; anti&#8209;fragility</strong></h2><p>Finally, Eric asked whether the threat of US retaliation (e.g. via tariffs or counter&#8209;measures framed as responses to DMA fines) might soften EU enforcement.</p><p>I doubt it. <a href="https://eutechreg.com/p/the-washington-effect-will-the-brussels">I used the term &#8220;Washington effect&#8221;</a> for the idea that US pressure might discipline EU regulators. In practice, Brussels is institutionally anti&#8209;fragile:</p><ul><li><p>Careers in and around EU institutions can still be made by &#8220;sticking it to big tech.&#8221;</p></li><li><p>Staff at the Commission and the data protection authorities can outlast any one US administration.</p></li><li><p>They may reasonably believe that US priorities will shift, while their own incentives and internal politics remain constant.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Europe is not “so back:” why cookie banners are here to stay (despite the reform) and the hard route not taken]]></title><description><![CDATA[&#8220;Europe is so back. No more cookie banners.&#8221; Alas, no. Cookie banners are staying. 
But the banner is just an irritating symbol of the fact that European politicians can&#8217;t quite muster the will to do the hard things we need.]]></description><link>https://eutechreg.com/p/europe-is-not-so-back-why-cookie</link><guid isPermaLink="false">https://eutechreg.com/p/europe-is-not-so-back-why-cookie</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Sat, 22 Nov 2025 19:29:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#8220;<a href="https://x.com/artman/status/1991148317313483241">Europe is so back</a>. No more cookie banners.&#8221; Alas, no. Cookie banners are staying. But the banner is just an irritating symbol of the fact that European politicians can&#8217;t quite muster the will to do the hard things we need.</p><p>Ursula von der Leyen was right in this year&#8217;s <a href="https://ec.europa.eu/commission/presscorner/api/files/document/print/ov/speech_25_2053/SPEECH_25_2053_OV.pdf">State of the Union Address</a> that &#8220;Europe is in a fight.&#8221; Thanks to the published &#8220;omnibus&#8221; reform proposals, we now know what Brussels is bringing to this fight. </p><p>Good intentions? Sure. Some recognition that our current economic malaise is due to regulatory overreach and the failure to safeguard the common market? Yes.</p><p>However, European politicians cannot abandon the &#8220;luxury&#8221; mindset: the belief that we can pursue all policy aims simultaneously, with &#8220;win-wins&#8221; and no difficult trade-offs (Rebuild our industry! But also: no emissions! And no nuclear!).</p><p>As the authors of the excellent <a href="https://constitutionofinnovation.eu/">Constitution of Innovation</a> argued, we are on a path similar to that of twentieth-century Argentina&#8212;albeit worse due to Russia&#8217;s violent imperialism and a demographic crisis. To avoid that fate, the authors propose strengthening the common EU market while limiting the EU&#8217;s regulatory ambitions. Of course, the former faces opposition from national governments, and the latter is anathema to Brussels (<a href="https://x.com/MBarczentewicz/status/1945103113402069279">a small illustration</a>: &#8220;civil servants are being asked to rethink and streamline laws they authored, championed and built their careers on &#8230;&#8221;).</p><p>The proposed <a href="https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-regulation-proposal">&#8220;digital omnibus&#8221;</a> legislation is an excellent example of how challenging it is to meaningfully reform the excesses of EU regulation. </p><p>Last week, I commented on its leaked version (<a href="https://eutechreg.com/p/leaked-gdpr-reform-some-good-ideas">Leaked GDPR reform: some good ideas but what about enforcement?</a>). Earlier this week, the Commission published the <a href="https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-regulation-proposal">official proposal</a>, which is largely in line with the content of the leaked document.</p><h2>Cookie consent banners are here to stay</h2><p>I have bad news for those who are celebrating the proposal&#8217;s headline aim of getting rid of cookie consent banners as a big win over regulatory overreach. 
To understand why, let&#8217;s look at why businesses are currently expected to ask for user consent.</p><p>I agree with <a href="https://ec.europa.eu/newsroom/dae/redirection/document/121743">the Commission</a> that the cookie consent law&#8212;from the ePrivacy Directive&#8212;is &#8220;outdated and inadequate for contemporary privacy and data needs.&#8221; As interpreted by the European Data Protection Board (EDPB), the law requires prior user consent for many basic information exchanges between an Internet service and a user device (see my <a href="https://eutechreg.com/p/consent-for-everything-edpb-guidelines">Consent for everything? EDPB guidelines on URL, pixel, IP tracking</a>).</p><p>The exceptions to that are interpreted extremely narrowly and don&#8217;t include such table-stakes purposes as using parts of web links (URLs) to inform the digital service about which advertising partner originated the traffic or what advertising campaign the link was associated with. Similarly, the exceptions don&#8217;t cover basic measures used to detect attempts to defraud advertisers (and thus call for asking fraudsters to kindly consent to enabling anti-fraud measures!). It doesn&#8217;t matter if this isn&#8217;t personal data. It doesn&#8217;t matter if this isn&#8217;t even remotely sensitive. On the EDPB&#8217;s absolutist reading of the law, prior user consent is required&#8212;hence, the consent banners/pop-ups.</p><p>The legal change proposed by the Commission doesn&#8217;t even attempt to address the bulk of the issue. Non-sensitive, non-personal data will still require prior consent for processing.</p><p>What the Commission proposed was to exclude <em>personal</em> data processing from the ePrivacy Directive and mostly replicate the framework within the GDPR, but with minor tweaks.</p><p>The tweaks include the addition of two exemptions from consent: creating aggregated usage statistics and security measures. Despite the Commission&#8217;s optimism, the new exemptions will be practically insignificant because they will be interpreted with anti-business prejudice by the same people who produced the already cited EDPB guidelines requiring consent for processing even generic web link fragments.</p><p>With the EDPB in charge, we can expect the analytics exemption not to apply to the standard third-party analytics that businesses actually use. Not to mention any analytics not deemed &#8220;aggregated,&#8221; because it tracks individual users. Similarly, the security exemption will almost certainly not cover anti-advertising fraud measures. In other words, the purposes for which actual website operators process data will still require consent. The cookie banner will live to fight another day.</p><p>You might object that this is against the spirit of the reform, and surely there must be a way to interpret the law more flexibly? I would agree, but here we reach the core of the problem. It is the reason why, in practice, all the other good ideas from the Commission&#8217;s proposal also risk being largely nullified.</p><p>The issue is <em>enforcement</em>.</p><h2>Privacy enforcement will undermine the reform</h2><p>The officials in charge of applying the changed rules will be the same people who brought us the idea that prior user consent should be required for nearly all Internet communications. In other words, people who genuinely believe their job is to promote data protection and privacy above everything else. 
Even if legislators were to give them an explicit duty to care about &#8220;economic growth&#8221; or &#8220;innovation,&#8221; it would change little. In fact, privacy enforcers would likely argue that they are already doing that (e.g., some EDPB documents contain sections on balancing&#8212;I&#8217;ll leave it to the reader to guess how serious such balancing is).</p><p>What is especially difficult for privacy enforcers to internalise is the importance of clarity. I criticised the <a href="https://eutechreg.com/p/the-edpbs-ai-opinion-shows-the-need">opinion on AI models</a> issued by the EU-level body of data protection authorities, the EDPB, by pointing out how it failed to address regulatory uncertainty:</p><blockquote><p>Yes, the Opinion does not say that AI is illegal in the EU, but let&#8217;s be honest: even in the EU, explicitly making such a declaration is politically unpalatable. Instead, the EU privacy enforcers did what they usually do. They kept as much enforcement flexibility for themselves as possible, opening the doors for any EU national enforcers to impose billions-worth fines. Of course, the other side of that coin is that those who want to use AI in the EU have no idea if all their GDPR compliance efforts will be judged as not good enough in a year or two. &#8230;</p><p>Some privacy regulators may protest that this was not their intent; after all, they did provide the list of things to try. But such an answer would show the fundamental disconnect from economic reality. Consciously or not, regulators can thwart development not only by explicitly banning it but also by creating an environment of uncertainty. Even the threat of discretionary regulatory enforcement&#8212;combined with the risk of heavy fines&#8212;can significantly chill investment decisions (and thus innovation) at the margins.</p></blockquote><p>Why was the EDPB opinion formulated in a way that allows claiming that real-world AI efforts are illegal under the GDPR? Because an influential part of the EU&#8217;s data protection officials holds that view.</p><p>It should thus not be surprising that privacy activists are now saying the quiet part out loud in their opposition to the Commission&#8217;s proposal. For example, in <a href="https://www.linkedin.com/posts/mikolajbarczentewicz_leaked-gdpr-reform-some-good-ideas-but-what-activity-7393937114985299968-rTCH?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAACQtiABF7m91F-eH3bqeLo_wX_F5ZhXBEg">NOYB&#8217;s view</a>, the problem with the draft reform is precisely that it would legally <em>enable</em> state-of-the-art AI, which, according to them, is illegal today.</p><p>To be clear, some European privacy authorities have recently shown a genuinely pragmatic approach. The best example is the French authority&#8217;s <a href="https://www.cnil.fr/fr/ia-respecter-lexercice-des-droits-des-personnes">guidance on AI</a>. However, it is not a coincidence that the leading European player in that market is a French national champion (Mistral), a fact of which the French government is justifiably proud.</p><p>This kind of clarity of purpose, derived from national interest and a &#8220;whole-of-government&#8221; mobilization, is not something we can rely on for pragmatic, consistent enforcement across the EU.</p><p>Pragmatic voices are barely heard in documents adopted by the EDPB.</p><p>The EDPB gets to make the rules because courts have been excessively deferential to data protection authorities thus far. 
Furthermore, even getting an EU court to review the EDPB&#8217;s interpretations is procedurally complex and likely to take years.</p><p>In their reliance on interpretations from data protection authorities, the EU courts have failed to appreciate that these authorities often do not even attempt to act in the general public interest, as we would expect from a European public authority. Instead, they are effectively American-style campaigners for a single cause&#8212;a small aspect of the public interest.</p><p>This also partially explains the preoccupation with American &#8220;big tech&#8221; companies: activists seek &#8220;David-vs-Goliath&#8221; fights. It&#8217;s much harder to get a magazine cover for cracking down on homegrown illegal &#8220;sellers&#8221; of personal data, scammers, and their ilk.</p><h2>The hard route not taken</h2><p><strong>Enforcement reform. </strong>Reforming the enforcement framework is essential for the success of any significant improvement to EU data protection law. If done well, this could reduce the need for substantive changes in laws like the GDPR.</p><p>This assumes, however, that an appropriately motivated and resourced authority could produce helpful guidance, both balancing privacy (and data protection) interests with other values and clearly informing organizations how to comply. The French data protection authority, CNIL, has demonstrated that this is possible in its guidance on AI, which contrasts starkly with the vague and non-committal language of the EDPB&#8217;s opinion on AI models.</p><p>I proposed one way to achieve this in <a href="https://eutechreg.com/p/a-serious-target-for-improving-eu">A serious target for improving EU regulation: GDPR enforcement</a>. I suggested establishing an EU tribunal with a clear mandate to balance privacy and other EU goals to approve guidance documents and cross-border enforcement decisions. Perhaps this idea could be integrated with the &#8220;specialized commercial courts&#8221; proposed in the <a href="https://constitutionofinnovation.eu/">Constitution of Innovation</a>.</p><p>I also suggested that a centralised, EU-level data protection authority might be a good idea&#8212;though it would likely need to be created from scratch to safeguard ideological neutrality and a capacity for serious balancing of interests.</p><p><strong>GDPR/ePrivacy changes and the courts. </strong>The disproportionate, myopic practice of data protection enforcement has resulted in a situation where the EU courts require clear guidance from the EU legislature&#8212;maybe even a treaty change&#8212;to undo the damage.</p><p>In the proposed GDPR reform, the Commission tries to rely on a recent judgment from the EU&#8217;s highest court, the Court of Justice (CJEU), to clarify that the definition of &#8220;personal data&#8221;&#8212;and thus the GDPR&#8217;s scope of application&#8212;should be construed more narrowly than many privacy authorities push for. This is now being criticised for not aligning perfectly with what the Court said. 
I understand why the Commission preferred to have such &#8220;cover&#8221; for beneficial GDPR changes.</p><p>The criticism the Commission&#8217;s proposal faced highlights the problem with relying on courts for guidance, rather than acknowledging that this area of law is a mess and that the courts need guidance from the legislature (and perhaps from the &#8220;masters of the treaties&#8221;&#8212;the member states).</p><p>The CJEU is unpredictable; they could issue a decision pushing the interpretation of &#8220;personal data&#8221; in a different direction next month. This is a significant risk, given that the GDPR is currently a de facto &#8220;law of everything&#8221; in a digital economy, with enforcers wielding powers to block technological progress while being utterly incompetent and lacking legitimacy to do so.</p><p>Stating openly that the EU legislature wants to go in a different direction, instead of claiming merely to codify what the EU Court has said, would make it clear to the courts that they&#8217;re no longer dealing just with a technical legal question, but with a considered political choice about the future of Europe.</p><p>Yes, this strategy is politically more challenging, but I&#8217;m skeptical that what the Commission is currently attempting will hold up.</p><h2>Simplification and the definition of &#8220;personal data&#8221;</h2><p>EU data protection law is notoriously complex&#8212;partially due to its drafting, but even more so due to its enforcement. The complexity is especially taxing for smaller organizations and may, in some cases, act as a business moat for larger ones. Critics of the proposed GDPR reform argue that modifying the GDPR text will introduce new interpretive challenges. To some extent, this is inevitable; that&#8217;s how the law works.</p><p>Nevertheless, the Commission&#8217;s proposal includes a provision moving toward true simplification: cutting the GDPR down to a manageable size and denying its unearned status as the &#8220;law of everything.&#8221; It is the provision regarding the definition of &#8220;personal data&#8221; I mentioned above.</p><p>In my previous comment, I <a href="https://eutechreg.com/p/leaked-gdpr-reform-some-good-ideas">summarised it in the following way</a>:</p><blockquote><p>In short, the idea is that whether something counts as personal data for you (and whether the GDPR applies to you) depends on whether you are capable of identifying the individuals to whom the data relates.</p></blockquote><p>I also suggested a potential benefit of this approach:</p><blockquote><p>&#8230;this will prompt many organisations to separate the processing of data allowing the identification of individuals (e.g. let other entities specialised in GDPR compliance &#8230; handle that processing) and otherwise to only process pseudonymous data (without being able to identify individuals).</p></blockquote><p>Acknowledging that the GDPR should only apply to organisations that are &#8220;reasonably likely&#8221; to identify specific individuals would create a powerful incentive to process data in such a way that individuals cannot be identified. I can imagine that many organisations, large and small, would jump on that opportunity&#8212;even if it means incurring technical costs or knowing less about their customers. 
The payoff in reduced regulatory complexity would, I think, be very clear.</p><p>Of course, such organisations would still be &#8220;in the shadow&#8221; of the GDPR in the sense of needing to make sure that they are not reasonably likely to identify individuals. Thus, e.g., pragmatic GDPR guidance on data security and organisational measures could still be helpful for them. But that would still constitute a very significant simplification.</p><p>The criticism that this particular proposal received looks almost knee-jerk: any assault on the GDPR as the &#8220;law of everything&#8221; must be rejected. A potentially clear benefit&#8212;the reduction in the processing of personal data&#8212;is presented as undesirable. This is presumably because &#8220;true data minimization&#8221; can only be done under a full GDPR regime, without even the slightest hint of reducing privacy enforcers&#8217; control. Such criticism proves too much and is rather revealing about the critics. It shows that those who engage in it start from a position of mistrust towards &#8220;business,&#8221; demonstrating that they are incapable of serious balancing of rights and interests.</p><p>That said, I am still skeptical that this change in the GDPR&#8217;s text, presented as a mere codification of what the Court said, will be sustainable. Whatever the data protection authorities&#8217; official opinion of the Commission&#8217;s proposal turns out to be, we can expect them to undermine it as much as possible. Whether the courts will police that several years down the line is also uncertain.</p><p>So this big chance for simplification will likely quickly fizzle out with new vague, &#8220;case-by-case&#8221; guidance, prompting lawyers to advise their clients that they are almost always at risk of being &#8220;reasonably likely&#8221; to identify individuals in the local data protection authority&#8217;s view.</p>]]></content:encoded></item><item><title><![CDATA[AI privilege and ChatGPT encryption]]></title><description><![CDATA[Technical protections of chat confidentiality should strengthen the case for legal protections]]></description><link>https://eutechreg.com/p/ai-privilege-and-chatgpt-encryption</link><guid isPermaLink="false">https://eutechreg.com/p/ai-privilege-and-chatgpt-encryption</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Fri, 14 Nov 2025 13:48:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>OpenAI&#8217;s Sam Altman has been <a href="https://techcrunch.com/2025/07/25/sam-altman-warns-theres-no-legal-confidentiality-when-using-chatgpt-as-a-therapist/#:~:text=%E2%80%9CPeople%20talk%20about%20the%20most,%E2%80%9D">publicly</a> <a href="https://conversationswithtyler.com/episodes/sam-altman-2/">floating</a> the idea of &#8220;AI privilege,&#8221; a form of legal confidentiality to protect interactions between users and AIs. According to <a href="https://conversationswithtyler.com/episodes/sam-altman-2/">Altman</a>: &#8220;We should apply as much protection as when you talk to ... 
your human doctor or your human lawyer, as you do when you talk to your AI doctor or AI lawyer.&#8221; For OpenAI, their legal battle with the New York Times over disclosure of user conversations with ChatGPT illustrates why this is needed (see the new <a href="https://openai.com/index/fighting-nyt-user-privacy-invasion/">blog post</a> from OpenAI&#8217;s CISO Dane Stuckey). Whether some form of AI privilege should be legally recognised raises a myriad of fascinating jurisprudential questions, which I plan to explore later in more depth. But one thing in Stuckey&#8217;s text struck me in particular. OpenAI&#8217;s CISO <a href="https://openai.com/index/fighting-nyt-user-privacy-invasion/">wrote</a>:</p><blockquote><p>&#8220;Our long-term roadmap includes advanced security features designed to keep your data private, including client-side encryption for your messages with ChatGPT. We believe these features will help keep your private conversations private and inaccessible to anyone else, even OpenAI.&#8221;</p></blockquote><p>This seems to me a rather significant declaration, also from a business perspective. Without access to user conversations, OpenAI may be limited in its ability to use ChatGPT conversations to train new models or to provide some functionalities (e.g. memory, some forms of advertising). I wrote about some technological measures for AI privacy in <em><a href="https://eutechreg.com/p/trustworthy-privacy-for-ai-apples">Trustworthy privacy for AI: Apple&#8217;s and Meta&#8217;s TEEs</a></em>. Of course, such limitations can be overcome, but there would likely be additional friction and a need for direct user involvement. So it&#8217;s a costly direction to pursue.</p><p>How does this relate to AI privilege? To some extent, technical protections are substitutes for legal protections: if OpenAI has no access to some data, it cannot be legally forced to disclose it. However, an AI provider could also be legally forced to undermine the technical protections for future conversations. Merely adopting technical protections doesn&#8217;t guarantee their longevity.</p><p>I have two rough thoughts on the relationship between legal protections like AI privilege and technical protections.</p><p>First, to be sustainable, technical protections will likely need legal protections. The argument that you can&#8217;t currently access user conversations may not carry the day against a legal order to implement such functionality, unless some legal bar applies.</p><p>Second, <strong>technical protections are an argument for legal protections</strong>.</p><p>By technically restricting their own access to user chats, an AI provider signals that it treats confidentiality seriously. In turn, this is likely to affect the way many users approach the service. Given the expectation of confidentiality, users will be even more likely to have conversations on topics that are highly sensitive to them. Users may (and some probably already do) treat AI chats not as ordinary services, but as extensions of their own minds, having conversations of the kind they otherwise would only have in their own thoughts. This may also be important for the success of Seb Krier&#8217;s idea of AI agents outlined in his <em><a href="https://blog.cosmos-institute.org/p/coasean-bargaining-at-scale">Coasean Bargaining at Scale</a></em>.</p><p>If anyone accesses such chats, users would legitimately see that as a serious violation. 
Hence, by affecting user habits and expectations, the existence of technical protections would strongly support legal protections of confidentiality, or, at the very least, protections against undermining the technical safeguards.</p><p>What will likely complicate this, at least in the public debate, is <strong>the question of monetization</strong> (see Eric Seufert&#8217;s analysis of the likely directions of development in this space: <em><a href="https://mobiledevmemo.com/affiliate-links-personalized-ads-and-chatbot-revenue-optimization/">Affiliate links, personalized ads, and chatbot revenue optimization</a></em>).</p><p>Both personalised advertising and affiliate links could be perceived as lowering the technical protections, e.g. because they might involve tool calls beyond the encrypted environment. Even though this is solvable technically, the <em>perception</em> of AI providers monetizing chats may still weigh heavily against legal protections in the public debate. This could be partially due to simplistic analogies with professional confidentiality. Some may fail to notice that their medical or legal conversations can be used, perfectly reasonably, by the professional, e.g. to &#8220;upsell&#8221; customers on other services.</p><p>Of course, what is important in the medical and legal context is that the service provider acts in the best interest of the client (even when upselling). So, to make the case for legal protections, it may be necessary for AI providers to show that they also act in the client&#8217;s best interest.</p><p>OpenAI&#8217;s CISO also wrote that the AI provider plans to deploy &#8220;<strong>fully automated systems to detect safety issues</strong>.&#8221; He added that:</p><blockquote><p>Only serious misuse and critical risks&#8212;such as threats to someone&#8217;s life, plans to harm others, or cybersecurity threats&#8212;may ever be escalated to a small, highly vetted team of human reviewers.</p></blockquote><p>This will naturally raise the question whether such automated detection will take place on device or at least within a trusted execution environment.
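</p><p>To make that question concrete, here is a minimal sketch, in Python, of the privacy-preserving ordering: safety screening and encryption both happen on the user&#8217;s device, so only ciphertext (plus, at most, a flag) ever reaches the provider. This is purely illustrative, not a description of OpenAI&#8217;s actual design; the screening function and the flagged terms are hypothetical placeholders.</p><pre><code class="language-python"># Toy sketch: on-device safety screening followed by client-side
# encryption, so the provider only ever stores ciphertext it cannot read.
# Uses the "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Hypothetical keyword screen standing in for an on-device classifier.
FLAGGED_TERMS = {"hypothetical-critical-risk"}

def screen_locally(message: str) -> bool:
    """Runs on the user's device; True if the message would be flagged."""
    return any(term in message.lower() for term in FLAGGED_TERMS)

def encrypt_for_upload(message: str, key: bytes) -> bytes:
    """Encrypts on the device; the key never leaves it."""
    return Fernet(key).encrypt(message.encode())

key = Fernet.generate_key()        # generated and kept client-side
message = "A sensitive question for my AI doctor."

flagged = screen_locally(message)  # detection happens before encryption
ciphertext = encrypt_for_upload(message, key)

# The provider receives only the ciphertext (and, at most, the flag),
# never the plaintext or the key.
print(flagged, ciphertext[:20])
</code></pre><p>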
As I discussed in <em><a href="https://eutechreg.com/p/trustworthy-privacy-for-ai-apples">Trustworthy privacy for AI: Apple&#8217;s and Meta&#8217;s TEEs</a></em>, to build trust in such technical protections, businesses should be very transparent about the technical details and enable external auditing.</p><p>In general, I think that the technical steps OpenAI plans to adopt to protect chat confidentiality should strengthen the case for legal protections, including some form of AI privilege.</p>]]></content:encoded></item><item><title><![CDATA[Leaked GDPR reform: some good ideas but what about enforcement?]]></title><description><![CDATA[I&#8217;ve been arguing that the EU&#8217;s data protection law&#8212;the GDPR&#8212;needs to be reformed, mostly due to the absolutist approach taken to its interpretation by the data protection authorities.]]></description><link>https://eutechreg.com/p/leaked-gdpr-reform-some-good-ideas</link><guid isPermaLink="false">https://eutechreg.com/p/leaked-gdpr-reform-some-good-ideas</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Sun, 09 Nov 2025 12:09:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;ve been arguing that the EU&#8217;s data protection law&#8212;the GDPR&#8212;needs to be reformed, mostly due to the absolutist approach taken to its interpretation by the data protection authorities. Even a year ago this may have seemed far-fetched. However, meagre economic growth is now seriously threatening European long-term prosperity. This, combined with the practical joke of how the EU leads in &#8220;AI regulation&#8221; while others build, has created enough political momentum to do <em>something</em>.</p><p>We are finally starting to see what this may involve, with ideas like the additional pan-EU corporate regime (<a href="https://en.wikipedia.org/wiki/28th_regime">the 28th regime</a>) or tweaks to the rushed and ill-conceived AI Regulation. Perhaps the biggest battle, however, is going to be waged over the excesses of EU privacy law. In other words, over legal interpretations that became an article of faith for a small but loud and influential minority, a costly burden for EU businesses, and a major annoyance for almost everyone.</p><p>So what&#8217;s on the agenda for GDPR reform? We now know a bit more, given that draft text has just <a href="https://cdn.netzpolitik.org/wp-upload/2025/11/EU-Kommission-Digital-Omnibus-A-Data-Act-und-DSGVO.pdf">leaked</a> from the European Commission. Before I consider some of the good ideas included in the draft, I want to address the biggest omission, which is likely to undermine the potential beneficial effects: the problem of privacy absolutism and lack of proportionality in GDPR <em>enforcement</em>. I&#8217;m not going to repeat here the details of my diagnosis and my proposed solutions (see eg <a href="https://eutechreg.com/p/a-serious-target-for-improving-eu">A serious target for improving EU regulation: GDPR enforcement</a> and <a href="https://eutechreg.com/p/the-edpbs-ai-opinion-shows-the-need">The EDPB&#8217;s AI Opinion Shows the Need for GDPR Enforcement Reform</a>). 
I just want to reiterate that merely changing the GDPR&#8217;s text, though welcome, will be insufficient if the same text is then interpreted and applied by people vehemently opposed to the changes.</p><p>Let&#8217;s look at the substance of the leaked proposal.</p><p><strong>Definition of &#8220;personal data&#8221;:</strong> this is a crucial issue because this concept defines the scope of application of the GDPR. So far, data protection authorities have been pushing in only one direction: to make almost any data &#8220;personal&#8221; and thus to claim jurisdiction over virtually all data. The courts, including the EU Court of Justice (CJEU), have an uneven record on this, but at least recently the CJEU <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A62023CJ0413">pushed back</a>. And it looks like the European Commission wants to amend the GDPR text with a more sensible reading of &#8220;personal data.&#8221; In short, the idea is that whether something counts as personal data for <em>you</em> (and whether the GDPR applies to <em>you</em>) depends on whether <em>you </em>are capable of identifying the individuals to whom the data relates. I&#8217;d say that this is a very good idea.</p><p>One thing that the Commission should clarify is that the examples of factors that could help identify an individual listed in the definition of &#8220;personal data&#8221; (&#8220;an identification number, location data, an online identifier&#8230;&#8221;) should not be treated as definitive &#8220;identifiers&#8221; when they don&#8217;t actually allow pointing to a specific person. In other words, using a pseudonymous unique identifier should not mean that we&#8217;re dealing with personal data if we cannot say who the real person behind that identifier is.</p><p>Some privacy activists (e.g. <a href="https://www.linkedin.com/posts/max-schrems_draft-changes-to-gdpr-and-eprivacy-comparison-activity-7392589194885365760-dA7F/?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAACQtiABF7m91F-eH3bqeLo_wX_F5ZhXBEg">NOYB</a>) are complaining that this will prompt many organisations to separate the processing of data allowing the identification of individuals (e.g. let other entities specialised in GDPR compliance handle that processing) and otherwise to only process pseudonymous data (without being able to identify individuals). But I don&#8217;t really see how this is a problem. Sounds to me like a big practical win for data minimisation and privacy overall.</p><p><strong>Definition of sensitive data: </strong>&#8220;special categories of data&#8221; like health data, data on political opinions or religious beliefs are especially protected by the GDPR. Data protection authorities have been advancing an interpretation that in many practical contexts processing such data requires prior consent of the individuals (data subjects). If such a stringent regime applies only to really sensitive data, this may seem understandable (although serious questions should still be asked whether all current &#8220;special categories of data&#8221; should be protected in the same way). But privacy absolutists want the restrictions also to apply to any data that could allow someone to <em>infer </em>sensitive information, even when no such inference is being made or even likely to be made. This view would expand the strictest regime disproportionately. 
The Commission aims to address this by clarifying the definition of sensitive data, specifying that data should be treated as such when it &#8220;directly reveals&#8221; e.g. health status.</p><p><strong>Sensitive data and AI training: </strong>restrictions on the processing of special category data are particularly challenging for AI developers and users. A potential requirement to ask for the consent of all people whose data was included in AI training datasets (e.g. sourced from the open web) would amount to a de facto prohibition on AI development, especially frontier large language model development.</p><p>AI developers don&#8217;t want to process sensitive data&#8212;they don&#8217;t even want to process personal data. When this is technically feasible, they use automated means to exclude such data from training data. There are, however, at least two potential legal problems. First, someone could argue that even if they delete such data, they already processed it before deleting and in the act of deleting. Second, even when state-of-the-art measures are used to exclude personal/sensitive data, it is currently impossible to fully exclude the possibility that some such information will be reproduced by a model. To entirely avoid any incidental processing of personal/sensitive data in AI training would require wholly infeasible manual curation of data sources. If legally obligatory, this would amount to a de facto prohibition on LLMs.</p><p>One EU data protection authority&#8212;the French CNIL&#8212;laudably acknowledged this, while providing pragmatic <a href="https://www.cnil.fr/fr/ia-respecter-lexercice-des-droits-des-personnes">guidance</a>. But it is not a secret that CNIL&#8217;s approach faces opposition among the more absolutist privacy officials and activists, so there is no chance that privacy enforcers will adopt it voluntarily across the EU.</p><p>The Commission&#8217;s proposal is to allow the processing of special category data for AI training and use under some conditions, which appear to align with the state-of-the-art practices that AI labs are already implementing. In principle, this is also a welcome development.</p><p>One worry I have is whether the rules will be interpreted with sufficient flexibility so as not to amount to a prohibition of <em>open-weights</em> distribution of AI models. Note in this context the following provision on what must be done if it is discovered that some sensitive data is &#8220;in&#8221; an AI model: &#8220;If removal of those data requires disproportionate effort, the controller shall in any event effectively protect without undue delay such data from being used to produce outputs, from being disclosed or otherwise made available to third parties.&#8221; Depending on interpretation, this could be a serious problem for open-weights AI.</p><p><strong>Cookie consent law:</strong> the EDPB&#8217;s recent guidelines on the &#8220;cookie consent&#8221; law (Article 5(3) of the ePrivacy Directive) are among the most glaring examples of the disproportionate and absolutist approach of EU privacy enforcers. On the EDPB&#8217;s interpretation, &#8220;consent would be required for all Internet communication unless very limited exceptions apply (even more restrictive than under the GDPR)&#8221; (<a href="https://eutechreg.com/p/consent-for-everything-edpb-guidelines">Consent for everything? EDPB guidelines on URL, pixel, IP tracking</a>). 
The Commission proposes to address the obvious problems with the cookie consent law by partially moving the rules from the ePrivacy Directive to the GDPR, with some modifications.</p><p>One big problem with the Commission&#8217;s proposal is that it would retain the &#8220;cookie consent&#8221; ePrivacy rules for non-personal data (i.e. data not covered by the GDPR). I agree with other commentators that this makes no sense. Leaving those rules in place would not solve the cookie pop-up banner problem. More importantly, it would mean that non-personal data is protected in a more stringent and less pragmatic way than personal data.</p><p>To the extent there is a need for a legal rule on terminal integrity protection, it needs to be drawn much more precisely to target specific risks&#8212;the EDPB&#8217;s guidance shows that privacy enforcers cannot be trusted to proportionately interpret and apply broadly phrased rules. One reason to retain some explicit terminal integrity protections is that removing them entirely could further undermine the security protections for, e.g., Android and iOS users, which are currently being eroded by how the European Commission is applying the Digital Markets Act. Also, EU-law-level terminal integrity protections are helpful in defending users against state-mandated incursions.</p><p>There are other ideas in the leaked draft, but I will end my comment here. It&#8217;s very hard to say how much of the current draft will make it into law given the inevitable opposition. And even if some of those changes do become law, they will likely be watered down significantly by privacy enforcers hostile to such changes. Hence my plea for addressing what I see as a bigger problem: the enforcement framework.</p>]]></content:encoded></item><item><title><![CDATA[Why is Meta offering cheaper and simpler "pay or consent" in the UK?]]></title><description><![CDATA[Meta announced today a cheaper and simpler version of &#8220;pay or consent&#8221; (subscription for no ads) for Facebook and Instagram users in the UK than the one offered in the EU.]]></description><link>https://eutechreg.com/p/why-is-meta-offering-cheaper-and</link><guid isPermaLink="false">https://eutechreg.com/p/why-is-meta-offering-cheaper-and</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Fri, 26 Sep 2025 18:00:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://about.fb.com/news/2025/09/facebook-and-instagram-to-offer-subscription-for-no-ads-in-the-uk/">Meta</a> announced today a cheaper and simpler version of &#8220;pay or consent&#8221; (subscription for no ads) for Facebook and Instagram users in the UK than the one it offers in the EU. This follows Meta&#8217;s discussions with the UK Information Commissioner&#8217;s Office (ICO), which issued a guardedly positive <a href="https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2025/09/ico-statement-on-changes-to-meta-advertising-model/">statement</a> about the development.</p><p>There are two key differences between the UK offer and the one Meta implemented in the EU:</p><ul><li><p>Price: &#163;2.99 ($3.99) in the UK vs &#8364;5.99 ($6.99) in the EU (more on mobile).</p></li><li><p>In the UK the choice is binary: subscribe or consent to personalised ads. 
In the EU, Meta was pushed by regulators to offer a third option: less personalised ads (free-of-charge).</p></li></ul><p>I covered the EU developments extensively, including the European Commission&#8217;s recent DMA decision against Meta (see <a href="https://eutechreg.com/p/discussing-the-european-commissions">the summary in my notes</a> from a recent conversation with Eric Seufert). Here, I&#8217;m not going to give the full background and I&#8217;ll comment just on the two features that distinguish Meta&#8217;s UK and EU offerings.</p><h2>Why can Meta offer a binary choice in the UK, without the free-of-charge &#8220;less personalized ads&#8221; option?</h2><p>In response to regulatory pressure&#8212;<a href="https://eutechreg.com/p/discussing-the-european-commissions">under EU GDPR and EU DMA</a>&#8212;in November 2024, Meta made changes to their &#8220;pay or consent&#8221; offering in the EU: they added a free-of-charge &#8220;less personalized ads&#8221; option (while also reducing prices). In the EU, users are presented with a two-step choice flow where they first choose between paid and free, then can opt for less personalization. If they choose less personalization, then they also get occasional non-skippable ad breaks.</p><p>Meta, I think correctly, sees the requirement of the third choice as an overreach not based in EU law (they reiterated that in the UK announcement). It looks like the ICO implicitly agreed with Meta that such a requirement doesn&#8217;t follow from the GDPR. (Despite leaving the EU, the UK still has substantively the same GDPR as the EU).</p><p>Moreover, given that the UK is no longer in the EU, the <a href="https://eutechreg.com/p/discussing-the-european-commissions">EU Commission&#8217;s argument</a> that the EU DMA requires the third option independently of the GDPR is irrelevant in the UK.</p><h2>Why is Meta charging less in the UK?</h2><p>In their announcement, Meta didn&#8217;t directly explain why the price is lower in the UK than in the EU. My interpretation is that this reflects that the free-of-charge &#8220;less personalized ads&#8221; option in the EU is not as economically valuable for Meta as either subscription or serving personalized ads. So, it makes sense that without offering the third choice, Meta can offer a better price.</p><p>Meta and the UK ICO negotiated the price point. As the ICO announced:</p><blockquote><p>During the course of our engagement with Meta, it significantly lowered the starting price point at which users would be offered a subscription. As a result, users in the UK will be able to subscribe at a price point close to half that of EU users.</p></blockquote><p>Will many users choose to pay even this lower price? Probably not many. As I wrote in my analysis of the EU Commission&#8217;s DMA decision against Meta:</p><blockquote><p>The Commission highlights that Meta knew, or should have known, that an overwhelming majority of users would choose the free option. They point to academic studies showing low willingness to pay for privacy and Meta&#8217;s own internal documents from 2023, which predicted uptake for the SNA option would be extremely low (less than 1%)&#8212;a figure that closely matched the actual outcome. (...)</p><p>It was obvious to everyone that most users wouldn&#8217;t pay, and that only the most valuable advertising targets would. Taking that into account in pricing strategy was basic competence for Meta&#8217;s monetization team. 
I&#8217;d suggest the low uptake simply shows that users want social media funded by personalized advertising.</p></blockquote>]]></content:encoded></item><item><title><![CDATA[Apple&#8217;s new DMA criticism, EU-only feature delays, and what&#8217;s next]]></title><description><![CDATA[Apple has issued a sharply critical statement regarding the EU Digital Markets Act (DMA) and its impact on Apple products in the EU.]]></description><link>https://eutechreg.com/p/apples-new-dma-criticism-eu-only</link><guid isPermaLink="false">https://eutechreg.com/p/apples-new-dma-criticism-eu-only</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Thu, 25 Sep 2025 13:07:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Apple has issued a sharply critical <a href="https://www.apple.com/newsroom/2025/09/the-digital-markets-acts-impacts-on-eu-users/">statement</a> regarding the EU Digital Markets Act (DMA) and its impact on Apple products in the EU. The company says EU users will see delays to features such as AirPods&#8217; live translation and iPhone Mirroring on macOS. Apple attributes the delays to the DMA&#8217;s interoperability requirements&#8212;specifically, the need to ensure these features operate safely with non&#8209;Apple devices. By Brussels standards, Apple&#8217;s tone is confrontational and unlikely to improve its relationship with the European Commission, though there is little love lost. Apple&#8217;s core complaint, however, is valid: the DMA enforcers either disregard or actively welcome the DMA-forced &#8220;enshittification&#8221; of Apple&#8217;s offerings in the EU. For now, Apple is absorbing the cost of re&#8209;engineering its products to meet the Commission&#8217;s demands.</p><p>Why the statement? Apple may be hoping that vocal criticism creates political pressure on DMA enforcers when direct engagement has failed. Alternatively&#8212;and perhaps more plausibly&#8212;it mostly wants to explain EU&#8209;only delays to customers.</p><p>On Apple&#8217;s relationship with the enforcers, there is a perception that officials see the company as less cooperative than other regulated &#8220;gatekeepers,&#8221; including Meta and Alphabet.</p><p>In <a href="https://eutechreg.com/p/apple-and-eu-dma-a-road-to-leave">an earlier piece</a>, I suggested that Apple may be on a road to leave the EU. But I also speculated about other choices open to Apple:</p><blockquote><p>First, Apple could do what I just suggested&#8212;attempt to comply with the DMA by giving third-party developers the kind of access that the European Commission calls for, but with robust and effective safeguards controlled by users. This would be a transformation for Apple, because it would require treating its own apps as untrusted. However, I&#8217;m skeptical the Commission would allow this (which, of course, would show that the DMA enforcers don&#8217;t really care about EU customers, and are firmly committed to a Nirvana fallacy). (...)</p><p>Second, Apple could transform itself and abandon the identity of being special even among Big Tech companies in how they approach user privacy and security. 
This might involve acceding to all the requirements from the European Commission without an attempt to introduce zero trust for all first- and third-party applications on iOS. It would, of course, be a loss for users, but probably what the Commission wants.</p></blockquote><p>It appears Apple is pursuing the first path. As predicted, the DMA enforcers are unwilling to recognise user privacy and security concerns. Hence the feature delays&#8212;potentially indefinite for some features.</p><p>Apple&#8217;s new statement also hints at the strategic flaw in the second option. Fully opening its platforms in the EU might be cheaper in the short term, but it would undercut the differentiation on privacy and security that defines Apple&#8217;s brand.</p><p>Is the current situation a stable equilibrium?</p><p>Recall that, <a href="https://eutechreg.com/p/eu-dma-workshops-google-amazon-apple">during DMA workshops</a>, Apple said its engineers have spent &#8220;hundreds of thousands of hours&#8221; on DMA&#8209;driven changes and that &#8220;thousands&#8221; of employees across engineering, design, operations, and marketing are involved. That suggests Apple will tolerate significant friction&#8212;at least for now. The EU market may still be large enough to justify the pain, and Apple can reasonably assume that other jurisdictions could adopt similar rules, making EU compliance work reusable elsewhere.</p>]]></content:encoded></item><item><title><![CDATA[Debating the EU "pay or consent" decision against Meta]]></title><description><![CDATA[In a new webinar convened by Oles Andriychuk (University of Exeter), I debated two competition law scholars (Simonetta Vezzoso, University of Trento, and Marco Botta, European University Institute) on the topic of the EU Commission&#8217;s DMA enforcement decision against Meta (&#8220;pay or consent&#8221;).]]></description><link>https://eutechreg.com/p/debating-the-eu-pay-or-consent-decision</link><guid isPermaLink="false">https://eutechreg.com/p/debating-the-eu-pay-or-consent-decision</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Tue, 02 Sep 2025 13:31:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/bf-bEyd1j50" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In a new webinar convened by Oles Andriychuk (University of Exeter), I debated two competition law scholars (Simonetta Vezzoso, University of Trento, and Marco Botta, European University Institute) on the topic of the EU Commission&#8217;s DMA enforcement decision against Meta (&#8220;pay or consent&#8221;). I already covered that decision on EUTechReg in <a href="https://eutechreg.com/p/how-will-the-eu-dma-pay-or-consent">a text on how this will impact Meta&#8217;s business</a> and in <a href="https://eutechreg.com/p/discussing-the-european-commissions">my notes from a podcast with Eric Seufert</a>. The debate was lively and I think it showed well our contrasting perspectives on both the Meta decision and the Digital Markets Act as a whole.&nbsp;</p><p>You can watch the webinar <a href="https://www.youtube.com/watch?v=bf-bEyd1j50">on YouTube</a>, but I&#8217;m also including below some notes on the content of our debate.</p><blockquote><p><strong>Conference alert:</strong> On September 29-30, I&#8217;ll be participating as a panel moderator in <a href="https://www.raid.tech">RAID - Regulation of AI, Internet and Data</a> conference, in person, in Brussels (even I visit that place from time to time). 
I believe that registrations are still open.</p></blockquote><div id="youtube2-bf-bEyd1j50" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;bf-bEyd1j50&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/bf-bEyd1j50?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>My analysis centered on the practical consequences of the Commission's decision, my skepticism about its goals, and the lack of clarity for compliance.</p><p><strong>The decision harms Meta&#8217;s business users (advertisers). </strong>My most significant concern is the direct, negative impact this decision will have on business users, particularly small and medium-sized ones. The Commission, in its own decision, acknowledges that Meta Ads is a great tool for advertisers because of its unparalleled scale and personalization capabilities. For a niche artisanal manufacturer, for instance, the ability to target specific customers isn't just a benefit, it's existential to their business model. They don't have the budget for broad brand advertising like L&#8217;Oreal. (Simonetta&#8217;s point about a hairdresser in Rome is not a good example: such advertising is more likely to be vital for businesses that are not local, that is, businesses which seek a small number of customers across the entire EU).&nbsp;</p><p>Yet, the entire thrust of the Commission's action seems designed to make this highly effective tool worse for its business users. The goal is to create contestability by thwarting the gatekeeper, but the immediate victims are the European businesses who rely on the service. We are seeing business users being harmed in the name of contestability with only hypothetical promises that it will somehow work out in the end. I worry this will become another hotel-like scandal, where the DMA's enforcement action ends up hurting the very constituency it was meant to empower. (I&#8217;m referring to the complaints from hotels about Google's DMA compliance measures that ended up preferencing intermediaries like <a href="http://booking.com/">Booking.com</a> and harming end business users like hotels).&nbsp;</p><p><strong>A lack of a DMA vision from the Commission. </strong>I've struggled to see a clear, coherent vision from the Commission of what they are trying to achieve. The Commission argued that Meta's data scale creates barriers to entry, but their remedy seems to be based on an implicit hope that if they weaken Meta, a new, better advertising player will magically appear. I am deeply skeptical that this will happen, given the immense challenges of achieving scale and the significant barriers for any new data-driven player created by the GDPR itself.</p><p>I've characterized the Commission's stance as a Pontius Pilate perspective. They seem to be washing their hands of the economic consequences of their actions, content to simply apply the DMA as they interpret it and say it's the legislators' fault if it goes wrong. You cannot properly interpret complex provisions like Article 5(2) DMA without a teleological approach&#8212;an understanding of the ultimate goal&#8212;which I believe is missing. 
A discussion of the basic economic effects on business users is completely absent from the Commission&#8217;s decision.</p><p><strong>Path to compliance: what is expected of Meta? </strong>From a practical standpoint, the path to compliance for Meta is shrouded in ambiguity. The Commission's demands seem to be shifting the goalposts. Their one clear complaint about the choices Meta currently offers to users was about a preselected default option, which Meta already changed, yet apparently the Commission remains unsatisfied.&nbsp;</p><p>Engaging in some amateur Kremlinology, I suspect the Commission is pushing for a maximalist interpretation: a scenario where the default option for users is a free service with less personalized, contextual ads. However, since contextual advertising performs much worse for advertisers, this model is likely not economically viable for Meta. If many users choose such a default, Meta might need more frequent non&#8209;skippable ads to cover costs&#8212;but the Commission could then say such degradations aren&#8217;t allowed if they don&#8217;t flow from &#8220;less personalization&#8221; itself. </p><p>The Commission seems unconcerned, as their position is that they don't care about economic consequences for gatekeepers unless a formal application is made under Article 9(1) DMA, arguing that the company's economic viability is threatened. It looks as if they are trying to deliberately lead Meta into this exact situation.</p><p>Given this difficult dialogue, I've wondered if Meta might eventually consider a &#8220;nuclear&#8221; option: completely dismantling Meta Ads and creating separate, integrated ad backends for Facebook and Instagram. This might allow them to argue, under Recital 36 DMA, that there is no cross-use or combination of data between different services, thus avoiding Article 5(2) DMA.</p><p>However, I still speculated that some sort of settlement between Meta and the Commission is likely.</p><p><strong>The concept of &#8220;necessity.&#8221; </strong>Finally, this whole debate hinges on the unresolved concept of "necessity," which I see as a difficult axis of this conversation. I believe you cannot realistically separate the business aspect of the funding of a service from the service itself. If you make a service less personalized, it will generate less revenue, so it just has to cost more in a different way to remain viable. This is a business necessity. The problem is that the Court of Justice's reasoning seems incoherent on this point. It appears to dismiss business necessity from consideration, but in the very same judgment, it says a platform may charge an appropriate fee <em>if necessary</em>. This creates a contradiction: what kind of necessity allows for a fee if not a business one? This legal ambiguity will be with us for a long time.</p><p>Simonetta, in particular, presented a strong counter-narrative to my points.</p><p><strong>On the goal of the DMA (promoting user choice vs promoting contestability).</strong> While I focused on the negative impact on business users, Simonetta argued that the decision is fundamentally about giving choice to users. She believes the DMA should be used to experiment with new business models beyond targeted advertising, stating that this is not the only type of innovation that we are looking forward to in the EU. 
When I raised the concerns of advertisers, she dismissed them, questioning how independent those business users are from the gatekeepers and arguing that the current model is a threat to our democracy.</p><p><strong>On the Commission's role.</strong> My skepticism about the Commission's vision was met with Simonetta's call for strong enforcement. She stated that she doesn&#8217;t expect any settlement between Meta and the Commission, and that she wants the Commission to enforce the DMA &#8220;religiously&#8221; to clarify the rules and prevent Meta from gaming the system as she believes they have with the GDPR.</p><p><strong>On Meta's ability to comply:</strong> I suggested that the Commission's demands were unclear, but Simonetta disagreed, stating that Meta is not in the dark about what they are supposed to do. She believes the legal principles are pretty clear and that Meta has been given enough indication of what a compliant solution looks like. She views Meta's pay or consent model as a clear attempt to game the DMA.</p><p>Marco&#8217;s view was more of a middle ground. He agreed with my assessment that the Commission's decision lacks clarity on what a compliant future model looks like. He also predicted that the parties will eventually find a compromise or settle, viewing the current decision as an interim step in a long regulatory dialogue.</p>]]></content:encoded></item><item><title><![CDATA[The Washington Effect: will the Brussels bureaucracy bend?]]></title><description><![CDATA[Much has been said about the so-called &#8220;Brussels Effect&#8221;&#8212;that is, the European Union&#8217;s animating conceit that its mission is to make rules for the entire world.]]></description><link>https://eutechreg.com/p/the-washington-effect-will-the-brussels</link><guid isPermaLink="false">https://eutechreg.com/p/the-washington-effect-will-the-brussels</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Thu, 31 Jul 2025 07:28:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Much has been said about the so-called &#8220;Brussels Effect&#8221;&#8212;that is, the European Union&#8217;s animating conceit that its mission is to make rules for the entire world. Without irony, the EU has embraced the meme that others innovate, while the EU regulates.</p><p>Recent shifts in U.S. trade policy have introduced the threat of tariffs against jurisdictions that adopt such far-reaching regulatory regimes to target U.S. firms, particularly the tech giants, in what we might call the &#8220;Washington Effect.&#8221;</p><p>The just-announced <a href="https://www.politico.com/news/2025/07/28/trump-tariffs-trade-unclear-eu-japan-00480793">trade deal</a> between the United States and the European Union, however, reveals a more complex picture. While the EU agreed to abandon network fees as a part of the agreement, it is officially signaling defiance on its core digital regulations.</p><p>This raises a fundamental question: can the Washington Effect truly succeed when EU bureaucrats&#8217; identity depends on maintaining their regulatory empire? 
Or would even limited success demonstrate that external pressure is the only force capable of breaking the cozy relationships between EU officials and entrenched interests that harm Europeans as consumers, workers, and innovators?</p><h2>The Network Fees Capitulation?</h2><p>The<a href="https://www.whitehouse.gov/fact-sheets/2025/07/fact-sheet-the-united-states-and-european-union-reach-massive-trade-deal/"> White House fact sheet</a> announcing the U.S.-EU trade deal contains a single sentence that marks a watershed moment in EU digital policy: &#8220;the European Union confirms that it will not adopt or maintain network usage fees.&#8221; This explicit commitment represents an apparent final burial of what I&#8217;ve<a href="https://truthonthemarket.com/2023/06/06/theres-nothing-fair-about-eu-telecoms-proposed-fair-share-plan/"> previously described</a> as nothing more than special pleading from incumbent telecom operators.</p><p>What were these &#8220;network fees&#8221;? The proposal&#8212;marketed as a &#8220;fair share&#8221; contribution&#8212;would have required large providers of online content (mostly U.S. platforms like Netflix, Google, and Meta) to pay European telecom operators based on traffic volumes. The telecoms claimed they needed &#8364;174 billion by 2030 to meet their connectivity targets and argued that companies generating heavy traffic should compensate them for network usage.</p><p>This was economically illiterate. As I <a href="https://truthonthemarket.com/2023/10/09/how-etnos-fair-share-proposal-threatens-europes-digital-future/">argued in 2023</a>, internet traffic is ultimately generated by consumers, who already pay for internet access. The proposal amounted to double taxation, wrapped in the rhetoric of &#8220;fairness.&#8221; It would have raised consumer prices and potentially slowed network upgrades by reducing competitive pressure on telecom operators. Even the Body of European Regulators for Electronic Communications (BEREC), the EU&#8217;s own body of telecom regulators, <a href="https://www.berec.europa.eu/en/document-categories/berec/others/berec-input-to-the-ecs-exploratory-consultation-on-the-future-of-the-electronics-communications-sector-and-its-infrastructure">opposed the plan</a>.</p><p>Yet despite overwhelming opposition from stakeholders, member states, and independent experts, it took the credible threat of 30% tariffs from Washington to kill this zombie proposal. There are at least two plausible interpretations of what happened&#8212;both likely true to some extent. Arguably, this demonstrates how U.S. pressure can deliver clear benefits to European consumers by breaking through entrenched special interests and their connections to EU officials&#8212;exemplified by former Commissioner Thierry Breton, whose background as a telecom executive surely had nothing to do with his enthusiasm for forcing American companies to subsidize his former industry colleagues.</p><p>However, it was also probably a factor that the internal EU opposition to network fees was relatively strong and it was unlikely for this proposal to be adopted anytime soon. 
Hence, it was easy for the EU negotiators to &#8220;concede,&#8221; and the concession was not a big win for the US (or for EU consumers).</p><h2>&#8216;Addressing Unjustified Digital Trade Barriers&#8217;</h2><p>The same White House announcement pledges that &#8220;the United States and the European Union intend to address unjustified digital trade barriers.&#8221; This carefully chosen language opens the door to challenging the EU&#8217;s entire framework of digital regulations.</p><p>According to<a href="https://www.euronews.com/business/2025/07/15/exclusive-us-pitches-special-role-in-eu-regulatory-surveillance-in-trade-deal"> </a><em><a href="https://www.euronews.com/business/2025/07/15/exclusive-us-pitches-special-role-in-eu-regulatory-surveillance-in-trade-deal">Euro News</a></em>, U.S. negotiators proposed creating a new advisory body that would give American companies subject to the Digital Markets Act (DMA) a formal voice in the regulatory process. While EU officials publicly denied any such committee is under consideration&#8212;with European Commission spokesperson Thomas Regnier insisting that &#8220;our legislation is not on the table&#8221;&#8212;the mere fact that such proposals are being floated demonstrates how far the conversation has shifted.</p><p>This represents a fundamental challenge to Brussels&#8217; cherished regulatory &#8220;sovereignty.&#8221;</p><h2>The DSA as &#8216;Foreign Censorship&#8217;</h2><p>The Digital Services Act (DSA) faces its own reckoning. A July 2025<a href="https://judiciary.house.gov/sites/evo-subsites/republicans-judiciary.house.gov/files/2025-07/DSA_Report%26Appendix%2807.25.25%29.pdf"> interim staff report</a> from the U.S. House Judiciary Committee framed the DSA as a &#8220;foreign censorship threat&#8221; that compels &#8220;global censorship and infringes on American free speech.&#8221;</p><p>The committee&#8217;s investigation, which included issuing subpoenas to nine technology companies, found that European regulators are targeting core political speech that, in the United States, would be protected under the First Amendment. At a May 2025 DSA workshop, the European Commission reportedly categorized the phrase &#8220;we need to take back our country&#8221; as &#8220;illegal hate speech.&#8221; The Commission also asked platforms how they could use &#8220;content moderation processes&#8221; to address memes that might spread &#8220;hate speech or discriminatory ideologies.&#8221;</p><p>These findings provide ammunition for the Trump administration&#8217;s<a href="https://www.whitehouse.gov/fact-sheets/2025/02/fact-sheet-president-donald-j-trump-issues-directive-to-prevent-the-unfair-exploitation-of-american-innovation/"> Feb. 
21 memorandum</a> directing federal agencies to investigate EU policies that &#8220;undermine free speech.&#8221; When Thierry Breton threatened regulatory action against X.com for broadcasting a live interview with President Donald Trump in August 2024, it crystallized American concerns that the DSA represents extraterritorial censorship, rather than legitimate content moderation.</p><p>The report provides damning examples of politically motivated censorship requests from EU member states:</p><ul><li><p>Poland&#8217;s National Research Institute (NASK) flagged a TikTok post stating that &#8220;electric cars are neither an ecological nor an economical solution.&#8221;</p></li><li><p>French national police directed X.com to remove a satirical post from a U.S.-based account commenting on French immigration policies following a terrorist attack.</p></li><li><p>German authorities classified a tweet calling for the deportation of a Syrian family reported to have committed 110 criminal offenses as &#8220;incitement to hatred.&#8221;</p></li></ul><p>Moreover, the DSA requires platforms to prioritize censorship requests from government-approved &#8220;trusted flaggers&#8221;&#8212;entities the report argues are often pro-censorship and, in some cases, government-funded. The DSA&#8217;s ostensibly &#8220;voluntary&#8221; codes of conduct on hate speech and disinformation are effectively mandatory, as compliance serves as a safe harbor against DSA enforcement.</p><p>The penalties reinforce these concerns: fines of up to 6% of global revenue create what the Judiciary Committee report called a strong incentive for platforms to over-censor, affecting users globally, as major social-media platforms typically maintain a single set of terms and conditions worldwide.</p><h2>The Brussels Bureaucracy Strikes Back</h2><p>Yet here is where the Washington Effect has met with Brussels intransigence. According to<a href="https://www.politico.eu/article/winners-losers-eu-von-der-leyen-us-donald-trump-trade-deal-tariffs/"> </a><em><a href="https://www.politico.eu/article/winners-losers-eu-von-der-leyen-us-donald-trump-trade-deal-tariffs/">Politico</a></em>, &#8220;there is absolutely no commitment on digital regulation, nor on digital taxes&#8221; in the U.S.-EU trade deal. In an editorial, <em>Politico</em> opined that the Commission had &#8220;called the Trump administration&#8217;s bluff&#8221; on its attempt to bend EU rules.</p><p>The<a href="https://economictimes.indiatimes.com/tech/technology/eu-tech-rules-not-included-in-us-trade-talks-eu-commission-says/printarticle/122159192.cms"> </a><em><a href="https://economictimes.indiatimes.com/tech/technology/eu-tech-rules-not-included-in-us-trade-talks-eu-commission-says/printarticle/122159192.cms">Economic Times</a></em><a href="https://economictimes.indiatimes.com/tech/technology/eu-tech-rules-not-included-in-us-trade-talks-eu-commission-says/printarticle/122159192.cms"> quoted</a> Regnier declaring on behalf of the Commission: &#8220;The legislations will not be changed. The DMA and the DSA are not on the table in the trade negotiations with the U.S.&#8221; Regnier continued: &#8220;We are not going to adjust the implementation of our legislation based on the actions of third countries. If we started to do that, then we would have to do it with numerous third countries.&#8221;</p><h2>A Deal with Two Interpretations</h2><p>What happens next is far from clear. 
The United States and European Union are signaling different understandings of what their new deal means. While Washington speaks of &#8220;addressing unjustified digital trade barriers,&#8221; Brussels insists its digital regulations aren&#8217;t even on the table. This ambiguity may be deliberate, as it allows both sides to claim victory while postponing the real confrontation.</p><p>But can Brussels truly maintain its current course? The<a href="https://ccianet.org/news/2025/07/ccia-response-to-us-eu-trade-deal/"> Computer &amp; Communications Industry Association (CCIA) has estimated</a> that EU digital rules cost $97.6 billion annually&#8212;a staggering burden that harms not just American companies but, more importantly, Europe itself. While the Commission clings to the DMA and DSA as monuments to its relevance, European innovation falls further behind.</p><h2>Will the Washington Effect Ultimately Prevail?</h2><p>Sustained American pressure can work when combined with internal EU opposition. But to be effective, U.S. policy must appreciate which issues are important (network fees were a politically weak idea anyway) and how hard the EU bureaucrats will fight any real change. The question is whether the Trump administration will maintain such a robust focus on digital regulations or move on to other priorities.</p><p>Several factors could strengthen the Washington Effect over time. While European companies struggle with intra-EU barriers to services trade equivalent to a 110% tariff (per the <a href="https://commission.europa.eu/topics/eu-competitiveness/draghi-report_en">Draghi Report</a>), American competitors burdened by EU regulations may simply focus on other markets, leaving Europe further behind. Moreover, Sweden&#8217;s call to <a href="https://www.politico.eu/article/swedish-pm-calls-to-pause-eu-ai-rules/">pause AI Act implementation</a> hints at growing internal dissent. If major member states break ranks, Brussels&#8217; united front crumbles. </p><p>The congressional framing of the DSA as &#8220;foreign censorship&#8221; may provide grounds for stronger challenges in the United States that could escalate beyond trade negotiations. Also, the striking evidence that the Judiciary Committee uncovered may become crucial for legal challenges to the legislation before EU courts.</p><p>For those of us hoping the Washington Effect would force a regulatory reckoning, the deal offers both encouragement and disappointment. Yes, sustained pressure can kill the most egregious proposals. But core regulations that embody bureaucratic identity and power prove remarkably resistant to economic logic.</p><p>Perhaps the real question isn&#8217;t whether the Washington Effect will succeed, but whether Europeans will eventually tire of paying the price for their bureaucrats&#8217; regulatory fantasies.</p><p>The Washington Effect may not achieve immediate victory over Brussels&#8217; intransigence. But by forcing these contradictions into the open, it performs a valuable service. 
As Europeans watch their continent fall further behind technologically, they should ask whether the Brussels Effect&#8217;s conceit is really worth the cost.</p>]]></content:encoded></item><item><title><![CDATA[EU DMA workshops: Google, Amazon, Apple, Meta, and Microsoft]]></title><description><![CDATA[Over the past two weeks, the European Commission held a second round of public workshops with the designated "gatekeeper" companies Alphabet, Amazon, Apple, Bytedance, Meta, and Microsoft.]]></description><link>https://eutechreg.com/p/eu-dma-workshops-google-amazon-apple</link><guid isPermaLink="false">https://eutechreg.com/p/eu-dma-workshops-google-amazon-apple</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Tue, 08 Jul 2025 10:19:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over the past two weeks, the European Commission held a second round of <a href="https://digital-markets-act.ec.europa.eu/events_en">public workshops</a> with the designated "gatekeeper" companies Alphabet, Amazon, Apple, Bytedance, Meta, and Microsoft. Having <a href="https://eutechreg.com/p/dma-workshops-and-privacy">analyzed</a> the first round in April 2024 from a privacy and security perspective, I now examine what happened in Brussels during these follow-up sessions. The workshops provide helpful background for <a href="https://subscriber.politicopro.com/article/2025/07/meta-appeals-eu-fine-over-big-tech-rules-00441698">Meta&#8217;s</a> and <a href="https://9to5mac.com/2025/07/07/apple-formally-appeals-e500-million-dma-fine-in-the-eu/">Apple&#8217;s</a> announcements that they will fight the Commission&#8217;s non-compliance decisions in the EU courts.</p><p>The second round of DMA workshops reveals troubling patterns in the Commission's enforcement approach, particularly regarding privacy and security considerations. As I've argued in my work on <a href="https://laweconcenter.org/resources/interpreting-the-eu-digital-markets-act-consistently-with-the-eu-charters-rights-to-privacy-and-protection-of-personal-data/">interpreting the DMA consistently with the EU Charter</a>, the Charter takes precedence over regulations like the DMA. The workshops suggest that the Commission may be failing to meet this standard, pursuing an interpretation of the DMA that undermines rather than upholds fundamental rights.</p><p>The gatekeepers' testimonies paint a picture of regulatory enforcement that is creating real harm: European consumers receiving inferior digital services, small businesses losing effective advertising tools, and security vulnerabilities being introduced in the name of openness. Meanwhile, the compliance costs&#8212;"multiple orders of magnitude" beyond what the Commission estimated&#8212;represent resources that could have been invested in innovation and security improvements.</p><p>What is also very concerning is the Commission's apparent belief that it can mandate optimal market outcomes through regulation, even when those outcomes conflict with user preferences and technical realities. The low adoption rates for less personalized advertising that the Commission cites as evidence of non-compliance may instead be evidence that users actually prefer the services as they were originally designed. 
The Commission's response&#8212;to mandate changes that make services worse for users&#8212;exposes a regulatory philosophy fundamentally disconnected from consumer welfare.</p><p>The path forward requires fundamental changes in the enforcement approach: clearer compliance guidance, genuine engagement with security experts, and recognition that consumer choice&#8212;not regulatory fiat&#8212;should drive market outcomes. Based on these workshops, however, the Commission appears unlikely to make such changes without significant pressure from stakeholders and the courts.</p><p>Below, I first discuss some general themes and then add more specific notes from individual gatekeeper workshops.</p><h2><strong>General themes</strong></h2><h3><strong>The DMA's ambiguities compound enforcement uncertainty</strong></h3><p>I previously highlighted the DMA's inherent legal uncertainties&#8212;from questions about its <a href="https://laweconcenter.org/resources/german-big-tech-actions-undermine-the-dma/">validity as a harmonizing measure under Article 114 TFEU</a> to the <a href="https://laweconcenter.org/resources/how-will-the-eu-dma-pay-or-consent-decision-impact-metas-business/">undefined key concepts and conflicting interpretations</a> that plague its enforcement. The workshops confirmed these concerns have materialized into concrete compliance challenges.</p><p>The gatekeepers' frustrations reveal a regulatory framework struggling with its own contradictions. Apple characterized the Commission's interoperability interpretation as "extreme," echoing my <a href="https://eutechreg.com/p/apple-and-eu-dma-a-road-to-leave">analysis that the Commission's approach goes beyond reasonable legal interpretation</a>. Meta and Apple's planned legal challenges reflect a deeper issue: the Commission's refusal to provide compliance clarity while simultaneously imposing fines for non-compliance creates a Kafkaesque regulatory environment. As Apple pointedly observed, when "no two people agree on what the DMA's substantive obligations mean," the resulting ambiguity undermines the rule of law itself.</p><p>Amazon's experience particularly illustrates what I've described as the <a href="https://laweconcenter.org/resources/the-digital-markets-act-is-a-security-nightmare/">technical implementation challenges</a> inherent in the DMA. Their struggle to reconcile DMA data portability requirements with GDPR data controller obligations exemplifies the regulatory conflicts I warned would emerge. The companies' repeated pleas for guidance&#8212;met with Commission silence&#8212;suggest an enforcement approach that prioritizes regulatory power over legal certainty or even over positive effects for the EU.</p><h3><strong>The Commission confuses providing choices with dictating outcomes</strong></h3><p>While <a href="https://laweconcenter.org/resources/the-secret-metrics-of-dmas-success/">according to the Commission, DMA explicitly focuses on conduct rather than output</a>, the EC's interpretation of low adoption rates for less personalized advertising as evidence of non-compliance contradicts this principle.</p><p>Meta's defense&#8212;that the DMA concerns conduct and user choice, not adoption metrics&#8212;aligns with my analysis of the <a href="https://eutechreg.com/p/how-will-the-eu-dma-pay-or-consent">pay-or-consent framework</a> and subsequent <a href="https://eutechreg.com/p/discussing-the-european-commissions">discussions with Eric Seufert</a>. 
If users overwhelmingly choose personalized services when given a genuine choice, this demonstrates consumer preference, not compliance failure. The Commission's response&#8212;treating user preferences as a problem to be solved through enforcement&#8212;exposes what I've characterized as the Commission's lack of concern for consumer welfare. This approach suggests the Commission seeks not to enable choice but to engineer predetermined market outcomes, regardless of what users actually want.</p><h3><strong>Enforcement disconnected from real-world consequences</strong></h3><p>The Commission's fixation on "consent rates" contrasts starkly with its lack of concern about the real negative effects of its DMA interpretation. I previously warned about the <a href="https://laweconcenter.org/resources/the-digital-markets-act-is-a-security-nightmare/">disconnect between the DMA's theoretical goals and practical realities</a>, particularly how enforcement could disrupt business models that European businesses depend on (e.g., <a href="https://laweconcenter.org/resources/pay-or-ok-what-is-an-appropriate-fee-july-update/">targeted advertising</a>).</p><p>On this point, Meta highlighted how enforcement "decontextualized from impact" damages both users (through irrelevant advertising) and SMEs (through reduced advertising effectiveness)&#8212;precisely the <a href="https://eutechreg.com/p/how-will-the-eu-dma-pay-or-consent">advertising market disruption I've been writing about</a>. Google's examples&#8212;&#8364;114 billion in potential business losses, 30% traffic reductions for direct hotel bookings, and the withholding of innovations from Europe&#8212;illustrate the Commission's willingness to ignore the economic pain its enforcement imposes, as I've <a href="https://truthonthemarket.com/2025/06/20/the-eus-dma-enforcement-against-meta-reveals-a-dangerous-regulatory-philosophy/">noted elsewhere</a>.</p><p>Apple's concern that the DMA undermines their unique value proposition resonates with my <a href="https://eutechreg.com/p/apple-and-eu-dma-a-road-to-leave">longstanding critique</a> of policymakers' dismissal of the walled garden as a legitimate consumer choice. The real threat of companies being forced to "hit the pause button" on European innovation suggests that the DMA's approach could leave European consumers as second-class digital citizens.</p><h3><strong>The Commission's systematic dismissal of privacy and security</strong></h3><p>The workshops also provided evidence for what I've long argued: the DMA creates <a href="https://laweconcenter.org/resources/the-digital-markets-act-is-a-security-nightmare/">privacy and security risks with insufficient safeguards</a> while the Commission shows a troubling willingness to dismiss these concerns. See also my characterization of the DMA as a <a href="https://laweconcenter.org/resources/the-digital-markets-act-is-a-security-nightmare/">"security nightmare"</a> and my analysis of how <a href="https://laweconcenter.org/resources/privacy-and-security-implications-of-regulation-of-digital-services-in-the-eu-and-in-the-us/">interoperability mandates create unaddressed privacy risks</a>.</p><p>Apple's testimony was particularly striking. They observed that cybersecurity agencies like ENISA were "nowhere to be found" around specification decisions. 
The exclusion of technical experts from critical decisions, as Apple noted, ensures that "complex trade-offs around interoperability are being made by those without requisite technical knowledge." The company's incredulity at the Commission's statement that "integrity has a distinct meaning from users' privacy and security" highlights precisely the compartmentalized thinking I criticized when analyzing <a href="https://laweconcenter.org/resources/interpreting-the-eu-digital-markets-act-consistently-with-the-eu-charters-rights-to-privacy-and-protection-of-personal-data/">how the DMA conflicts with EU Charter rights</a>.</p><h3><strong>Fragmentation threatens the DMA's legal validity</strong></h3><p>The workshops echoed my <a href="https://laweconcenter.org/resources/german-big-tech-actions-undermine-the-dma/">thesis about regulatory fragmentation</a>: parallel national enforcement actions, particularly by Germany's Federal Cartel Office (FCO), undermine the DMA's validity as a harmonizing measure under Article 114 TFEU.</p><p>Google and Amazon's testimonies provide concrete evidence of fragmentation. Amazon's experience is particularly telling: despite the DMA's promise of a "single European rulebook," they face FCO actions on matters "squarely covered by the DMA"&#8212;precisely the <a href="https://laweconcenter.org/resources/german-big-tech-actions-undermine-the-dma/">"brazen attempt to circumvent the DMA's harmonizing function"</a> I identified in analyzing German enforcement. The FCO's requirement that Amazon promote "higher-priced, independent, third-party seller offers" contradicts the unified approach the DMA was meant to establish.</p><p>Google's warning about unchecked national litigation compounds the problem. When member states and private litigants can pursue parallel actions on DMA-covered issues, the single European rulebook the DMA promised ceases to exist. If the Commission accepts such fragmentation&#8212;as it appears to be doing&#8212;it admits the DMA has failed its fundamental purpose as a harmonizing instrument, potentially rendering it vulnerable to invalidation by the EU Court of Justice.</p><h2><strong>Notes on individual workshops</strong></h2><h3><strong>Alphabet/Google: security-privacy issues</strong></h3><p>Google's workshop testimony raised an important problem in the DMA's implementation: how third parties systematically push to weaken security protections in the name of competition. A participant raised concerns about the friction in the sideloading process due to security warnings, which cause significant user drop-off, and asked if Google could create a whitelist for trusted developers.</p><p>Alphabet representatives argued that Android requires a single, one-time warning per install source due to real security risks. In their view, a whitelisting system is not technically feasible, given the vast number of apps and the inability to guarantee that a sideloaded app is the same as a verified one from the Play Store. They also stated they do not dictate to OEMs what additional warnings they can show.</p><p>Google also presented compelling data: users are 50 times more likely to encounter malware from sources outside the Google Play Store. They provided a concrete example of a current threat&#8212;an app that clones TikTok and allows malicious actors to upload the entire photo gallery from Android devices. 
As Google noted, "these are real threats, these are real risks."</p><p>This aligns with what I've been warning about regarding <a href="https://truthonthemarket.com/2022/01/26/privacy-and-security-risks-of-interoperability-and-sideloading-mandates/">privacy and security risks of interoperability and sideloading mandates</a>. As I wrote then, a sideloading mandate "will effectively <em>force</em> users to use whatever alternative app stores are preferred by particular app developers. App developers would have strong incentive to set up their own app stores or to move their apps to app stores with the least friction (for developers, not users), which would also mean the least privacy and security scrutiny."</p><p>Another concerning example came from a "privacy-focused" search engine whose representative argued they need more granular search query data from Google because the current anonymization thresholds render 98-99% of the query data useless for key functions like auto-complete, as they exclude the long tail of unique queries.</p><p>Alphabet representatives stressed that the law requires anonymization, and frequency thresholding is the state-of-the-art method. They invited challengers to propose a better, equally robust anonymization technique. Google explained their specific approach: queries must be entered by at least 30 signed-in users globally over the preceding 13 months, and results must have been viewed by at least five signed-in users in a given country per device type. To understand the significance: these thresholds mean that unique or rare search queries&#8212;which constitute the "long tail" of search behavior&#8212;cannot be shared with competitors, effectively limiting their ability to improve features like autocomplete. Technical privacy experts who reviewed these measures suggested, if anything, that they might not be stringent enough, highlighting the difficult balance required. Google mentioned that they have innovated to find ways to recover some queries that fall below the threshold, such as easily identifiable misspellings. They stated that Google is not aware of any better solution for robustly anonymizing such data and that no one has proposed one. They called the frequency thresholding method "state of the art throughout the industry."</p>
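<p>To make the mechanics concrete, here is a minimal sketch of this kind of frequency thresholding. The data model and all names are hypothetical, and the real pipeline obviously works very differently at scale; only the thresholds themselves (30 signed-in users globally over 13 months, five signed-in viewers per country and device type) come from Google's description.</p><pre><code>import Foundation

// Hypothetical aggregate record for one search query. Only the numeric
// thresholds below come from Google's workshop description; the types and
// field names are illustrative, not Google's actual pipeline.
struct QueryStats {
    let query: String
    let signedInUsersGlobal13Months: Int
    let viewersBySegment: [String: Int]   // e.g. "FR/desktop" -> viewer count
}

// Keep a query in the shareable dataset only if it clears the global
// threshold, and report only the country/device segments that clear the
// per-segment threshold. Everything else -- the long tail -- is dropped.
func anonymizedShare(_ records: [QueryStats]) -> [QueryStats] {
    records.compactMap { record in
        guard record.signedInUsersGlobal13Months >= 30 else { return nil }
        let reportable = record.viewersBySegment.filter { $0.value >= 5 }
        guard !reportable.isEmpty else { return nil }
        return QueryStats(query: record.query,
                          signedInUsersGlobal13Months: record.signedInUsersGlobal13Months,
                          viewersBySegment: reportable)
    }
}</code></pre><p>The sketch also makes the challengers' complaint legible: raising the thresholds strengthens the k-anonymity-style guarantee, and by the same stroke removes exactly the rare queries that would feed features like autocomplete.</p>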
<p>The European Commission representative highlighted that Article 6.11 of the DMA protects end-user privacy by requiring gatekeepers to share search data in an anonymized form. The Commission stated that this anonymization should not significantly reduce the quality or usefulness of the data. They mentioned that the Commission is assessing how data can be anonymized to ensure strong privacy protections in line with GDPR and is engaging with data protection authorities on this topic.</p><h3><strong>Amazon: compliance costs and unintended privacy consequences</strong></h3><p>Amazon's testimony exposed two issues in the DMA's design and implementation. First, the staggering gap between projected and actual compliance costs. The Commission estimated &#8364;10 million annually across all gatekeepers; Amazon alone reports costs "multiple orders of magnitude beyond that predicted amount." This massive diversion of resources from innovation to compliance represents a hidden tax on digital services offered in Europe.</p><p>More troubling is what Amazon's data portability implementation revealed about security risks. Their vetting process for DMA third-party data access uncovered that over 75% of applicants operate from outside the EU, with many appearing to be data aggregators with opaque privacy policies. This validates my concerns about the DMA creating new attack vectors: the very mechanisms meant to enhance competition are being weaponized by actors with no stake in European privacy protection.</p><p>The API versus self-service portal distinction matters here. While Amazon maintains some control through vetting, the fundamental problem remains: the DMA mandates data flows to entities that may have minimal privacy safeguards or malicious intent. Amazon's discovery of "significant security risks" in their vetting process should prompt serious reconsideration of these mandates, yet the Commission appears oblivious.</p><h3><strong>Apple: when interoperability becomes exploitation</strong></h3><p>As I've written in my <a href="https://eutechreg.com/p/apple-and-eu-dma-a-road-to-leave">analysis on Apple and the EU DMA</a>, Apple's concerns about the Commission's approach are particularly acute.</p><p>The Commission representative emphasized that user safety was a primary factor in the Commission's decisions. The official stated that the measures specified in their decision "also come with privacy and security safeguards." A typical measure involves "having user permission to grant apps access to sensitive data," similar to how Apple's own ecosystem works. For example, regarding iOS notifications, the Commission clarified that Apple can "require third parties to encrypt notifications and not to break any end-to-end encryption."</p><p>Apple countered that the interoperability mandates were "intrusive" and that the process was being rushed. An Apple representative said, "You're forcing us to rush interoperability engineering, creating the risk that we're going to have to offer premature solutions with bugs... creating lasting problems for developers and users."</p><p>Apple revealed that some third parties are exploiting interoperability requirements for data harvesting. Specifically, these parties have requested the ability to "read the contents from each and every message and email on the user's device." An Apple spokesperson pointed out a direct conflict with GDPR, stating, "The DMA was explicitly intended to not undermine the GDPR. Yet it seems we are constantly being asked to do just that."</p><p>This weaponization of interoperability is precisely what ICLE warned about in the <a href="https://laweconcenter.org/resources/comments-of-icle-to-commission-consultation-on-proposed-measures-for-interoperability-between-apples-ios-operating-system-and-connected-devices-dma-100203/">recent comments on Apple's interoperability requirements</a>. As my colleagues noted there, "unscrupulous actors with <em>no</em> incentive to maintain security safeguards are sure to try to exploit any loopholes created by the Commission. 
The result is an inevitable increase in vulnerabilities, leaving European users more exposed to risks that could have been mitigated within a more controlled ecosystem."</p><p>An Apple representative read a quote from the Commission's specification decision: "Within the architecture of the DMA, integrity has a distinct meaning from users' privacy and security," and responded, "That cannot be right, and again, cannot be what was intended."</p><p>Regarding the Account Data Transfer API, Apple described the security measures in place, including a due diligence process for third parties. Apple asks applicants whether they "plan to sell or license user data to third parties that the user has no relationship with." The representative noted the Commission's early agreement that "it was important the DMA not become synonymous with abuse of data in this space."</p><h3><strong>Meta: business model destruction through regulatory fiat</strong></h3><p>I've written extensively about Meta's advertising compliance, but the workshops revealed one significant clarification. Meta explained that their LPA (less personalised ads) option does not involve any data combination across different Meta services, stating: "if a user opts to take the ads experience less personalised ads, the only data that we would use are personal data relating to a user's interactions with ads, so an ad they want to see more of or an ad they never want to see again, a user's age, gender, if they've provided this, and a general location. We also use very limited context-based data. This is relating to content shown to a user in a specific session they are engaging with."</p><p>However, a Meta representative added that "there is a part of the ad selection process which could be impacted by the content in the live session that you are in on Facebook and Instagram. But it's minimal, and it's reduced."</p><p>This statement raises important questions. Meta appears to acknowledge that its advertising system still accesses basic user information from Facebook/Instagram, even in LPA mode. This could prove problematic given the Commission's strict interpretation that a less personalized alternative must involve no data combination/cross-use across the gatekeeper's services&#8212;as I analyzed in <a href="https://eutechreg.com/p/how-will-the-eu-dma-pay-or-consent">"How will the EU DMA 'pay or consent' decision impact Meta's business?"</a>.</p><p>But, as a Meta representative noted during the workshop, data like age and location are "pure table stakes for advertising." This creates an impossible bind. The Commission's interpretation risks mandating that Meta offer a service that cannot economically exist. Meta's observation that this "potentially imposes an unviable business model" understates the severity&#8212;it definitively imposes business model destruction.</p><p>Even if the Commission accepts LPA as is, this doesn't address the issue of it being largely useless to the many EU SMEs that rely on Facebook or Instagram to reach their customers. Such businesses don't have budgets for "brand advertising." To be viable, they need to reach precisely the people interested in their often niche products or services.</p><p>The enforcement "decontextualized from impact" that Meta describes has real consequences: European SMEs lose effective advertising tools, users receive irrelevant ads, and Meta potentially faces a difficult choice between compliance and viability. 
The Commission's fixation on consent rates while ignoring these impacts reveals enforcement driven by ideology rather than consumer welfare or economic reality.</p><h3><strong>Microsoft: the Recall controversy and consent confusion</strong></h3><p>Microsoft's workshop highlighted two distinct but related problems. LinkedIn's dual consent requirement&#8212;requiring both DMA and GDPR consent for overlapping data uses&#8212;exemplifies the regulatory confusion I've long criticized. Users must navigate multiple consent frameworks for the same data processing, a complexity that serves no privacy purpose but increases the Commission's control. This artificial interpretation seems designed to circumvent inconvenient GDPR precedents rather than protect users. (I explain how the "two consents" interpretation increases EC powers and attempts to circumvent precedent <a href="https://eutechreg.com/p/discussing-the-european-commissions">here</a>.)</p><p>The second issue is the Recall feature in Windows. Microsoft explained that the Recall feature, which allows a user to find content they have previously seen on their screen, "runs entirely locally on the PC." It uses AI to process screenshots locally to make past activity searchable. This design implies that sensitive user activity data is not sent to the cloud.</p><p>The Commission's interest in mandating data portability for Recall raises serious security concerns. (Some will say "even more" concerns given existing debates about Recall's security, but I'm more optimistic about the future of this kind of technology.) An EU Commission official stated that they're seeking feedback on the "data portability solution" for Recall under the EU DMA.</p><p>This could be very problematic. Throughout my work on the DMA, I've argued that the EU Commission consistently downplays user security and privacy concerns about legally mandated interoperability and data portability.</p>]]></content:encoded></item><item><title><![CDATA[Discussing the European Commission's Meta (“pay or consent”) decision with Eric Seufert]]></title><description><![CDATA[Today, I joined Eric Seufert for my seventh (!) appearance on the Mobile Dev Memo podcast[1] to discuss the European Commission's April decision on Meta's "pay or consent" model.]]></description><link>https://eutechreg.com/p/discussing-the-european-commissions</link><guid isPermaLink="false">https://eutechreg.com/p/discussing-the-european-commissions</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Wed, 25 Jun 2025 16:20:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today, I <a href="https://mobiledevmemo.com/podcast-whats-happening-with-dma-enforcement-with-mikolaj-barczentewicz/">joined</a> <a href="https://x.com/eric_seufert">Eric Seufert</a> for my seventh (!) appearance on the <a href="https://mobiledevmemo.com/podcast-whats-happening-with-dma-enforcement-with-mikolaj-barczentewicz/">Mobile Dev Memo</a> podcast[1] to discuss the European Commission's April decision on Meta's "pay or consent" model. The catalyst for our conversation was the recent release of the full text of this decision, which provides crucial insights into the Commission's approach to enforcing the Digital Markets Act (DMA). 
I already wrote about the decision <a href="https://eutechreg.com/p/how-will-the-eu-dma-pay-or-consent">here</a> and on <a href="https://truthonthemarket.com/2025/06/20/the-eus-dma-enforcement-against-meta-reveals-a-dangerous-regulatory-philosophy/">Truth on the Market</a>, but our conversation with Eric covered more ground, so I encourage you to <a href="https://mobiledevmemo.com/podcast-whats-happening-with-dma-enforcement-with-mikolaj-barczentewicz/">listen</a>! Below, I&#8217;m sharing my extended notes on that conversation with a bit more chronological and technical detail.</p><h2><strong>The long road through two regulatory regimes</strong></h2><p>I began by providing context on the two regulatory regimes affecting Meta's advertising monetization in the EU: the GDPR and the DMA. The saga began in 2018 when <strong>the GDPR</strong> became applicable, prompting Meta to move from user consent to "contractual necessity" as their legal basis for processing personal data. This kicked off a regulatory odyssey, which I have been covering here on <a href="https://eutechreg.com/archive?sort=new">EUTechReg</a>.</p><p>In December 2022, the European Data Protection Board (EDPB) forced the Irish Data Protection Commission to tell Meta they couldn't use contractual necessity for personalized advertising. The EDPB adopted a restrictive&#8212;and <a href="https://eutechreg.com/p/facebook-instagram-pay-or-consent">I think</a> mistaken&#8212;interpretation that personalized ads funding a service don't fall under contractual necessity.</p><p>By April 2023, Meta had pivoted to "legitimate interests" as their legal basis. However, this too was short-lived. The same month, Meta received an unrelated but significant &#8364;1.2 billion fine for EU-US data transfers, signaling the regulatory pressure they faced on multiple fronts.</p><p>The EU Court of Justice ruled in July 2023 that Facebook couldn't rely on legitimate interests, though that case was limited to third-party data. Crucially&#8212;and this would become important later&#8212;the Court stated that businesses that require consent may be allowed to charge for an alternative version of the service without such data processing. This seemingly minor point would become central to Meta's strategy.</p><p>A significant escalation came from the Norwegian data protection authority. In July 2023, they issued a local decision prohibiting Meta from relying on legitimate interests and initiated an urgency procedure at the EDPB.&nbsp;</p><p>By August 2023, Meta announced they would move to consent as their legal basis. The EDPB partially agreed with the Norwegian position in October 2023, directing the Irish DPC to decide that Meta couldn't rely on legitimate interest. With contractual necessity and legitimate interests both ruled out by the regulators, consent was the only remaining option under the GDPR.</p><p><strong>The Digital Markets Act</strong>, which entered into force in November 2022, added another layer of complexity. When Meta was designated as a "gatekeeper" in September 2023, the Commission crucially designated Facebook, Instagram, and Meta Ads as separate regulated services.&nbsp;</p><p>This designation had serious implications. Under Article 5(2) of the DMA, gatekeepers cannot cross-use or combine personal data between separate services without user consent. 
Even though Meta Ads functions as the advertising backend for Facebook and Instagram&#8212;what could be considered an integrated system&#8212;the Commission's designation meant any data flow between these "separate" services would presumptively violate the DMA without consent.</p><p>In response, Meta introduced its "Subscription for No Ads" (SNA) plan on September 7, 2023, just two days after their gatekeeper designation. By October 30, 2023, Meta officially announced the model: users could choose between a paid subscription without ads or free access with personalized ads.</p><p>The rollout in November 2023 immediately attracted regulatory attention. The European Commission sent Meta requests for information about the new model. From January to March 2024, the Commission engaged in discussions with the Irish DPC about Meta's approach. Meta submitted their compliance report, arguing that the "pay or consent" model satisfied both GDPR and DMA requirements.&nbsp;</p><p>Meta&#8217;s initial "Subscription for No Ads" model offered users a choice: consent to personalized advertising or pay &#8364;9.99 (&#8364;12.99 on mobile) monthly for an ad-free experience.</p><p>March 7, 2024, marked the deadline for gatekeepers to comply with the DMA. Soon after, the Commission opened formal proceedings to investigate whether Meta's model complied with Article 5(2).&nbsp;</p><p>Back on the &#8220;GDPR track,&#8221; in April 2024, the EDPB issued Opinion 08/2024 on &#8220;pay or consent&#8221; models generally. While acknowledging that the Court of Justice had explicitly said paid alternatives to consent were possible, the EDPB struggled to target &#8220;big tech&#8221; without harming traditional publishers who had long used similar models. Their opinion hinted heavily that binary choices weren't sufficient.</p><p>Continuing the DMA investigation, the Commission sent its preliminary findings to Meta at the beginning of July 2024, concluding that the model didn't comply with DMA requirements. Meta responded, contesting these conclusions. In November 2024, Meta announced an "Additional Ads option" providing a third choice for users.</p><p>Finally, on April 23, 2025, the Commission adopted its final decision, finding Meta in non-compliance with Article 5(2) of the DMA. As I noted <a href="https://eutechreg.com/p/how-will-the-eu-dma-pay-or-consent">previously</a>, the &#8364;200 million fine was relatively low&#8212;given the &#8364;15.2 billion maximum. This monetary penalty, I suggested to Eric, seems designed to deflect attention from the decision's prohibitionist and economically uninformed legal reasoning, perhaps in the hope that a lower sum would prevent the U.S. administration from viewing this as a tax or tariff on American companies.</p><h2><strong>The Commission's two-pronged attack</strong></h2><p>In its April decision, the Commission adopted a novel interpretation that Article 5(2) of the DMA creates two distinct and cumulative conditions:</p><ol><li><p><strong>The &#8220;specific choice&#8221; condition</strong>: an autonomous DMA concept requiring gatekeepers to offer &#8220;a less personalised but equivalent alternative.&#8221;</p></li><li><p><strong>The &#8220;valid consent&#8221; condition</strong>: traditional GDPR consent requirements.</p></li></ol><p>This distinction seems strained, especially since GDPR authorities have long insisted on reading consent requirements to ensure &#8220;real&#8221; choice. 
The Commission appears to be creating this separation to assert more regulatory power and circumvent what the Court of Justice said in the <em><a href="https://barczentewicz.com/post/the-cjeus-decision-in-metas-competition-case-consequences-for-personalized-advertising-under-the-gdpr-part-1/">Meta</a></em><a href="https://barczentewicz.com/post/the-cjeus-decision-in-metas-competition-case-consequences-for-personalized-advertising-under-the-gdpr-part-1/"> case</a> about allowing &#8220;appropriate fees.&#8221;</p><h2><strong>The European Commission&#8217;s arguments</strong></h2><p>The Commission's central argument against Meta's "Consent or Pay" model rests on their interpretation that it fails to provide users with a "specific choice" because the paid "Subscription for No Ads" (SNA) option is not an equivalent alternative to the free "With Ads" option.</p><h3><em>Different conditions of access</em></h3><p>The Commission asserts that the two options are fundamentally different because they have disparate conditions of access. Accessing the SNA option requires a recurring monthly monetary payment and access to online payment systems&#8212;a significant economic commitment. In contrast, the "With Ads" option is free, requiring only a click to consent. The Commission argues that the means of remuneration is an essential feature of a service.</p><p>Their core thesis is striking: <strong>if Meta chooses to offer its primary service for free, any equivalent alternative must also be free</strong> of monetary charge to have equivalent access conditions. The Commission states that if Meta finds this unviable, it should make a case under Article 9(1) of the DMA, which allows for suspension of obligations if they threaten economic viability.</p><p>Meta counters by accusing the Commission of trying to impose a specific business model, effectively punishing Meta for having historically offered its services for free. They point out that the DMA doesn't explicitly mandate that alternatives must be free of charge, unlike other articles in the regulation where this is specified. Meta claims that offering a non-personalized, ad-supported service for free is not a viable business model.</p><h3><em>User &#8220;lock-in&#8221; and habituation</em></h3><p>The Commission also pointed to alleged user lock-in and habituation. For two decades, Meta cultivated a massive user base by offering its services for free, even featuring the slogan "It's Free and Always Will Be" on Facebook's registration page until 2019. The Commission argues that users have grown accustomed to accessing these services without monetary payment and don't perceive ads as disruptive enough to warrant paying a fee. Therefore, presenting a paid option is not a realistic or attractive alternative for most users.</p><p>When Meta raised well-known examples like YouTube Premium, Spotify, and various news publishers offering similar "Consent or Pay" models, the Commission dismissed these examples on two grounds. First, most aren't designated gatekeepers under the DMA. Second, services like video streaming and media publishing allegedly have different user perceptions because their content (movies, articles) has "known monetary value outside the platform," unlike user-generated content on social networks.</p><p>I find this distinction particularly unpersuasive. Can Europeans really be considered so unfamiliar with paying for digital services that we can't legally consider them free to choose when payment is involved? 
The distinction between platforms whose content has "known monetary value outside the platform" and platforms like Instagram sounds like an argument from lawyers with no understanding of economic reality. Compare YouTube influencers to Instagram influencers&#8212;how is this a relevant categorical distinction?</p><h3><em>Predictability</em></h3><p>The Commission highlights that Meta knew, or should have known, that an overwhelming majority of users would choose the free option. They point to academic studies showing low willingness to pay for privacy and Meta's own internal documents from 2023, which predicted uptake for the SNA option would be extremely low (less than 1%)&#8212;a figure that closely matched the actual outcome.</p><p>The Commission suggests Meta's pricing strategy around SNA expressly assumed people wouldn't pay, treating this as a big &#8220;gotcha.&#8221; But I don't see how this is the gotcha that the Commission's lawyers think it is. Their argument is sophomoric&#8212;they say users choose not to pay even though they're "interested" in lesser processing, but provide no argument for why "interest" undermines the choice not to explore that interest. It's a normal feature of life that many interests are of so little value to us that we don't pursue them. A legal standard of free choice cannot be set so low as to turn on whatever people might say in a survey, irrespective of business reality.</p><p>It was obvious to everyone that most users wouldn't pay, and that only the most valuable advertising targets would. Taking that into account in pricing strategy was basic competence for Meta's monetization team. I'd suggest the low uptake simply shows that users want social media funded by personalized advertising. The low uptake is evidence that the Commission is wasting everyone's time trying to force something Europeans don't want.</p><h3><em>Circumventing the CJEU's Meta judgment</em></h3><p>The Commission believes that the Court of Justice's <a href="https://barczentewicz.com/post/the-cjeus-decision-in-metas-competition-case-consequences-for-personalized-advertising-under-the-gdpr-part-1/">ruling in </a><em><a href="https://barczentewicz.com/post/the-cjeus-decision-in-metas-competition-case-consequences-for-personalized-advertising-under-the-gdpr-part-1/">Meta Platforms</a></em>, which mentioned offering an alternative service for &#8220;an appropriate fee,&#8221; is inapplicable to the &#8220;specific choice&#8221; requirement under the DMA. This is because that judgment concerned the validity of consent under the GDPR, not the DMA&#8217;s &#8220;specific choice.&#8221;&nbsp;</p><p>Even if the judgment were applicable, the Commission notes it doesn't automatically entitle Meta to charge a monetary fee. They point out that "if necessary for an appropriate fee" doesn't exclude other forms of remuneration, citing the German ("Entgelt": consideration/compensation) and French ("r&#233;mun&#233;ration") versions, which suggest broader transfers of value beyond direct user payment.</p><p>Meta insists the Commission is trying to circumvent the <em>Meta Platforms</em> judgment, which they interpret as explicitly permitting an equivalent alternative to be offered for an appropriate fee.</p><h3><em>Lack of valid consent</em></h3><p>Beyond "specific choice," the Commission argues users cannot give valid GDPR consent due to the coercive nature of Meta's model. 
They emphasize the power imbalance between Meta and its users, combined with the clear "detriment" of either paying for previously free services or abandoning platforms central to their social lives&#8212;what they call a "very substantial and potentially irreparable loss." Citing the EDPB Opinion on pay or consent, they argue the binary choice fails to mitigate this detriment, as true freedom would require a free alternative without behavioral advertising.&nbsp;</p><p>The Commission particularly emphasizes how Meta cultivated users for two decades with free services ("It's Free and Always Will Be"), making the subsequent paywall an "illusory choice." While the CJEU's <em>Meta Platforms</em> judgment allows "appropriate fees," the Commission argues that here the fee itself becomes coercive&#8212;a negative consequence for withholding consent rather than a freely chosen alternative.</p><h2><strong>Economic blindness: the Commission's most troubling stance</strong></h2><p>Perhaps the most alarming aspect of the decision is partly revealed in paragraph 154, where the Commission states that DMA compliance "will inevitably impact the business models that gatekeepers choose" and that the DMA "does not safeguard such business models from any constraint it mandates."&nbsp;</p><p>Moreover, the Commission explicitly states they won't consider economic impacts unless a gatekeeper applies under Article 9(1), claiming that exceptional circumstances threaten its EU operations' viability. Even then, we can expect an extremely narrow interpretation&#8212;perhaps pushing gatekeepers to accept minimal profits above a strictly defined cost base.</p><h3><strong>The third-party blind spot</strong></h3><p>As I noted in my previous comments on the April decision, the Commission seems to have completely abandoned third-party impact analysis. In paragraph 262, they acknowledge that "behavioural advertising may create value to some end users and businesses under certain circumstances" but immediately dismiss this as irrelevant to their legal analysis.</p><p>The Decision contains no examination of potential harm to EU businesses advertising on Meta's platforms. I suggested that personalized advertising on Facebook and Instagram has created significant contestability in European markets, particularly for SMEs and direct-to-consumer businesses that depend entirely on targeted advertising channels.</p><h2><strong>Meta's current triple choice model</strong></h2><p>Since November 2024, Meta has implemented a significantly revised model:</p><ul><li><p>Reduced subscription prices by 40% (&#8364;5.99/month on web, &#8364;7.99 on mobile).</p></li><li><p>Introduced a third option: "less personalized ads" with non-skippable ad breaks.</p></li><li><p>Created a two-step choice flow where users first choose between paid and free, then can opt for less personalization.</p></li></ul><p>The Commission's decision hints they might accept this model if Meta removes the pre-selection of personalized ads in the second step. 
However, given the hostile tone throughout the decision, I wouldn't rule out new objections, particularly around what data processing occurs in the "less personalized" option.</p><p>Meta states they use age, gender, location, device information, ad engagement data, and &#8220;information about the content you're viewing.&#8221; The Commission might argue that this still constitutes cross-use between Meta Ads and the social platforms.</p><h2><strong>EU AI regulation: a pattern of uncertainty</strong></h2><p>We also discussed recent developments in EU AI regulation, where similar patterns of regulatory uncertainty emerge.</p><h3><strong>The EDPB's non-guidance</strong></h3><p>The European Data Protection Board issued an Opinion on AI models and the GDPR in December 2024, which in classic EDPB style provides a long list of things AI developers can attempt for GDPR compliance while giving no guarantees of sufficiency. They didn't declare AI illegal in the EU, but that's hardly reassuring&#8212;such a declaration would be politically unpalatable even there. Instead, they maintained maximum enforcement flexibility, allowing any national enforcer to impose billion-euro fines while leaving AI developers guessing whether their compliance efforts will be deemed inadequate.</p><h3><strong>CNIL's pragmatic exception</strong></h3><p>The French data protection authority, CNIL, stands out by actually labeling specific actions as likely to achieve compliance. This may sound like faint praise, but it shows how imbalanced GDPR enforcement has become (see my <a href="https://eutechreg.com/p/the-edpbs-ai-opinion-shows-the-need">The EDPB&#8217;s AI Opinion Shows the Need for GDPR Enforcement Reform</a> and <a href="https://eutechreg.com/p/a-serious-target-for-improving-eu">A serious target for improving EU regulation: GDPR enforcement</a>). The problem is that CNIL's guidance only indicates how the French authority will act. In cross-border cases, CNIL may be outvoted in the EDPB by more privacy-absolutist authorities, as the Irish DPC has repeatedly learned.</p><h3><strong>Meta's AI training controversy</strong></h3><p>Meta's plan to train AI models on public user data faced particular scrutiny. Critics argued Meta couldn't use data uploaded before their May 27 announcement because users lacked reasonable expectations about AI training. This creates an impossible retroactivity problem&#8212;if accepted, no existing public data could ever be used for AI training.</p><p>Meta implemented unconditional opt-outs for both users and non-users. Yet critics argue this still isn't sufficient. As I noted, we're discussing public content anyone on the internet can access. Given search engines and widespread web scraping, can it seriously be argued that users lack reasonable expectations about how public content might be reused?</p><h2><strong>The Draghi Report's limited impact</strong></h2><p>Finally, we discussed how the Draghi Report's warnings about European competitiveness have had minimal impact on Brussels. 
While there's genuine reform momentum in European capitals&#8212;with the Swedish Prime Minister even calling for a delay to AI Act implementation&#8212;the Commission treats this as a political problem to be managed rather than a call for fundamental change.</p><p>The Commission's proposed GDPR &#8220;simplification&#8221; is <a href="https://www.linkedin.com/posts/mikolajbarczentewicz_how-eu-law-influences-tech-miko%C5%82aj-barczentewicz-activity-7330943507416682496-HF6q?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAACQtiABF7m91F-eH3bqeLo_wX_F5ZhXBEg">laughable</a>, focusing on minor cost savings while ignoring fundamental issues. Brussels seems unwilling to accept that there are deeper problems with how laws like the DMA and GDPR are applied&#8212;particularly the refusal to consider economic consequences.</p><p>What we get from Brussels is lawyers making categorical decisions about digital markets without understanding their economic dynamics. Their formalistic interpretation, divorced from business reality, poses serious risks to European competitiveness.</p><p>There&#8217;s a segment of privacy officials who view AI like they do cryptocurrency&#8212;as a fad to be stopped or at best ignored. They're the same crowd that eagerly shares every paper suggesting LLMs can't count or &#8220;reason&#8221; properly. Combined with the Commission's explicit stance that they needn't consider economic impacts on gatekeepers or third parties, this attitude threatens the welfare of Europeans.<br></p><p>[1] Here is a list of my MDM appearances so far:</p><ul><li><p>1 March 2023, <a href="https://mobiledevmemo.com/a-deep-dive-on-european-data-privacy-law">&#8220;A deep dive on European data privacy law&#8221;&nbsp;</a></p></li><li><p>11 April 2023, <a href="https://mobiledevmemo.com/a-deep-dive-on-the-digital-markets-act-dma-and-digital-services-act-dsa/">&#8220;A deep dive into the Digital Markets Act (DMA) and Digital Services Act (DSA)&#8221;</a></p></li><li><p>14 June 2023, &#8220;<a href="https://mobiledevmemo.com/the-future-of-eu-us-data-transfers/">The future of EU-US data transfers</a>&#8221;</p></li><li><p>20 December 2023, &#8220;<a href="https://mobiledevmemo.com/podcast-exploring-the-pay-or-okay-model-with-mikolaj-barczentewicz/">Exploring the Pay-or-Okay model</a>&#8221;</p></li><li><p>24 April 2024, <a href="https://mobiledevmemo.com/podcast-is-pay-or-okay-dead-in-europe-with-mikolaj-barczentewicz/">&#8220;Is Pay or Okay dead in Europe?&#8221;</a> (see also my <a href="https://eutechreg.com/p/eu-edpb-opinion-on-pay-or-ok-and?utm_source=publication-search">notes</a> from that podcast on EUTechReg.com)</p></li><li><p>12 February 2025, <a href="https://mobiledevmemo.com/podcast-the-eus-ai-prospects-with-mikolaj-barczentewicz/">&#8220;Understanding the EU&#8217;s AI Act&#8221;</a></p></li><li><p>25 June 2025, &#8220;<a href="https://mobiledevmemo.com/podcast-whats-happening-with-dma-enforcement-with-mikolaj-barczentewicz/">What&#8217;s happening with DMA enforcement?</a>&#8221;</p></li></ul>]]></content:encoded></item><item><title><![CDATA[How will the EU DMA “pay or consent” decision impact Meta’s business?]]></title><description><![CDATA[The European Commission published the text of its April Decision on Meta&#8217;s &#8220;pay or consent&#8221; scheme as implemented until November 2024.]]></description><link>https://eutechreg.com/p/how-will-the-eu-dma-pay-or-consent</link><guid 
isPermaLink="false">https://eutechreg.com/p/how-will-the-eu-dma-pay-or-consent</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Thu, 19 Jun 2025 12:25:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The European Commission <a href="https://ec.europa.eu/competition/digital_markets_act/cases/202525/DMA_100055_528.pdf">published the text</a> of its April Decision on Meta&#8217;s &#8220;pay or consent&#8221; scheme as implemented until November 2024. Since then, Meta has made significant changes to Facebook and Instagram in the EU, introducing a free "less-personalized ads" option. Yet the Commission maintains "that it is possible that Meta's non-compliance is ongoing."</p><p>I examine the Commission's reasoning for why Meta may still be non-compliant and explore potential solutions for Meta. I also analyze the Commission's striking approach: its stated willingness to ignore the economic pain DMA enforcement imposes on gatekeepers. This stance affects all gatekeepers, including Amazon, Apple, Google, and Microsoft. A separate piece will provide my legal analysis of the Decision's other aspects.</p><p>The headline from April focused on Meta's EUR 200M fine&#8212;far below the maximum EUR 15.2B (10% of global turnover). This relatively modest penalty seems designed to deflect attention from the Decision's prohibitionist and economically uninformed legal reasoning, perhaps in the hope that a lower sum will prevent the U.S. administration from viewing this as a tax or tariff.</p><h2>Economic viability as the only limit on enforcement</h2><p>Paragraph 154 reveals the Commission's approach to DMA enforcement:</p><blockquote><p>&#8220;... the purpose of [the DMA] is to ensure, for all businesses, contestable and fair markets in the digital sector across the Union where gatekeepers are present, to the benefit of business users and end users. The achievement of those objectives through the implementation of specific obligations, including Article 5(2) of [the DMA], will inevitably impact the business models that gatekeepers choose for their designated CPSs. In that context, [the DMA] does not safeguard such business models from any constraint it mandates, nor is complying with that Regulation conditional upon preserving the profits gatekeepers generated prior to its adoption.&#8221;</p></blockquote><p>The Commission believes Article 9(1) sets the only limit on how much the DMA may hurt gatekeepers. In case of exceptional circumstances beyond a gatekeeper's control that threaten its EU operations' economic viability, the Commission may suspend specific obligations. We can expect a narrow interpretation here&#8212;perhaps pushing gatekeepers to accept minimal profits above a strictly defined cost base. In essence, gatekeepers become quasi-public utilities (despite the Commission's explicit denial in paragraph 245).</p><p>The Commission claims it need not consider DMA interpretations' effect on gatekeepers' profits until a gatekeeper applies under Article 9(1), arguing that exceptional circumstances threaten its EU operations' viability.</p><p>From a European perspective, the Commission's biggest blind spot is its abandonment of serious third-party impact analysis. 
It ignores how withdrawing personalized advertising affects SME advertisers in the EU. It fails to recognize that platforms like Facebook and Instagram create massive market contestability where SMEs operate. Without these advertising channels, many EU SMEs will likely fail.</p><h2>Why the EC found Meta's "pay or consent" non-compliant</h2><p>The Commission ruled that Meta's pre-November 2024 "pay or consent" scheme didn't allow users to make a "specific choice for a less personalized but equivalent alternative" and to "validly consent" to personal data use.</p><p>Simply put: if Meta offers free services to anyone in the EU, it cannot charge for the option that doesn't combine data across services (the less personalized alternative).</p><p>The DMA permits "degradation of quality" as a "direct consequence" of not processing personal data or signing in users. However, the Commission views subscription fees as "conditions of access" that exceed DMA-allowed quality degradation.</p><p>The Commission treats this finding as independent of GDPR authorities' decisions. The EC interprets the DMA as separately requiring both "specific choice" (a DMA-only requirement) and GDPR "consent" for combining or cross-using personal data across services. The prohibition on charging for less personalized options (when a free option exists) stems from the "specific choice" requirement, not GDPR consent.</p><h2>Will Meta's current model face non-compliance findings?</h2><p>The Commission is now reviewing Meta's post-November 2024 scheme, including the free "less-personalized ads" option. Even if Meta successfully appeals, the Commission's decision on the new model will likely arrive sooner. What might it say?</p><p>The Commission appears to have left room to accept Meta's current model. The Decision emphasizes that subscription fees were "conditions of access," not just "degradation of quality." Perhaps the current model&#8212;even with non-skippable ads&#8212;would qualify as DMA-allowed "degradation of quality."</p><p>When preliminarily discussing Meta's current model, the Commission questioned only one aspect: the pre-selection of the free option with personalized ads. Meta could easily address this.</p><p>However, given the Commission's hostile tone and unwillingness to consider business reality, it may find "new" arguments against the post-November model.</p><p>For instance, the Commission might argue that data used for "less-personalized ads" falls under the DMA's cross-use/combination prohibitions. Meta&#8217;s &#8220;<a href="https://www.facebook.com/business/help/825123042877166">About less-personalized ads</a>&#8221; lists:</p><blockquote><p>How you engage with ads, such as clicking or liking them<br>Your age, the gender you provide and your location<br>Your device information, like the device or browser you&#8217;re using<br>Information about the content you&#8217;re viewing while you browse on our Products</p></blockquote><p>I don't know whether Meta Ads processes this data separately or within Facebook. Meta should be able to achieve full separation if needed.</p><p>As a "nuclear" solution, Meta could separate "Facebook Ads" and "Instagram Ads" completely, eliminating cross-use of personal data between them and other Ads platforms. Each could then support its respective platform. 
This approach could benefit from Recital 36 DMA, which the EC acknowledged clarifies that cross-use/combination prohibitions don't apply between services "provided together with, or in support of" a core platform service.</p><p>This structure would arguably exempt the integrated ads backend and social media front-end from Article 5(2) DMA's cross-use prohibitions. Meta might even return to the "legitimate interest" GDPR basis[1] rather than consent. At minimum, it would deal with GDPR's consent requirement rather than the stricter, nebulous threshold the Commission conjures under the DMA.</p><p>Like restrictions on personalised advertising, this solution would have third-party consequences, the kind of effects the Commission has given no evidence of considering. Currently, SME advertisers benefit from a unified ad system that allows them to reach audiences across both platforms with a single campaign, shared budgets, and consolidated performance metrics. Were Meta to silo these services, advertisers would need to manage separate campaigns, split their already limited budgets, and lose the efficiency of cross-platform optimization.&nbsp;</p><div><hr></div><p>[1] Often overlooked: the EU Court of Justice didn't require consent for Meta's use of <em>first-party</em> personal data for personalized advertising. The famous Meta judgment addressed <em>third-party</em> data. See my analysis <a href="https://barczentewicz.com/post/the-cjeus-decision-in-metas-competition-case-consequences-for-personalized-advertising-under-the-gdpr-part-1/">The CJEU&#8217;s Decision in Meta&#8217;s Competition Case: Consequences for Personalized Advertising Under the GDPR (Part 1)</a> and <a href="https://eutechreg.com/p/edpb-meta-violates-gdpr-by-personalised">EDPB: Meta violates GDPR by personalised advertising. A "ban" or not a "ban"?</a>.</p>]]></content:encoded></item><item><title><![CDATA[Trustworthy privacy for AI: Apple's and Meta's TEEs]]></title><description><![CDATA[Ben Thompson argues that Apple should "make its models &#8211; both local and in Private Cloud Compute &#8211; fully accessible to developers to make whatever they want," rather than limiting them to the "cutesy-yet-annoying frameworks" like Genmoji.]]></description><link>https://eutechreg.com/p/trustworthy-privacy-for-ai-apples</link><guid isPermaLink="false">https://eutechreg.com/p/trustworthy-privacy-for-ai-apples</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Sun, 01 Jun 2025 15:34:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ben Thompson <a href="https://stratechery.com/2025/apple-ais-platform-pivot-potential/">argues</a> that Apple should "make its models &#8211; both local and in Private Cloud Compute &#8211; fully accessible to developers to make whatever they want," rather than limiting them to the "cutesy-yet-annoying frameworks" like Genmoji. This is not only a good idea but one that can be implemented safely for user data, without introducing privacy risks significantly greater than those users already accept when granting apps access to their data. Let me explain why, while also examining how Apple and Meta are approaching the challenge of "private" AI with trusted execution environments (TEEs). 
Don&#8217;t worry&#8212;this is a newsletter on EU tech regulation&#8212;I will also comment on how the EU Digital Markets Act may affect this approach.</p><p><strong>My personal stakes in privacy-preserving AI</strong></p><p>I don't think anyone would accuse me of being a privacy absolutist. But I am concerned about the safety of my data when processed by various AI services. What makes me even more concerned is the sheer volume of data that may be needed to make personal and professional AI agents truly useful. On the other hand, I really want those agents in my life&#8212;and not just to book flights and organize dinner parties.</p><p>I've been experimenting with local models, and while they're adequate for many uses, I find myself consistently gravitating toward state-of-the-art hosted services from Anthropic, Google, OpenAI, and X. They're simply better, both in terms of speed of inference and quality of the outputs. The gap in capability is significant enough that I don't want to be forced into running a subpar local AI agent just because I'm concerned about exposing my email history, messages, calendars, and other sensitive data. (I do hope that local models will become a viable option, but it's unclear when this will happen, and even then, it&#8217;s possible they will not perform quite as well.)&nbsp;</p><p>This drives my interest in "private clouds" and trusted execution environments (TEEs) as a solution&#8212;technologies that could give us both state-of-the-art AI capabilities and reasonable data safety, even for someone with my attitude.</p><p>I don't claim to speak for all users. Many people will likely embrace powerful AI agents without much thought about data protection, so captivated by the functionality that privacy becomes an afterthought. However, there are at least two good reasons not to rely on this too much.</p><p>First, these same users will be justifiably outraged if they become victims of a major AI service breach. (This reminds me of <a href="https://cases.justia.com/federal/district-courts/new-york/nysdce/1%3A2023cv11195/612697/551/0.pdf">the data-preservation order</a> against OpenAI in their litigation with the New York Times&#8212;possibly a massive data breach waiting to happen.)&nbsp;</p><p>Second, privacy regulators tend to have higher expectations than most users. These regulators are still developing their approach to AI, which creates a window for AI developers to propose credible solutions that could set standards for years to come.&nbsp;</p><p>The long-term viability of the AI business may well depend on addressing those concerns and reducing the &#8220;attack surface.&#8221;</p><p><strong>How Apple Intelligence currently works</strong></p><p>Let me start with a concrete example of how Apple Intelligence handles a common task: text summarization. When you select text in any app using <code>UITextView</code> (the standard iOS text component), the system presents Writing Tools options including summarization.</p><p>The app already has full access to the text displayed in its <code>UITextView</code>. When you invoke Writing Tools, the selected text&#8212;and only that text, perhaps with minimal context expansion&#8212;gets sent to Apple's AI model. If the task is simple enough, it's processed by the ~3 billion parameter on-device model. For more complex requests, it's encrypted and sent to Private Cloud Compute servers.</p>
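<p>For developers, adopting this flow is close to declarative. Here is a minimal sketch, assuming the iOS 18 Writing Tools surface Apple has presented publicly (a <code>writingToolsBehavior</code> property on <code>UITextView</code>); treat the exact names as illustrative rather than authoritative.</p><pre><code>import UIKit

// Minimal sketch of an app opting its text view into Writing Tools
// (iOS 18+). The writingToolsBehavior API is the one Apple has described
// publicly; exact names should be checked against the current SDK.
final class NotesViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        textView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(textView)

        // Opt in to the full experience (summarize, rewrite, proofread).
        // The system decides per request whether the on-device model
        // suffices or whether to route to Private Cloud Compute; the app
        // neither sees nor controls that routing.
        textView.writingToolsBehavior = .complete

        // Stricter apps can limit Writing Tools to inline suggestions,
        // or switch them off entirely:
        // textView.writingToolsBehavior = .limited
        // textView.writingToolsBehavior = .none
    }
}</code></pre><p>Note the design choice: the app declares a policy, while the decision about where a given request runs is made by the system, never by app code.</p>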
<p><strong>Apple's Private Cloud Compute: technical guarantees</strong></p><p>Apple's PCC implements several <a href="https://security.apple.com/blog/private-cloud-compute/">technical guarantees</a> that create a trustworthy environment, including:</p><ul><li><p>Stateless computation: all data is immediately deleted after processing. No user data persists between requests.</p></li><li><p>Hardware-enforced isolation: custom Apple Silicon servers with Secure Enclaves ensure that even Apple employees cannot access user data during processing.</p></li><li><p>Cryptographic verification: the device verifies it's communicating with legitimate PCC servers running publicly auditable code before sending any data.</p></li><li><p>No privileged access: the system design eliminates debugging interfaces and administrative access that could compromise user data.</p></li><li><p>Verifiable transparency: Apple publishes software images and offers bug bounties up to $1 million for security researchers to verify these claims.</p></li></ul>
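<p>The cryptographic-verification and transparency guarantees are what separate this design from a mere promise, so they are worth unpacking. Below is another illustrative Python sketch of the client-side check they imply: the device refuses to send any plaintext unless the server first proves, with a hardware-rooted signature, that it is running a software image recorded in a public log. Again, none of this is Apple's actual protocol; the field names, digests, and the two-step check are my simplifying assumptions.</p><pre><code># Illustrative sketch of device-side attestation checking -- not Apple's
# actual protocol. Field names and digests are invented for exposition.

KNOWN_GOOD_IMAGES = {"sha256:demo-image-1", "sha256:demo-image-2"}  # from a public log

class AttestationError(Exception):
    pass

def verify_attestation(attestation):
    # Step 1: the attestation must be signed by a key rooted in the server
    # hardware (a stand-in check; real systems verify a certificate chain).
    if not attestation.get("hardware_signature_valid"):
        raise AttestationError("attestation not rooted in hardware")
    # Step 2: the measured software image must appear in the append-only
    # transparency log, so modified code cannot be deployed silently.
    if attestation.get("image_digest") not in KNOWN_GOOD_IMAGES:
        raise AttestationError("software image not publicly logged")

def send_private_request(attestation, plaintext):
    # Raises before any user data leaves the device if the check fails.
    verify_attestation(attestation)
    return encrypt_for_enclave(plaintext)

def encrypt_for_enclave(plaintext):
    # Stand-in for encrypting to a key held only inside the attested server.
    return b"ciphertext:" + plaintext.encode()
</code></pre>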
<p><strong>The case for developer access</strong></p><p>Here's why giving developers access to Apple Intelligence&#8212;both local and PCC models&#8212;wouldn't introduce significant new privacy risks. Consider the current situation. An app with access to your documents can already send them to OpenAI, Anthropic, or its own servers. The privacy risk exists the moment you grant the app access to your data. If anything, routing AI processing through Apple's infrastructure promises to improve privacy. Unlike most third-party services, PCC guarantees stateless processing and provides cryptographic proof that your data won't be retained or used for training.&nbsp;</p><p><strong>What user data Apple Intelligence processes&nbsp;</strong></p><p>Apple's legal <a href="https://www.apple.com/legal/privacy/data/en/intelligence-engine/">documentation</a> contains a potentially confusing statement: "To provide a customized experience, Apple Intelligence uses information on your device including across your apps, like your upcoming calendar events and your frequently used apps."</p><p>This might suggest Apple&#8217;s APIs automatically draw on all user context. But this language only applies to system-level features&#8212;like Siri Personal Context&#8212;not to what developers can access via APIs. Apple is building different privacy models for different features&#8212;narrow access for developer-accessible tools, broader access only for system-level features.</p><p><strong>Potential impact of the EU Digital Markets Act</strong></p><p>This creates an opening for a regulatory challenge. Under the EU&#8217;s Digital Markets Act (DMA), regulators might argue that if Apple&#8217;s own features can combine data from across apps, then third-party developers should have access to the same functionality. If that argument wins, Apple may have no choice but to build permission systems that allow users to explicitly authorize third-party AI systems to work with their personal data in a comparable way. I&#8217;ve explored these implications elsewhere (e.g., "<a href="https://eutechreg.com/p/apple-and-eu-dma-a-road-to-leave">Apple and EU DMA: a road to leave the EU?</a>"), pointing out how culturally difficult it may be for Apple to abandon the idea that they can treat their own apps and functionalities as more trustworthy.&nbsp;</p><p>There is also a risk that the EU DMA will be interpreted in a more radical way, attempting to force Apple to provide access in a way that undermines the security features of their framework. For example, regulators might require Apple to run third-party code within trusted execution environments, or prohibit Apple from adopting reasonable restrictions on API use (e.g., if the API includes models with function/tool-calling capabilities). The risk of this is not negligible given that, so far, the DMA enforcers have shown little understanding of, or care for, the impact of their actions on end users, especially in terms of data security (see, e.g., <a href="https://eutechreg.com/p/dma-workshops-and-privacy?utm_source=publication-search">DMA workshops and privacy</a> and <a href="https://eutechreg.com/p/apple-and-eu-dma-a-road-to-leave">Apple and EU DMA: a road to leave the EU?</a>).</p><p><strong>Essential safeguards for safe implementation</strong></p><p>Returning to Ben Thompson&#8217;s recommendations, he <a href="https://stratechery.com/2025/openai-acquires-io-openais-strategic-positioning-apples-worsening-ai-problem/">specifically advocated</a> that Apple should:</p><ul><li><p>&#8220;Make Apple&#8217;s on-device models available to developers as an API without restriction.&#8221;</p></li><li><p>&#8220;Make Apple&#8217;s cloud inference available to developers as an API at cost.&#8221;</p></li></ul><p>I believe this can be done safely, but with caveats:</p><ul><li><p>All AI model data processing must be fully stateless, like Apple's current model. This ensures user data cannot be used to train new models or be retained for any purpose.</p></li><li><p>Both in-app and system-level data access must be controlled by robust, specific permissions that users can understand and manage.</p></li><li><p>Any off-device processing must occur in verified trusted execution environments with what Apple calls &#8220;verifiable transparency.&#8221;</p></li><li><p>As much processing as feasible must occur on-device, with cloud resources used only when computational requirements demand it.</p></li><li><p>End-to-end traffic analysis remains a vulnerability. The TEE approach may in some extreme cases not be a sufficient protection for dissidents or journalists in certain countries, but it is likely adequate for most users.</p></li></ul><p><strong>Meta's WhatsApp Private Processing</strong></p><p>Meta faces a higher burden in convincing the public about privacy&#8212;they haven't yet successfully made privacy part of their brand the way Apple has. It does, however, seem that they aim to move in that direction, given that their <a href="https://engineering.fb.com/2025/04/29/security/whatsapp-private-processing-ai-tools/">WhatsApp Private Processing</a> framework implements technical guarantees remarkably similar to Apple&#8217;s PCC.</p><p>Like Apple, Meta uses hardware-based trusted execution environments, though with an important architectural difference. While Apple employs custom silicon with dedicated Secure Enclaves, Meta relies on AMD's Confidential Virtual Machine (CVM) technology within standard cloud infrastructure.
This creates a larger potential attack surface but offers more deployment flexibility.</p><p>Both systems promise stateless processing, cryptographic attestation, and third-party verifiability. However, Meta's implementation currently lacks the same level of verifiable transparency that Apple provides. While Meta has announced plans to publish Private Processing components and provide third-party access to binary digests, these transparency measures remain partially unfulfilled.</p><p><strong>Why verifiable transparency matters</strong></p><p>I want to stress the importance of verifiable transparency, which is the area where Meta still likely has the most to improve. One key reason why verifiable transparency is important is the risk of government-mandated backdoors that would effectively break TEE-based privacy schemes. When code is open source&#8212;or at least available to a sufficiently large group of independent, adequately funded researchers&#8212;and when devices can cryptographically verify that servers run the published code, this risk is significantly mitigated.</p><p>Even if you think that law-abiding users have nothing to fear from democratic governments having this kind of access, you should still be concerned about the track record of such &#8220;government access&#8221; schemes being exploited by unintended parties, including other governments and criminals (e.g., <a href="https://en.wikipedia.org/wiki/2024_United_States_telecommunications_hack">the 2024 U.S. telecom hack</a>). Cybersecurity experts have long argued that a vulnerability does not distinguish &#8220;good&#8221; guys from &#8220;bad&#8221; guys. In other words, the idea that a government agency can claim that &#8220;nobody but us&#8221; will have access to some backdoor is a dangerous illusion.&nbsp;</p><p><strong>Conclusions</strong></p><p>Thompson's proposal to open Apple Intelligence to developers is both technically feasible and potentially beneficial for the AI ecosystem. The privacy risks are manageable because they're largely identical to risks users already accept when granting apps data access. In fact, routing AI processing through privacy-preserving infrastructure like Apple's PCC could improve the status quo.</p><p>The convergence of Apple's and Meta's approaches suggests an emerging industry standard for privacy-preserving AI. Both companies recognize that long-term success requires solving the privacy problem, not just for regulatory compliance but for maintaining user trust as AI agents become more capable and data-hungry.</p>]]></content:encoded></item><item><title><![CDATA[Meta is about to start AI training on public EU user data—what will the GDPR authorities do?]]></title><description><![CDATA[On 27 May, Meta plans to begin AI training using public EU user data from Facebook and Instagram.]]></description><link>https://eutechreg.com/p/meta-is-about-to-start-ai-training</link><guid isPermaLink="false">https://eutechreg.com/p/meta-is-about-to-start-ai-training</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Thu, 22 May 2025 16:51:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>On 27 May, Meta plans to begin AI training using public EU user data from Facebook and Instagram.
Yesterday, Meta&#8217;s lead GDPR regulator, the Irish Data Protection Commission (IDPC), published a <a href="https://www.dataprotection.ie/en/news-media/latest-news/dpc-statement-meta-ai">statement</a> highlighting the additional safeguards adopted by Meta since the <a href="https://about.fb.com/news/2024/06/building-ai-technology-for-europeans-in-a-transparent-and-responsible-way/">previously paused</a> plan from early 2024. On my reading, the IDPC implies that they do not object to Meta&#8217;s current plan, including Meta&#8217;s reliance on the &#8220;legitimate interest&#8221; basis for data processing under the GDPR. Nevertheless, it remains to be seen whether other EU data protection authorities (DPAs) and the courts will take the same approach. Following the IDPC&#8217;s statement, the Hamburg authority <a href="https://www.mlex.com/mlex/articles/2343743/meta-could-be-asked-by-hamburg-dpa-to-pause-ai-training-in-germany">issued</a> a &#8220;hearing letter,&#8221; which may suggest German officials will try to overrule the Irish DPC to force a more restrictive approach <a href="https://eutechreg.com/p/edpb-meta-violates-gdpr-by-personalised">as they have done in the past</a>. There is also an ongoing attempt to secure an injunction in a German court to stop Meta&#8217;s 27 May plan. This matters to all AI developers and deployers in the EU, because Meta&#8217;s situation is not unique, and whatever limitations are imposed on Meta can easily become a standard for others.</p><blockquote><p><strong>UPDATE (27 May): </strong>The attempt to block Meta&#8217;s AI training plan through an injunction <a href="https://www.reuters.com/sustainability/boards-policy-regulation/german-rights-group-fails-bid-stop-metas-data-use-ai-2025-05-23/">failed</a> before the court in Cologne. Also, despite expressing some last-minute doubts, the Hamburg data protection authority <a href="https://datenschutz-hamburg.de/news/hmbbfdi-sieht-von-dringlichkeitsverfahren-gegen-meta-ab">just announced</a> that they will not step out of line and use the GDPR's urgency procedure.<strong> </strong>However, they referenced some unspecified &#8220;forthcoming EU-wide evaluation of Meta's practices,&#8221; perhaps aiming to suggest that they will try to push for a more restrictive GDPR interpretation at a later date. (All this is quite puzzling given that it was the Hamburg DPA which previously published <a href="https://eutechreg.com/p/ai-and-eu-privacy-law-june-2024-state?utm_source=publication-search">a very pragmatic</a> discussion paper on the application of the GDPR to AI models.)</p></blockquote><p></p><h2>The legal questions being raised</h2><p>Multiple national data protection authorities published information about Meta&#8217;s plans, including links to forms provided by Meta, through which users can object to their public data being used for AI training (e.g., <a href="https://azop.hr/meta-pocinje-koristiti-podatke-europskih-korisnika-za-treniranje-modela-umjetne-inteligencije/">Croatia</a>, <a href="https://cnil.fr/fr/meta-entrainement-ia-donnees-utilisateurs">France</a>, <a href="https://www.gpdp.it/home/docweb/-/docweb-display/docweb/10125702">Italy</a>, <a href="https://datenschutz-hamburg.de/news/meta-starts-ai-training-with-personal-data">Hamburg</a>, the <a href="https://www.autoriteitpersoonsgegevens.nl/actueel/ap-kom-nu-in-actie-als-je-niet-wil-dat-meta-ai-traint-met-jouw-data">Netherlands</a>).
Notably, at least the Dutch, the French, and the Italian authorities noted in their statements&#8212;made as recently as late April&#8212;that EU authorities are working with the Irish DPC to assess Meta&#8217;s compliance with the GDPR. The authorities listed the following issues:&nbsp;</p><ul><li><p>the legality of Meta&#8217;s plan (presumably meaning the question whether Meta can rely on the legitimate interest basis);</p></li><li><p>the effectiveness of the right to object (which Meta facilitates through ex ante opt-out forms);</p></li><li><p>the compatibility between the original purposes of the processing and this new use of the data (&#8220;historical data&#8221; issue).</p></li></ul><p>The historical data issue is <a href="https://www.mlex.com/mlex/articles/2343743/meta-could-be-asked-by-hamburg-dpa-to-pause-ai-training-in-germany">reportedly</a> at the centre of the &#8220;hearing letter&#8221; to Meta from the Hamburg DPA.</p><p>Aside from the DPAs, the German consumer advocacy group Verbraucherzentrale NRW (vznrw) <a href="https://www.verbraucherzentrale.nrw/pressemeldungen/digitale-welt/verbraucherzentrale-nrw-beantragt-einstweilige-verfuegung-gegen-meta-107036">is seeking an injunction</a> to stop Meta&#8217;s AI training plans in the Higher Regional Court of Cologne (Oberlandesgericht K&#246;ln). The hearing in that case was scheduled for 22 May (see <a href="https://www.noerr.com/de/insights/ein-verfahren-mit-grosser-tragweite-morgen-verhandelt-das-olg-koeln-zum-ki-training-mit-personenbezogenen-daten">commentary</a> from Noerr, a law firm). Vznrw&#8217;s position appears to be that Meta should be asking all concerned users for affirmative opt-in (consent), instead of allowing an opt-out.</p><p></p><h2>Opt-out vs opt-in (legitimate interest vs consent)</h2><p>On the broad understanding of what counts as personal data dominant among EU DPAs, the authorities would likely argue that the development of many, if not all, Large Language Models (LLMs) involves the processing of personal data at some stage, at least due to scraping public websites. <a href="https://eutechreg.com/p/the-gdpr-and-genai-part-1-lawful">It is not possible</a> to implement an opt-in consent for all concerned individuals, so imposing such a requirement would mean a de facto prohibition of the technology. The other reason why consent is not feasible is the issue of withdrawal of consent&#8212;it&#8217;s likely to be vastly disproportionate to delete models trained with personal data, and surgical removal of an individual&#8217;s personal data from a trained model is not yet workable. So withdrawal of consent couldn&#8217;t lead to the end of data processing (assuming that a trained model contains personal data).</p><p>The European Data Protection Board (EDPB), in their <a href="https://www.edpb.europa.eu/our-work-tools/our-documents/opinion-board-art-64/opinion-282024-certain-data-protection-aspects_en">Opinion on AI models</a>, only really considered an opt-out mechanism (objection to data processing), suggesting various measures to make AI development and use compatible with the GDPR under the legitimate interest basis.</p><p>Privacy activists and some public officials are now pushing to ban LLMs and much of AI more broadly, claiming that opt-out mechanisms are insufficient. Their strategy centers on a peculiar argument about consent requirements.
When AI developers can directly contact individuals whose data will be used for training&#8212;what's called "first-party" data from direct users[1]&#8212;these advocates argue that only individual consent should permit data use. Their solution for withdrawn consent is equally extreme: delete the entire AI model.&nbsp;</p><p>This creates a bizarre contradiction. When developers cannot easily contact data subjects&#8212;circumstances where providing notice and objection opportunities is genuinely difficult&#8212;these same advocates would have us believe that they accept that consent isn't required and that legitimate interest suffices. But when circumstances are actually favorable for exercising information and objection rights, suddenly consent becomes mandatory. This logical inconsistency strains credulity. The argument appears to be part of a bait-and-switch strategy: first attack "first-party" data use to establish a consent requirement, then extend that precedent to "third-party" data.&nbsp;</p><p></p><h2>Ex ante opt-out and effective exercise of a right to object</h2><p>Meta has implemented unconditional opt-outs from AI training for both <a href="https://www.facebook.com/help/contact/6359191084165019">users</a> and <a href="https://www.facebook.com/help/contact/510058597920541">non-users</a>. This goes beyond Article 21 GDPR's express requirements but follows a suggestion in the EDPB Opinion on AI models (para 102). Yet critics argue this still doesn't allow sufficiently effective exercise of objection rights under legitimate interest processing.</p><p>Article 21 GDPR states that "the controller shall no longer process the personal data unless the controller demonstrates compelling legitimate grounds for the processing which override the interests, rights and freedoms of the data subject or for the establishment, exercise or defence of legal claims." While this requirement is conditional, it does mandate that controllers "no longer process the personal data."</p><p>Allowing unconditional, ex ante objections&#8212;before processing even begins&#8212;without requiring deletion of already-trained models represents a reasonable, proportionate interpretation of GDPR. A more absolutist reading of the "no longer process" requirement would effectively ban the technology entirely. Notably, switching to a consent basis raises virtually identical issues regarding the consequences of withdrawal of consent.</p><p>Arguing that Meta's unconditional, ex ante, and well-publicized opt-out scheme (also publicized by DPAs) remains insufficient would set the compliance bar impossibly high. No one&#8212;not just Meta&#8212;could train AI models in the EU under such standards.&nbsp;</p><p>To be clear: Meta's offer to accept objections before even beginning training arguably exceeds GDPR requirements. Concluding otherwise would imply that most AI model training conducted so far violated GDPR&#8212;assuming a broad interpretation of which models contain personal data.</p><p></p><h2>Historical data and historical reasonable expectations</h2><p>The EDPB correctly noted in its Opinion on AI models that individuals' reasonable expectations matter when assessing legitimate interest. However, this argument can easily be taken too far. Most people understand technology, business, and government only superficially&#8212;and this is entirely normal. We are not omniscient.
Requiring individuals to have a detailed understanding of technological and organizational aspects before their data can be processed under legitimate interest would make that legal basis impossible to use. This would render the GDPR provision ineffective, violating a fundamental principle of legal interpretation: we should read law as if crafted by a rational lawgiver. While even the Court of Justice sometimes leans toward overly restrictive interpretations&#8212;often betraying its own lack of understanding of business and technological realities&#8212;this temptation should be resisted.[2] Any interpretation of "reasonable expectations" must remain proportionate and grounded in reality.</p><p>This brings us to the "historical data" issue that German DPAs are currently reportedly emphasizing. Their argument appears to be that Meta cannot use data uploaded before May 27, because users lacked reasonable expectations that their public data could be used for AI training before the deadline set in Meta's announcement.</p><p>This reasoning proves far too much. If we follow this logic, how could any AI developer claim that individuals reasonably expected their data to be used for training? Even limiting the argument to pre-ChatGPT data still goes too far. Consider an analogous question about search engines circa 1998. Would we really interpret GDPR to treat emerging technologies as inherently illegal simply because their details are novel? I argued against such an approach to the GDPR in my text on <a href="https://eutechreg.com/p/ai-hallucinations-eu-gdpr-and-the">the importance of cautious optimism</a>.</p><p>Let&#8217;s remember: we're discussing data that anyone on the internet can access&#8212;<a href="https://www.facebook.com/privacy/policy?subpage=3.subpage.3-PublicContentWhatContent">public content</a>. Given the existence of search engines and widespread web scraping, can it seriously be argued that internet users lack reasonable expectations about how public web content might be reused?</p><p>Some DPAs already recognize this reality. The <a href="https://www.baden-wuerttemberg.datenschutz.de/rechtsgrundlagen-datenschutz-ki/">Baden-W&#252;rttemberg DPA</a> acknowledged that "given the generally known technological progress of recent years, it may be expected that data published on the Internet may be reused by third parties for other purposes."[3]</p><div><hr></div><p>[1] What makes things more complicated is that the data Meta plans to use isn&#8217;t &#8220;pure&#8221; first-party data, because even data like Facebook/Instagram posts, comments and so on may include references to individuals other than the person who posted the data. So what is first-party data of a direct Meta user may very well also be third-party data of someone who is not a Meta user (which is also why Meta <a href="https://www.facebook.com/help/contact/510058597920541">facilitates opt-outs</a> for non-users). The same applies, of course, to all other platforms like Reddit, YouTube, X/Twitter, and so on.</p><p>[2] See eg EDPB&#8217;s reference to <em>Meta v Bundeskartellamt </em>in footnote 73 of the Opinion on AI models and the accompanying text (para 93).</p><p>[3] The <a href="https://www.baden-wuerttemberg.datenschutz.de/rechtsgrundlagen-datenschutz-ki/">Baden-W&#252;rttemberg DPA</a> also made a qualification for data published by others. In principle there may be a difference in reasonable expectations regarding content you post about yourself and what someone else posts about you.
However, we also need to bear in mind the technical limits of anyone&#8217;s capacity to distinguish such situations at scale.&nbsp;</p><p></p>]]></content:encoded></item><item><title><![CDATA[A First Take on the European Commission’s DMA Decision Against Meta]]></title><description><![CDATA[The European Commission issued a significant noncompliance decision earlier today, finding the &#8220;consent or pay&#8221; model that Meta implemented from March 2024 to November 2024 for its Facebook and Instagram services breached key obligations imposed on designated gatekeepers under the Digital Markets Act (DMA).]]></description><link>https://eutechreg.com/p/a-first-take-on-the-european-commissions</link><guid isPermaLink="false">https://eutechreg.com/p/a-first-take-on-the-european-commissions</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Sun, 11 May 2025 16:01:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The European Commission <a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_25_1085">issued</a> a significant noncompliance decision earlier today, finding the &#8220;consent or pay&#8221; model that Meta implemented from March 2024 to November 2024 for its Facebook and Instagram services breached key obligations imposed on designated gatekeepers under the Digital Markets Act (DMA). Accompanied by a &#8364;200 million fine, the decision concluded that Meta&#8217;s approach failed to comply with Article 5(2) of the DMA. Importantly, the Commission&#8217;s decision does not extend to Meta&#8217;s <a href="https://about.fb.com/news/2024/11/facebook-and-instagram-to-offer-subscription-for-no-ads-in-europe/">revised</a> offer, which has been augmented with a free-of-charge &#8220;less personalized ads&#8221; option.</p><p>The Commission&#8217;s decision hinges on two determinations. First, Meta&#8217;s model, which offered users a binary choice between accepting personalized advertising through personal-data combination or paying a monthly fee for an ad-free experience, did not provide a &#8220;less personalized but equivalent alternative,&#8221; as required by the DMA&#8212;especially as interpreted through Recital 36. Second, this binary structure, along with the associated fee for the privacy-centric option, was deemed coercive and in violation of the General Data Protection Regulation&#8217;s (GDPR) &#8220;freely given consent&#8221; requirement, which is explicitly integrated into DMA Article 5(2).</p><p>In this post, I&#8217;ll try to reconstruct the Commission&#8217;s likely reasoning. It&#8217;s crucial, however, to state upfront that, without the full published decision text, we&#8217;re operating somewhat in the dark, relying on the Commission&#8217;s press release and inferring connections to broader regulatory discussions. Of particular interest, in regard to the latter, is the European Data Protection Board&#8217;s (EDPB) recent opinions on &#8220;pay or consent&#8221; under the EU GDPR. This inherent uncertainty must be kept in mind.</p><h2>Lack of &#8216;Freely Given&#8217; Consent</h2><p>Article 5(2) of the DMA demands GDPR-standard consent for the kind of data combination and cross-use that Meta employs for personalized advertising.
The Commission&#8217;s announcement explicitly stated that Meta&#8217;s model &#8220;did not allow users to exercise their right to freely consent.&#8221;</p><p>The uncertainty lies in <em>how</em>, exactly, the Commission reached this conclusion under the DMA. It seems probable that they leaned heavily on the reasoning laid out in the EDPB&#8217;s <a href="https://www.edpb.europa.eu/our-work-tools/our-documents/opinion-board-art-64/opinion-082024-valid-consent-context-consent-or_en">Opinion 08/2024</a> regarding &#8220;consent or pay&#8221; models under the GDPR (I commented on that opinion <a href="https://barczentewicz.substack.com/p/pay-or-ok-what-is-an-appropriate">here</a> and <a href="https://barczentewicz.substack.com/p/eu-edpb-opinion-on-pay-or-ok-and">here</a>). The EDPB strongly insisted that, for &#8220;large online platforms,&#8221; a binary choice between consent and paying a fee is likely coercive, due to power imbalances and potential detriment, thus invalidating consent.</p><p>Directly transposing this restrictive GDPR interpretation (which is itself quite debatable, especially regarding detriment) onto the DMA requires careful justification. While the DMA references GDPR consent, it doesn&#8217;t automatically incorporate every aspect of the EDPB&#8217;s subsequent guidance. In particular, it does not incorporate guidance that arguably stretches the concept of &#8220;freely given&#8221; beyond what the Court of Justice of the European Union (CJEU) outlined in <em>Meta Platforms v Bundeskartellamt </em>(see my commentary <a href="https://www.barczentewicz.com/post/the-cjeus-decision-in-metas-competition-case-consequences-for-personalized-advertising-under-the-gdpr-part-1/">here</a> and <a href="https://barczentewicz.substack.com/p/metas-paid-subscriptions-are-they">here</a>).</p><p>The CJEU specifically acknowledged the possibility of valid consent&#8212;even when dealing with a dominant undertaking&#8212;and allowed for a fee-based alternative. So, the Commission&#8217;s finding of a lack of <em>free</em> consent under the DMA likely relies on accepting the EDPB&#8217;s premise that the binary choice is inherently coercive for gatekeepers, a point which remains legally contested.</p><h2>Failure to Provide a &#8216;Less Personalised but Equivalent Alternative&#8217;</h2><p>The Commission also found that Meta &#8220;did not give users the required specific choice to opt for a service that uses less of their personal data but is otherwise equivalent.&#8221; This points directly to the language in Recital 36 DMA, which calls for such an alternative as a prerequisite to enable free choice under Article 5(2).</p><p>The crux of the issue here seems to be the <em>paid</em> nature of the ad-free alternative Meta offered from March to November 2024. The Commission appears to have interpreted &#8220;equivalent alternative&#8221; under the DMA as requiring a <em>free</em> option that uses less personal data (perhaps funded by contextual ads, as the EDPB suggested for GDPR compliance).</p><p>This is, frankly, a problematic interpretation if it&#8217;s indeed the Commission&#8217;s definitive stance. Again, the CJEU in the <em>Meta</em> case explicitly allowed for an equivalent alternative under GDPR &#8220;if necessary for an appropriate fee.&#8221; While the DMA aims for contestability and fairness, neither Article 5(2) nor Recital 36 explicitly mandates that the &#8220;equivalent alternative&#8221; must be <em>free of charge</em>.
Recital 37 focuses on ensuring the alternative isn&#8217;t of &#8220;degraded quality,&#8221; unless unavoidable due to lack of data. It doesn&#8217;t directly address the payment model.</p><p>Requiring a free-of-charge alternative seems less like a direct application of the DMA text and more like an adoption of the policy preference the EDPB expressed in its GDPR guidance. It effectively means the Commission might be using the DMA to impose a specific business model (free of charge and contextually ad-supported) rather than simply enforcing the stated rules.</p><p>From <a href="https://barczentewicz.substack.com/p/pay-or-ok-what-is-an-appropriate">my analysis</a> of &#8220;pay or consent&#8221; under GDPR, the focus should be on whether the <em>fee</em> is &#8220;appropriate&#8221; (as per CJEU) and whether consent is genuinely free under the circumstances, not on whether a free-of-charge option exists <em>per se</em>. If the Commission&#8217;s DMA reasoning <em>mandates </em>the alternative be free, it represents a significant, and legally questionable, step beyond established principles.</p><h2>Conclusion</h2><p>While the Commission fined Meta, citing lack of free consent and a failure to provide an equivalent alternative under DMA Article 5(2), the precise legal steps in their reasoning remain unclear without the full decision. It appears highly likely that they&#8217;ve heavily relied on the EDPB&#8217;s restrictive interpretation of &#8220;consent or pay&#8221; under GDPR, potentially extending it to the DMA context.</p><p>Specifically, the apparent requirement for a <em>free-of-charge</em> equivalent alternative seems particularly contentious. It arguably goes beyond both the DMA&#8217;s text and the CJEU&#8217;s guidance on GDPR, raising concerns about whether the Commission is enforcing the law as written or imposing a preferred market outcome.</p><p>Until the full reasoning is public and potentially tested in court via Meta&#8217;s likely appeal, the exact legal basis and its broader validity remain uncertain. 
I remain critical of interpretations&#8212;whether under GDPR or seemingly now under DMA&#8212;that would effectively prohibit established &#8220;pay or consent&#8221; models outright by demanding a free alternative, where the law and higher courts have explicitly allowed for appropriate fees.</p><p><em>[Previously published on <a href="https://truthonthemarket.com/2025/04/23/a-first-take-on-the-european-commissions-dma-decision-against-meta/">Truth on the Market</a> on 23 April 2025]</em></p>]]></content:encoded></item><item><title><![CDATA[Meta v EDPB and what really needs to change in the GDPR]]></title><description><![CDATA[[Note: I changed my newsletter&#8217;s address to EUTechReg.com, hoping it will be easier to remember than my difficult surname.]]]></description><link>https://eutechreg.com/p/meta-v-edpb-and-what-really-needs</link><guid isPermaLink="false">https://eutechreg.com/p/meta-v-edpb-and-what-really-needs</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Tue, 06 May 2025 17:55:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>[Note: I changed my newsletter&#8217;s address to <a href="https://eutechreg.com">EUTechReg.com</a>, hoping it will be easier to remember than my difficult surname.]</em></p><p>Just as we <a href="https://x.com/MBarczentewicz/status/1919455350136189279">learn</a> that the European Commission&#8217;s &#8220;GDPR reform&#8221; plan is likely to be a performative window-dressing exercise, the EU General Court again denied direct judicial review of a privacy-fundamentalist guidance document from the European Data Protection Board (EDPB) on &#8220;<a href="https://eutechreg.com/p/pay-or-ok-what-is-an-appropriate">pay or OK</a>.&#8221; The EDPB is acting like an unaccountable legislator, issuing theoretically non-binding guidance, which easily becomes the law in action. This occurs even when the guidance extends EU data protection law beyond its intended scope, driven by ideological extremism.&nbsp;</p><p>In <a href="https://curia.europa.eu/juris/document/document.jsf?text=&amp;docid=298762&amp;pageIndex=0&amp;doclang=EN&amp;mode=lst&amp;dir=&amp;occ=first&amp;part=1&amp;cid=19839444">Case T-319/24</a>, <em>Meta Platforms Ireland v EDPB</em>, Meta challenged the EDPB's Opinion 8/2024 on &#8220;pay or OK" (see also my comments <a href="https://eutechreg.com/p/eu-edpb-opinion-on-pay-or-ok-and?utm_source=publication-search">here</a>). In its decision, the General Court noted that</p><blockquote><p>although, as Meta observes, the contested opinion uses words such as &#8216;should&#8217;, &#8216;should not&#8217; and &#8216;in most cases&#8217;, the passages containing such wording, read in the light of the document as a whole, appear to be calling for an in-depth consideration... rather than censuring &#8216;consent or pay&#8217; models across the board.&nbsp;</p></blockquote><p>The central problem is this: the General Court rejected Meta's case because the EDPB's Opinion was considered <em>non-binding</em> and lacked direct legal consequences for Meta. 
Consequently, the EDPB can issue highly detailed &#8220;non-binding&#8221; documents filled with mandatory language that significantly influence market behavior and national DPA enforcement, but these pronouncements avoid direct EU judicial scrutiny. This leaves businesses in a precarious situation where &#8220;guidance&#8221; effectively operates as law, yet lacks the oversight inherent in formal law-making.</p><p>Contrast this with Advocate General &#262;apeta's recent <a href="https://curia.europa.eu/juris/document/document.jsf?text=&amp;docid=297247&amp;pageIndex=0&amp;doclang=en&amp;mode=req&amp;dir=&amp;occ=first&amp;part=1">opinion</a> in Case C-97/23 P, <em>WhatsApp/EDPB</em>. Here, the AG took a refreshingly pragmatic approach to the EDPB's <em>binding decisions</em> under Article 65 GDPR. She recommended that the Court of Justice find WhatsApp's action for annulment against such a decision admissible. Her reasoning was that Article 65 decisions are legally binding on national supervisory authorities and directly impact the legal position of companies like WhatsApp, leaving the national authority no discretion regarding the EDPB's findings.</p><p>If the CJEU follows AG &#262;apeta, it would indeed be a step towards accountability, allowing direct challenges to the EDPB's formal, binding directives. This would be a welcome development, potentially accelerating legal clarification and forcing the EDPB to be more rigorous when it issues these powerful Article 65 decisions.</p><p>AG &#262;apeta's reasonable stance on binding decisions highlights the problem with &#8220;non-binding&#8221; guidance. While labeled as such, EDPB interpretations and recommendations are often treated as mandatory by national DPAs and carry significant weight for businesses. However, as the <em>Meta v EDPB</em> case illustrates, their &#8220;non-binding&#8221; status may shield these influential instruments from direct EU judicial review, creating an accountability loophole (any indirect review that may eventually happen is likely to be too little, too late).</p><p>This is quasi-legislation without accountability, transparency, and adequate judicial control. The EDPB, a body of regulators, effectively makes law through the back door, pushing its maximalist interpretations of data protection law without the checks and balances inherent in a proper legislative or even a fully reviewable administrative process. The &#8220;pay or OK&#8221; opinion is <a href="https://eutechreg.com/p/eu-edpb-opinion-on-pay-or-ok-and?utm_source=publication-search">a prime example</a>, where the EDPB heavily suggested that large online platforms should consider providing users with an &#8220;equivalent alternative&#8221; that does not entail payment of a fee, a significant market intervention not mandated by the GDPR. The same goes for the opinion on AI models, which <a href="https://eutechreg.com/p/the-edpbs-ai-opinion-shows-the-need">I criticised</a> as an example of privacy myopia, a structural problem of GDPR enforcement.</p><p>This situation is untenable and cries out for GDPR reform. As <a href="https://eutechreg.com/p/gdpr-reform-what-should-it-achieve">I argued</a>, we need a system that reins in the EDPB's tendency towards privacy fundamentalism and that allows for independent review of all the past guidance. The GDPR was not meant to put two rights&#8212;privacy and data protection&#8212;above all other rights and vital interests of Europeans.
The EDPB has proven institutionally incapable of adhering to this fundamental feature of EU law.</p><p>AG &#262;apeta's opinion, if followed, might plug one hole in the dam of EDPB accountability. But the <em>Meta v EDPB</em> case shows that other significant leaks remain, allowing the EDPB's influence to flood the regulatory landscape unchecked. It's time to rebuild the dam. It's time for GDPR reform that ensures the rule of law applies as much to the regulators as it does to the regulated. The kind of reform I've advocated for previously is needed to ensure the GDPR serves its intended purpose without stifling the digital economy and common sense through unchecked regulatory overreach.</p>]]></content:encoded></item><item><title><![CDATA[GDPR reform: what should it achieve]]></title><description><![CDATA[It looks like GDPR reform, touching both its enforcement and its substantive rules, may be happening.]]></description><link>https://eutechreg.com/p/gdpr-reform-what-should-it-achieve</link><guid isPermaLink="false">https://eutechreg.com/p/gdpr-reform-what-should-it-achieve</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Tue, 01 Apr 2025 16:58:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It looks like GDPR reform, touching both its enforcement and its substantive rules, may be happening. I only hope that it will not be a wasted effort. In March, the EU Justice Commissioner <a href="https://www.csis.org/analysis/future-transatlantic-digital-collaboration-eu-commissioner-michael-mcgrath">announced</a> that the EU Commission&#8217;s &#8220;simplification&#8221; agenda will involve reducing GDPR compliance burdens for smaller organisations. Moreover, Axel Voss, a prominent Member of the European Parliament from the dominant EPP faction, <a href="https://www.linkedin.com/feed/update/urn:li:activity:7302998853488660480/">called for</a> a reform in a similar direction. Apparently, the privacy activist Max Schrems joined Voss&#8217; call, though I assume with reservations and for different reasons. My regular readers won&#8217;t be surprised that I welcome the idea of GDPR reform, as I&#8217;ve been <a href="https://barczentewicz.substack.com/archive">writing</a> about GDPR&#8217;s problems and the need for change, most recently in <a href="https://barczentewicz.substack.com/p/a-serious-target-for-improving-eu">A serious target for improving EU regulation: GDPR enforcement</a>. However, even though we don&#8217;t know much about the various March proposals, what has been released causes me to worry that they are missing the biggest issue with the GDPR: its imbalanced, privacy-absolutist enforcement framework. In this text, I will briefly summarise what we know about the new proposals and what I think is missing in them.</p><p></p><h2>What do we know about the proposals?</h2><p>We know the least about the EU Commission&#8217;s plans.
All that Commissioner Michael McGrath said was:&nbsp;</p><blockquote><p>So, it is a real focus of the European Commission to improve the competitiveness of the European economy and to bring forward a whole range of simplification measures, and we have started that process with a number of omnibus packages already focused on sustainability reporting, sustainability due diligence. And yes, GDPR will feature in a future omnibus package, particularly around the recordkeeping for SMEs and other small and medium-sized organizations with less than 500 people. So we will be examining what ways in which we can ease the burden on smaller organizations in relation to the retention of records while at the same time preserving the underlying core objective of our GDPR regime.</p></blockquote><p>Hopefully, relaxing recordkeeping rules is not the full extent of the Commission&#8217;s ambition, because that would be a merely performative exercise and a wasted chance to really improve the EU&#8217;s competitiveness.</p><p>As to the Voss (Voss-Schrems?) proposal, we know a bit more, based on Axel Voss&#8217; LinkedIn <a href="https://www.linkedin.com/feed/update/urn:li:activity:7302998853488660480/">post</a> and on <a href="https://www.euractiv.com/section/tech/news/voss-and-schrems-team-up-to-propose-three-layered-gdpr-revision/">press reports</a> from an event where he presented the idea. The big-picture changes (which Schrems did not agree with) would involve the following, according to <a href="https://www.euractiv.com/section/tech/news/voss-and-schrems-team-up-to-propose-three-layered-gdpr-revision/">Euractiv</a>:</p><blockquote><p>Voss wants to remove the enforcement role of the European Data Protection Board and "adjust" fundamental principles like 'the right to be forgotten' or 'data minimisation' to modern technologies.</p></blockquote><blockquote><p>He also wants to flip the logic of the legislation, currently prohibiting personal data processing by default with certain exceptions, to a regime that only prohibits certain privacy-violating practices.</p></blockquote><p>This is somewhat vague, but I welcome the idea that adjusting the key GDPR principles is necessary, due to the imbalanced GDPR interpretations that are currently being enforced. I also like the idea&#8212;if I understand it correctly&#8212;of removing GDPR Article 6, which requires a &#8220;lawful basis&#8221; for the processing of personal data.</p><p>As to removing the enforcement role of the European Data Protection Board, I think that a more important issue is making sure that any decisions or guidance can only be made when approved by a body that is truly institutionally and intellectually independent from privacy enforcers. If that can be achieved, then it may be a good idea for the EDPB or some other body to even centralise case-handling functions at the EU level (but not issuing fines or directions!), at least for cross-border cases. (For more details, see my text <a href="https://barczentewicz.substack.com/p/a-serious-target-for-improving-eu">A serious target for improving EU regulation: GDPR enforcement</a>.)</p><p>Voss also proposed &#8220;a new three-layer risk-based approach.&#8221;&nbsp;</p><p>The first (&#8220;GDPR mini&#8221;) layer would apply</p><blockquote><p>to 90% of all businesses (i.e small retailers, manufacturers) that process personal data from fewer than 100.000 data subjects and are not handling special categories of personal data (Art 9 GDPR, i.e.
health records).</p></blockquote><p>This seems not to appreciate how broadly the special categories of personal data (Article 9 GDPR) are being interpreted by data protection authorities&#8212;which I&#8217;d say is one of the prime examples of the privacy-absolutist enforcement framework. So if such disproportionate interpretative approaches to key definitional GDPR elements are not tackled, then a tier meant for &#8220;90%&#8221; could end up applying to a small minority.</p><p>As to the benefits of being in a &#8220;mini&#8221; tier:</p><blockquote><p>No Data Protection Officer (DPO) required anymore. Simplified transparency rules &amp; no excessive documentation. Lower administrative fines (i.e. capped at &#8364;500.000 instead of &#8364;20M). Only core obligations (i.e. Art 5) apply.</p></blockquote><p>Interesting, but it could mean little in practice if the general approach to privacy enforcement doesn&#8217;t change.</p><p>As to the other tiers, the one that may cause the most controversy is &#8220;GDPR plus:&#8221;</p><blockquote><p>for large VLOPs, online advertisers, and data brokers, meaning all those companies whose business model is built fundamentally on the processing of personal data. A threshold of 'processing data from 10M+ individuals' or 'handling 50%+ of a country&#8217;s population' seems to make sense.&nbsp;</p></blockquote><p>If this is combined with the kind of enforcement reform I <a href="https://barczentewicz.substack.com/p/a-serious-target-for-improving-eu">outlined</a>, the largest companies operating digital services across the EU would probably welcome being placed in a separate category. It would also make sense to have fully centralised EU-level GDPR enforcement for this tier. Perhaps Voss casts the net a bit too broadly (all online advertisers, really?). But on the other hand, many non-&#8220;big tech&#8221; cross-border digital service providers would probably prefer to have centralized GDPR enforcement, so some flexibility may be needed.</p><p>The proposed rules for the &#8220;plus&#8221; tier would involve:</p><blockquote><p>Mandatory annual external audits (similar to financial audits).&nbsp; Stronger transparency obligations (i.e. publicly disclose processing records). Reversed burden of proof: companies must prove compliance, not the regulators. Real consequences for structural violations.</p></blockquote><p>All those things may be sensible, but only <em>if</em> the rules (or rather the legal interpretations) against which those organizations are meant to be measured are applied proportionately. This is clearly not the case now, as I&#8217;ve been <a href="https://barczentewicz.substack.com/archive">arguing repeatedly</a>.</p><p></p><h2>What is missing?</h2><p>The main missing element in the proposals is a serious enforcement reform to balance the currently dominant privacy absolutism. Beyond that, the Voss plan suggests the right direction of rethinking some of the key GDPR principles. I&#8217;d add that we should also rethink the key definitional notions, including &#8220;personal data&#8221; and &#8220;special categories&#8221; of personal data. Also, GDPR reform should simplify digital data protection by addressing problems with a separate &#8220;cookie law,&#8221; i.e. with the ePrivacy Directive (see e.g. my <a href="https://barczentewicz.substack.com/p/consent-for-everything-edpb-guidelines">Consent for everything?
EDPB guidelines on URL, pixel, IP tracking</a>).</p><p>Frankly, with a better, more balanced approach to enforcement&#8212;based on a realistic political economy perspective on bureaucracy and ideologies&#8212;I don&#8217;t think there would have been a need for much substantive reform. But we are now in a situation where the GDPR &#8220;in action,&#8221; as interpreted by the authorities and the courts, has been shaped by privacy absolutism. To address that, we likely need both changes in enforcement and in substantive rules.&nbsp;</p><p>Sensible GDPR reform needs to be based on a realistic understanding of what is wrong with the current system. Yes, there are some &#8220;red tape&#8221; concerns, but this pales in comparison with the more serious, structural problem of an enforcement framework which is institutionally biased in the direction of privacy absolutism. To focus on &#8220;record keeping&#8221; without tackling the latter problem would be to waste this opportunity. Europe needs a proportionate data protection framework, which protects individuals, while avoiding absolutist zealotry and placing privacy above everything else.&nbsp;</p><p>Not too long ago, the idea that there may be an opening for a substantive GDPR reform seemed fanciful. Now, it is the official policy of the European Commission. I hope that the reform process will be open to voices from outside the privacy bubble, and will result in restoring the balance between privacy and data protection, on the one hand, and other vital interests of Europeans, like economic security and freedom of expression and information, on the other.</p><p></p>]]></content:encoded></item><item><title><![CDATA[A serious target for improving EU regulation: GDPR enforcement]]></title><description><![CDATA[In recent months, we&#8217;ve been hearing about the European Commission&#8217;s call for simplification of regulations&#8212;an agenda championed by President Ursula von der Leyen.]]></description><link>https://eutechreg.com/p/a-serious-target-for-improving-eu</link><guid isPermaLink="false">https://eutechreg.com/p/a-serious-target-for-improving-eu</guid><dc:creator><![CDATA[Mikołaj Barczentewicz]]></dc:creator><pubDate>Thu, 27 Feb 2025 10:43:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ANRN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F092138d0-76e1-4106-a3e0-17c1cf65da71_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In recent months, we&#8217;ve been hearing about the European Commission&#8217;s call for simplification of regulations&#8212;an agenda championed by President Ursula von der Leyen. She rightly <a href="https://enlargement.ec.europa.eu/news/speech-president-von-der-leyen-european-parliament-plenary-new-college-commissioners-and-its-2024-11-27_en?utm_source=chatgpt.com">noted</a> that businesses are overwhelmed by regulations that are &#8220;too complex and costly to comply with.&#8221; She echoed the seminal <a href="https://commission.europa.eu/topics/eu-competitiveness/draghi-report_en">Draghi report</a>, which underscored that a lighter and clearer regulatory hand can spur innovation and growth that Europe desperately needs.</p><p>Yet <strong>there&#8217;s one regulatory domain where complexity and unpredictability are on full display: data protection</strong>.
Under the current GDPR enforcement regime, data protection authorities (DPAs) hold enormous power to impose huge fines, but they do so through an essentially privacy-maximalist lens. This has left businesses unsure of where the lines really are, has sown confusion across EU borders, and worst of all, has arguably placed an absolutist interpretation of one fundamental right&#8212;privacy&#8212;above everything else. <strong>What we do need instead is balanced protection of all Europeans&#8217; vital interests, including innovation, economic security, freedom of expression, and more.</strong></p><p>This is why <strong>I was surprised to see that the European Commission has not yet withdrawn its proposal for the <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52023PC0348">GDPR Procedural Regulation</a></strong>. The Proposal does little to reduce the complexity in GDPR enforcement and <em>nothing</em> to address the most pressing problem of the lack of balance in GDPR enforcement. The best course of action would be to quickly go back to the drawing board and seriously think about fixing GDPR enforcement, so that it genuinely contributes to the protection of fundamental rights and vital interests of Europeans, including our economic interests. Here, I propose one path for potential reform, hoping to spur a debate.</p><p></p><h3><strong>Why the Current Regime Needs an Overhaul</strong></h3><p><strong>The key substantive provisions of the GDPR aren&#8217;t the problem.</strong> The GDPR was intended to protect personal data while acknowledging the need to balance privacy with other fundamental rights and societal goals. <strong>But in practice, the enforcement structure has produced a disproportionate, absolutist interpretation of privacy.</strong> We see it in how the European Data Protection Board (EDPB) issues opinions that preserve enormous enforcement &#8220;flexibility&#8221; for DPAs without offering genuinely reliable guidance for compliance. The result is a chilling effect on innovation and a sense that privacy concerns trump everything else, which is neither reflective of European values nor truly mandated by EU law.</p><p><strong>The much-hyped <a href="https://www.edpb.europa.eu/our-work-tools/our-documents/opinion-board-art-64/opinion-282024-certain-data-protection-aspects_en">EDPB Opinion on AI models</a> exemplifies this phenomenon.</strong> It&#8217;s a laundry list of action items without any firm commitment that, if followed, they will keep you on the safe side of enforcement. In effect, it grants regulators near-limitless discretion while offering minimal practical guidance to those trying to innovate responsibly. That fundamental disconnect from everyday economic realities means a startup or business simply can&#8217;t count on any consistent interpretation of what &#8220;compliance&#8221; looks like once it&#8217;s actually tested in a DPA enforcement proceeding.</p><p>Some privacy regulators will say they aren&#8217;t trying to ban AI or hamper innovation; they merely &#8220;keep options open&#8221; so that they are never constrained from imposing what they see as the most privacy-preserving interpretation. But in practice, <strong>uncertainty can be as chilling as an outright prohibition</strong>&#8212;especially when regulators have the power to levy fines that could cripple many businesses.
Many potential entrepreneurs won&#8217;t chance building an ambitious AI tool here if they suspect a DPA might retroactively decide their GDPR efforts weren&#8217;t good enough.</p><p>What drives this outcome? It&#8217;s the political economy of GDPR enforcement. <strong>Post-GDPR, privacy authorities gained sweeping powers but haven&#8217;t faced a proportionate obligation to weigh the broader societal and economic impacts of their choices.</strong> In reality, they&#8217;re incentivized to adopt a mindset in which maximizing privacy is the sole end, with any consideration of non-privacy interests treated as a burden for businesses (and occasionally other regulators) to defend.</p><p>This outcome stands in direct conflict with the EU&#8217;s broader ambitions. With the Draghi report and the stated goals of the new EU Commission, the status quo of GDPR enforcement feels increasingly out of sync with where Europe wants to be. Giving more powers to the EDPB as proposed in the draft GDPR Procedural Regulation does not address the real issue.</p><p></p><h3><strong>My vision for a new tribunal to make GDPR enforcement decisions</strong></h3><p>The problem with GDPR enforcement is structural, and thus my suggested solution is also structural. What if we could end the practice of leaving DPAs as both prosecutor and judge&#8212;and instead entrust the most consequential and cross-border decisions to an independent EU tribunal? Here&#8217;s how I see this could work:</p><p><strong>DPAs keep investigative powers. </strong>DPAs would remain responsible for uncovering facts and building cases. It may be worth considering whether, for cross-border cases, there should be a new DPA at the EU level, but I leave this issue for another day.</p><p><strong>An independent, multi-disciplinary tribunal decides. </strong>When a cross-border case or similarly significant enforcement action is on the table, the DPA must present its findings to a specialized tribunal. This body would issue binding decisions (including fines). Crucially, it would not be stacked with privacy lawyers. Instead, a diverse panel&#8212;including economists, business experts, and generalist judges&#8212;would weigh privacy alongside other fundamental rights and Europe&#8217;s need for innovation and growth.</p><p><strong>Formal balancing of all interests. </strong>The tribunal&#8217;s legal framework would require it to articulate how each decision balances data protection against other fundamental rights and economic realities. Instead of mere lip service, there would be a written record ensuring that privacy isn&#8217;t automatically placed above, say, the freedom to do research with AI or the freedom to conduct a business.</p><p><strong>&#8220;Advocates general&#8221; for non-data protection interests. </strong>Drawing inspiration from the EU Court of Justice, we could empower judicial officers&#8212;&#8220;advocates general&#8221;&#8212;who would act specifically to highlight and defend non-data-protection interests. Their role would be to ensure that the tribunal never loses sight of broader vital considerations like free expression or economic development.&nbsp;</p><p><strong>Transparency and harmonization. </strong>All parties&#8212;DPAs, investigated parties, and possibly third-party interveners&#8212;could submit arguments (or amicus curiae briefs). Because this tribunal speaks with a single EU-level voice, we&#8217;d see much less fragmentation and more consistent decision-making.
Over time, the decisions would build a coherent framework, streamlining compliance efforts across the Union&#8212;a direct answer to the calls for simplification and legal certainty.</p><p><strong>Review of EDPB opinions and guidelines. </strong>Another possible refinement would be to require that the tribunal review any EDPB guidelines or opinions before they become final. This way, if the EDPB produces <a href="https://barczentewicz.substack.com/p/consent-for-everything-edpb-guidelines">an imbalanced document like its infamous &#8220;cookie law&#8221; opinion</a>, the tribunal could refuse to approve it. Regarding documents already produced by the EDPB, many of which call for radical overhaul, there could be a procedure for the tribunal to review them too. Some bodies could be given a right to request a review (e.g., a majority of the EDPB, a group of Members of the European Parliament, a Member State, the Commission, a tribunal panel, or one of the tribunal&#8217;s &#8220;advocates general&#8221;).</p><p></p><h3><strong>Conclusion</strong></h3><p>Such a reform would complement the Commission&#8217;s &#8220;simplification&#8221; drive. It would reduce regulatory burden by offering better and likely more predictable outcomes, ending the constant second-guessing about whether a new technology might suddenly get crushed by a surprising DPA crackdown that doesn&#8217;t take economic and technological reality into account. And it would uphold the plurality of rights enshrined in EU law, preventing privacy from overshadowing everything else.</p><p>This proposal is not about changing substantive GDPR rules, but about refining enforcement so that it delivers on all European values&#8212;privacy, yes, but also innovation, free expression, and economic security. Instead of leaving DPAs to shape the entire digital future of Europe on their own, it would aim to establish a more measured and balanced process.</p><p>Reforming GDPR enforcement is not trivial. There is a significant risk of ideological capture of any new mechanism, as happened with the existing framework. However, it is a worthwhile task and now may be the right time to undertake it. The currently proposed GDPR Procedural Regulation does not address the key problems. Thus I think it would be best for the Commission to withdraw the proposal and replace it with one based on a realistic assessment of the failings of the current enforcement framework. With this text, I hope to provoke some debate about serious reform of GDPR enforcement, a debate that we need but currently lack. </p>]]></content:encoded></item></channel></rss>