EU DMA workshops: Google, Amazon, Apple, Meta, and Microsoft
Over the past two weeks, the European Commission held a second round of public workshops with the designated "gatekeeper" companies Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft. Having analyzed the first round in April 2024 from a privacy and security perspective, I now examine what happened in Brussels during these follow-up sessions. The workshops provide helpful background for Meta’s and Apple’s announcements that they will fight the Commission’s non-compliance decisions in the EU courts.
The second round of DMA workshops reveals troubling patterns in the Commission's enforcement approach, particularly regarding privacy and security considerations. As I've argued in my work on interpreting the DMA consistently with the EU Charter, the Charter takes precedence over regulations like the DMA. The workshops suggest that the Commission may be failing to meet this standard, pursuing an interpretation of the DMA that undermines rather than upholds fundamental rights.
The gatekeepers' testimonies paint a picture of regulatory enforcement that is creating real harm: European consumers receiving inferior digital services, small businesses losing effective advertising tools, and security vulnerabilities being introduced in the name of openness. Meanwhile, the compliance costs—"multiple orders of magnitude" beyond what the Commission estimated—represent resources that could have been invested in innovation and security improvements.
What is also very concerning is the Commission's apparent belief that it can mandate optimal market outcomes through regulation, even when those outcomes conflict with user preferences and technical realities. The low adoption rates for less personalized advertising that the Commission cites as evidence of non-compliance may instead be evidence that users actually prefer the services as they were originally designed. The Commission's response—to mandate changes that make services worse for users—exposes a regulatory philosophy fundamentally disconnected from consumer welfare.
The path forward requires fundamental changes in the enforcement approach: clearer compliance guidance, genuine engagement with security experts, and recognition that consumer choice—not regulatory fiat—should drive market outcomes. Based on these workshops, however, the Commission appears unlikely to make such changes without significant pressure from stakeholders and the courts.
Below, I first discuss some general themes and then add more specific notes from individual gatekeeper workshops.
General themes
The DMA's ambiguities compound enforcement uncertainty
I previously highlighted the DMA's inherent legal uncertainties—from questions about its validity as a harmonizing measure under Article 114 TFEU to the undefined key concepts and conflicting interpretations that plague its enforcement. The workshops confirmed these concerns have materialized into concrete compliance challenges.
The gatekeepers' frustrations reveal a regulatory framework struggling with its own contradictions. Apple characterized the Commission's interoperability interpretation as "extreme," echoing my analysis that the Commission's approach goes beyond reasonable legal interpretation. Meta's and Apple's planned legal challenges reflect a deeper issue: the Commission's refusal to provide compliance clarity while simultaneously imposing fines for non-compliance creates a Kafkaesque regulatory environment. As Apple pointedly observed, when "no two people agree on what the DMA's substantive obligations mean," the resulting ambiguity undermines the rule of law itself.
Amazon's experience particularly illustrates what I've described as the technical implementation challenges inherent in the DMA. Their struggle to reconcile DMA data portability requirements with GDPR data controller obligations exemplifies the regulatory conflicts I warned would emerge. The companies' repeated pleas for guidance—met with Commission silence—suggest an enforcement approach that prioritizes regulatory power over legal certainty or even over positive effects for the EU.
The Commission confuses providing choices with dictating outcomes
Although the Commission itself maintains that the DMA focuses on conduct rather than outcomes, its treatment of low adoption rates for less personalized advertising as evidence of non-compliance contradicts that principle.
Meta's defense—that the DMA concerns conduct and user choice, not adoption metrics—aligns with my analysis of the pay-or-consent framework and subsequent discussions with Eric Seufert. If users overwhelmingly choose personalized services when given a genuine choice, this demonstrates consumer preference, not compliance failure. The Commission's response—treating user preferences as a problem to be solved through enforcement—exposes what I've characterized as the Commission's lack of concern for consumer welfare. This approach suggests the Commission seeks not to enable choice but to engineer predetermined market outcomes, regardless of what users actually want.
Enforcement disconnected from real-world consequences
The Commission's fixation on "consent rates" contrasts starkly with its lack of concern about the real negative effects of its DMA interpretation. I previously warned about the disconnect between the DMA's theoretical goals and practical realities, particularly how enforcement could disrupt business models that European businesses depend on (e.g., targeted advertising).
On this point, Meta highlighted how enforcement "decontextualized from impact" damages both users (through irrelevant advertising) and SMEs (through reduced advertising effectiveness)—precisely the advertising market disruption I've been writing about. Google's examples—€114 billion in potential business losses, 30% traffic reductions for direct hotel bookings, and innovations withheld from Europe—illustrate the Commission's willingness, as I've noted (https://truthonthemarket.com/2025/06/20/the-eus-dma-enforcement-against-meta-reveals-a-dangerous-regulatory-philosophy/), to ignore the economic pain its enforcement imposes.
Apple's concern that the DMA undermines their unique value proposition resonates with my longstanding critique of policymakers' dismissal of the walled garden as a legitimate consumer choice. The real threat of companies being forced to "hit the pause button" on European innovation suggests that the DMA's approach could leave European consumers as second-class digital citizens.
The Commission's systematic dismissal of privacy and security
The workshops also provided evidence for what I've long argued: the DMA creates privacy and security risks with insufficient safeguards while the Commission shows a troubling willingness to dismiss these concerns. See also my characterization of the DMA as a "security nightmare" and my analysis of how interoperability mandates create unaddressed privacy risks.
Apple's testimony was particularly striking. They observed that cybersecurity agencies like ENISA were "nowhere to be found" around specification decisions. The exclusion of technical experts from critical decisions, as Apple noted, ensures that "complex trade-offs around interoperability are being made by those without requisite technical knowledge." The company's incredulity at the Commission's statement that "integrity has a distinct meaning from users' privacy and security" highlights precisely the compartmentalized thinking I criticized when analyzing how the DMA conflicts with EU Charter rights.
Fragmentation threatens the DMA's legal validity
The workshops echoed my thesis about regulatory fragmentation: parallel national enforcement actions, particularly by Germany's Federal Cartel Office (FCO), undermine the DMA's validity as a harmonizing measure under Article 114 TFEU.
Google and Amazon's testimonies provide concrete evidence of fragmentation. Amazon's experience is particularly telling: despite the DMA's promise of a "single European rulebook," they face FCO actions on matters "squarely covered by the DMA"—precisely the "brazen attempt to circumvent the DMA's harmonizing function" I identified in analyzing German enforcement. The FCO's requirement that Amazon promote "higher-priced, independent, third-party seller offers" contradicts the unified approach the DMA was meant to establish.
Google's warning about unchecked national litigation compounds the problem. When member states and private litigants can pursue parallel actions on DMA-covered issues, the uniform application on which the DMA's legal basis rests is undermined. If the Commission accepts such fragmentation—as it appears to be doing—it admits the DMA has failed its fundamental purpose as a harmonizing instrument, potentially rendering it vulnerable to invalidation by the EU Court of Justice.
Notes on individual workshops
Alphabet/Google: security-privacy issues
Google's workshop testimony raised an important problem in the DMA's implementation: how third parties systematically push to weaken security protections in the name of competition. A participant raised concerns about the friction in the sideloading process due to security warnings, which cause significant user drop-off, and asked if Google could create a whitelist for trusted developers.
Alphabet representatives argued that Android requires a single, one-time warning per install source because of real security risks. They said a whitelisting system is not technically feasible, given the vast number of apps and the inability to guarantee that a sideloaded app is identical to a verified one from the Play Store. They also stated that they do not dictate what additional warnings OEMs can show.
Google also presented compelling data: users are 50 times more likely to encounter malware from sources outside the Google Play Store. They provided a concrete example of a current threat—an app that clones TikTok and allows malicious actors to upload the entire photo gallery from Android devices. As Google noted, "these are real threats, these are real risks."
This aligns with what I've been warning about regarding privacy and security risks of interoperability and sideloading mandates. As I wrote then, a sideloading mandate "will effectively force users to use whatever alternative app stores are preferred by particular app developers. App developers would have strong incentive to set up their own app stores or to move their apps to app stores with the least friction (for developers, not users), which would also mean the least privacy and security scrutiny."
Another concerning example came from a "privacy-focused" search engine whose representative argued that they need more granular search-query data from Google, because the current anonymization thresholds render some 98-99% of the query data useless for key functions like auto-complete by excluding the long tail of unique queries.
Alphabet representatives stressed that the law requires anonymization and that frequency thresholding is "state of the art throughout the industry"; they invited challengers to propose a better, equally robust anonymization technique. Google explained their specific approach: queries must be entered by at least 30 signed-in users globally over the preceding 13 months, and results must have been viewed by at least five signed-in users in a given country per device type. To understand the significance: these thresholds mean that unique or rare search queries—the "long tail" of search behavior—cannot be shared with competitors, which limits competitors' ability to improve features like autocomplete. Technical privacy experts who reviewed these measures suggested, if anything, that they might not be stringent enough, highlighting the difficult balance required. Google mentioned that they have innovated to recover some queries that fall below the threshold, such as easily identifiable misspellings, but said they are not aware of any better solution for robustly anonymizing such data and that no one has proposed one.
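To make the mechanics concrete, the following is a minimal sketch of how a frequency-thresholding filter of the kind Google described might work. Only the 30-distinct-user threshold comes from the testimony; the function name, the data shapes, and the simplification of the 13-month look-back are my own illustrative assumptions, not Google's implementation.

```python
from collections import defaultdict

# Threshold taken from Google's workshop testimony; the second threshold
# (five distinct viewers per country and device type) is omitted for brevity.
MIN_DISTINCT_QUERY_USERS = 30

def shareable_queries(query_log):
    """query_log: iterable of (query, signed_in_user_id) pairs from the look-back window.

    Returns only the queries entered by at least MIN_DISTINCT_QUERY_USERS distinct users.
    Rare or unique queries -- the 'long tail' -- are dropped entirely, which is why the
    anonymized output is of limited use for features like autocomplete.
    """
    users_per_query = defaultdict(set)
    for query, user_id in query_log:
        users_per_query[query].add(user_id)
    return {q for q, users in users_per_query.items()
            if len(users) >= MIN_DISTINCT_QUERY_USERS}
```

The intuition behind the threshold is simple: a query typed by thousands of different users clears the bar easily, while a one-off query that could identify a single person never does.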
The European Commission representative highlighted that Article 6.11 of the DMA protects end-user privacy by requiring gatekeepers to share search data in an anonymized form. The Commission stated that this anonymization should not significantly reduce the quality or usefulness of the data. They mentioned that the Commission is assessing how data can be anonymized to ensure strong privacy protections in line with GDPR and is engaging with data protection authorities on this topic.
Amazon: compliance costs and unintended privacy consequences
Amazon's testimony exposed two issues in the DMA's design and implementation. First, the staggering gap between projected and actual compliance costs. The Commission estimated €10 million annually across all gatekeepers; Amazon alone reports costs "multiple orders of magnitude beyond that predicted amount." This massive diversion of resources from innovation to compliance represents a hidden tax on digital services offered in Europe.
More troubling is what Amazon's data portability implementation revealed about security risks. Their vetting process for DMA third-party data access uncovered that over 75% of applicants operate from outside the EU, with many appearing to be data aggregators with opaque privacy policies. This validates my concerns about the DMA creating new attack vectors: the very mechanisms meant to enhance competition are being weaponized by actors with no stake in European privacy protection.
The API versus self-service portal distinction matters here. While Amazon maintains some control through vetting, the fundamental problem remains: the DMA mandates data flows to entities that may have minimal privacy safeguards or malicious intent. Amazon's discovery of "significant security risks" in their vetting process should prompt serious reconsideration of these mandates, yet the Commission appears oblivious.
Apple: when interoperability becomes exploitation
As I've written in my analysis on Apple and the EU DMA, Apple's concerns about the Commission's approach are particularly acute.
The Commission representative emphasized that user safety was a primary factor in the Commission's decisions. The official stated that the measures specified in their decision "also come with privacy and security safeguards." A typical measure involves "having user permission to grant apps access to sensitive data," similar to how Apple's own ecosystem works. For example, regarding iOS notifications, the Commission clarified that Apple can "require third parties to encrypt notifications and not to break any end-to-end encryption."
Apple countered that the interoperability mandates were "intrusive" and that the process was being rushed. An Apple representative said, "You're forcing us to rush interoperability engineering, creating the risk that we're going to have to offer premature solutions with bugs... creating lasting problems for developers and users."
Apple revealed that some third parties are exploiting interoperability requirements for data harvesting. Specifically, these parties have requested the ability to "read the contents from each and every message and email on the user's device." An Apple spokesperson pointed out a direct conflict with GDPR, stating, "The DMA was explicitly intended to not undermine the GDPR. Yet it seems we are constantly being asked to do just that."
This weaponization of interoperability is precisely what ICLE warned about in its recent comments on Apple's interoperability requirements. As my colleagues noted there, "unscrupulous actors with no incentive to maintain security safeguards are sure to try to exploit any loopholes created by the Commission. The result is an inevitable increase in vulnerabilities, leaving European users more exposed to risks that could have been mitigated within a more controlled ecosystem."
An Apple representative read a quote from the Commission's specification decision: "Within the architecture of the DMA, integrity has a distinct meaning from users' privacy and security," and responded, "That cannot be right, and again, cannot be what was intended."
Regarding the Account Data Transfer API, Apple described the security measures in place, including a due diligence process in which Apple asks third parties whether they "plan to sell or license user data to third parties that the user has no relationship with." The representative noted the Commission's early agreement that "it was important the DMA not become synonymous with abuse of data in this space."
Meta: business model destruction through regulatory fiat
I've written extensively about Meta's advertising compliance, but the workshops revealed one significant clarification. Meta explained that their LPA (less personalised ads) option does not involve any data combination across different Meta services, stating: "if a user opts to take the ads experience less personalised ads, the only data that we would use are personal data relating to a user's interactions with ads, so an ad they want to see more of or an ad they never want to see again, a user's age, gender, if they've provided this, and a general location. We also use very limited context-based data. This is relating to content shown to a user in a specific session they are engaging with."
However, a Meta representative added that "there is a part of the ad selection process which could be impacted by the content in the live session that you are in on Facebook and Instagram. But it's minimal, and it's reduced."
This statement raises important questions. Meta appears to acknowledge that its advertising system still accesses basic user information from Facebook/Instagram, even in LPA mode. This could prove problematic given the Commission's strict interpretation that a less personalized alternative must involve no data combination/cross-use across the gatekeeper's services—as I analyzed in "How will the EU DMA 'pay or consent' decision impact Meta's business?".
But, as a Meta representative noted during the workshop, data like age and location are "pure table stakes for advertising." This creates an impossible bind: the Commission's interpretation risks mandating that Meta offer a service that cannot economically exist. Meta's observation that this "potentially imposes an unviable business model" understates the severity—it definitively imposes business model destruction.
Even if the Commission accepts LPA as is, that does not address the fact that the option is largely useless to the many EU SMEs that rely on Facebook or Instagram to reach their customers. Such businesses don't have budgets for "brand advertising." To be viable, they need to reach precisely the people who are interested in their often-niche products or services.
The enforcement "decontextualized from impact" that Meta describes has real consequences: European SMEs lose effective advertising tools, users receive irrelevant ads, and Meta potentially faces a difficult choice between compliance and viability. The Commission's fixation on consent rates while ignoring these impacts reveals enforcement driven by ideology rather than consumer welfare or economic reality.
Microsoft: the Recall controversy and consent confusion
Microsoft's workshop highlighted two distinct but related problems. LinkedIn's dual consent requirement—requiring both DMA and GDPR consent for overlapping data uses—exemplifies the regulatory confusion I've long criticized. Users must navigate multiple consent frameworks for the same data processing, a complexity that serves no privacy purpose but increases the Commission's control. This artificial interpretation seems designed to circumvent inconvenient GDPR precedents rather than to protect users. (I explain how the "two consents" interpretation increases EC powers and attempts to circumvent precedent here.)
The second issue is the Recall feature in Windows. Microsoft explained that the Recall feature, which allows a user to find content they have previously seen on their screen, "runs entirely locally on the PC." It uses AI to process screenshots locally to make past activity searchable. This design implies that sensitive user activity data is not sent to the cloud.
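Purely for illustration, here is a minimal sketch of the general "runs entirely locally" pattern Microsoft described: capture the screen, extract its text on-device, and store it in a local index so that nothing is sent to the cloud. The library choices (Pillow, pytesseract, SQLite FTS5) and function names are assumptions made for the sketch, not Recall's actual implementation.

```python
import sqlite3
from datetime import datetime, timezone

from PIL import ImageGrab          # local screenshot capture
import pytesseract                 # stand-in for on-device text extraction (not Recall's models)

# Everything below touches only local resources: a local screenshot,
# local OCR, and a local SQLite full-text index. No network calls.
db = sqlite3.connect("local_activity_index.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snapshots USING fts5(captured_at, text)")

def capture_and_index() -> None:
    """Screenshot the desktop, extract its text locally, and index it on-device."""
    image = ImageGrab.grab()                      # stays in local memory
    text = pytesseract.image_to_string(image)     # local OCR, no cloud processing
    db.execute("INSERT INTO snapshots VALUES (?, ?)",
               (datetime.now(timezone.utc).isoformat(), text))
    db.commit()

def search(term: str) -> list[tuple[str]]:
    """Full-text search over previously seen screen content, entirely on the PC."""
    return db.execute(
        "SELECT captured_at FROM snapshots WHERE snapshots MATCH ?", (term,)
    ).fetchall()
```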
The Commission's interest in mandating data portability for Recall raises serious security concerns. (Some will say "even more" concerns given existing debates about Recall's security, but I'm more optimistic about the future of this kind of technology). An EU Commission official stated that they're seeking feedback on the "data portability solution" for Recall under the EU DMA.
This could be very problematic. Throughout my work on the DMA, I've argued that the EU Commission consistently downplays user security and privacy concerns about legally-mandated interoperability and data portability.