Today, I joined Eric Seufert for my seventh (!) appearance on the Mobile Dev Memo podcast[1] to discuss the European Commission's April decision on Meta's "pay or consent" model. The catalyst for our conversation was the recent release of the full text of this decision, which provides crucial insights into the Commission's approach to enforcing the Digital Markets Act (DMA). I already wrote about the decision here and on Truth on the Market, but our conversation with Eric covered more ground, so I encourage you to listen! Below, I’m sharing my extended notes on that conversation with a bit more chronological and technical detail.
The long road through two regulatory regimes
I began by providing context on the two regulatory regimes affecting Meta's advertising monetization in the EU: the GDPR and the DMA. The saga began in 2018, when the GDPR became applicable, prompting Meta to move from user consent to "contractual necessity" as their legal basis for processing personal data. This kicked off a regulatory odyssey, which I have been covering here on EUTechReg.
In December 2022, the European Data Protection Board (EDPB) forced the Irish Data Protection Commission to tell Meta they couldn't use contractual necessity for personalized advertising. The EDPB adopted a restrictive—and I think mistaken—interpretation that personalized ads funding a service don't fall under contractual necessity.
By April 2023, Meta had pivoted to "legitimate interests" as their legal basis. However, this too was short-lived. The same month, Meta received an unrelated but significant €1.2 billion fine for EU-US data transfers, signaling the regulatory pressure they faced on multiple fronts.
The EU Court of Justice ruled in July 2023 that Facebook couldn't rely on legitimate interests, though that case was limited to third-party data. Crucially (and this would become important later), the Court stated that users who withhold consent may be offered an equivalent alternative without such data processing, "if necessary for an appropriate fee." This seemingly minor point would become central to Meta's strategy.
A significant escalation came from the Norwegian data protection authority. In July 2023, they issued a local decision prohibiting Meta from relying on legitimate interests and initiated an urgency procedure at the EDPB.
By August 2023, Meta announced they would move to consent as their legal basis. The EDPB partially agreed with the Norwegian position in October 2023, directing the Irish DPC to decide that Meta couldn't rely on legitimate interest. With contractual necessity and legitimate interests both ruled out by the regulators, consent was the only remaining option under the GDPR.
The Digital Markets Act, which entered into force in November 2022, added another layer of complexity. When Meta was designated as a "gatekeeper" in September 2023, the Commission crucially designated Facebook, Instagram, and Meta Ads as separate regulated services.
This designation had serious implications. Under Article 5(2) of the DMA, gatekeepers cannot cross-use or combine personal data between separate services without user consent. Even though Meta Ads functions as the advertising backend for Facebook and Instagram (what could be considered an integrated system), the Commission's designation meant any data flow between these "separate" services would presumptively violate the DMA without consent.
In response, Meta introduced its "Subscription for No Ads" (SNA) plan on September 7, 2023, just two days after their gatekeeper designation. By October 30, 2023, Meta officially announced the model: users could choose between a paid subscription without ads or free access with personalized ads.
The rollout in November 2023 immediately attracted regulatory attention. The European Commission sent Meta requests for information about the new model. From January to March 2024, the Commission engaged in discussions with the Irish DPC about Meta's approach. Meta submitted their compliance report, arguing that the "pay or consent" model satisfied both GDPR and DMA requirements.
Meta’s initial "Subscription for No Ads" model offered users a choice: consent to personalized advertising or pay €9.99 (€12.99 on mobile) monthly for an ad-free experience.
March 7, 2024, marked the deadline for gatekeepers to comply with the DMA. Soon after, the Commission opened formal proceedings to investigate whether Meta's model complied with Article 5(2).
Back on the “GDPR track,” in April 2024, the EDPB issued Opinion 08/2024 on “pay or consent” models generally. While acknowledging that the Court of Justice had explicitly said paid alternatives to consent were possible, the EDPB struggled to target “big tech” without harming traditional publishers who had long used similar models. Their opinion hinted heavily that binary choices weren't sufficient.
Continuing the DMA investigation, the Commission sent its preliminary findings to Meta at the beginning of July 2024, concluding that the model didn't comply with DMA requirements. Meta responded, contesting these conclusions. In November 2024, Meta announced an "Additional Ads option" providing a third choice for users.
Finally, on April 23, 2025, the Commission adopted its final decision, finding Meta in non-compliance with Article 5(2) of the DMA. As I noted previously, the €200 million fine was relatively low given the €15.2 billion maximum. This monetary penalty, I suggested to Eric, seems designed to deflect attention from the decision's prohibitionist and economically uninformed legal reasoning, perhaps in the hope that a lower sum would prevent the U.S. administration from viewing this as a tax or tariff on American companies.
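For a sense of scale, here is a minimal back-of-the-envelope check using only the two figures above (the €15.2 billion ceiling reflects the DMA's cap of 10% of worldwide turnover):

```python
# Rough scale check: the fine as a share of the maximum figure cited above.
fine = 200e6       # EUR 200 million, the fine actually imposed
ceiling = 15.2e9   # EUR 15.2 billion, the maximum cited in this post (the DMA's 10%-of-turnover cap)
print(f"Fine is {fine / ceiling:.1%} of the theoretical maximum")  # -> about 1.3%
```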
The Commission's two-pronged attack
In its April decision, the Commission adopted a novel interpretation that Article 5(2) of the DMA creates two distinct and cumulative conditions:
The “specific choice” condition: an autonomous DMA concept requiring gatekeepers to offer “a less personalised but equivalent alternative.”
The “valid consent” condition: traditional GDPR consent requirements.
This distinction seems strained, especially since GDPR authorities have long insisted on reading consent requirements to ensure “real” choice. The Commission appears to be creating this separation to assert more regulatory power and circumvent what the Court of Justice said in the Meta case about allowing “appropriate fees.”
The European Commission’s arguments
The Commission's central argument against Meta's "Consent or Pay" model rests on their interpretation that it fails to provide users with a "specific choice" because the paid "Subscription for No Ads" (SNA) option is not an equivalent alternative to the free "With Ads" option.
Different conditions of access
The Commission asserts that the two options are fundamentally different because they have disparate conditions of access. Accessing the SNA option requires a recurring monthly monetary payment and access to online payment systems—a significant economic commitment. In contrast, the "With Ads" option is free, requiring only a click to consent. The Commission argues that the means of remuneration is an essential feature of a service.
Their core thesis is striking: if Meta chooses to offer its primary service for free, any equivalent alternative must also be free of monetary charge to have equivalent access conditions. The Commission states that if Meta finds this unviable, it should make a case under Article 9(1) of the DMA, which allows for suspension of obligations if they threaten economic viability.
Meta counters by accusing the Commission of trying to impose a specific business model, effectively punishing Meta for having historically offered its services for free. They point out that the DMA doesn't explicitly mandate that alternatives must be free of charge, unlike other articles in the regulation where this is specified. Meta claims that offering a non-personalized, ad-supported service for free is not a viable business model.
User “lock-in” and habituation
The Commission also pointed to alleged user lock-in and habituation. For two decades, Meta cultivated a massive user base by offering its services for free, even featuring the slogan "It's Free and Always Will Be" on Facebook's registration page until 2019. The Commission argues that users have grown accustomed to accessing these services without monetary payment and don't perceive ads as disruptive enough to warrant paying a fee. Therefore, presenting a paid option is not a realistic or attractive alternative for most users.
When Meta raised well-known examples like YouTube Premium, Spotify, and various news publishers offering similar "Consent or Pay" models, the Commission dismissed them on two grounds. First, most aren't designated gatekeepers under the DMA. Second, services like video streaming and media publishing allegedly involve different user perceptions because their content (movies, articles) has "known monetary value outside the platform," unlike user-generated content on social networks.
I find this distinction particularly unpersuasive. Can Europeans really be considered so unfamiliar with paying for digital services that we can't legally consider them free to choose when payment is involved? The distinction between Instagram and platforms whose content has "known monetary value outside" the platform sounds like an argument from lawyers with no understanding of economic reality. Compare YouTube influencers to Instagram influencers: how is this a relevant categorical distinction?
Predictability
The Commission highlights that Meta knew, or should have known, that an overwhelming majority of users would choose the free option. They point to academic studies showing low willingness to pay for privacy and Meta's own internal documents from 2023, which predicted uptake for the SNA option would be extremely low (less than 1%)—a figure that closely matched the actual outcome.
The Commission suggests Meta's pricing strategy around SNA expressly assumed people wouldn't pay, treating this as a big “gotcha.” But I don't see how this is the gotcha that the Commission's lawyers think it is. Their argument is sophomoric: they say users choose not to pay even though they're "interested" in lesser processing, but they provide no argument for why that "interest" makes the choice not to act on it any less free. It's a normal feature of life that many interests are of so little value to us that we don't pursue them. A legal standard of free choice cannot be set so low that it turns on whatever people might say in a survey, irrespective of business reality.
It was obvious to everyone that most users wouldn't pay, and that only the most valuable advertising targets would. Taking that into account in pricing strategy was basic competence for Meta's monetization team. I'd suggest the low uptake simply shows that users want social media funded by personalized advertising. The low uptake is evidence that the Commission is wasting everyone's time trying to force something Europeans don't want.
Circumventing the CJEU's Meta judgment
The Commission believes that the Court of Justice's ruling in Meta Platforms, which mentioned offering an alternative service for “an appropriate fee,” is inapplicable to the “specific choice” requirement under the DMA. This is because that judgment concerned the validity of consent under the GDPR, not the DMA’s autonomous “specific choice” condition.
Even if the judgment were applicable, the Commission notes it doesn't automatically entitle Meta to charge a monetary fee. They point out that "if necessary for an appropriate fee" doesn't exclude other forms of remuneration, citing the German ("Entgelt," meaning consideration or compensation) and French ("rémunération") versions, which suggest broader transfers of value beyond direct user payment.
Meta insists the Commission is trying to circumvent the Meta Platforms judgment, which they interpret as explicitly permitting an equivalent alternative to be offered for an appropriate fee.
Lack of valid consent
Beyond "specific choice," the Commission argues users cannot give valid GDPR consent due to the coercive nature of Meta's model. They emphasize the power imbalance between Meta and its users, combined with the clear "detriment" of either paying for previously free services or abandoning platforms central to their social lives—what they call a "very substantial and potentially irreparable loss." Citing the EDPB Opinion on pay or consent, they argue the binary choice fails to mitigate this detriment, as true freedom would require a free alternative without behavioral advertising.
The Commission particularly emphasizes how Meta cultivated users for two decades with free services ("It's Free and Always Will Be"), making the subsequent paywall an "illusory choice." While the CJEU's Meta Platforms judgment allows "appropriate fees," the Commission argues that here the fee itself becomes coercive—a negative consequence for withholding consent rather than a freely chosen alternative.
Economic blindness: the Commission's most troubling stance
Perhaps the most alarming aspect of the decision is revealed partially in paragraph 154, where the Commission states that DMA compliance "will inevitably impact the business models that gatekeepers choose" and that the DMA "does not safeguard such business models from any constraint it mandates."
Moreover, the Commission explicitly states they won't consider economic impacts unless a gatekeeper applies under Article 9(1), claiming exceptional circumstances threaten their EU operations' viability. Even then, we can expect an extremely narrow interpretation—perhaps pushing gatekeepers to accept minimal profits above a strictly defined cost base.
The third-party blind spot
As I noted in my previous comments on the April decision, the Commission seems to have completely abandoned third-party impact analysis. In paragraph 262, they acknowledge that "behavioural advertising may create value to some end users and businesses under certain circumstances" but immediately dismiss this as irrelevant to their legal analysis.
The Decision contains no examination of potential harm to EU businesses advertising on Meta's platforms. I suggested that personalized advertising on Facebook and Instagram has created significant contestability in European markets, particularly for SMEs and direct-to-consumer businesses that depend entirely on targeted advertising channels.
Meta's current triple choice model
Since November 2024, Meta has implemented a significantly revised model:
Reduced subscription prices by 40% (€5.99/month on web, €7.99 on mobile; see the quick check after this list).
Introduced a third option: "less personalized ads" with non-skippable ad breaks.
Created a two-step choice flow where users first choose between paid and free, then can opt for less personalization.
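As a quick sanity check of that headline figure, here is a small sketch using only the prices quoted in this post; "40%" is the announced rounded figure, and the mobile cut works out slightly lower:

```python
# Price cut since November 2024, based on the figures quoted in this post.
old_prices = {"web": 9.99, "mobile": 12.99}  # EUR per month under the initial SNA model
new_prices = {"web": 5.99, "mobile": 7.99}   # EUR per month since November 2024

for channel, old in old_prices.items():
    new = new_prices[channel]
    cut = 1 - new / old
    print(f"{channel}: EUR {old:.2f} -> EUR {new:.2f} ({cut:.0%} reduction)")
# web: EUR 9.99 -> EUR 5.99 (40% reduction)
# mobile: EUR 12.99 -> EUR 7.99 (38% reduction)
```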
The Commission's decision hints they might accept this model if Meta removes the pre-selection of personalized ads in the second step. However, given the hostile tone throughout the decision, I wouldn't exclude new objections, particularly around what data processing occurs in the "less personalized" option.
Meta states they use age, gender, location, device information, ad engagement data, and “information about the content you're viewing.” The Commission might still argue that this constitutes cross-use between Meta Ads and the social platforms.
EU AI regulation: a pattern of uncertainty
We also discussed recent developments in EU AI regulation, where similar patterns of regulatory uncertainty emerge.
The EDPB's non-guidance
The European Data Protection Board issued an Opinion on AI models and the GDPR in December 2024, which in classic EDPB style provides a long list of things AI developers can attempt for GDPR compliance while giving no guarantees of sufficiency. They didn't declare AI illegal in the EU, but that's hardly reassuring—such a declaration would be politically unpalatable even there. Instead, they maintained maximum enforcement flexibility, allowing any national enforcer to impose billion-euro fines while leaving AI developers guessing whether their compliance efforts will be deemed inadequate.
CNIL's pragmatic exception
The French data protection authority, CNIL, stands out by actually labeling specific actions as likely to achieve compliance. This may sound like faint praise, but it shows how imbalanced GDPR enforcement has become (see my “The EDPB’s AI Opinion Shows the Need for GDPR Enforcement Reform” and “A serious target for improving EU regulation: GDPR enforcement”). The problem is that CNIL's guidance only indicates how the French authority will act. In cross-border cases, CNIL may be outvoted in the EDPB by more privacy-absolutist authorities, as the Irish DPC has repeatedly learned.
Meta's AI training controversy
Meta's plan to train AI models on public user data faced particular scrutiny. Critics argued Meta couldn't use data uploaded before their May 27 announcement because users lacked reasonable expectations about AI training. This creates an impossible retroactivity problem—if accepted, no existing public data could ever be used for AI training.
Meta implemented unconditional opt-outs for both users and non-users. Yet critics argue this still isn't sufficient. As I noted, we're discussing public content anyone on the internet can access. Given search engines and widespread web scraping, can it seriously be argued that users lack reasonable expectations about how public content might be reused?
The Draghi Report's limited impact
Finally, we discussed how the Draghi Report's warnings about European competitiveness have had minimal impact on Brussels. While there's genuine reform momentum in European capitals—with the Swedish Prime Minister even calling for a delay to AI Act implementation—the Commission treats this as a political problem to be managed rather than a call for fundamental change.
The Commission's proposed GDPR “simplification” is laughable, focusing on minor cost savings while ignoring fundamental issues. Brussels seems unwilling to accept that there are fundamental problems with how laws like the DMA and GDPR are applied—particularly the refusal to consider economic consequences.
What we get from Brussels is lawyers making categorical decisions about digital markets without understanding their economic dynamics. Their formalistic interpretation, divorced from business reality, poses serious risks to European competitiveness.
There’s a segment of privacy officials who view AI like they do cryptocurrency—as a fad to be stopped or at best ignored. They're the same crowd that eagerly shares every paper suggesting LLMs can't count or “reason” properly. Combined with the Commission's explicit stance that they needn't consider economic impacts on gatekeepers or third parties, this attitude threatens the welfare of Europeans.
[1] Here is a list of my MDM appearances so far:
1 March 2023, “A deep dive on European data privacy law”
11 April 2023, “A deep dive into the Digital Markets Act (DMA) and Digital Services Act (DSA)”
14 June 2023, “The future of EU-US data transfers”
20 December 2023, “Exploring the Pay-or-Okay model”
24 April 2024, “Is Pay or Okay dead in Europe?” (see also my notes from that podcast on EUTechReg.com)
12 February 2025, “Understanding the EU’s AI Act”
25 June 2024, “What’s happening with DMA enforcement?”