Daniel Leufer is a Senior Policy Analyst at Access Now, a global human rights organisation that defends and extends the digital rights of people and communities at risk. We spoke about Access Now’s work on the EU AI Act and how the European Commission’s AI Omnibus proposal could seriously undermine fundamental rights.
Q: What is Access Now and can you introduce your work on the EU AI Act?
Daniel: Access Now is a global human rights organisation that works to defend and extend the digital rights of people and communities at risk around the world. We operate a 24/7 digital security helpline that supports people and organisations facing digital threats and conduct policy work at the national, regional, and international levels.
I joined Access Now in 2019, after the so-called “glory days of the GDPR”. I had just finished a PhD and was interested in this very vague and hype-driven term “artificial intelligence”. Access Now began to focus on AI because we noticed a concerning narrative: Whilst data protection was coupled with ‘regulation’, AI was coupled with ‘ethics’. This was a subtle and deliberate linguistic tactic to avoid regulatory scrutiny and encourage reliance on industry self-regulation and ethical guidelines. Meanwhile, industry began to rebrand systems they had been operating for years from ‘big data’ to ‘AI’.
At this time, Access Now’s (former) Europe Director, Fanny Hidvégi, was a member of the Commission’s High-Level Expert Group on AI. Through the High-Level Expert Group process, we, alongside civil society and academics, advocated for the EU to conduct a review of existing relevant regulation and identify actual gaps that should be addressed. We did not advocate for a bespoke AI regulation because AI systems – which fundamentally involve large-scale data processing – were already well regulated by the GDPR.
Unfortunately, the EU did not follow this approach. Instead, it proposed the risk-based AI Act, which, as we reported, was loudly welcomed by industry. We were concerned about the risk-based framing because it departs from the GDPR’s rights-based approach, which sets a floor of fundamental rights that apply whether processing is low or high risk. The AI Act takes a different route: The rules don’t apply unless the risk reaches a certain threshold. This framing also assumes that all risks can be mitigated. However, certain uses of AI are intrinsically at odds with fundamental rights, and those risks cannot be mitigated.
We worked very closely with EDRi and a broad coalition of organisations throughout the AI Act process. For example, Sarah Chander, while at EDRi, led our coalition’s work calling for red lines against certain uses that intrinsically undermine fundamental rights. Whilst we were unsure whether we could achieve anything meaningful through the AI Act, we advocated for adding some rights-based elements. In particular, we advocated for transparency and documentation obligations to support people in exercising their rights under other regimes, such as the GDPR. Whilst we were successful with Parliament, a lot of these elements were removed during the trilogues. Overall, Access Now sees the AI Act as a major missed opportunity. However, we believe the transparency and documentation obligations could improve people’s ability to seek redress.
Q: Can you introduce the Omnibus Package?
Daniel: The EU’s Omnibus is a package of amendments to multiple digital laws, including the AI Act (the AI Omnibus), and the GDPR and ePrivacy framework (the Digital Omnibus). The Commission argues that the Omnibus package simplifies rules and supports innovation through minor technical changes that ensure efficient implementation. On that basis, it has not conducted an impact assessment. However, this claim is simply not accurate: The amendments undercut fundamental rights.
The Omnibus follows years of sustained lobbying against the GDPR and digital regulation more broadly. Whilst the proposed changes often look technical and superficial, they risk having serious negative consequences for how the laws work in practice. Several amendments clearly undermine legal certainty and enforcement.
The AI Omnibus is particularly surprising because we saw the AI Act as a victory for industry, law enforcement, and migration authorities. However, these actors are now arguing it poses an unfair regulatory burden. This puts digital rights organisations in a strange position. We are now compelled to defend the AI Act despite believing it is deeply flawed. In particular, the AI Omnibus undermines the limited gains made to improve transparency and documentation. However, from an AI governance perspective, the most significant changes are to the GDPR rather than the AI Act.
Q: How do the Omnibus amendments to the AI Act impact fundamental rights?
Daniel: The AI Act obliges providers of high-risk systems to comply with several responsible development requirements. As a reward for compliance, the provider can place the system on the market with the ‘CE’ mark. This enables any company or public authority to easily identify and procure from responsible providers, and aims to prevent responsible providers from being undercut by irresponsible ones.
We think one of the most promising aspects of the AI Act is the requirement to register high-risk AI systems in a publicly accessible database. Under this obligation, providers of high-risk systems must publish documentation that clearly describes the system and its intended purpose. This information could be instrumental in enabling an individual to pursue a data protection or anti-discrimination complaint.
This addresses the longstanding difficulty of accessing reliable information about deployed AI systems. For example, Access Now intervened in a court case in Brazil about the use of an emotion, gender and age recognition system in the São Paulo transport system. Due to a lack of transparency, we could only glean how the system worked from promotional materials. This meant we had to build a legal argument on exaggerated or inaccurate marketing claims. In addition, we’ve seen cases where materials were removed from a company website once an investigation began. This situation is all too common.
But what actually qualifies as a high-risk AI system? Originally, the AI Act had a two-stage designation process for high-risk AI systems: (1) Is the product an ‘AI system’ under the AI Act’s definition? (2) Does it fall under one of the high-risk use cases listed in Annex III? If the answer to both is affirmative, the provider would have to comply with the high-risk requirements, including documentation. However, a third stage was added during the legislative process: (3) Does the provider think the system actually poses a high risk? This self-assessment enables providers to exempt themselves.
Parliament’s legal service published an extremely negative opinion about this exemption, concluding it would create real legal uncertainty. We also strongly advocated against it, concluding it would dramatically undermine the AI Act and its enforceability. However, it was treated as politically impossible to remove because it was in both the Parliament and Council positions.
As a concession, the final AI Act required all providers that exempt themselves to declare this in the publicly accessible database. At a minimum, this means that the AI Office can review the register and identify concerning exemption patterns and anomalies across the market, e.g. if thousands of systems were being self-exempted in one Member State and almost none in another.
The AI Omnibus removes this obligation. As a result, providers can unilaterally decide their system does not pose a high risk and do not have to declare the exemption. This is ridiculous, not least because the burden removed is minimal: An exempting provider simply had to publish (1) their name, address and contact details (or those of any representative); (2) the system’s name; (3) a description of its intended purpose; (4) a brief explanation of why they consider the system not to be high-risk; (5) its market status (already on the market, or intended to be placed on the market); and (6) the Member States where it is, or will be, used. This is a few minutes of administrative work.
Removing this requirement cannot meaningfully boost innovation, but it will allow the most irresponsible providers to quietly opt out of the regulation. This will undermine the AI Act’s potential to create a level playing field where responsible providers are no longer undercut by those cutting corners. It will also deprive regulators and the public of one of the few tools that could reveal how the high-risk category is working in practice.
Q: How do the Omnibus amendments impact fundamental rights supervision?
Daniel: As discussed, the Omnibus increases opacity around exempted high-risk systems. As a result, the AI Office can’t easily detect patterns of abuse, impacted persons lose an important information source for enforcing data subject rights, and researchers and civil society can’t scrutinise the market.
In addition, the AI Omnibus undermines the ability for national fundamental rights bodies to supervise AI systems. Under the current AI Act, Article 77 gives national bodies responsible for supervising and enforcing fundamental rights (e.g. equality bodies) the power to request and access any documentation created or maintained under the AI Act from providers or deployers, as long as they inform the market surveillance authority of such requests.
The Omnibus reduces this power. It removes the ability of national fundamental rights bodies to go directly to the provider; instead, requests for documentation must be made via the market surveillance authorities. This creates a bottleneck that risks delays or refusals. This is a particular concern because it is unclear what level of resourcing or independence we can expect from the market surveillance authorities.
Q: What other key changes does the Omnibus propose to the GDPR that impact responsible AI governance?
Daniel: Gloria González Fuster aptly describes how the Digital Omnibus proposes many small technical changes that cumulatively undermine the core of the GDPR – namely its rights-based approach. Currently, data subjects can exercise their rights even if the processing looks trivial or obviously low risk.
The Omnibus pushes in the opposite direction. Overall, it gives controllers and processors more discretion to decide whether rules apply, whether they must follow stricter measures, or whether they can partially exempt themselves. For example, it proposes a subjective, controller-centred definition of personal data, opening the door to arguments that the GDPR does not apply in certain situations. This increases discretion for regulated entities, undermines legal certainty and makes it harder for data subjects and regulators to enforce data subject rights.
The Commission’s narrative is that these are minor adjustments that will not impact rights. This isn’t credible. When you weaken safeguards and expand discretion in laws that govern virtually all digital processing, you inevitably change how individuals can exercise their rights and how regulators can intervene. From an AI governance perspective, these changes to the GDPR are particularly concerning because it remains the main legal instrument governing data-intensive AI systems. The AI Act is not sufficient to regulate AI systems if the GDPR is hollowed out.
This fits into a broader push to turn the GDPR into a fully risk-based regulation. Whilst it does not go that far, it is a move in that direction. We know that the Commission’s Digital Fitness Check is forthcoming, and there is a risk that the GDPR will be properly reopened.
Overall, the Omnibus package removes safeguards and introduces opportunities for discretion that will allow AI providers to sidestep obligations and harm people’s rights. This is going to make enforcement of the digital acquis extremely difficult and will not improve innovation.