Bram Vranken is a researcher and campaigner at Corporate Europe Observatory (CEO), a Brussels-based watchdog that monitors corporate lobbying and its influence on European policymaking. We spoke about CEO’s work on the AI Act and its recent complaint to the European Ombudsman about the AI Act’s standardisation process.
What is Corporate Europe Observatory, and can you introduce your work?
Bram: Corporate Europe Observatory (CEO) is a watchdog that researches corporate influence on EU policymaking. We focus on sectors such as fossil fuels, agribusiness, and Big Tech. My own work centres on technology and digital policy, tracking how major tech firms seek to shape regulation to suit their commercial interests.
We conduct investigative research to map the lobbying ‘firepower’ of Big Tech, analysing lobbying registrations, corporate spending, and meeting disclosures. This enables us to understand how much companies spend, how many lobbyists they employ and the tactics they use. Big Tech has also built an extensive ecosystem of organisations such as think tanks and trade associations that amplify its messaging. We trace those connections and show how they skew the public debate and policymaking.
Can you introduce your work on the European AI Act?
Bram: We began working on the AI Act in 2023. At that time, the European Commission’s proposal was under negotiation and OpenAI had just launched ChatGPT. This transformed the debate, sparking new questions about what constituted ‘general-purpose’ AI and how it should be regulated, since it was not covered in the draft law. From the outset, we observed strong pushback, mostly from Big Tech companies, against regulating general-purpose AI systems. Companies insisted that policymakers should regulate only how AI is used, not how it is developed.
We noticed that the lobbying around the AI Act was incredibly intense, involving a wide range of industry actors, from established technology giants to emerging AI startups such as Mistral AI and Aleph Alpha. This was surprising: The AI Act is a piece of product safety regulation that was based on the advice of a corporate-dominated expert group and that relies on industry-friendly harmonised standards. It is intended to support industry and facilitate the uptake of trustworthy AI.
What narratives and tactics have industry used in relation to the AI Act?
Bram: Across industry, the same narrative reappears: Regulation is bad for innovation. We’ve seen this play into geopolitical rhetoric too, with industry suggesting that Europe’s regulatory approach will cause it to lose the ‘AI race’ to the United States or China. These narratives reframe the AI Act from a rights-protecting instrument into an economic threat.
In addition, companies deploy highly technocratic arguments. For example, they make claims about technical feasibility or model behaviour. It is difficult for policymakers or civil society to verify these claims because so much expertise and data is concentrated within industry.
Industry’s lobbying power is enormous. Our research has found that technology companies are now the EU’s top corporate spenders. For example, Meta spends around €10 million annually on EU-level lobbying, in addition to major efforts in member states such as France and Germany. Currently, there are more Big Tech lobbyists than MEPs. At CEO, we mapped this “lobby firepower” by tracing meetings between officials and industry representatives obtained through freedom-of-information requests. We also monitored think tanks and consultancies whose research is funded, directly or indirectly, by these companies.
We’ve published several reports throughout the process. Our first report, published in February 2023, mapped the tactics Big Tech were using to influence the AI Act. During the trilogue negotiations, our follow-up research revealed a ramping up of lobbying: Of 97 meetings held by senior Commission officials on AI, 84 were with industry and trade associations, compared with 12 with civil society and 1 with academia. It also mapped how European AI startups influenced national governments. Across this period, we documented a range of corporate tactics: private meetings with the Commission, funding of academics and think tanks, cooperation with lobby groups and the US government, and coordinated public campaigns.
This is ongoing with the implementation of the AI Act. During the drafting of the voluntary Code of Practice for General-Purpose AI, our research revealed that industry enjoyed privileged access to the drafting process. Meanwhile, many companies have shifted from nominally supporting the Act to calling for a pause or delay. These narratives continue to gain momentum given the Commission’s prioritisation of deregulation.
Can you tell us about your complaint to the European Ombudsman concerning the AI Act standardisation process?
Bram: The providers of high-risk AI systems, such as those used for credit assessment or recruitment, must comply with detailed requirements. To assist compliance, the AI Act relies on harmonised standards. If a provider complies with the standards, they are presumed to be in conformity with the AI Act’s requirements.
The European Commission requested CEN (the European Committee for Standardisation) and CENELEC (the European Committee for Electrotechnical Standardisation) to draft these standards. CEN/CENELEC set up Joint Technical Committee (JTC) 21 to coordinate the several expert Working Groups that are developing the AI Act standards.
Standardisation bodies such as CEN/CENELEC are private, industry-led organisations which traditionally specify highly technical product requirements, such as acceptable thresholds for dangerous chemicals in toys. As a result, expert membership has historically been dominated by industry-affiliated individuals. Unlike past product harmonisation requests, AI systems touch on a range of fundamental rights considerations which these bodies do not have the experience or expertise to address. This makes the work unprecedented and problematic.
We conducted research to identify the individuals who were drafting the AI Act standards and their affiliations. It proved extremely difficult. As a private body, CEN/CENELEC maintains that the membership of JTC-21 is confidential and requires Working Group members to sign non-disclosure agreements about meetings. To pierce the opacity, we turned to LinkedIn and identified around 150 Working Group members. More than half of the identified participants represented industry, whilst many others were consultants with unclear affiliations. CEN/CENELEC has since involved several civil society and academic participants to address concerns about representation, but they face an uphill battle: Standardisation is dominated by corporate veterans, whilst civil society and academia lack the institutional knowledge and resources to navigate the process.
Alongside this research we filed freedom of information requests with the Commission asking for the list of experts and meeting minutes. These were refused on the grounds that the Commission “did not hold” the information, despite the fact it mandated the process. We also tried to raise concerns directly with the Commission’s AI Office which referred us to DG GROW who oversee standardisation requests. Despite repeated follow-ups, we received no response. At this point, we commissioned a legal opinion from AWO about the opacity.
AWO looked into the legal framework governing EU harmonised standards, alongside its interpretation by the Courts and academic analysis. The sources all emphasised the importance of supervising the standardisation process and ensuring compliance with the rule of law to safeguard the constitutional principles fundamental to democratic governance. AWO concluded that there is strong normative and legal grounding to argue that the process of creating harmonised standards must comply with the transparency and participatory requirements embedded in the rule of law. As such, not only the standards but also the processes that generate them may be subject to legal scrutiny and, potentially, challenge where these principles are not observed.
At that point, we filed a complaint to the European Ombudsman concerning the lack of transparency and the lack of balanced stakeholder participation. The Ombudsman has opened a formal inquiry, requesting documents and planning an on-site inspection at the Commission. Based on the investigation, the Ombudsman will issue preliminary findings and recommendations to which the Commission must respond.
We hope the investigation will affirm that the Commission retains responsibility for ensuring transparency and balanced representation, even when delegating tasks to private bodies. At minimum, there should be public disclosure of experts’ identities, meeting minutes, and conflict-of-interest policies, alongside a clear guarantee that stakeholder representation will be balanced to include civil society. Standardisation on such sensitive matters must not happen behind closed doors.
Meanwhile, the standardisation process itself has run into serious trouble. Under pressure to meet tight implementation deadlines, CEN/CENELEC recently suspended their consensus-based Working Group model and concentrated decision-making in small Drafting Groups dominated by long-standing experts, often industry participants. Civil society and academic experts have been sidelined, prompting several senior members of JTC-21 to protest publicly.
The Commission’s experiment in outsourcing fundamental rights governance to industrial standardisation bodies is now blowing up. The Commission’s implementation of the AI Act depends on a process it does not control. This has produced delay, opacity, and a structural bias towards weaker safeguards, with industry pushing for the weaker international ISO standards to be adopted through the CEN/CENELEC process. As the deadline looms, contentious issues, especially those touching on fundamental rights, are being dropped simply to get the standards finished. Meanwhile, industry is using the delay, for which it has in some cases been responsible, as a reason to call for pausing the implementation of the AI Act.
Looking ahead, what interventions are needed to promote the public interest?
Bram: We will continue to follow the Ombudsman process and monitor how the standards are finalised. But new challenges are emerging: The Commission’s deregulation agenda is now also affecting digital legislation. On 19 November, the Commission proposed a drastic roll-back of digital rights through its so-called digital omnibus. The AI Act, the GDPR, and ePrivacy are all affected, including a delay to the implementation of the AI Act and a serious weakening of data protection in the GDPR.
Big Tech firms have used the Trump administration to put pressure on the EU to weaken its digital rulebook, but equally worrying is that the EU’s deregulation agenda has opened the door to corporate interests. For example, the Commission invited industry to closed-door workshops to ‘reality-check’ legislation related to AI, effectively asking which rules they find burdensome. Several civil society organisations requested to participate but were not invited. Fortunately, there is also huge push-back from civil society, political groups in the European Parliament, and experts.
Looking ahead, our priority is defending these digital rights from being watered down and reforming the standardisation system. That means stronger transparency requirements, real conflict-of-interest rules, and genuine participation for civil society. Without these safeguards, decisions about fundamental rights will remain in private hands beyond democratic accountability.