Algorithm Governance Roundup #24
|
Community Spotlight: Ema Prović, European AI Office | GPAI Code of Practice, guidelines and training data transparency template
|
Welcome to AWO’s Algorithm Governance Roundup! It’s been a busy summer: the EU’s regulation of General-Purpose AI has taken shape with the final Code of Practice, new guidelines and a transparency template for training data. Regulatory investigations continue apace, with a focus on harms to children in both the UK and the US, where several actions have been taken against chatbot providers. Meanwhile, there are opportunities to contribute to the review (and simplification) of the EU’s digital frameworks and the implementation of the UK’s OSA obligations, alongside some great upcoming events.
This month, our community spotlight is Ema Prović, who led the development of the GPAI Code of Practice for the European AI Office. We spoke about how the Code supports implementation and enforcement, its place in the global AI governance landscape and her experience leading the process.
As a reminder, we take submissions: we are a small team who select content from public sources. If you would like to share content, please reply or send a new email to algorithm.newsletter@awo.agency. Our only criterion for submission is that the update relates to algorithm governance, with emphasis on the second word: governance.
We would love to hear from you!
Many thanks and happy reading!
Esme Harrington
|
In the EU, the European AI Office has published the final Code of Practice for General-Purpose AI (GPAI). The voluntary Code helps industry signatories to comply with the AI Act's obligations on safety, transparency and copyright. Twenty-seven GPAI providers have signed the Code, including Amazon, Anthropic, Google, Mistral AI and OpenAI, whilst Meta declined. The European Commission and the AI Board (composed of Member States) confirmed that the Code is an adequate voluntary tool. In this month’s community spotlight, I spoke to Ema Prović, who led the development of the Code for the AI Office.
The AI Office has also issued guidelines for GPAI providers, setting out clear definitions of ‘GPAI model’, ‘provider’ and ‘placing on the market’, and exemptions for transparent models released under free and open-source licences.
In addition, the AI Office published a training data template for GPAI providers to help them publicly summarise the content used to train their models. It provides a uniform format to list the main datasets and sources, which aims to empower copyright holders and data subjects to exercise their rights under EU law.
The European Commission (EC) announced its preliminary finding that Temu breached the DSA by disseminating illegal products. The EC conducted a mystery shopper exercise revealing widespread availability of non-compliant products, including toys and small electronics. The EC found Temu’s DSA risk assessment was inaccurate, relying on general industry information rather than marketplace-specific data. This may have led Temu to put inadequate mitigation measures in place.
Nine civil society organisations filed a formal complaint against X for alleged DSA violations. The complaint filed by EDRi, AI Forensics, Centre for Democracy and Technology Europe, Entropy, Gesellschaft für Freiheitsrechte e.V. (GFF), Global Witness, Panoptykon Foundation, Bits of Freedom, and VoxPublic alleges that X allowed targeted advertising based on sensitive personal data, including political opinion, sexual orientation and health conditions. This is based on an AI Forensics investigation of X's ad repository.
The Court of Justice of the European Union has dismissed Zalando's appeal against its designation as a Very Large Online Platform (VLOP) under the DSA.
In France and Germany, the Franco-German Council of Ministers announced a forthcoming digital sovereignty summit. The governments reaffirmed their cooperation on AI, cloud sovereignty and digital public infrastructure, and will submit a joint proposal for the EC’s first review of the Digital Markets Act, focused on simplifying rules relating to AI.
In Germany, the Bundesnetzagentur launched an AI Service Desk to help businesses comply with the EU’s AI Act. It includes an interactive compliance tool to help businesses understand whether their system is subject to regulation, how the system is risk-classified and whether transparency obligations apply. The service will also provide information about free employee training.
In Ireland, the High Court has dismissed X's challenge to Ireland’s Online Safety Code. The Court found that Coimisiún na Meán’s approach complemented the DSA and fell within the scope of the Audiovisual Media Services Directive. The Code sets out detailed rules on content moderation, age assurance and commercial communications. The age assurance measures came into effect on 21 July.
In the UK, Ofcom has opened dozens of investigations under the Online Safety Act (OSA). Currently, the enforcement programmes are focused on 1) CSAM, with investigations opened into several file-sharing services; 2) pornographic content; 3) illegal content risk assessments, with requests for over 60 risk assessments from a range of services and an investigation launched into 4chan; and most recently, 4) protection of children from harmful content, with a focus on risk assessments and the use of age assurance measures.
Ofcom has also published its report on researcher access to data from online services. It presents three policy options, including creating an independent intermediary to enable or manage access. The report describes three types of intermediary that could be considered: 1) a direct access intermediary, which facilitates secure access via an interface but does not host or provide data directly; 2) a notice to service intermediary, which would review accreditation and requests to access specific datasets; or 3) a repository intermediary, which directly facilitates access by providing an interface and hosting the data itself.
The Department for Science, Innovation and Technology (DSIT) has published a roadmap on third-party AI assurance. It sets out proposals to encourage the market, including a multi-stakeholder consortium, workforce skills initiatives and improving auditor access to information.
The High Court has rejected Wikipedia's challenge to the OSA secondary legislation setting the thresholds for ‘Category 1’ online services, which must follow stricter rules on transparency and mitigation measures.
The Digital Regulation Cooperation Forum (DRCF) received additional funding to develop a “one-stop-shop” digital library for innovators. It aims to provide a unified regulatory resource for organisations across DRCF members’ remits, namely Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office and the Financial Conduct Authority.
The FTC opened an inquiry into AI chatbots, requesting information from Google, OpenAI, Meta, xAI, CharacterAI, Snap, and Instagram on how they measure and assess potentially negative impacts of their chatbots on children and teens.
Previously, forty-four Attorneys General sent a letter to twelve AI companies urging them to prioritise AI safety after reports of sexually inappropriate chatbot interactions with children. The Attorneys General of California and Delaware later sent a letter to OpenAI expressing their concern after its products were implicated in a recent murder-suicide and in the suicide of a minor, Adam Raine, whose family have filed a lawsuit against the company.
The U.S. House Judiciary Committee held a hearing on “Europe’s Threat to American Speech and Innovation”, alleging that the EU’s DSA and the UK’s OSA censor US-based social media platforms and users. Prior to the hearing, thirty legal and technology scholars sent a letter to the Committee explaining that the DSA is “content agnostic” and designed to strengthen user rights. Hearing witnesses included Nigel Farage (Reform UK MP), whilst David Kaye (former UN Special Rapporteur on freedom of expression) was the sole witness defending EU law.
A U.S. District Judge ruled on remedies in the Department of Justice case against Google’s search monopoly. Whilst rejecting forced divestiture of Chrome or Android, the Court prohibited Google from entering exclusive contracts and required it to share certain data with rivals. However, Google was allowed to continue non-exclusive deals for distribution of its search and generative AI products, with the Judge noting rising competition from AI search engines and chatbots.
Anthropic reached a class settlement in a copyright lawsuit brought by authors. The company agreed to pay compensation of around $3,000 per book for the roughly 500,000 books it was accused of pirating, and to destroy the downloaded copies, including its LibGen and PiLiMi datasets.
Meanwhile, Warner Bros Discovery has filed a lawsuit against Midjourney for copyright infringement. The lawsuit alleges that Midjourney was trained using illegal copies of copyrighted works and generates infringing outputs.
|
Community Spotlight: Ema Prović, European AI Office
|
Ema Prović is a policy officer at the European AI Office, where she led the development of the Code of Practice on General Purpose AI (GPAI). We spoke about how the Code supports implementation and enforcement of the AI Act, how it fits into the global AI governance landscape and her experience leading the process.
What is the European AI Office, and what was your role in developing the Code of Practice?
Ema: The European AI Office was established in February 2024 within the European Commission’s DG Connect to support the growth of a European AI ecosystem rooted in innovation and trust. Our role is to support stakeholders across the EU and collaborate internationally to ensure the development and deployment of trustworthy AI. The Office has grown rapidly from just a handful of staff to more than 120 experts working across six specialist units: AI and robotics (A1), regulation and compliance (A2), technical research and safety (A3), innovation and policy coordination (A4), AI for societal good (A5), and health and AI (A6).
I work in Unit A2, which coordinates regulation and compliance across the AI Act to ensure it is applied coherently across the Member States. This includes the implementation and enforcement of the AI Act’s obligations on general-purpose AI, high-risk systems and prohibited practices. My particular focus is on general-purpose AI, both with and without systemic risk. I work closely with colleagues in Unit A3, who supervise and enforce the rules for GPAI models with systemic risk.
Prior to joining the AI Office, I was a founding member of the UK’s AI Security Institute and, before that, I worked in the UK’s Office for AI, implementing the government’s National AI Strategy Action Plan.
Why was the Code developed, and how does it relate to the AI Act?
Ema: The AI Act foresaw the need to regulate the most capable and risky AI models with broad use cases, called general-purpose AI (GPAI). These models underpin a wide range of applications, from consumer-facing chatbots to critical infrastructure, and their scale and capability mean that risks can propagate across sectors and geographies.
The obligations on GPAI were introduced towards the end of the negotiations on the AI Act, which establishes high-level obligations for providers, such as maintaining up-to-date technical documentation and assessing and mitigating risks. However, the AI Act avoided over-specification to keep the legislation future-proof: detailed technical requirements risked quickly becoming obsolete as the technology evolved.
The Code of Practice was created to elaborate on these obligations. It translates the AI Act’s broad obligations into specific measures and processes that providers of GPAI with and without systemic risk can adopt. Signing is voluntary, but adherence serves as an authoritative means of demonstrating compliance with the AI Act.
What are the key requirements detailed in the Code of Practice?
Ema: The Code does not create new legal obligations. Instead, it operationalises and details the obligations already present in the AI Act. It provides structured guidance, templates, and examples across three major topics:
Section 1: Transparency
All providers, except those releasing models under approved open-source licences, should publish core technical information about their models. The Code includes a standardised model documentation form, which improves consistency and comparability across providers. Transparency supports accountability and enables regulators, researchers, and downstream developers to understand how models are built, trained, and deployed.
Section 2: Copyright
All providers should also document a copyright policy and adopt strategies to mitigate copyright infringement risks in training and model outputs, including measures to comply with rights reservations and a complaints process. Importantly, the Code does not create new copyright law or alter existing EU copyright frameworks. Instead, it offers practical mechanisms to meet the AI Act’s copyright-related provisions for GPAI models.
Section 3: Safety and Security
The safety and security section is the most detailed part of the Code and applies to the most capable GPAI models, referred to as GPAI with systemic risk. It aims to capture frontier models, based on a training compute threshold of 10²⁵ floating-point operations (FLOP).
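For a sense of scale, training compute for dense transformer models is often approximated as FLOP ≈ 6 × parameters × training tokens. Below is a minimal sketch of that threshold check with hypothetical figures; the 6ND approximation is a rule of thumb from the scaling literature, not the AI Act’s official estimation methodology.

```python
# Illustrative only: rough training-compute estimate for a dense
# transformer, using the common approximation FLOP ~ 6 * N * D
# (N = parameter count, D = training tokens). All figures are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # AI Act presumption threshold for systemic risk

def estimate_training_flop(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6.0 * parameters * training_tokens

if __name__ == "__main__":
    # Hypothetical model: 400 billion parameters trained on 10 trillion tokens.
    flop = estimate_training_flop(parameters=4e11, training_tokens=1e13)
    print(f"Estimated training compute: {flop:.2e} FLOP")  # ~2.4e25
    print(f"Presumed GPAI with systemic risk: {flop >= SYSTEMIC_RISK_THRESHOLD_FLOP}")
```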
Signatories should establish and maintain a documented risk management policy, called the Safety and Security Framework, covering the full lifecycle of the model from development through deployment and post-market monitoring, and share it with the AI Office. This section rests on three pillars:
Risk assessment: Signatories should conduct a full systemic risk assessment and mitigation process at designated milestones, particularly before market placement and before material changes, using recognised techniques such as red-teaming, capability evaluations and safety margin calculations, both internally and with the involvement of independent evaluators. The Code sets out specific categories of systemic risk (such as loss of control, cybersecurity threats, CBRN risks, harmful manipulation and societal harms) and acceptable risk thresholds. These requirements build on policies many leading companies already follow but place them within an enforceable framework. The Code also encourages providers to advance the state of the art in risk assessment, ensuring that safety practices keep pace with technical developments.
Technical risk mitigation: Signatories must implement appropriate safeguards to reduce risks to acceptable levels. Where necessary, they must pause development, delay deployment, or even withdraw a model from the market entirely.
Governance and reporting: Signatories must establish internal governance structures to oversee safety and compliance, including clear lines of accountability and procedures for serious incident reporting. Early drafts of the Code included whistleblowing provisions, but these were removed in the final version in favour of referencing the EU Whistleblower Directive, which already provides legal protections.
How was the Code developed, and what did you learn through the process?
Ema: The Code was developed through a multi-stakeholder process, beginning in September 2024. The AI Office appointed independent expert Chairs and Vice-Chairs to draft the Code, leading five working groups. The AI Office facilitated the process and conducted a final adequacy assessment, but did not contribute to the content of the Code. Around 1,400 stakeholders from academia, civil society, industry and Member States participated through public consultations, working group meetings and workshops. Participants could join any of the working groups, engage in discussions, and provide written feedback on three successive drafts.
I led the process, working with an external contractor, the Chairs and colleagues to design the methodology and organise stakeholder engagement to ensure meaningful feedback. I also oversaw the final steps, including securing Member States’ approval for the Code to be adopted as a Commission document via an adequacy assessment.
I learnt several lessons from leading the process. First, meaningful multi-stakeholder engagement is possible, but it requires deliberate design and flexibility. Second, consensus-building takes time, and future processes should avoid overly compressed timelines to allow for deeper technical and policy work. We were working with a very challenging timeline, and in the end, we decided to delay publication by a month so the Chairs could draft the best version possible. Finally, transparency and independence are critical to legitimacy. By keeping the drafting process independent and the AI Office separate from content decisions, the final Code earned broad trust. Almost all GPAI providers have signed up, which is a testament to the final Code.
How does the Code fit within global AI governance efforts?
Ema: The Code aligns with and builds on a wave of international voluntary frameworks that have emerged in recent years. These include the G7 Hiroshima Process, the Seoul, Paris and UK AI Summit commitments, and the US NIST AI Risk Management Framework. Across these initiatives, there is broad consensus on the need for structured risk management, model evaluations, cybersecurity provisions, external auditing, and information-sharing with governments.
The Code aligns with these approaches whilst adding specificity and enforcement. For example, it defines risk categories like loss of control, CBRN threats, harmful manipulation, and cyber risks, sets thresholds and provides concrete examples of acceptable practices. It also emphasises upholding fundamental human rights. Unlike many voluntary frameworks, the Code is backed by legislation and enforcement powers.
What are the next steps regarding enforcement of the Code?
Ema: The publication of the Code marked the start of its operational phase. Since 2 August 2025, GPAI providers have been required to 1) publish summaries of training data when releasing new models (i.e. those involving large pre-training runs), 2) submit model documentation to the AI Office upon request, and 3) notify the AI Office (Unit A3) within two weeks of identifying that they are training a GPAI model with systemic risk. We have been building up our technical capacity to enforce; for example, we have built a secure information-sharing platform to enable providers to share technical documents.
In 2026, the AI Office will gain its full enforcement powers in relation to GPAI. These include the ability to request detailed technical information from providers and require risk assessment and mitigation measures to be put in place, as laid out in the AI Act. In terms of penalties, the AI Office can issue fines of up to 3% of global annual turnover for non-compliance and even order the withdrawal of a model from the market in extreme cases.
Of course, the Code does not put in place any additional legal obligations, and any new measures are recommendations. However, compliance with the Code is a way of demonstrating compliance with the AI Act itself. Providers that choose not to sign must still meet the Act’s obligations, but without the legal certainty the Code provides.
The Code itself may be reviewed every two years, but it includes dynamic references requiring providers to follow the ‘state of the art’, enabling the Code to remain relevant and future-proof without the need for updating.
|
A Different Approach to AI Safety: Proceedings from Columbia Convening on AI Openness and Safety, Camille Francois, Ludovic Péran, Ayah Bdeir, Nouha Dziri, Will Hawkins, Yacine Jernite, Sayash Kapoor, Juliet Shen, Heidy Khlaaf, Kevin Klyman, Nik Marda, Marie Pellat, Deb Raji, Divya Siddarth, Aviya Skowron, Joseph Spisak, Madhulika Srikumar, Victor Storchan, Audrey Tang and Jen Weedon
ACM FAccT Best Paper Awards:
The Levers of Political Persuasion with Conversational AI, Kobi Hackenberg, Ben M. Tappin, Luke Hewitt, Ed Saunders, Sid Black, Hause Lin, Catherine Fist, Helen Margetts, David G. Rand and Christopher Summerfield, AISI, University of Oxford, LSE, Stanford University and MIT
|
The European Commission has launched two consultations: the European AI Office published a consultation and a call for expression of interest for stakeholders to provide input to guidelines and a voluntary Code of Practice on transparent generative AI systems. This supports Article 50 of the AI Act, which requires providers to enable end-users to identify AI-generated or manipulated content. The consultation deadline is 02 October.
KU Leuven is seeking blogpost contributions for its symposium on AI and Democracy. Suggested topics include AI’s impact on democratic processes; misinformation, disinformation and AI in content moderation; and freedom of expression. The submission deadline is 10 October.
Tech Policy Press is seeking applications for its Fellowship Program. The year-long, part-time fellowship supports journalists, researchers and public policy professionals to pursue independent reporting and analysis on technology and democracy. Fellows receive a $10,000 stipend to participate in monthly sessions, contribute reporting and receive ongoing editorial guidance. The application deadline is 15 October.
Ofcom has launched several consultations to support its implementation of the OSA:
Additional safety measures for the Children’s Code of Practice: proposed measures include 1) stopping illegal content from going viral by improving recommender systems and crisis response protocols; 2) expanding proactive technologies to detect illegal content, such as hash matching; and 3) restricting livestream interactions for children. The deadline is 20 October.
Super-complaints: this enables expert organisations representing users or the public to share evidence of significant harms with Ofcom. The draft version includes guidance on eligibility and procedures for making a super-complaint. The deadline is 03 November.
Media literacy: this explores how online platforms, broadcasters and streaming services could empower the public to critically engage with content. The deadline is 03 November.
|
In person: 16–17 February, Amsterdam. The DSA Observatory is inviting paper abstracts for its conference on the DSA, including papers on its systemic risks, transparency architecture or enforcement. The abstract submission deadline is 30 September.
In person: 6–7 December, Copenhagen. This EurIPS workshop convenes technical and governance communities to discuss how private governance mechanisms may work alongside government regulation to promote the responsible development and deployment of AI. The paper submission deadline is 17 October.
In person: 12 November, Seville. The European Centre for Algorithmic Transparency is hosting a workshop for researchers focused on online platforms’ systemic risks to the mental and physical health of minors. ECAT provides technical and scientific expertise for the enforcement of the DSA. Researchers must apply to attend.
|
Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!
If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm-newsletter@awo.agency. We would love to hear from you!
|
You are receiving Algorithm Governance Roundup as you have signed up for AWO’s newsletter mailing list. Your email and personal information is processed based on your consent and in accordance with AWO’s Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency. A W O Wessex House Teign Road Newton Abbot TQ12 4AA United Kingdom