Algorithm Governance Roundup #25
|
Community Spotlight: Dave Buckley, OpenMined | Ombudsman investigation into AI Act standards
|
We don’t do any behavioural tracking on our newsletter, so we’ve created a 3-minute survey to understand your reading habits. The privacy notice is here.
This month, I spoke to Dave Buckley at OpenMined, a non-profit foundation developing open-source software which helps grant structured and secure access to proprietary systems. We spoke about OpenMined’s work with the Christchurch Call and the UK AI Security Institute to conduct privacy-preserving independent audits of platforms and frontier AI systems.
As a reminder, we take submissions: we are a small team who select content from public sources. If you would like to share content please reply or send a new email to algorithm.newsletter@awo.agency. The only criterion for submission is that the update relates to algorithm governance, with emphasis on the second word: governance.
I would love to hear from you!
Many thanks and happy reading! Esme Harrington
|
In Austria, the privacy group noyb has filed a criminal complaint against Clearview AI for systematic violations of the GDPR. Several EU data protection authorities have already imposed fines or bans on the facial recognition company.
In the EU, regarding the AI Act, the European Commission (EC) has launched the AI Act Single Information Platform, a central platform designed to help stakeholders navigate the AI Act’s requirements. It includes an online Compliance Checker to help stakeholders determine whether they are subject to legal obligations and an AI Act Service Desk to submit questions to the AI Office.
The EC has also opened a consultation on draft guidance and a reporting template for serious AI incidents. Under the AI Act, providers of high-risk AI systems are required to report serious incidents to national authorities in order to detect risks early, strengthen accountability and enable prompt action. The guidance clarifies key definitions and offers practical examples to help providers prepare for compliance. The consultation is open until 07 November.
The EC has launched its Apply AI Strategy to increase AI adoption across ten key industrial and public sectors, particularly within SMEs. It also seeks to strengthen the EU’s technological sovereignty by addressing barriers to AI development and deployment. The accompanying Apply AI Alliance serves as a coordination forum bringing together AI providers, industry, public sector, academia and CSOs. In addition, an AI Observatory will monitor AI trends and assess sector-specific impacts of AI. This complements the EC’s ongoing AI Continent Action Plan.
The European Ombudsman has opened an investigation into the EC and harmonised standards under the AI Act. It will examine how the EC ensures transparency, inclusiveness and accountability in the standardisation process conducted by CEN/CENELEC. This follows a complaint by Corporate Europe Observatory, which flagged the lack of public information available about participants and the absence of meeting minutes. The complaint also claimed that the EC failed to guarantee a balanced representation of interests. The Ombudsman has requested relevant information and documents from the EC.
On the DSA, the EC has preliminarily found that Meta and TikTok are in breach of the DSA. In particular, the EC found both platforms failed to grant researchers adequate access to public data under Article 40(12). It also determined that Meta had breached its obligation to provide Instagram and Facebook users with simple mechanisms to report illegal content and appeal content moderation decisions. The platforms now have the opportunity to respond to the EC’s preliminary findings and implement corrective measures.
The European Parliament’s Internal Market and Consumer Protection Committee has adopted a report recommending an EU minimum age of 16 for access to social media and AI companions without parental consent. It also calls for the EC to take stronger enforcement action under the DSA and to consider new restrictions on loot boxes, engagement-based recommendation systems, addictive design features, and AI-powered nudity apps.
On the GDPR, the European Data Protection Supervisor has published updated guidelines on the use of generative AI and the processing of personal data by EU institutions, bodies, offices, and agencies.
In Italy, the national AI Law has entered into force. It complements the EU AI Act by designating the Agency for Digital Italy (AgID) as the notifying authority and the National Cybersecurity Agency (ACN) as the market surveillance authority. The law also requires parental consent for minors under 14 to access AI systems and requires employers to inform and train workers on the use of AI tools in the workplace.
In Japan, the AI Safety Institute has released an open-source AI Safety Evaluation Environment. The tool and accompanying dataset are designed to support AI safety assessments and include an automated red-teaming feature that integrates domain-specific requirements by generating adversarial prompts from input documents.
In the Netherlands, the Amsterdam District Court has found that Meta breached the DSA prohibition on dark patterns by preventing users from setting a chronological feed by default. The judge found that Meta’s design – which reverts users to a personalised feed upon reopening the app – constitutes a prohibited dark pattern because it restricts the ability of users to make autonomous choices about how they consume information. The court ordered Meta to preserve users’ selection of a chronological feed by default. This followed a complaint brought by Bits of Freedom.
In the UK, the Department for Science, Innovation and Technology has opened a call for evidence on its AI Growth Lab, a cross-sector AI sandbox designed to test AI products in real-world conditions with certain regulatory requirements temporarily relaxed. Initially, the sandbox will focus on products in the healthcare, professional services, transport and manufacturing robotics sectors. The call for evidence closes on 02 January.
Ofcom has launched a consultation on draft guidance for ‘super-complaints’ under the Online Safety Act. Super-complaints allow expert organisations representing users or the public to submit evidence of significant harms arising from online services. The consultation deadline is 03 November.
The UK’s Upper Tribunal has ruled that Clearview AI is subject to the GDPR. The Upper Tribunal accepted that the activities of a foreign company, even where its services are used exclusively by state authorities for national security or law enforcement purposes, are not excluded from the material scope of the GDPR by Article 2(2)(a). AWO acted for Privacy International in its third-party intervention in the appeal, which was accepted by the court.
The Competition and Markets Authority (CMA) has confirmed that Google holds ‘strategic market status’ (SMS) in search and search advertising services under the Digital Markets, Competition and Consumer Act. Following extensive stakeholder consultation, the CMA found that Google has substantial and entrenched market power. While Google’s Gemini AI assistant is not included in the designation, its other AI-based search features – such as AI Overviews and AI Mode – fall within scope. The SMS designation enables the CMA to introduce targeted interventions to ensure effective competition.
In the U.S., the Governor of California has signed several AI-related bills into law. In particular, AB 853, the California AI Transparency Act, requires large online platforms to provide users with clear, prominent labelling of AI-generated content. It also requires camera and phone manufacturers to allow users to embed provenance data in authentic images, video and audio they capture. Meanwhile, SB 243 requires companion chatbot platforms to implement processes to identify and address users’ suicidal ideation or expressions of self-harm. Platforms must also clearly disclose that all interactions are artificially generated. The law also mandates additional safeguards for minors, such as reminders to take breaks and tools to prevent exposure to sexually explicit images.
This follows a U.S. Senate Judiciary Subcommittee on Crime and Counterterrorism hearing on the “Harm of AI Chatbots”. This featured testimony from parents of minors who died by suicide or harmed themselves after interactions with Character.AI and OpenAI chatbots. Since then, OpenAI announced the formation of an independent Expert Advisory Council on Well-Being and AI, which will advise the company on defining and promoting healthy AI interactions across age groups. The Council includes several experts focusing specifically on young people’s well-being.
The New York State Attorney General has announced that social media companies must begin to submit content moderation reports under the Stop Hiding Hate Act. The law requires platforms to submit biannual reports detailing how they address hate speech, racism, misinformation and other harmful types of content. Reports must include the number of posts flagged as potential violations, number of actions taken, and the specific enforcement measures such as removal, demonetisation or deprioritisation.
The Center for AI Standards and Innovation (CAISI) at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) published an evaluation of DeepSeek’s AI models. The assessment found that DeepSeek’s models underperformed compared with U.S. models across 19 benchmarks related to performance, cost, security and adoption.
|
The EC’s Joint Research Centre has published six reports on technical and scientific challenges involved in assessing GPAI models.
How People Use ChatGPT, Aaron Chatterji, Tom Cunningham, David Deming, Zoë Hitzig, Christopher Ong, Carl Yan Shan and Kevin Wadman, OpenAI, Duke University and Harvard University
Safety Frameworks and Standards: A Comparative Analysis to Advance Risk Management of Frontier AI, Marta Ziosi, James Gealy, Miro Plueckebaum, Daniel Kossack, Simeon Campos, Lama Saouma, Uzma Chaudhry, Lisa Soder, Merlin Stein, Nicholas Caputo, Connor Dunlop, Jakob Mökander, Enrico Panai, Tom Lebrun, Charles Martinet, Ben Bucknall, Rebecca Weiss, Koen Holtman, Patricia Paskov, Saad Siddiqui, Fazl Barez, Ranj Zuhdi, Peter Slattery and Florian Ostmann, Oxford Martin School at University of Oxford
|
The Omidyar Network is inviting proposals for its Tech Journalism Fund. This provides grants of between $5,000 and $25,000 for journalists covering 1) legislation and policy proposals on AI and technology regulation; 2) investigative pieces on companies, organisations, individuals and ideas shaping AI; 3) the impact of AI on children, youth and families; or 4) how workers, unions and employers are contending with AI in the workplace. Applications are reviewed on a rolling basis.
The Stanford School of Humanities and Sciences is accepting applications for its Ethics and Technology Practitioner Fellowship. The programme will support 12 - 15 mid-career practitioners involved in technology development and deployment. Projects should foster new conversations that challenge conventional perspectives on issues in technology. The application deadline is 14 November.
|
Hybrid: 7 - 9 November, Barcelona. Mozilla Festival hosts workshops, debates and collaborative sessions to reimagine the role of technology in society.
Hybrid: 15 January and 17 - 18 February, New Delhi. The Participatory AI Research and Practice Symposium (PAIRS) is calling for abstract submissions for papers and presentations. The independent Symposium will be held alongside the India AI Impact Summit, and will focus on three guiding principles: People, Planet and Progress. It prioritises previously unpublished work and particularly encourages submissions from CSOs based in India, South Asia and the Global Majority. Travel and accommodation grants are available. The abstract deadline is 31 October.
In-person: 16 - 20 February, Bharat Mandapam, New Delhi. The India AI Impact Summit aims to position AI as a catalyst for inclusive human development, environmental sustainability and equitable progress worldwide. It builds on previous summits in the UK, Seoul and Paris, shifting focus from ‘action’ to the ‘impact’ of AI on humanity. The Summit is currently accepting proposals for main panel events, such as panel discussions, roundtables, workshops and academic presentations. Topics could include safe and trusted AI and the democratisation of AI resources. The proposal submission deadline is 15 November.
Hybrid: 5 - 8 May, Lusaka, Zambia. RightsCon convenes global stakeholders on the intersection of human rights and technology. Registration opens in November.
|
Community Spotlight: Dave Buckley, OpenMined
|
Dave Buckley works on the policy team at OpenMined, a non-profit foundation developing open-source software which helps grant structured and secure access to proprietary systems. We spoke about OpenMined’s work with the Christchurch Call and the UK AI Security Institute, which demonstrates that privacy-preserving, independent scrutiny of platforms and frontier AI systems is possible.
What is OpenMined and can you introduce your work?

Dave: OpenMined is a non-profit foundation that develops open-source technology to facilitate safe, secure and privacy-preserving use of sensitive or protected data. One of the key use cases of our technology is third-party auditing: enabling researchers and auditors to study sensitive data or proprietary systems without compromising user privacy, system security, or the intellectual property and trade secrets of organisations.

The core of OpenMined’s work is developing Syft, an open-source protocol that enables individuals and organisations to run federated, privacy-preserving computations across a data network. Syft provides a platform for integrating various privacy-enhancing technologies (PETs) throughout the data analysis pipeline. This allows external researchers to run computations on sensitive datasets without ever accessing the raw data, which remains on the data owner’s infrastructure. This addresses the ‘copy problem’ in audits: companies are often reluctant to share data because they could lose control over its use and redistribution, creating significant legal and reputational risks.
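To make the remote-execution pattern concrete, here is a minimal, self-contained Python sketch of the workflow Dave describes. It is illustrative only: the class and function names are hypothetical and this is not Syft’s actual API. The auditor submits query code, the data owner reviews and executes it on their own infrastructure, and only the aggregate result crosses the boundary.

```python
# Toy illustration of the remote-execution pattern: NOT Syft's real API.
# All names (DataOwner, review, execute, ...) are hypothetical.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DataOwner:
    """Holds the sensitive rows; raw data never leaves this object."""
    rows: list
    approved: list = field(default_factory=list)

    def review(self, query: Callable) -> None:
        # In the real workflow, a human reviews the submitted code here.
        self.approved.append(query)

    def execute(self, query: Callable) -> float:
        if query not in self.approved:
            raise PermissionError("query has not been approved by the data owner")
        # Only the aggregate result is returned, never the raw rows.
        return query(self.rows)

# The auditor writes a query against the known schema (developed on
# synthetic data, as in the pilot) and submits it for review.
def mean_watch_time(rows) -> float:
    return sum(r["watch_seconds"] for r in rows) / len(rows)

owner = DataOwner(rows=[{"watch_seconds": 120}, {"watch_seconds": 300}])
owner.review(mean_watch_time)           # data owner approves the code
print(owner.execute(mean_watch_time))   # auditor sees only: 210.0
```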
Syft is flexible software and thus has broad applications beyond AI auditing: from enabling research on sensitive personal information, such as healthcare data, to empowering publishers and creators to have full control over how their data can be utilised by AI systems. As a result, OpenMined has run projects tackling various use cases with partners including government agencies, online platforms, frontier AI providers, research institutes, and publishing houses. It also co-founded the UN PET Lab, which brings together national statistics offices to explore privacy-preserving approaches to collaboration on sensitive data in order to tackle global challenges such as climate change, international trade, and public health.
Can you tell us about OpenMined’s pilots with the Christchurch Call and the UK AI Security Institute?

Dave: The Christchurch Call is an initiative launched by New Zealand and France in response to the Christchurch terrorist attack. It brings together 56 governments, 19 online service providers, and 12 partner organisations. A strong civil society and academic network informs and guides this work, via the Christchurch Call Advisory Network. Together, these participants form the Christchurch Call Community. The Call aims to reduce the prevalence of terrorist and violent extremist content on online platforms. A central question for this work, and online safety more broadly, is understanding and measuring how platform design and recommender systems amplify harmful content. To meaningfully answer this question, independent researchers need adequate access to data. However, platforms have struggled to provide access, citing legal obligations to protect user privacy, system security and intellectual property.
In 2022, the Christchurch Call set up the Initiative on Algorithmic Outcomes to develop and test methods for privacy-preserving independent audits of online platforms. As part of this project, four independent researchers conducted a pilot audit of LinkedIn and Dailymotion’s recommender systems to test whether PETs could address access barriers. OpenMined was invited to develop the technical infrastructure and interface between the platforms and the auditors.
OpenMined worked with the researchers and social media companies to deploy PySyft (a Python implementation of the Syft protocol) to facilitate privacy-preserving audits by combining two PETs: remote execution and differential privacy. The auditors submitted their audit code to the auditee via the PySyft interface; the platforms then reviewed and executed it on their own servers and shared only the results of the queries, ensuring the underlying user data never left their infrastructure. This was integrated with an open-source differential privacy technique from the OpenDP framework, which adds controlled noise such that accurate aggregate results can be shared with mathematical guarantees that the researcher cannot reverse-engineer sensitive information about any individual in the underlying dataset. This allowed auditors to conduct aggregated analyses, such as measuring how recommender systems affect different demographic groups, whilst protecting individual privacy. As a preliminary step, auditors used synthetic datasets, mirroring the structure and statistical properties of the actual datasets, to develop and test their audit code before remote execution.
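For intuition on the differential privacy step, the sketch below shows the Laplace mechanism that underlies frameworks such as OpenDP: a count query changes by at most 1 when one person is added or removed, so noise drawn from a Laplace distribution with scale sensitivity/epsilon is added before the result is released. The data, query and epsilon value here are illustrative, not drawn from the pilot.

```python
# Laplace mechanism sketch: the building block behind differentially
# private counts. The data, query and epsilon are illustrative only.
import numpy as np

def dp_count(values: np.ndarray, predicate, epsilon: float) -> float:
    # A count changes by at most 1 when one person is added or removed,
    # so sensitivity = 1 and the Laplace noise scale is 1 / epsilon.
    true_count = int(np.sum(predicate(values)))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g. a hypothetical query: how many users received more than 10
# impressions of a given content category in a day.
rng = np.random.default_rng(seed=0)
impressions = rng.poisson(lam=8, size=10_000)
print(dp_count(impressions, lambda v: v > 10, epsilon=0.5))
```

Smaller epsilon values give stronger privacy but noisier answers, which is one reason Dave notes that applying differential privacy in practice required substantial iteration.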
The pilot provided proof of concept that it is technically feasible for external auditors to study social media platforms using PETs. It enabled auditors to analyse platform impression data that captured 1) user viewing habits and recommendation behaviours across groups, and 2) the impact of different algorithms on content recommendations. If deployed at scale, this approach has the potential to address the reproducibility crisis in social science by allowing multiple researchers to run replication studies more readily, as they do not need direct access to sensitive datasets. Compared to traditional ‘secure research rooms’ or trusted research environments, which often require a researcher to travel to a physical location to study sensitive data following a lengthy accreditation process, PETs-based approaches offer a more scalable method that enables auditors to work asynchronously across the globe.
However, the pilot also revealed challenges. First, implementing PETs can be technically complex and require substantial iteration, especially when applying differential privacy techniques. Since this pilot, OpenMined has been further developing our software to make deployment as simple as possible (see: syftbox.net), and organisations such as NIST and OpenDP have been maturing and simplifying implementations of differential privacy. Second, auditors often lack robust baseline data to assess the social impacts of platform design and recommender systems. During the pilot, the auditors struggled to find suitable baselines to draw conclusions and noted the need to link platform data with external datasets, such as labour statistics, demographic information and health outcomes.
The UK AI Security Institute (AISI) was established during the UK’s Bletchley Park Summit, where frontier AI providers made voluntary commitments to support independent study of their models. This research faces similar access challenges: researchers may need insight into training data, user logs, and model internals, which poses sensitive intellectual property, security and privacy challenges. Governments want to understand the national security risks of frontier AI systems, which may require running evaluations with classified information that cannot be shared with private companies. At the same time, labs cannot simply share their models – this would expose their intellectual property and require major resources to deploy the model on government infrastructure.
To address this, OpenMined partnered with the UK AISI and Anthropic to pilot the use of secure GPU enclaves. The use of enclaves enabled AISI researchers and Anthropic to contribute sensitive data and models into a secure computation environment while maintaining mutual secrecy. Using this method, researchers can run evaluations, including on chemical, biological, and nuclear risks, on Anthropic’s models without either side seeing the other’s inputs. This established a proof-of-concept for evidence-based oversight of frontier AI systems.
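The enclave arrangement can be pictured as an information-flow constraint: both parties’ inputs enter an isolated computation, and only the agreed output leaves. The toy Python simulation below models that flow only. Real deployments enforce it with hardware attestation and encrypted memory, and every name and the stand-in computation here are hypothetical.

```python
# Toy model of the enclave's information flow: both parties' inputs enter
# an isolated computation; only the aggregate score leaves. Real enclaves
# enforce this with hardware attestation and encrypted memory.

def enclave_evaluate(model_weights: bytes, eval_prompts: list) -> float:
    """Runs 'inside' the enclave: neither party can read the other's input.
    A real run would load the model and score its answers against the
    evaluation suite; this stand-in just returns an aggregate number."""
    passed = sum(1 for prompt in eval_prompts if len(prompt) % 2 == 0)
    return passed / len(eval_prompts)

lab_input = b"proprietary-model-weights"               # hidden from the evaluator
gov_input = ["sensitive-eval-1", "sensitive-eval-22"]  # hidden from the lab

score = enclave_evaluate(lab_input, gov_input)  # only this value is released
print(f"shared result: {score:.2f}")
```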
How do you see these methods influencing the future of AI governance?

Dave: Auditing methods that integrate PETs could underpin a global assurance ecosystem for AI. This is because they provide scalable, privacy-preserving infrastructure that allows multiple independent auditors to study models and platforms. This could enable diverse perspectives, reproducibility of findings, and evidence-based governance. Legislative efforts to introduce greater AI oversight and researcher access to social media data, such as the EU’s AI Act and Digital Services Act, are laudable. But their success will ultimately depend upon the technical infrastructure on which researcher access and AI oversight regimes are built. We believe that approaches underpinned by PETs offer the best route to align incentives between platforms, researchers, and regulators. The future of online safety research lies not in choosing between privacy and transparency, but in using innovative technologies to achieve both.
|
Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!
If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm-newsletter@awo.agency. We would love to hear from you!
|
You are receiving Algorithm Governance Roundup as you have signed up for AWO’s newsletter mailing list. Your email and personal information are processed based on your consent and in accordance with AWO’s Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency. AWO, Wessex House, Teign Road, Newton Abbot TQ12 4AA, United Kingdom