ReEnTrust engaging with policy discourse

The “responsible policy and practice” work package (WP1) of ReEnTrust aims to unpack the tensions between the drive for innovation in the design and use of algorithms and the growing desire for meaningful regulation and protection for users. As part of this work, ReEnTrust is engaging with the policy discourse on algorithmic regulation by participating in “AI governance” conferences and working with policy-setting organizations. In the first three months of 2019 this has included participating in CPDP Data Protection and Democracy, the Global Governance Roundtable on AI at the World Government Summit, workshops of the European Commission’s AlgoAware project and of the Just Net Coalition, a Winter School on “AI: ethical, social, legal and economic impact” run by the JRC HUMAINT project, and a meeting with Ofcom.

At CPDP Data Protection and Democracy, ReEnTrust (represented by Ansgar) spoke on the panel “By Fair Means or Foul: What Can Notions of ‘Fairness’ Contribute to the Governance of Data Processing?”, as well as attending many of the sessions on platform regulation, GDPR, algorithms in insurance, and the role of industry standards. Our contribution to the “By Fair Means or Foul” panel focused on “Unfairness in Algorithmic Systems”, ranging from bias in data sets built on historical data, to the context-dependent nature of definitions of fairness, to the question of how to define performance and accuracy measures.


The Global Governance Roundtable on AI was a one-day workshop organised by The Future Society on behalf of the UAE Minister of State for Artificial Intelligence, one of a number of related events that took place at the World Government Summit in Dubai. In the months leading up to the event, the organizers held a series of preparatory conference calls to gather input from the invited participants for the discussions on “Mapping the rise of AI and its governance”, “Governing AI in different global contexts”, “International Panel on AI & AI for the UN SDGs”, and “Making the AI revolution work for everyone”. During the workshop, participants were divided into expert groups and subcommittees for 75-minute discussions, with each participant contributing to four of the 14 subcommittees (see GGAR_Briefing Docs). Among these, Ansgar participated in the following discussions:

  • Working Group 1 – 4A: Explainable & Interpretable AI: What, Why and How?
  • Working Group 2 – 6A: Building Capability for “Smart” Governance of Artificial Intelligence: Building Competency for Governing AI in the Public Sector
  • Working Group 3 – 9A: From a Data Common to an AI Commons: AI Commons vs. Data Commons
  • Working Group 4 – 14C: AI Narratives: A/IS Infrastructure and Ecosystem

The AlgoAware project was procured by the European Commission to support its analysis of the opportunities and challenges that emerge where algorithmic decisions have a significant bearing on citizens, i.e. where they produce societal or economic effects that require public attention. In December 2018, AlgoAware published its first interim report, “The State of the Art”, with a call for peer review and comment. As part of the review process, a workshop was held on January 25th with nine experts from policy and academia to:

  • Discuss perspectives and areas which deserve further policy and research attention;
  • Present and discuss emerging policy and governance approaches;
  • Complete and discuss the long-list of case studies of algorithmic decision-making with a policy stake.

The invited experts included:

  • Mireille Hildebrandt, Vrije Universiteit Brussel (VUB)
  • Sandra Wachter, Oxford Internet Institute
  • Sophie Stalla-Bourdillon, University of Southampton
  • Ansgar Koene, University of Nottingham
  • Michael Veale, University College London
  • Charlotte Altenhöner-Dion, Council of Europe
  • Annie Blandin-Obernesser, CNNum
  • Michael Latzer, University of Zurich
  • Joris van Hoboken, University of Amsterdam and Vrije Universiteit Brussel (VUB)

Following on from some of the work we did with the Third World Network on briefing e-commerce trade negotiators in 2018 (as part of UnBias), we received an invitation to join a planning workshop by the Just Net Coalition. The JNC is a global network of civil society actors committed to an open, free, just and equitable Internet. Founded in February 2014, the coalition engages on topics of the Internet and its governance, with the goal to promote democracy, human rights and social justice.

The three-day workshop brought together around 40 participants, drawn from “civil society actors in different social sectors who are facing digital challenges and opportunities but do not feel well-equipped to deal with them, and on the other hand, digital activists who are inclined to work on issues of equity and social justice but have not had structured opportunities to do so effectively.” The workshop provided a space for constructive dialogue among the participants, with:

  • two sessions examining key elements of the emerging digital society and economy from a progressive and Southern perspective, and the challenges faced by progressive actors and movements;
  • three half-day sessions for sector-specific examination of issues;
  • a fourth half-day session tying the sectoral discussions together into convergent themes.

As a final outcome of the workshop, participants committed to various contributions to the JNC going forward, including the development of a ‘digital justice manifesto’.


The JRC HUMAINT project’s Winter School on “AI: ethical, social, legal and economic impact”, held at the start of February, provided a multi-disciplinary forum in which experts from various AI-related domains addressed technical, socio-economic, legal and policy developments.

As part of his contribution to the Winter School, Ansgar gave a presentation on the role of industry standards as a vehicle for addressing the socio-technical challenges of AI.
