Date(s) - 06/12/2019
12:00 pm - 5:00 pm
It is our great pleasure to welcome you to the 2nd ReEnTrust stakeholder workshop on Friday 6th December, at the Mary Ward House Conference and Exhibition Centre (5-7 Tavistock Place, WC1H 9SN) in London, UK.
In this workshop we will build on the outcomes of the previous workshop, moving from the exploration of algorithm trust issues to a focus on practical solutions. We will present our latest technology tools and prototypes that are designed to help users develop their trust in online algorithms.
Aims of stakeholder workshops
Our ReEnTrust stakeholder workshops bring together individuals from a range of professional backgrounds to share their differing perspectives on issues of trust in algorithmic decision-making. Because trust in algorithms is a multifaceted issue that requires all of us to work together, these workshops aim to generate far more well-rounded insights than a focus on technology solutions alone could provide.
The workshop discussions will be summarised in written reports and used to inform other activities in the project. These include the production of policy recommendations and the development of an experimental online tool that allows users to evaluate and critique the algorithms used by online platforms, and that facilitates dialogue and collective reflection with these platforms in order to jointly recover from trust breakdowns with the help of AI-driven mediation technologies.
Structure of the 2nd stakeholder workshop
The first workshop focused broadly on understanding what trust means. In this follow-up workshop, we aim to gather your feedback and invite you to contribute to the co-design of our online tool prototype, which is intended to help users establish trust in algorithms through qualities such as transparency and reliability.
To this end, we are going to present two of our latest research tools:
- Algorithm Playground, which enables users to achieve a better understanding of algorithms and the results they generate;
- Algorithm Mediation Tool, which enables users to take control of and configure algorithms in order to increase their trust in them.
The underlying hypothesis of our approach is that being able to understand how algorithms work and how their results are generated will improve users’ trust in the algorithms and in those results.
The workshop will consist of two parts.
- In the first part we will present key findings from our first workshop, and our latest research tool development;
- In the second part, participants will break into focus groups to explore our prototype tools through concrete tasks. In this way, we hope to gain insights into how our tools may or may not support the development of trust in algorithms and the decisions they inform.
Privacy/confidentiality and data protection
All the workshops will be audio recorded and transcribed. This is in order to facilitate our analysis and ensure that we capture the full detail of what is discussed. We will remove or pseudonymise the names of participating individuals and organisations, as well as other potentially identifying details. We will not reveal the identities of any participants (except at the workshops themselves) unless we are given explicit permission to do so. We will also ask all participants to observe the Chatham House Rule, meaning that views expressed can be reported elsewhere but individual names and affiliations cannot.
Agenda
- 12:00 – 13:00 Lunch/informal networking
- 13:00 – 13:10 Brief introduction with an update about the ReEnTrust project & outline of the workshop
- 13:10 – 14:45 Algorithm Playground — for algorithm understanding and choice-making
- 14:45 – 15:00 Coffee break
- 15:00 – 15:20 Digital wellbeing questionnaire
- 15:20 – 16:30 Algorithm Mediation — for trust solicitation and control of algorithms
- 16:30 – 17:00 Wrap up and open discussion
Bookings are closed for this event.