The ReEnTrust programme of work consists of four integrated workpackages (WPs), described below.
Work Package 1: Responsible policy and practice (lead: Oxford)
This WP will unpack the tensions between the drive for innovation in the design and use of algorithms and the growing desire for meaningful regulation and protection for users. The WP will adopt an RRI approach and carry out literature scoping, interviews, surveys and workshops to consider key questions such as:
- How do commercial and non-commercial organisations establish and maintain user trust in the algorithmic systems they use?
- How do they themselves establish trust in their own systems?
- How might the level of trust in an algorithmic system be quantified in a “trust index” and how might such an index be of use to system users and providers?
- When might it be reasonable for different stakeholders to trust an algorithm and when might other governance structures be needed?
- How societally acceptable are trust rebuilding systems?
- How can the design-policy-design relationship be understood and optimised in RRI terms?
Key outputs of this WP will include:
- Policy guidelines for engendering trust in the design, development, and use of algorithms.
- Evidence and documentation describing the RRI methodology developed through ReEnTrust work.
- A portfolio of real-world case studies to inform responsible practice.
Work Package 2: User-centred trust (lead: Nottingham)
This WP will identify the requirements of trust rebuilding tools and explore the capacity for technical solutions to responsibly rebuild user trust once it has been lost. This will be achieved via two strands of multi-method, user-driven data collection and analysis.
We will identify how social mechanisms for rebuilding trust manifest online, and the requirements for the tools and interfaces that would be most effective in supporting the rebuilding of trust. We will run a refined version of the Juries methodology (previously used in UnBias), a participatory research method similar to focus groups in which scenarios (i.e., vignettes) are discussed with the aim of identifying jurors’ concerns, solutions and critical thinking. Participating groups of older (65+) and younger (16+) adults will co-design and develop interactive (‘hands-on’) scenarios in partnership with project partner Polka Theatre. These scenarios will illustrate instances where trust in algorithmic processes was lost, what the participants did in response, the extent to which they felt the platform was responsible, and how the issue was resolved – including how well the platform dealt with the situation and whether trust was regained.
Quantitative measures of psychological wellbeing, agency and self-competence in trust-related situations, grounded in self-determination theory, will inform the development of a “trust index”. Colleagues at the Mental Health Foundation will advise on ensuring that the index is sensitive to the issues explored through the co-created scenarios and that it captures the level of users’ trust when interacting with AI algorithms.
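The aggregation behind such an index might be sketched as follows. The subscale names, weights, and 0–100 scaling below are illustrative assumptions for exposition only, not the instrument the project will develop.

```python
# Illustrative sketch of a "trust index": a weighted aggregate of
# normalised subscale scores. Subscales, weights and scaling are
# hypothetical placeholders, not the measures developed in WP2.

# Hypothetical subscale weights (sum to 1).
WEIGHTS = {
    "wellbeing": 0.25,
    "agency": 0.25,
    "self_competence": 0.25,
    "perceived_reliability": 0.25,
}

def trust_index(scores: dict, scale_max: float = 7.0) -> float:
    """Combine raw subscale scores (e.g. 1-7 Likert means) into a
    single 0-100 index via a weighted average."""
    total = sum(WEIGHTS[k] * (scores[k] / scale_max) for k in WEIGHTS)
    return round(100 * total, 1)

# Example: a user scoring 6, 5, 4 and 3 on the four subscales.
print(trust_index({
    "wellbeing": 6, "agency": 5,
    "self_competence": 4, "perceived_reliability": 3,
}))  # prints 64.3
```

In practice the weights and subscales would be derived from the empirical work above rather than fixed a priori.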
Complementing these first two approaches, we will also conduct video-based studies to capture users’ practical experience of trust breakdown. Analysis will focus on the interactions that occur between user and algorithm/platform. It will highlight the kinds of practical behaviours users undertake when seeking to resolve problems of trust breakdown, and the particular platform affordances that alternately encourage or discourage trust.
Key outputs of this WP will include:
- Guidelines for the development of trust rebuilding tools, including requirements and assessment criteria.
- A framework for assessing the impact of trust in algorithms on wellbeing, including recommendations about the use and misuse of these technologies for citizens’ mental wellbeing.
- A “trust index” calculation method to quantify the level of trust users place in an algorithmic system.
Work Package 3: Computational methods for rebuilding trust (lead: Edinburgh)
This WP will develop (semi-)automated mediation techniques that allow users and data-driven online service providers to recover from situations of trust breakdown. A successful recovery would see a productive relationship between users and services rebuilt following negotiated changes to the algorithms that mediate this relationship. Conceptually, we frame the problem as choosing the most acceptable algorithm within a given space of possible choices, based on users’ feedback on their experience (including trust index metrics), balanced against the platform’s interests.
We will make the adversarial nature of these interactions explicit and employ automated negotiation and mediation techniques to tackle the overall challenge. Our approach views these interactions as games, in which the users’ action space comprises behaviours and judgments, the providers’ action space comprises algorithm choices and statements about the properties of these algorithms, and payoffs can be derived from the “players’” preferences over the algorithms and their understanding of the algorithms’ properties.
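One minimal formalisation of this game-based view, assuming a finite candidate-algorithm space and scalar utilities for each side (both simplifying assumptions, not the project’s actual models), is a mediator that selects the candidate maximising the Nash product of each side’s gain over the status quo:

```python
# Minimal sketch of automated mediation over a finite algorithm space.
# Scalar utilities and the Nash-product criterion are illustrative
# assumptions; the WP3 models may differ substantially.

def mediate(candidates, user_utility, provider_utility,
            user_baseline=0.0, provider_baseline=0.0):
    """Pick the algorithm maximising the Nash product of both sides'
    gains over their disagreement (status quo) payoffs. Candidates
    that leave either side worse off than the status quo are skipped."""
    best, best_score = None, -1.0
    for algo in candidates:
        u_gain = user_utility[algo] - user_baseline
        p_gain = provider_utility[algo] - provider_baseline
        if u_gain < 0 or p_gain < 0:
            continue  # not individually rational for one side
        score = u_gain * p_gain
        if score > best_score:
            best, best_score = algo, score
    return best

# Example: three candidate ranking algorithms with hypothetical
# utilities derived from trust-index feedback and platform metrics.
users = {"A": 0.9, "B": 0.6, "C": 0.2}
provider = {"A": 0.3, "B": 0.7, "C": 0.9}
print(mediate(["A", "B", "C"], users, provider))  # prints B
```

Here the mediator rejects the users’ favourite (A) and the provider’s favourite (C) in favour of the compromise B, illustrating how a mediated outcome can differ from either side’s unilateral choice.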
Key outputs of this WP will include:
- New computational models and algorithms for AI-supported semi-automated mediation.
- Analytical and experimental results assessing the complexity, efficiency, and usability of these methods.
Work Package 4: A trust rebuilding tool (lead: Edinburgh)
This WP will develop and evaluate a concrete software tool that embeds the computational methods developed in WP3 to provide intelligent trust rebuilding assistance, together with additional functionality to support users’ understanding of the algorithms. More specifically, the tool will offer three sets of functionality.
- First, it will allow users to “sandbox” different variations of algorithms and assess their behaviour on different datasets, using tools such as Faircheck/Fairtest and other performance measurements for machine learning algorithms. This facility will enable users to understand different trade-offs and the effects of different parameters, and to experience their own response to algorithm behaviour, helping them shape and refine their views and judgment of the technologies deployed by online platforms.
- Second, it will provide a component to capture users’ experiences, attitudes and opinions toward algorithms and the platforms that use them, acting both as an “experience base” to inform collective discourse about trustworthiness and as a source of the quantitative preference models needed for the automated mediation techniques developed in WP3.
- Finally, it will include a negotiation and mediation component where users and service providers can discuss, debate, and negotiate over the use of algorithms using the structured negotiation dialogue workflows and the automated mediator developed in WP3.
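In its simplest form, the sandbox functionality described above might evaluate each algorithm variant on a dataset and report accuracy alongside a fairness measure. The demographic-parity gap and toy data below are illustrative assumptions, not necessarily the metrics Faircheck/Fairtest or the project tool would use.

```python
# Illustrative sandbox harness: score algorithm variants on a dataset
# by accuracy and a demographic-parity gap (difference in positive
# prediction rates between two groups). Metrics and data are toy
# placeholders for the real WP4 tooling.

def positive_rate(preds, groups, g):
    in_g = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(in_g) / len(in_g)

def sandbox(variants, features, labels, groups):
    """Run each variant (a callable mapping a feature to 0/1) and
    report (accuracy, parity_gap) so users can inspect trade-offs."""
    report = {}
    for name, predict in variants.items():
        preds = [predict(x) for x in features]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        gap = abs(positive_rate(preds, groups, "a")
                  - positive_rate(preds, groups, "b"))
        report[name] = (round(acc, 2), round(gap, 2))
    return report

# Toy data: one feature (a score), group label "a" or "b" per row.
X = [0.9, 0.8, 0.4, 0.6, 0.7, 0.2]
y = [1, 1, 0, 1, 1, 0]
g = ["a", "a", "a", "b", "b", "b"]
variants = {
    "strict": lambda x: int(x > 0.75),
    "lenient": lambda x: int(x > 0.35),
}
print(sandbox(variants, X, y, g))
```

Letting users vary the decision threshold and watch both numbers move is exactly the kind of hands-on exploration of trade-offs the sandbox is meant to support.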
The tool developed in this work package will be the primary vehicle for validating which technical solutions are feasible for tackling the overall problem of rebuilding trust between users and algorithms on online platforms.
Key outputs of this WP will include:
- Implemented prototypes of trust rebuilding tools.
- Experimental results validating the benefits derived from use of these tools.