On 24 January, the ReEnTrust team, led by Dr Philip Inglesant, provided our input to the ICO and The Alan Turing Institute's latest Consultation on Explaining AI Decisions Guidance.
ReEnTrust has two interests in AI-enabled decisions. First, in our own research: to ensure that our processes are trustworthy, compliant with the GDPR and the DPA, and transparent and responsible in research involving people. Second, with respect to the topic of our research: to facilitate trust-building by users in algorithm-driven online systems.
We are particularly interested in investigating the extent to which explainable AI will "build trust with your customers" (and other stakeholders), as the guidance suggests. We take this as both a quantitative question: to what extent does explainable AI increase trust? And a qualitative one: what forms of explanation are most effective in increasing trust?
Our feedback was grounded in two empirical studies:
- Jury workshops with young people (16+) and older citizens (65+) to explore their current perceptions of trust in online recommender systems; and
- An experimental study built into a technology prototype, to explore how explanations help users make sense of recommender results, and how this understanding may affect their trust in algorithms.
In our response, we emphasised the importance of considering how explanations can best be presented to affected individuals in terms of interaction design. Examples include: a small pop-up preview window shown when the user hovers over part of the results; a chatbox; or an interactive space where users can explore the implications of different algorithms.
We also shared our observation that explanations can be both beneficial and risky for individual users: participants largely felt that they understood algorithmic search results better, yet at the same time they could come to trust the algorithms less precisely because they now knew more about them.
ReEnTrust also welcomes the guidance's attention to the wider societal implications of explainable AI for an informed public. In our work with target demographics – older people and young people – we have noted wide variance in awareness of how AI is applied and how personal data is used.
Our full response is available upon request.