We are pleased to share with you our First Stakeholder Workshop Summary Report, summarising key findings from our First Stakeholder Workshop held on April 15th, 2019, at the Digital Catapult Centre in London, UK. The workshop brought together participants from academia, education, NGOs and enterprises to discuss trust in relation to algorithmic decision-making services and platforms.
As the first stakeholder workshop of the series, we aimed to gain an initial understanding from the various stakeholders of how trust is conceptualised by their organisations, and how it is established and maintained for themselves or for their users. Following a plenary discussion around anticipated drivers and inhibitors of trust in algorithmic online services, we used two case studies, on online shopping and online hotel booking, to dive deeper into this conceptualisation of trust.
Our key findings include:
- Trust is commonly recognised as fundamental in the context of algorithmic decision-making, and emerging big data and computational power are raising new challenges to our existing notion of trust.
- Trust in relation to algorithms needs to be considered in a broader context, taking into account the various other elements that constitute the Internet and the digital services/systems on which algorithms operate.
- Consistent with literature from sociology and psychology, our stakeholders pointed out that trust is an emotional and psychological state of an individual. Like most emotional states, trust can rarely be expressed as a binary yes or no; it is often a matter of degree.
- The emotional state of feeling trust is closely related to a subjective perception of the trustworthiness of a piece of information or a platform. This trustworthiness is a multifaceted concept with many components, such as transparency, privacy, reliability, reputation, and the appearance and behaviour of the website, all filtered through the user's previous experience.
- The digital literacy gap observed among many Internet users could make their recognition and establishment of trust in relation to algorithmic decision-making much more challenging. This is a critical issue that we need to bear in mind when designing trust-enabling technologies.
These findings are critical to our ReEnTrust project, helping us recognise the complex socio-economic context of algorithmic decision-making when designing new technology solutions to facilitate users' establishment of trust. In the next stakeholder workshop, we would like to inspire more focused discussions with stakeholders on the role of different trust components in helping them negotiate trust in algorithms, and on how technologies may help them mediate these different components more effectively. This would give us deeper insights into what a responsible trust (re-)building digital system should look like.
The full report can be downloaded here. If you have any feedback or comments on the report, please do not hesitate to write to the report authors:
- Jun Zhao ( email@example.com), University of Oxford
- Menisha Patel (firstname.lastname@example.org), University of Oxford
- Ansgar Koene (Ansgar.Koene@nottingham.ac.uk), University of Nottingham
Our follow-up stakeholder workshop is scheduled for early December 2019. If you would like to be kept posted about this and other news about the ReEnTrust project, please feel free to sign up to our stakeholder group.