Research Fellow at Horizon Digital Economy Research – Computer Science at the University of Nottingham. Prior to working on the ReEnTrust project and other research projects relating to the Digital Economy within Horizon, Helen worked as part of the EPSRC funded project UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy.
Helen’s research interests focus on engaging with young people to find out their experiences of being online, and in promoting digital literacy amongst this age group and others. She enjoys public engagement through STEM activities.
Research Fellow at Horizon Digital Economy Research – Computer Science at the University of Nottingham. Currently working as part of the ReEnTrust project, and previously part of UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy and various projects under the Digital Economy theme and Horizon Services Campaign.
Liz has a multi-disciplinary background, predominantly grounded in psychology and human factors. She completed her thesis on crowdfunding in the creative industries in 2018, looking at attitudes towards backing and the motivations for doing so. She enjoys bringing the social psychological approach to work with researchers in different disciplines, and employing mixed methods to get to the core of research problems. Her research interests include attitudes and behaviour in online crowd systems, including crowdfunding, social media, and citizen science. She is especially interested in how attitudes and motivation relate to psychological needs and values; outreach and science communication; and the effects of the online world on well-being.
Professor of Human Centred Computing and head of the Human Centred Computing Group – Department of Computer Science at the University of Oxford. Throughout her career Marina has been concerned with bringing a richer comprehension of socially organised work practices and interaction into the process of engineering technological systems. Marina’s current work focusses on how new developments in machine learning and AI can be shaped to respect human agency, ensuring accountability of systems and the digital rights of individuals and communities.
Marina is at the forefront of recent work in Responsible Innovation in the UK and the European Union concentrating on:
- the theoretical and conceptual underpinnings of new forms of governance for RI – new methods for the dissemination of materials and innovative ways of engaging the public in debates on RI in ICT
- unpacking the practices and concepts of innovation of digital systems in the context of professional organisations
- investigating new, and enhancing existing methods for RI practice for digital system developers
- creating a social charter for embedding novel platforms into Smart Societies to provide enhanced agency for people and communities
Ansgar Koene is a Senior Research Fellow and Research Co-Investigator on the ReEnTrust project at Horizon Digital Economy Research (University of Nottingham). Prior to ReEnTrust he was Co-Investigator on the UnBias project and Senior Researcher on the CaSMa project. He has a multi-disciplinary research background, having worked and published on topics ranging from bio-inspired Robotics, AI and Computational Neuroscience to experimental Human Behaviour/Perception studies. Prior to joining Horizon, Ansgar worked at the University of Utrecht (Utrecht, the Netherlands), INSERM (Lyon, France), UCL (London, UK), ATR (Kyoto, Japan), NTU (Taipei, Taiwan), University of Sheffield (Sheffield, UK), RIKEN BSI (Tokyo, Japan) and University of Birmingham (Birmingham, UK).
As part of ReEnTrust (and previously UnBias), Ansgar is focusing on stakeholder engagement for the development of policy recommendations. Ansgar also actively engages with the Horizon Policy Impact agenda, which seeks to support local, national and international policy initiatives with the knowledge and expertise of the Horizon Digital Economy Research institute.
Research Assistant at the School of Informatics at the University of Edinburgh, Benedicte is doing her first postdoc. She is highly interested in the question of subjectivity in reasoning and decision-making. She did her PhD at the Laboratoire d’Informatique de Paris 6 at Sorbonne University, on weighted modal logic and reasoning about graded beliefs.
Benedicte’s current topics include the study of trustworthiness, which is strongly related to her research and a natural continuation of her previous work. She has also long been concerned with the question of ethics in AI, especially regarding transparency, as well as data and privacy issues, for better use and understanding of algorithms.
Research Assistant in the Human Centred Computing Group, Department of Computer Science at the University of Oxford. Understanding, acknowledging and incorporating stakeholder perspectives in the processes and outcomes of innovation has always been of huge importance to Menisha. She has a great interest in exploring how qualitative approaches (alongside other approaches) can be used to engage stakeholders and to enable a more responsible development of technologies. This is particularly important now, given the increasing pervasiveness of technologies and growing concerns about their impact on society, especially on vulnerable users. Menisha is very interested in how we can give a voice to those from more disenfranchised or impoverished communities, as their views are often overlooked, and yet they are most likely to experience the negative implications of the technological developments we are witnessing today. Through her work Menisha continues to engage with stakeholders at different levels of consideration, from very local to more broad-ranging, in order to contribute to work in Responsible Innovation and related socio-technical disciplines.
Elvira Perez Vallejos is a Senior Research Fellow and Co-Investigator on the UnBias project (Horizon Institute for Digital Economy Research at the University of Nottingham). Her research background is broad and heterogeneous, including speech perception, language production (i.e., psychoacoustics and psycholinguistics), translational research on hearing, and applied research on clinical communication and mental health. She is a member of the Institute of Mental Health and deputy lead of the Integrated Research Group (IRG) Children and Young People Focussed Group at Social Futures. Elvira has always been fascinated by the unavoidable interaction between humans and technology. Relevant for the CaSMa project is her recent work on Health Communication and e-Language, applying corpus linguistics to ‘big data’ derived from internet users accessing specialised platforms such as www.youngminds.org.uk, which focuses on promoting children’s and young people’s mental health.
Research Associate at Horizon Digital Economy Research – Computer Science at the University of Nottingham. Prior to ReEnTrust she worked as part of the EPSRC funded project UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy. Virginia has a multidisciplinary research background, having worked as a Postdoctoral Researcher in Biological Sciences at The University of Manchester and later as an NIHR Programme Manager in public health at the University of Nottingham. In both roles, she developed extensive experience in research ethics and governance, engaging with a wide range of stakeholders. Her motivation for participatory research, ethics and research that drives policy change has led Virginia’s research interests towards the impact of digital technologies on society.
Her current research focuses on engaging with young people and older adults to find out about their experiences of, and issues with, using the Internet. She is particularly interested in giving vulnerable citizens a space to express their views and concerns about being online, as well as promoting digital literacy and mapping users’ recommendations to guide the development of regulation, design and educational resources to tackle unnecessary bias and issues of trust in algorithmically driven systems.
Key areas: meaningful transparency and trust by design; policy impact; STEM public engagement and outreach events.
Michael is a Reader (Associate Professor) at the School of Informatics and Director of the Bayes Centre. He has also recently been appointed Turing University Lead for the University of Edinburgh at the Alan Turing Institute.
In his research, Michael develops AI algorithms and architectures to support collaboration between humans. Think sharing economy, electronic markets, logistics and supply chains, or complex financial services. Making AI safe and ensuring it behaves in responsible ways is an important part of this vision.
Previously, Michael was Director of the Centre for Intelligent Systems and their Applications, and has been involved in several large research initiatives supported by more than £10 million of external funding. He co-ordinated the ESSENCE network, which focuses on using human communication techniques to enable AI systems to negotiate and evolve meaning. Michael led work on social orchestration systems for coordinating collective human activity in the SmartSociety project. In the UnBias project, he led activities for developing fair data-driven algorithms that help address people’s concerns about algorithmic bias.
Senior Researcher, Human Centred Computing Group – Department of Computer Science at the University of Oxford. Helena is interested in the ways in which users interact with technologies in different kinds of settings and how social action both shapes and is shaped by innovation. The projects she works on typically seek to identify mechanisms for the improved design, responsible development and effective regulation of technologies. Helena is a qualitative researcher and is very interested in the ways in which detailed, granular analysis can be combined with larger scale computational work.
Since November 2014 Helena has been working on projects about algorithmic bias and fairness. From September 2016 to November 2018, the EPSRC funded project “UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy” explored the user experience of algorithm driven internet services and the process of algorithm design. The project was a collaboration with the Universities of Nottingham and Edinburgh and focused on key questions including:
- Are algorithms ever neutral?
- How might algorithmic systems produce unexpected outcomes that systematically disadvantage individuals, groups or communities?
- How can we make sure that algorithmic processes operate in our best interests?
The project involved a range of empirical and engagement activities to address these questions. We identified concerns across different societal and professional groups over the contemporary prevalence of algorithm driven online platforms. At the same time, we also identified amongst these groups a desire for change to improve the user experience of platforms. Our analysis highlighted several opportunities for positive change, in particular in relation to education, societal engagement and policy. As part of this work we produced a ‘fairness toolkit’ to raise awareness of, and stimulate a civic dialogue about, how algorithms shape our online experiences. We have also contributed to a recent European Parliament Science and Technology Options Assessment panel report on algorithmic transparency and regulation.
Helena teaches in the CS Department on the Requirements course and Cyber Security CDT elective courses on research methods. She also regularly supervises individual students and the 2nd year UG group projects. Helena continues to contribute to initiatives on research ethics and the ethics of technology. She is very committed to widening participation at Oxford and regularly takes part in outreach activities organised by the Department. Helena is also a member of the Departmental Equality and Diversity Committee.
Helena has presented her work at a range of conferences and in written journal publications. She is also a regular contributor to Inspired Research magazine.
Senior Research Fellow, Human Centred Computing Group – Department of Computer Science at the University of Oxford. Jun’s research focuses on enabling better control of personal information on connected devices, such as smartphones or Internet of Things devices. Jun takes a human-centric approach, focusing on understanding real users’ needs, in order to design technologies that can make a real impact. More particularly, Jun is actively working on raising children’s and parents’ awareness of privacy risks related to the use of tablet computers or connected toys. For this, she works closely with schools, families and organisations that have young children’s best interests in mind. Jun moved back to the University of Oxford in 2015, after a short career break. She received her Ph.D. in Computer Science from the University of Manchester in 2007, and an EPSRC Postdoctoral Fellowship in 2009. Between 2013 and 2015 Jun was a Lecturer at Lancaster University.
Research Associate within the School of Informatics at the University of Edinburgh. Bruno’s research deals with knowledge representation, the logical inference of data from knowledge bases and reasoning in the presence of inconsistencies. In his research, Bruno focuses on logic-based argumentation, in order to model human-like interaction from statements.
As part of the ReEnTrust project, Bruno is focusing on building practical tools and theoretical models for rebuilding trust in algorithms.
Prior to joining the ReEnTrust project, Bruno was a member of the INRIA research team GraphIK at LIRMM and worked at the University of Montpellier (Montpellier, France), where he will defend his thesis in Computer Science in July 2019.
Bruno has presented his work at a range of conferences and in written journal publications.