Understanding user trust after software malfunctions and cyber intrusions of digital displays: a use case of automated automotive systems

Investigating the cybersecurity, human factors and trust aspects of screen failures during automated driving using threat analysis.

Lead contact: William Payre, Assistant Professor in Transport Design & Human Factors, Coventry University

Consent verification in autonomous systems

Replacing the deterministic regulatory verification activities that Data Protection Officers rely on with automated reasoning techniques.

Lead contact: Inah Omoronyia, Senior Lecturer in Privacy, University of Bristol

Trust me? (I’m an autonomous machine)

Building trust and fostering the adoption of autonomous systems (AS) by capturing, documenting and explicating ‘Master Narratives’. The project utilizes design research, documentary methods, and ethnographic analysis to explore the gap between citizen and expert viewpoints on AS.

Lead contact: Joseph Lindley, Research Fellow, Lancaster University

RoAD (Responsible AV Data): ethical, legal, and societal challenges of using data from AVs

Investigating the ethical risks and legal implications related to the collection, access and use of data in autonomous vehicles. Testing the usefulness of datasets and evaluating public acceptance of data recorders.

Lead contact: Marina Jirotka, Professor of Human Centred Computing, University of Oxford

Trustworthy light-based robotic devices for autonomous wound healing

Demonstrating wound healing in the laboratory, and defining an envelope of operation that balances risks and benefits of machine learning and autonomous control.

Lead contact: Sabine Hauert, Associate Professor of Swarm Engineering, Bristol Robotics Laboratory, University of Bristol

ARGOS: AI-assisted Resilience GOvernance Systems

Investigating the applicability of machine learning methods to support anticipatory planning for resilience, simulation-based AI techniques for policy appraisal in view of compound risks, and AI-based coordination mechanisms for resilience-aware decision making.

Lead contact: Enrico Gerding, Professor, Director of the Centre for Machine Intelligence (CMI), University of Southampton

Imagining robotic care: identifying conflict & confluence in stakeholder imaginaries of autonomous care systems

Using LEGO Serious Play workshops to identify the conflicts and confluences in the imaginaries of robotic and autonomous systems (RAS) in the health-social care ecosystem.

Lead contact: David Cameron, Lecturer in Human-Computer Interaction, University of Sheffield

A participatory approach to the ethical assurance of digital mental healthcare

Developing a novel approach to assurance through participatory methodology, to underwrite the responsible design, development, and deployment of autonomous and intelligent systems in digital mental healthcare.

Lead contact: Christopher Burr, Ethics Fellow, Alan Turing Institute

COTADS: COdesigning Trustworthy Autonomous Diabetes Systems

Designing algorithms for diabetes management during life transitions using co-design, provenance and explainable AI. This project aims to increase trust and understanding by bringing together clinicians, data scientists, and people with type-1 diabetes.

Lead contact: Michael Boniface, Professorial Fellow of Information Systems, Director of the IT Innovation Centre, University of Southampton

SA2VE: Situational Awareness and trust during shifts between autonomy levels in automated VEhicles

Understanding the effect of Situational Awareness and take-over request procedures on trust between drivers and highly autonomous vehicles.

Lead contact: Bani Anvari, Professor in Intelligent Mobility, Director of Intelligent Mobility Lab, University College London

Kaspar Explains: the impact of explanation on human-robot trust using an educational platform

Identifying how causal explanation can influence trust in an educational robotic platform, the Kaspar robot, which has been used as a tool for Autism education for more than a decade.

Lead contact: Farshid Amirabdollahian, Professor of Human Robot Interaction, University of Hertfordshire

OPEN-TAS: An open laboratories system for trustworthy autonomous systems

Creating infrastructure that provides access to UK laboratories researching TAS via web/VR interfaces and telepresence robots.

Lead contact: Tony Prescott, Professor of Cognitive Robotics, Director of Sheffield Robotics, University of Sheffield

Trustworthy human-swarm partnerships in extreme environments

The aim of this project is to understand the contextual factors and technical approaches underlying trustworthy human-swarm teams.

Lead contact: Mohammad Divband Soorati, Alan Turing Research Fellow, University of Southampton

Trustworthy Human-Robot Teams

Human-robot collaborative teams present increased opportunities, but questions remain relating to trust towards the robot within the team and, more broadly, the trust of affected groups (e.g., patients) towards tasks carried out by robot-assisted teams.

Lead contact: Nicholas Watson, Associate Professor, University of Nottingham

Trustworthy autonomous systems to support healthcare experiences

This project explores how trustworthy autonomous systems embedded in devices in the home can support decision-making about health and wellbeing.

Lead contact: Liz Dowthwaite, Research Fellow, University of Nottingham

SafeSpacesNLP

Behaviour classification NLP in a socio-technical AI setting for online harmful behaviours for children and young people

Exploring the use of Socio-Technical Natural Language Processing (NLP) for classifying behavioural online harms within online forum posts (e.g. bullying; drugs & alcohol abuse; gendered harassment; self-harm), especially for young people.

Lead contact: Stuart Middleton, Lecturer in Computer Science, University of Southampton

Chatty car

Designing an exemplar, socially responsible, anthropomorphised, natural language interface for automated vehicles.

Lead contact: Gary Burnett, Professor of Transport and Human Factors, University of Nottingham

Inclusive autonomous vehicles

The role of human risk perception and trust narratives

Investigating the mechanisms that can address consumers’ concerns when relinquishing human control to autonomous vehicles.

Lead contact: Paurav Shukla, Professor of Marketing and Head of Digital and Data Driven Management Department, University of Southampton
