Dialogical Design as a Bridge for Responsibility Gaps in Trustworthy Autonomous Systems

As computing systems become increasingly autonomous, able to independently pilot vehicles, detect fraudulent banking transactions, or read and diagnose our medical scans, we face a growing problem for social trust in technical systems, known as responsibility gaps. Responsibility gaps arise when we struggle to assign moral responsibility for an action with high moral stakes, either because we don't know who is responsible or because the agent that performed the act doesn't meet other conditions for being responsible. Responsibility gaps are a problem because holding others responsible for what they do is how we maintain social trust.

Autonomous systems create new responsibility gaps. They operate in morally high-stakes areas such as health and finance, but software systems aren’t morally responsible agents, and their outputs may not be fully understandable or predictable by the humans overseeing them. To make such systems trustworthy, we need to find a way of bridging these gaps.

Our project draws on research in philosophy, cognitive science, law, and AI to develop new ways for autonomous system developers, users, and regulators to bridge responsibility gaps by boosting the ability of systems to deliver a vital component of responsibility: answerability. Responsible agents answer for their actions in many ways: we explain, justify, reconsider, apologise, offer amends, make changes, or take future precautions. Importantly, the very act of answering for our actions often improves us, helping us become more responsible and trustworthy in the future. This is why answerability is key to bridging responsibility gaps.

Our ambition is to provide theoretical and empirical evidence, along with computational techniques, that can gradually expand the capabilities of autonomous systems (which, as “sociotechnical systems”, encompass developers, owners, users, and others) to supply the kinds of answers that people rightly seek from trustworthy agents.

Read our project launch blog: Bridging Responsibility Gaps by Making Autonomous Systems Answerable

Meet Our Project Team

Shannon Vallor

Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence; Director, Centre for Technomoral Futures, Edinburgh Futures Institute, University of Edinburgh

Principal Investigator
Nadin Kokciyan

Lecturer in Artificial Intelligence, School of Informatics, University of Edinburgh

Co-investigator
Michael Rovatsos

Professor of Artificial Intelligence, Deputy Vice-Principal of Research (AI), Director of The Bayes Centre, University of Edinburgh

Co-investigator
Nayha Sethi

Chancellor’s Fellow, University of Edinburgh

Co-investigator
Tillmann Vierkant

Senior Lecturer in Philosophy of Mind & Cognition, University of Edinburgh

Co-investigator
Our Project Partners