Bridging Responsibility Gaps by Making Autonomous Systems Answerable

Project launch blog | February 2022 | Responsibility Project | Making Systems Answer: Dialogical Design as a Bridge for Responsibility Gaps in Trustworthy Autonomous Systems

 

How can you trust a machine to make a life-or-death decision for you or your loved ones, if it cannot be responsible for it—that is, be made to answer for the decision, should it be wrong?  And yet we do often rely on autonomous systems in such circumstances, from autopilot systems in aeroplanes to pacemakers or insulin pumps implanted in our bodies.

 

Image: Delegating concept – wooden figurines and arrows as a symbol of delegation.

In those cases, we expect the humans who design, build and maintain the machines to answer for the machines’ performance, and normally this is sufficient for us to place our trust in the system.

 

Today, however, autonomous AI and robotic systems are becoming increasingly complex and interconnected in ways that make them more effective, but also more opaque and resistant to human understanding, interpretation, prediction and control. This drives the growing problem of responsibility gaps – where it is not straightforward to determine who should be held to answer for what the machine does.

 

This is the problem that our multi-disciplinary team at the University of Edinburgh has set out to tackle in our new UKRI-funded project within the Trustworthy Autonomous Systems programme: Making Systems Answer: Dialogical Design as a Bridge for Responsibility Gaps in Trustworthy Autonomous Systems.

 

Our project begins from what we already know from philosophy, law and cognitive science about responsibility gaps – after all, there are many cases where we can’t easily hold a person morally responsible for what they have done, or find the right person to answer for it! We’ll look at how human communities bridge these gaps and preserve social trust through the creative construction of answerability practices. Here we’ll be guided by our insight that such practices don’t merely preserve social trust; they can also help people gradually become more trustworthy and responsible than they were before.

 

We’ll apply those existing lessons to guide the construction of new answerability practices for autonomous systems. It’s important to remember that such systems are more than assemblages of data, algorithms, sensors and actuators. Today’s autonomous systems are still sociotechnical systems: people remain an essential part of how they function, even when many individual operations are automated.

 

So when we talk about ‘making autonomous systems answer,’ we aren’t talking about building machines that can themselves be morally responsible; we’re talking about finding new ways to enable the system as a whole – hardware, software, and humans – to give us the kinds of answers we rightly expect and demand from trustworthy partners in society.

 

To do this, we’ll build on two other kinds of expertise in our team. First, we’ll undertake socio-legal research to identify where new responsibility gaps are emerging with autonomous systems, and what kinds of answers people expect or need when deciding which of these systems to trust.

 

These answers will vary. Imagine that you are misdiagnosed with brain cancer, and as a result suffer unnecessary invasive surgery and psychological trauma. Now imagine that the misdiagnosis was heavily influenced by a classification error made by an automated AI diagnostic tool, which your medical team used to assess your brain scans.

 

What do you want to happen next? Do you want to know why the error occurred? To know how often the system makes mistakes like this? Or how you will be compensated for your pain? Do you need to know what safeguards were in place, and why they weren’t enough? Or what new safeguards can keep others from suffering your fate? These are all questions a responsible agent might be expected to answer in this situation.

 

To better understand where these answerability capabilities are most needed, we’ll work with our partners in Scotland’s Digital Directorate, the NHS AI Lab, and SAS to identify specific pressure points in the design and deployment of these systems for emerging applications, where improved answerability is vital for assuring the systems’ responsible and trustworthy use.

 

To show how we might better deliver those needed capabilities, we’ll draw upon our team’s expertise in multiagent and dialogical AI systems design to develop ‘wraparound’ system interfaces that can create new answerability flows between an autonomous system and its stakeholders, from end users to regulators, who rightly expect that system (including the humans behind it) to answer to them.
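
To make that idea a little more concrete, here is a deliberately simplified sketch, in Python, of the general shape such a ‘wraparound’ interface could take: a thin layer that records each decision the underlying system makes and routes stakeholders’ questions – why, what safeguards, who answers – either to logged information or to the responsible humans behind the system. The class and field names below are purely illustrative assumptions of this post, not the project’s actual design.

    # A hypothetical, simplified 'wraparound' answerability interface.
    # All names here are illustrative assumptions, not the project's design.
    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class DecisionRecord:
        """One logged decision made by the underlying autonomous system."""
        inputs: dict                 # e.g. features extracted from a brain scan
        output: str                  # the system's classification or action
        rationale: str               # recorded explanation for the output
        safeguards: List[str]        # checks applied before the output was used


    @dataclass
    class AnswerabilityWrapper:
        """Routes stakeholders' questions (from patients to regulators) either
        to logged information or to the responsible humans behind the system."""
        records: List[DecisionRecord] = field(default_factory=list)
        human_contacts: Dict[str, str] = field(default_factory=dict)

        def log(self, record: DecisionRecord) -> None:
            self.records.append(record)

        def answer(self, question: str, case_id: int) -> str:
            record = self.records[case_id]
            if question == "why":
                return record.rationale
            if question == "safeguards":
                return ", ".join(record.safeguards) or "no safeguards recorded"
            if question == "who_answers":
                return self.human_contacts.get("clinical_lead", "unassigned")
            return "Question forwarded to the responsible human team."

The hard research questions, of course, are which questions stakeholders are owed answers to, and who within the sociotechnical system must stand behind those answers; the sketch only shows the form an interface for delivering them might take.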

 

Just as individual persons gradually become more responsible and trustworthy social partners by being made to answer for what they do, we aim to show that autonomous sociotechnical systems can do the same.