Asking the Experts: So, what is Trust anyway?

February 2022 | A project blog update for Trust Me? (I’m an Autonomous Machine)

We held a workshop with 22 experts drawn from sectors including the tech industry, the media, and academia. The aim was to establish a ‘master narrative’ of Trust.

This project aims to bridge the gap between expert and everyday people’s views of how and why we should (and perhaps shouldn’t) trust Autonomous Systems, from mundane everyday technologies like recommendation systems and automatic checkouts through to emerging disruptive technologies like self-driving cars.

In Summer 2021 we held a workshop with experts and generated many examples of how a sense of trust in autonomous systems is constructed. The examples were extremely diverse, ranging from systems that recommend motor mechanics through to drones used for search and rescue.

The workshop was necessarily conducted remotely. Using an interactive whiteboard alongside videoconferencing, we asked participants to begin by creating ‘trust maps’ of the technical, social, legal, and commercial concerns relevant to each example. The second half of the workshop then focused on distilling these ideas to identify common themes.

The most obvious takeaway from the workshop is that trust is extremely complex; it cannot be captured by a single view or perspective. Trust relates directly to a system’s context of use and the people involved. How systems are understood and communicated matters not only to those who use them, but also to those who design and regulate them. When it comes to trust, one size does not fit all. As one of our experts put it: “trust is very much circumstantial […] based on the application and who you’re affecting, your trust is going to differ”.

Part of the interactive whiteboard used to run the workshop

Sketchnotes summarising the workshop discussions

An unexpected direction our conversations took was the question of whether autonomous systems trust us. While the idea of ‘the human-in-the-loop’ is widely cited as a way to help build trust, as one of our participants put it, this “assumes that a human knows best, and we all know humans haven’t always made the best decisions all the time”. Similarly, humans are often responsible for ‘gaming’ autonomous systems, deliberately trying to derail their normal function. Thought of this way, trust between humans and machines isn’t a one-way street but a reciprocal relationship.

One of the central issues with trust is understanding how the use of a system relates to the risks that arise from it. While machines might help us understand those risks, it will always fall to a human to make the value judgement, weighing risks against benefits. Our experts agreed that making the consequences of a system’s autonomy transparent and explainable is key to these judgements.

For any autonomous system, trust is uniquely constructed for each person involved with it: its designers, its operators, its regulators, its users, and even people connected to those users. Moreover, each of these stakeholders may draw upon one of many perspectives to form their sense of trust. To summarise these ideas, we have introduced the notion of Trust as a Distributed Concern.