Hack me if you can: are drivers keen to use automated vehicles when exposed to cyberattacks?

January 2022: A project blog for Understanding user trust after software malfunctions and cyber intrusions of digital displays: a use case of automated automotive systems

If you were to use an automated car capable of handling its own acceleration, braking and steering (known as SAE Level 3 automation), what would you do if it were hacked? This is the question we examined in the driver-in-the-loop simulator at the National Transport Design Centre (NTDC).

 


Figure 1: The driving simulator control room, at the NTDC.

To investigate the effects of a cyberattack on drivers’ behaviour (e.g. resuming control of the car, engaging in a non-driving related task) and attitudes (e.g. whether people like driving automation or not), we conducted a study consisting of the following stages:

 

1. Pre-experimental questionnaires assessing personality traits, driving habits and demographics;

2. Four trials in the driver-in-the-loop simulator: one to familiarise participants with the system, one control condition (no cyberattack), one with an explicit cyberattack (a ransomware message popping up on the centre console) and one with a silent failure (the turn signals fail to activate when the automated car performs an overtaking manoeuvre);

3. A questionnaire measuring trust and attitudes towards the automated driving system, administered after each of the three experimental conditions;

4. A final questionnaire and a semi-structured interview to collect qualitative feedback on participants’ experience in the simulator.


The study was conducted in November and December 2021 at Coventry University and involved 35 participants. It was a challenging study to run given the pandemic-related public health restrictions in place at the time.

 

The results are quite interesting, as a large variability in behaviours and attitudes was observed. For instance, some participants did not worry much about the cyberattack, whereas others were frightened and resumed control from the automated driving system, which sometimes led to a crash. At the time of publishing this blog, the data analysis and coding are ongoing, and a large chunk of eye-tracking data remains to be analysed. Gaze behaviour data can shed light on where participants looked, how often and for how long, which can indicate to what extent they trusted the automated driving system.

 

A man sweeping the floor next to the driving simulator

Figure 2: Do you need a clean simulator to get a clean data set? That’s the question.

 

This research is supported by the UKRI Trustworthy Autonomous Systems Hub, awarded to Dr William Payre. The team includes Jaume Perelló-March, Dr Giedre Sabaliauskaite, Dr Hesamaldin Jadidbonab, Prof. Stewart Birrell and Prof. Siraj Shaikh.