Car Science: AI road rage
Hello! You're reading Car Science; please consider subscribing - it's free and it helps me.
Hey,
Well, we've made it to another Friday. At what cost.
editor's note: apparently at the cost of me realising what day it was and sending out Car Science, which is very specific but I suppose a relatively small price to pay
There's a lot of interesting discussion about AI in automotive right now. Like: can Elon Musk take time out of ruining the only social media I'm any good at to claim that extremely stupid stuff he said about "full self driving" was maliciously deepfaked to make him look like someone who could be investigated? Much to think about.
This isn't the news edition, though, so you can breathe a sigh of relief: we'll just be looking at the definition of road rage and whether robots might start doing it.
Pretty much the first thing anyone asks about autonomous driving is: what happens when it kills someone? Most of us are aware that cars are dangerous; they kill a hell of a lot of people, and it's not just the drivers - it's mechanical failures, it's the weight and power of vehicles. Human error, distraction and lack of skill account for a lot, but the fact is that cars are big, powerful, dangerous things, and existing in close proximity to them carries some level of inherent threat.
I say 'pretty much the first thing' - in fact we were asking this in, like, 2016, and now things sold as autonomous cars that absolutely are not (the closest we have is Level 2 autonomy) have killed a bunch of people, from their drivers to pedestrians. Probably the one thing most people know about autonomous vehicles is their uncanny ability to hit people.
That's normally classed as a malfunction, not something inherent to the autonomous programming. Generally, driver assistance software like cruise control and AR displays is intended to reduce traffic accidents; the great hope of AI on the roads is that robot drivers won't be subject to the sort of poor judgement humans are, especially road rage.
Probably everyone's lost their temper driving at some point, whether it's exasperation at an incomprehensible road system (shout out Lewisham roundabout) or the behaviour of other drivers (shout out anyone behind the wheel of a Nissan Qashqai). Simulating that is difficult, but a study from the University of Warwick gave it a try.
“While it’s unethical to let aggressive drivers loose on the roads, participants were asked to recall angry memories, putting them in an aggressive state, while performing a driving simulation. These were compared to a control group, who weren’t feeling aggressive,” explained Zhizhuo Su, the lead author of the study.
The idea of the research was to narrow down specific traits that would identify an aggressive driver. In theory, these would then help autonomous drivers spot cars exhibiting those traits and modify their own driving to be more cautious around them.
The three specific traits the study identifies are:
Aggressive drivers drive, on average, 5 km/h faster than non-aggressive drivers;
Aggressive drivers also make more mistakes than control groups – such as not indicating when changing lanes;
Aggressive driving is categorised as any driving behaviour that intentionally endangers others psychologically, physically, or both.
Two of those aren't especially easy to measure on the roads, at least not without being around an aggressively driven car for an extended period, which presumably the whole idea would be to avoid. But a car going a measurable 5 km/h faster than the cars around it might be easier to spot, both on traffic cams and as an AI driver. Even when there aren't other cars around, noting the average speed of drivers encountered on a stretch of road would probably be enough information.
(drivers making more mistakes could also just be exposed to traffic fumes, worryingly)
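For what spotting that might look like in practice, here's a minimal sketch in Python. Everything in it - the function name, the sampling scheme - is my own invention rather than anything from the study or a real AV stack; only the 5 km/h margin comes from the research:

```python
from statistics import mean

# The 5 km/h margin is the study's figure; everything else here is
# made up for illustration.
AGGRESSION_MARGIN_KMH = 5.0

def looks_aggressive(target_speeds_kmh: list[float],
                     traffic_speeds_kmh: list[float]) -> bool:
    """Flag a vehicle whose mean speed runs at least 5 km/h above the
    mean of surrounding traffic (or, with no other cars about, above
    the running average already observed on this stretch of road)."""
    if not target_speeds_kmh or not traffic_speeds_kmh:
        return False  # not enough data to judge either way
    return mean(target_speeds_kmh) - mean(traffic_speeds_kmh) >= AGGRESSION_MARGIN_KMH

# A car averaging ~88 km/h in traffic sitting around 80 km/h gets flagged:
print(looks_aggressive([87.0, 89.0, 88.5], [80.0, 79.5, 81.0]))  # True
```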
What's an AI supposed to do about an aggressive driver, though, other than not risking a pass and keeping its distance? And although this wasn't the intention of the study, if aggressive driving now has a measurable set of parameters, what happens if those fit an AI? After all, drivers speed more when they're using adaptive cruise control assists.
In theory, AI wouldn't have emotions, so it wouldn't get mad at being cut up on a roundabout. But it could well start making mistakes and driving a little faster, so although it would be hard (without getting fully into the specific programming) to prove it intended to harm anyone with its actions, there's no reason an AI couldn't exhibit aggressive driving.
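And since those parameters are just telemetry, nothing stops the same check being pointed inward. A hypothetical self-audit - again just a sketch with made-up names - combining the speed margin with the study's 'mistakes' trait, counted here as unsignalled lane changes:

```python
from statistics import mean

def self_audit(own_speeds_kmh: list[float],
               traffic_speeds_kmh: list[float],
               lane_changes: int,
               indicated_lane_changes: int) -> bool:
    """Hypothetical self-check: turn the study's two measurable traits
    on the AI's own telemetry instead of another car's."""
    # Trait 1: running 5+ km/h above the mean of surrounding traffic.
    speeding = (bool(own_speeds_kmh) and bool(traffic_speeds_kmh)
                and mean(own_speeds_kmh) - mean(traffic_speeds_kmh) >= 5.0)
    # Trait 2: 'mistakes', counted here as unsignalled lane changes.
    sloppy = indicated_lane_changes < lane_changes
    return speeding or sloppy

# An AI matching traffic speed but skipping one indicator still gets flagged:
print(self_audit([80.0, 81.0], [80.0, 80.5],
                 lane_changes=3, indicated_lane_changes=2))  # True
```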
Of course, that would mean an AI that could actually drive would have to exist. Which they don't - not yet, and maybe not ever. Car companies swing back and forth on this, with VW shedding Argo and Ford investing heavily within a few months of each other, but ultimately fully autonomous driving is still a long way off.
That's reflected by the Autonomous Drivers' Association, a pre-emptive NGO set up by Bryn Balcombe, former chief technology officer of the prototype AI racing series Roborace, to represent autonomous drivers.
ADA has a 'Turing test' of conditions that AI drivers have to meet for their use on public roads to be conscionable:
The Three Laws for AI on our Roads
Prove AI meets, or exceeds, the performance of a competent & careful human driver
Prove AI never engages in careless, dangerous or reckless driving behaviour
Prove AI remains aware, willing and able to avoid collisions at all times
If only anyone worried that much about lower-stage pseudoautonomy, eh.
Hazel
x
ps: if you like Car Science and for some reason happen to be a commissioning editor then I currently quite badly need work, after a big contract fell through. Please feel free to recommend me to people.