Car Science: crash prone
Hello! You're reading Car Science; please consider subscribing, it's free and helps me.
Hey,
Happy Friday, assuming I send this on Friday. If so, I'll have achieved the unprecedented goal of both sending this on the day it's supposed to go out and on the subject I said it'd be about.
This one is more about driving than tech. And a little about neurodiversity and how we understand each other. And driving in circles. Mostly because the other topics this week were so staggeringly depressing that like, we don't want to go there. Not this week. Let's take a little break from that. Next week we'll get back to methane leaks and the steelocalypse.
Normally simulation for driving in circles is about race cars. This is a little more suburban. The University of Michigan has created a detailed simulation of this one roundabout.
It's a roundabout in Ann Arbor, a small city just outside Detroit. It's unusual simply for being a roundabout in the US, where traffic light junctions are the norm; despite roundabouts usually being safer, America just isn't keen.
Apart from Florida, the roundabout capital of the United States with 1,283 - more than ten times the number in Milton Keynes. What does this mean? Does it explain either place? That's one for the scientists of the future.
The roundabout in question ranked 15th in a list of the most crash-prone Michigan intersections, racking up 79 crashes and one injury during 2021. It's between State Street and Ellsworth Road and by all accounts looks pretty innocuous, except for the regularity of accidents there: about 1.5 per week.
So the modelling must be an attempt to reduce accidents there, right? Well, err, wrong. It's that this is one of the more complex road intersections the university had immediate access to, in terms of how much drivers need to understand other drivers and road users.
Because the drivers on the intersection have to react to each other, rather than relying on metered traffic signals, it's easy to simulate how it should work: driver approaches the roundabout, waits for a clear space, drives into that space, makes their exit.
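If you wanted to code up that "how it should work" version, it's barely more than a gap check. Here's a toy sketch of that idealised gap-acceptance driver; the four-second threshold and the names are mine for illustration, not anything from the university's simulation:

```python
import random

# Toy model of the "idealised" roundabout driver: wait for a big enough
# gap in circulating traffic, then go. The 4-second threshold is an
# illustrative guess, not a figure from the U-M simulation.
CRITICAL_GAP_S = 4.0

def idealised_driver(gap_to_next_car_s: float) -> str:
    """Approach, wait for a clear space, enter, exit. No reading of other drivers."""
    if gap_to_next_car_s >= CRITICAL_GAP_S:
        return "enter the roundabout"
    return "wait at the yield line"

# Try it against a few random gaps in the circulating traffic.
for _ in range(5):
    gap = random.uniform(0.0, 8.0)
    print(f"gap of {gap:.1f}s -> {idealised_driver(gap)}")
```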
That's not how people actually behave, though, and this Ann Arbor roundabout is two-lane, meaning that although it's small it needs quite a lot of communication, cooperation and understanding from drivers to get around safely. Certainly a lot of awareness that people might not be about to do what they looked like they set out to do, having found themselves in the wrong lane or just been wrongfooted by the process.
All that intuitive behaviour apparently made it perfect for simulation, to train AI within SAFE TEST (Safe AI Framework for Trustworthy Edge Scenario Tests), a mixed virtual-real environment for reducing the number of hours of testing autonomous systems would need to undertake to be considered road safe. The cars perform real manoeuvres, on real pavements and road surfaces at test sites, while the virtual environment adds hazards such as pedestrians, cyclists and other drivers behaving normally or erratically, to trial responses.
Systems for testing the roadworthiness of computing that doesn't exist are all very well. But there is this problem, which we discussed last time out, that self-driving cars just don't understand human social cues. Particularly in situations like roundabouts and close traffic, where the language of who goes and who stops is based a little on the rules of the road but much more on lots of specific social inputs, including localised behaviours.
We've all been to that one junction where you know you need to watch for X even though the normal behaviour would be Y but there's a hidden exit and also people have to cross the road in a place that's safer than the actual crossing but you need to be aware of it, etc. For a human, these are hard to understand in a new place. Many of us have made accidental traffic faux pas through no fault of our own.
Professor Barry Brown says although driving might look rules-based on a programming level, it's actually not at all. It's more, erm, vibes.
"The ability to navigate in traffic is based on much more than traffic rules. Social interactions, including body language, play a major role when we signal each other in traffic. This is where the programming of self-driving cars still falls short. That is why it is difficult for them to consistently understand when to stop and when someone is stopping for them, which can be both annoying and dangerous," was his conclusion from years of studying self-driving cars.
Not being able to read cues is one thing. I'm pretty severely autistic and if I'm honest, I'm very poor at social cues. I don't really know how to react normally and I'm getting worse, I suspect. It's not something I've ever been able to learn.
I am not an AI, though. I'm just someone who perceives things a little differently to some people. I still have opinions, which are rapidly shuffled through, about what social cues might mean - and what might be one.
One of the barriers to autonomous driving is that being in control of a vehicle is surprisingly possible to automate, and that has a lot of uses in agriculture and mining, where vehicles need to follow the same paths and complete the same behaviours. Computers are actually a lot better at repetitive accuracy than humans are, in that respect. Automatons can do that job with fairly minimal supervision, and adding any layer of intelligence only makes them slightly better at it.
We confer a lot of human features onto cars because when we're driving we are working with them in collaboration. That cooperation is something we are fundamentally hardwired for; I covered this last issue, but humans are seven times more likely to do minor things to help than they are not to.
When you're driving a car there's a significant extent to which you're helping it. Without getting too far into auto-journalist wank, driving is a lot of small things: knowing when to shift gear, hearing the car screaming when it's trying to accelerate or brake, shifting it so it can turn correctly, all based on what the car tells you it needs. On modern cars that tends to be some kind of alert; on older ones it might be a relatively subtle cue, that 'feel' for a car that gets lionised.
'Autonomous cars' would be a misnomer even if they were real, because hardware and software are not the same thing. The software would still need to manage the limitations and feedback of the hardware, as an AI driver rather than a singular system; the AI's sensors are more embedded into the chassis, but the car doesn't stop being one without the AI, even if you'd need to hotwire it and stitch some manual controls in.
I'm not an AI because whatever the hell is going on with how I read situations differently to other people hasn't been programmed. The basic human things, like being seven times more likely to be kind than not, are still the same in me as in anyone else. Smarter people than me have written about this, but sometimes it feels as though making self-driving cars is treated as a manufacturing problem, something to fix on the car itself, rather than the search to actually build something intelligent enough to drive.
Needless to say, ChatGPT won't be passing a practical driving test any time soon. So it seems strange that we're building complex simulations to test how artificial intelligences that don't exist could better read social cues on a crash-prone roundabout, instead of just looking at why people misunderstand each other enough to crash there more than once a week.
The answers are somewhere in the simulation: the University of Michigan noted that the intersection is busy and reliant on intuition, that it's crowded for its function and complex. So why try to fix a theoretical driver, not the actual roundabout?
See you next Wednesday
Hazel
x