The Uber Crash
A couple of days ago, an autonomous car operated by Uber crashed into and killed a pedestrian. This is, I believe, the first pedestrian death caused by an autonomous car. Unsurprisingly, it has generated a lot of discussion. Video available here.
I’m writing this because some of the coverage I’ve seen left out what I thought was important, so that’s what this post covers.
Context
Autonomous vehicles have been in testing for a few years now. A number of companies are working on them, including Uber, but also Waymo (formerly the Google self-driving car project), Tesla, and others.
When the technology is ready, it will be a great thing: nobody will need to waste time driving, whether in their own car (you can read a book while being driven) or in a taxi. It will also mean a loss of employment, which in a capitalist economy where almost everybody exchanges their labour for money is a big problem. But the problem there is the economic system, not the loss of jobs itself: the dream is that humans should not have to spend time on menial work.
The main discussion is about safety. Getting cars to drive themselves safely is obviously difficult, because you have to replace the human brain. And it isn’t just factual safety, ie autonomous vehicles having 3x fewer collisions than human-driven ones, but also the perception of safety. That’s not to say the car isn’t at fault, but it seems we want software to be 100% safe, whereas we allow humans to drive as long as they are safe enough.
(Missing from the discussion is the fragility of it all. What if it turns out that you can stick a photo of something in the road that makes cars stop? Then pranksters can put cat photos all over the place and halt traffic. Or if some part of the signals infrastructure – eg a phone mast – breaks, and no cars can work. Or what if a glitch in the software means it doesn’t work on February 29th. Etc.)
The Safety Driver
This one was being tested with a human ‘safety operator’ in the car, who can step in and take control. On this occasion, it seems they were looking at something else in the car at the time, and only glanced up just before the collision. The narrative will heap blame on that operator, as Uber wants to say it is not at fault and find a fall guy, and the media loves an individual-blame story over a discussion of the actual issues.
I’m not going to talk about the safety operator; that’s not what’s interesting. Instead I want to touch on a few themes which are not being focused on.
The Crash Itself
Let’s establish some facts (based on the video from the car):
- The car is driving at night; it is dark. There are some streetlights, but these mark out the edge of the road rather than illuminating it.
- The car has its headlamps on, but not on full beam, so it can only see a short distance in front of it – perhaps 20 metres.
- The car was driving at, I would guess, 30-40mph.
- The victim was crossing the road, wheeling a bike. They were not wearing any lights or reflective gear.
- The victim becomes visible about one second before the collision.
- It would take the car approximately 2.5s (somewhere between 2 and 3 seconds) to stop fully, while human reaction time would be about 0.25s. So there would be ~0.75s of braking before impact, which is significant but amounts to only about a third of a full stop (a quick sketch of the arithmetic follows this list).
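To make that arithmetic concrete, here is a minimal back-of-envelope sketch in Python. It assumes constant deceleration and uses the guessed figures from the list above (40mph, 1s of visibility, 0.25s reaction, 2.5s to a full stop); none of these are measured values.

```python
# Back-of-envelope check of the figures above, assuming constant deceleration.
# All inputs are the post's guesses, not measured values.

MPH_TO_MS = 0.44704          # miles per hour -> metres per second

v0 = 40 * MPH_TO_MS          # initial speed: assume 40mph (upper end of the guess)
t_stop = 2.5                 # time for a full stop from v0 (s)
t_visible = 1.0              # victim visible this long before impact (s)
t_react = 0.25               # assumed reaction time (s)

decel = v0 / t_stop                    # implied deceleration (m/s^2), roughly 0.7g
t_brake = t_visible - t_react          # braking time actually available (s)
v_impact = v0 - decel * t_brake        # speed at the moment of impact (m/s)

print(f"braking time: {t_brake:.2f}s ({t_brake / t_stop:.0%} of a full stop)")
print(f"impact speed: {v_impact / MPH_TO_MS:.0f}mph, down from 40mph")
```

On those assumptions the car gets roughly 30% of a full stop in, and hits at just under 30mph rather than 40mph – which is where the ‘from 40 to 30’ figure later in this post comes from.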
The Starting Point/Comparator
There seems to be an assumption that the robot car is at fault simply because an autonomous vehicle is involved. If this were an ordinary crash – there are about 1,700 road deaths each year in the UK – it would not be headline news; because the car is autonomous, it is. That is understandable to report on, of course, but the reporting shouldn’t start from the assumption that the car is at fault.
Instead, we should compare the autonomous car to a human-driven one; that gives us a useful baseline. Would the outcome have been different if the car had been human-driven?
Let’s talk about the victim
There has been little discussion of whether the victim was at fault. This is partly understandable: nobody likes blaming dead people, and media-wise this isn’t the most interesting part of the story.
But we had the same problem in the bike-collides-with-and-kills-pedestrian case in London late last year. The focus was on how the bike didn’t have a front brake and couldn’t stop in time (even though the cyclist instead tried to swerve, which may well have been safer). There was little discussion of the person who walked out into the road without looking, while on their phone, right into the path of the bike (which was where it should be: on a road, cycling at a reasonable speed).
Pedestrians are often quite dangerous and reckless, in my experience of cycling round London for the last year. Cyclists are often dangerous and reckless too, as are various other road users, but we shouldn’t respond to ‘pedestrians are reckless’ with ‘so is X’. Pedestrians are often distracted, not paying enough respect to the tonne of metal moving at speed nearby. Crossing roads, most people act like a deer or a pheasant: ‘can I currently see danger? No, ok, let’s go’ – and then assume that because they saw no danger when they started, none will appear. People frequently cross to the other side of the road near a corner without checking whether a new vehicle has come round that corner.
The victim of the Uber crash was crossing a road in the dark and did not appear to be paying any attention to the vehicle that hit them. It was dark, and they were not wearing anything to make themselves visible, whereas the car had its lights on and could have been seen from a long way away. That should be mentioned here too.
Was the car at fault?
So now we can ask: if the car had been driven by a human, what would have happened? It’s an open question, and I care more about the framing of the question – ie the discussion above – than about whether you agree with my answer to it.
I think: a human driver, if they were paying attention, would have slammed on the brakes and been braking for perhaps half a second before colliding. That might have been enough to slow the car from 40 to 30mph, which (according to the road-safety adverts, at least) turns an 80% chance of death into a 20% chance. If they weren’t paying attention – chatting to a passenger, on their phone, looking out of the window or whatever – then they would probably not have braked at all, and the outcome would have been the same.
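For illustration, here is a hedged sketch of that speed-to-risk claim. The only anchoring data are the two figures from the adverts (roughly 80% chance of death at 40mph, 20% at 30mph); the straight-line interpolation between them is purely my assumption, not a real injury-risk model.

```python
# Crude interpolation of the advert figures: ~80% chance of death if hit
# at 40mph, ~20% at 30mph. The straight line between those two points is
# an assumption for illustration only.

def fatality_risk(speed_mph: float) -> float:
    """Linear interpolation between the advert's two data points."""
    lo_speed, lo_risk = 30.0, 0.20
    hi_speed, hi_risk = 40.0, 0.80
    t = (speed_mph - lo_speed) / (hi_speed - lo_speed)
    t = max(0.0, min(1.0, t))  # clamp outside the 30-40mph range we know about
    return lo_risk + t * (hi_risk - lo_risk)

for mph in (40, 35, 30):
    print(f"hit at {mph}mph -> ~{fatality_risk(mph):.0%} chance of death")
```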
So the car is probably at fault. Its sensors did not detect the person, whereas an attentive human driver probably would have.
However, there’s an interesting twist: the car’s sensors are usually better than human eyes. According to Bryant Walker Smith, a University of South Carolina law professor who studies autonomous vehicles, ‘[it was] a dark road, but it’s an open road, so Lidar (laser) and radar should have detected and classified her’.
The autonomous car has the potential to be better than a human driver, because its sensors can be better than our eyes (eg work in the dark) and it can also pay attention to much more than we can (eg looking forwards and backwards simultaneously). On this occasion it failed.
Manslaughter charge?
Our cultural approach to justice is: someone did something wrong, punish them. The family of the victim have called for a criminal case.
Maybe the safety operator was not doing their job properly, and that’s one aspect of it. Were they so distracted that they were unsafe? Were they on their phone, or were they looking down to operate the vehicle? That’s for the investigators to establish, and is probably best not speculated on by the press, witch-hunt style.
The more significant question is about the software. Presumably something went wrong in it, such that the pedestrian wasn’t detected. Is Uber liable for this?
– They likely breached their civil law duty of care (aka ‘negligence’) to the victim, because that is judged on a strict-liability, ‘no fault’ basis. It doesn’t require any individual to have been subjectively at fault; instead it asks whether the person (here, Uber) fell below the standard of care they owed, and that does not require subjective fault.
– Whether it is manslaughter depends on whether something was grossly negligent. For Uber, this is not about the safety operator but about the software they made. They are testing software, and of course it won’t be perfect; it probably never will be. One way of looking at it is to ask what went wrong in this particular case. Another is to look at their general crash record: if over their testing period they have had significantly fewer incidents than human drivers would – eg if a safe human driver has about a 1% chance of an accident each year, and Uber’s cars have been running at about 0.2% (in human-driver-year equivalents) – then maybe they are sufficiently safe not to be liable here (a rough sketch of that comparison follows).
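Here is what that fleet-versus-human comparison might look like in practice. Every number below is hypothetical – the annual mileage, the fleet miles, and the incident count are all made up for illustration, not real Uber data.

```python
# Hypothetical fleet-vs-human comparison. All numbers are made up for
# illustration (annual mileage, fleet miles, incident count) -- none of
# this is real Uber data.

AVG_MILES_PER_DRIVER_YEAR = 10_000   # assumed annual mileage of a typical driver

def incidents_per_driver_year(total_miles: float, incidents: int) -> float:
    """Incident rate expressed per human-driver-year equivalent."""
    driver_years = total_miles / AVG_MILES_PER_DRIVER_YEAR
    return incidents / driver_years

fleet_rate = incidents_per_driver_year(total_miles=3_000_000, incidents=1)
human_rate = 0.01                    # the 'safe human driver': ~1% chance/year

print(f"fleet: {fleet_rate:.2%} incidents per driver-year equivalent")
print(f"human: {human_rate:.2%} incidents per driver-year")
print("fleet looks safer" if fleet_rate < human_rate else "fleet looks worse")
```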
So, those are my thoughts on the Uber crash and how it should be approached and discussed.