The death of a pedestrian due to an autonomous vehicle was inevitable. But the accident involving Elaine Herzberg and a self-driving Uber car can be traced not only to potential mechanical faults but to problems in the way that all autonomous vehicle manufacturers test their machines.
Automated vehicles navigate using a combination of video, radar, lidar (which uses lasers to measure distances to objects) and, occasionally, sound sensors. Current evidence indicates Uber’s car was using video and lidar at the time of the accident, but there is no way to be certain exactly who or what is to blame until the incident is properly investigated.
What is known is that the two major systems that could have caused the car to fail are the aforementioned sensors, which may have misread the scene, and the algorithms that processed the data. It is possible that the car spotted Herzberg but misread her speed or position, or was confused by the fact that she was pushing a bike, rather than failing to notice her at all.
Martyn Thomas CBE, professor of information technology at Gresham College and fellow of the Royal Academy of Engineering, concludes from the currently available information that “we don't know what happened but quite clearly the technology was not fit for purpose”.
As the dust settles on the fatal Uber crash, Thomas believes our attention should turn to the wider context of autonomous vehicle development. He has particular complaints about the method by which autonomous vehicle builders test their cars.
In Thomas’ opinion, the whole sector’s testing process lacks a cohesive shape. “In a scientific experiment you have a hypothesis, and you try to prove it,” he explains. “At the moment, all they are doing is conducting a set of random experiments with no structure around what exactly they need to achieve and why these experiments would deliver exactly what they need to achieve.”
The immense, general goal of human-standard driving under all conditions, combined with no measurable way of achieving it, renders the tests somewhat meaningless. The old adage of taking small (and quantifiable) steps to realise your aims is being ignored by some of the brightest minds in tech when it comes to their self-driving projects.
The process is made additionally difficult by frequent incremental changes to the hardware and the learning process of the software itself. “If your system is learning all the time, then it’s changing all the time, and any change could make things worse as well as better. So you could completely undermine the work you are doing,” Thomas explains. All these factors leave testers with too many variables to effectively work through any problems buried deep in a car’s systems.
This problem extends to the regulators, who Thomas says are unwilling to devise the necessary criteria for testing and licensing for fear of deterring innovative companies from setting up in their jurisdictions.
In response to the accident, Thomas says the public should put pressure on regulators to set appropriate standards for autonomous cars, so that manufacturers can shape their tests around them and provide the necessary evidence that their technology is safe enough for use on public roads. “If we don't have a debate about what level of evidence is going to be needed and make the regulations fit for purpose, then I think we're heading off down the wrong path.”
Furthermore, he wants autonomous cars to be made very easy to spot on the roads: “If these cars are going to be moving around on streets where there are pedestrians, then the pedestrians need to have a decent chance of realising they are coming and taking extra care around them.”
That Elaine Herzberg was killed is undeniably tragic. But what we should hope is that her death can make manufacturers of autonomous vehicles reflect on their work up to this point, and enact the changes needed to make their tests effective and ultimately make their cars as safe as possible.
This article was originally published by WIRED UK