The death caused in March by an Uber self-driving prototype in Tempe, Arizona, has been a serious speed bump on the road to what many believe is an imminent revolutionary technology. The car’s braking system was apparently misprogrammed, and the human safety driver may have been distracted.
In media coverage, all the emphasis has been on the unacceptability of even one fatality. Cities and states (such as Arizona) that had been willing to let self-driving cars be tested on public roads have rethought their permissiveness. But no story I have seen emphasizes that the major advantage of self-driving cars is that they promise to save the great majority of the 30,000 to 40,000 lives currently lost on this country’s highways every year. Even in the current experimental and testing stage, many lives have surely been saved compared with the same mileage driven by humans. How many lives were saved on the very day of the accident itself?
Yes, of course, a life matters, and manufacturers should learn from the accident. But unless we harbor unrealistic expectations of perfection, why the sudden caution? Does it mean that we will hold machines more accountable than human drivers? That a death caused by machine failure is somehow more of a tragedy than the same person being killed by a human driver? That losing, say, 1,000 people on the highways per year to computer error will somehow be worse (or at least feel worse) than losing 35,000 to human error?