Uber’s Fatal Crash Raises Big Questions About Self-Driving Cars

On Sunday evening, a self-driving Uber struck and killed a woman crossing the street in Tempe, Arizona. The crash appears to be the first in which a self-driving vehicle has killed someone, and it could alter the course of the burgeoning autonomous driving industry.

Released on 03/20/2018

Transcript

[Narrator] A woman has died after being hit by a self-driving Uber vehicle in Tempe, Arizona. This is believed to be the first time an autonomous vehicle has killed somebody, and chances are it won't be the last. While crashes are inevitable, this one underscores just how new the technology is and how little society at large has done to prepare for the onset of self-driving robotic vehicles.

Those who are already concerned about automated driving will be much more concerned. Those who are enthusiastic will, I think, ascribe this to a failure, but put it in the context of the tragedies that we have every day on the road.

[Narrator] In a country where human drivers kill 40,000 people every year, a solution that does away with distraction, drinking, and sleeping behind the wheel sounds like a great idea. And for the companies working to build the things, there are trillions of dollars to be made. It's all coming soon, too. Waymo has taken the human backup out of its cars and wants to start commercial service in a few months. Uber is sending robo-trucks hundreds of miles and figuring out how to make them work at the logistical level.

In the US, every state makes its own rules about where the robots can drive and what sort of information their creators have to share with the government and the public. Arizona's rules are especially lax, which is one reason why Uber tests there and Waymo wants to launch its ride-hailing service in the state. But now that somebody has been killed, attitudes may change. And questions that some people have been asking quietly will start to echo.

First, just how safe are these cars, and do they belong on public roads before they're fully ready? So far, no one even has a clear idea of what "ready" actually means. Second, is the human backup enough to keep everyone safe when the car makes a mistake? Third, should governments have more control over how this technology works, and would they have the right expertise to make thoughtful regulations?

Each side is really going to see in this what they want, at least at this early stage. My hope, my expectation, is that the serious actors in this field, the developers, regulators, their primary interest is in figuring out what happened and how this can be prevented in the future.

[Narrator] And with technology that can save lives or take them, many people want a better, or at least clearer, way forward. In other words, maybe it's time to tap the brakes.