Fatal Tesla Autopilot Crash is a Reminder Autonomous Cars Will Sometimes Screw Up

Posted on June 30, 2016, 10:12 pm

A fatal accident involving Tesla’s automated driving system raises the question of how safe driverless cars need to be.

Tesla Motors’ Model S Sedan.

More than 30,000 people are killed by cars in the U.S. each year. That figure is often cited by people working on autonomous driving technology at companies such as Google and Tesla as the biggest reason why any technology that can significantly reduce road deaths deserves serious attention.

But even if automated cars can be many times safer than conventional ones, they will still be involved in accidents. No software can be perfect. And as self-driving technology matures, regulators and society as a whole will have to decide just how safe these vehicles need to be. Indeed, it has been argued that in some situations autonomous vehicles must be programmed to actively choose which people to harm.

Those thorny issues became more concrete today with news that Tesla is being investigated by the U.S. National Highway Traffic Safety Administration after a fatal crash involving the company's Autopilot automated driving feature, which can change lanes and adjust speed during highway driving on some of the company's cars.

In Florida in May, a Tesla Model S sedan drove into a tractor-trailer that was crossing the road ahead while Autopilot was in control of the car. Neither Autopilot nor the driver applied the brakes. In a blog post Thursday, Tesla said that Autopilot failed to register the white side of the trailer against the bright sky.

Tesla’s Autopilot can steer the car, detect obstacles and lane markings, and use the brakes, all on its own. But it is far less capable than a human driver and lacks the sophistication and high-detail sensors seen in more mature autonomous car projects such as Google’s.

Tesla has been criticized for promoting the convenience of Autopilot, a name that suggests no human intervention is needed, while also maintaining that drivers must constantly be ready to take over from the software. The leader of Google’s autonomous car project, Chris Urmson, has said his company’s experiments have shown that humans can’t be relied on in that role, because they quickly come to trust that the car knows what it’s doing. All the same, Tesla CEO Elon Musk has said his company’s data suggests Autopilot is twice as safe as human drivers.

We don’t yet know exactly what happened in May’s fatal accident. Tesla’s statement emphasizes that the driver knew he should always keep an eye on what Autopilot was doing. But if the NHTSA finds the design of Autopilot to blame, Tesla could be forced to issue a recall, or feel it has to dumb down the feature. That could hurt both Tesla and enthusiasm for the technology in general.

Whatever the outcome of NHTSA’s investigation, the incident is an opportunity to consider the standards to which we hold autonomous driving software and the companies that design it. If it is to be widely used, we will have to accept it being involved in accidents—some fatal, and some caused by its own failings.

Human drivers set a low bar: about 90 percent of crashes are caused by human error, and dumb mistakes like texting or driving drunk kill far too many people. It’s easy to see how machines could improve on that. But deciding how much better they need to be will be much more difficult.
