When discussing self-driving cars, the trust factor usually crops up at some point. More often than not, the focus is on the lack of faith many of us (three out of four drivers, according to a recent AAA survey) have in cars that drive themselves. In terms of safety on our roads, though, perhaps the real faith issue should be: Can we trust humans with autonomous technology?
Putting that question squarely on the front burner is the recent death of a Tesla driver in Florida, whose Model S, controlled by Tesla’s Autopilot, failed to recognize the white trailer of a semi turning in front of him. The Tesla submarined under the trailer, shearing off the sedan’s roof. The driver seems to have put more faith in the self-driving system than he should have; apparently, neither the system nor the driver made any attempt to apply the brakes. Only luck prevented anyone else from being injured or killed.
In this case, putting too much faith in Autopilot was a judgment call. We make judgment calls every day. Are we ever wrong? Sure. As driver-assistance technology becomes more widespread, the number of judgment calls we make in relation to it will increase. When faced with autonomous-driving systems, can we be trusted not to follow the lead of that Florida Tesla driver by placing too much confidence in their capabilities?
More advanced than most of the current generation of driver-assistance systems available in assorted makes and models, Tesla’s Autopilot can perform a number of tasks. Using its array of cameras, radar, ultrasonic sensors and mapping data, it can steer down the highway, change lanes and respond to the speed changes of surrounding traffic, as well as locate an available parking space and park the car after reaching its destination. In ideal conditions, it can do all of these things without much in the way of driver input.
Wow. That’s pretty comprehensive, right?
The Big But
Yes, Autopilot is leading-edge comprehensive; it simply can’t accomplish all its advertised tasks 100 percent of the time. Nor does Tesla claim it can. All manner of external influences, such as weather and road construction, can interfere with Autopilot’s ability to recognize its surroundings and react appropriately.
In its literature, Tesla promotes Autopilot as a system to assist the driver, making the driving experience easier. It also cautions the driver to remain engaged, even when Autosteer and other systems are enabled. It even reminds drivers to keep their hands on the steering wheel.
The Human Factor
Humans, however, tend to take familiar things (from spouses and steady jobs to friends and so forth) for granted. It’s human nature. We hear so much about self-driving cars and dazzling technology that grows more sophisticated and prolific with each passing day that it’s easy to assume the driver-assistance systems in the newest batch of vehicles already live up to those future expectations. As we use these systems every day and become more familiar with them, we tend to provide less oversight. That’s exactly what killed the Tesla driver. Remember, he didn’t even touch the brake pedal. Either he was aware and put too much confidence in the system, or he was disengaged, unaware of the impending crash until it was too late.
The Starting Line
Self-driving cars aren’t going to suddenly pop onto our streets after an autonomous-vehicle Big Bang event. Driver-assistance technology will continue to expand and improve until, one day, all these systems will evolve, dovetailing into a car that drives itself with absolutely no human input.
Until we reach that point, though, all we should trust any driver-assistance system to do is assist the driver, not replace the driver. Can we be trusted to do that? The death of the Tesla driver would seem to argue against it.
What it means to you: Don’t take any driver-assistance technology for granted. Don’t expect it to perform perfectly 100 percent of the time, because it won’t.