Self-driving cars, or autonomous vehicles (AVs), and human-controlled cars differ in one fundamental way: AVs make preprogrammed decisions based on rules and regulations, while human drivers make choices based on a lifetime of driving experience.
Although artificial intelligence that truly learns from experience lies in our future, that technology is nowhere near developed enough to control the first generation of self-driving cars. In the meantime, these computer-controlled cars will rely on programming to make split-second decisions on the road. Gathering data from radar, laser sensors, cameras, satellites and car-to-car communication, AVs then translate that data into actions based strictly on programming. That programming will set narrow boundaries for what is safe and legal. Like Star Trek's Mr. Spock, whose devotion to logic keeps him from lying or acting irrationally, AVs won't stray from the strictest interpretation of highway rules and regulations.
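The "narrow boundaries" described above can be pictured as hard limits in code. The sketch below is purely illustrative, assuming invented names and thresholds (`SPEED_LIMIT_MPH`, `MIN_FOLLOWING_SECONDS`, `choose_speed`) rather than any real AV system's logic:

```python
# Hypothetical sketch of strictly rule-based AV speed logic.
# All names and values here are invented for illustration.

MIN_FOLLOWING_SECONDS = 2  # safety bound: fixed, never relaxed

def choose_speed(posted_limit_mph, gap_seconds):
    """Return a target speed that never bends the rules.

    A human might drive 5 mph over the limit; this logic cannot.
    """
    target = posted_limit_mph               # never above the posted limit
    if gap_seconds < MIN_FOLLOWING_SECONDS:
        target = 0                          # too close: brake, no judgment call
    return target

print(choose_speed(35, 3.0))  # 35: exactly at the limit, never over
print(choose_speed(35, 1.0))  # 0: a safety rule is violated, so the car brakes
```

Unlike a human driver, nothing in this logic can trade a small rule violation for convenience; the legal and safety bounds are absolute.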
Human judgment is fallible. Most reality television shows are driven by the poor choices we humans make. But for better or worse, humans use experience to form the moral and ethical foundation for their decisions. Most of us are willing to bend the rules a little to achieve a goal, especially if experience tells us it's reasonably safe and morally acceptable to do so. Typically, we weigh actions, consequences and probabilities before choosing to drive five miles an hour over the speed limit or to roll slowly through a four-way stop at an empty intersection rather than come to a complete stop. There are lies, and then there are little white lies. Most of us believe lying is wrong but that telling a little white lie to spare someone's feelings is OK. It's a judgment call. Just as Captain Kirk relied on judgment to make decisions, humans don't base their choices strictly on what is and isn't legal.
Ethical Versus Legal
As an AV zooms along a city street, a mother pushing a baby stroller steps from between two parked cars directly into the AV’s path. In a split second, the AV sifts through all its data, determining that there is no time to stop. Its options are to hit the mother and baby or swerve into the path of oncoming traffic.
Ultimately, a human behind the wheel would face the same choice. The human driver, however, would view the choice of hitting a mother and child versus swerving into an oncoming car as an ethical decision. Either is wrong, but one might be less wrong than the other. A human’s personal moral compass, based on a lifetime of experience, would lead to one choice or the other.
An AV has no moral compass, only programming. In this instance, its programming would likely be at odds with itself. Programmed both to operate legally, staying within its lane on its side of the road, and to avoid colliding with a pedestrian or another vehicle, an AV might be incapable of making any decision in this scenario. When both options violate its programming, how will the computer react?
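The deadlock described above can be made concrete as hard-constraint filtering: every candidate action is checked against the rules, and any rule-breaking action is discarded. This is a minimal sketch under assumed names (`ACTIONS`, `legal_actions`, the invented rule flags), not any real AV architecture:

```python
# Hypothetical sketch of the stroller dilemma as hard-constraint filtering.
# The actions and rule flags are invented for illustration.

ACTIONS = {
    "brake_straight": {"hits_pedestrian": True,  "leaves_lane": False},
    "swerve_left":    {"hits_pedestrian": False, "leaves_lane": True},
}

def legal_actions(actions):
    """Keep only actions that violate no hard rule."""
    return [name for name, effects in actions.items()
            if not effects["hits_pedestrian"] and not effects["leaves_lane"]]

print(legal_actions(ACTIONS))  # []: every available option breaks a rule
```

The filter returns an empty list: with no rule-compliant option left, purely rule-based logic has no answer, which is exactly the dilemma the scenario poses.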
If the AV did make a choice, on what would that choice be based? Can an AV be programmed to abandon core directives, such as operating legally and safely, to make an ethical decision between the lesser of two bad choices? Probably not — at least not yet.