In a recently published paper, the management consulting firm McKinsey & Company warned that fully self-driving cars, or AVs (those that can drive themselves unrestricted in any environment), could be at least 10 years away, primarily because of the mountain of software required. Despite continuing advances in hardware and the relentless drop in cost for sensors — particularly laser-based sensors — software development will emerge as the choke point keeping AVs out of the hands of consumers in the near future.
With much of the required hardware, such as high-resolution cameras, radar, sonar and laser-based (LIDAR) units providing 360-degree coverage, already mature or approaching maturity, the hardware required for a vehicle to drive itself is basically available. The problem remains getting the many separate puzzle pieces to function together, as well as to interact with surrounding vehicles and infrastructure.
At the heart of the issue is reaching a point where self-driving cars can interact both with other AVs and human-controlled vehicles alike. The industry is nowhere close to achieving this. By some counts, the transition from human-controlled vehicles to all AVs could require 20 to 30 years, during which AVs must be able to detect and interpret the behavior of both.
The McKinsey & Company paper offered three examples of the daunting software issues facing AV developers in just the area of object analysis.
Because detection is primarily a function of hardware, such as cameras and LIDAR, it is already fairly advanced. Although AV developers are still determining the exact number, type and placement of cameras and sensors required, a properly equipped vehicle will detect surrounding objects. Sure, as with any technology, the hardware will improve, becoming more compact and less expensive, but today’s cameras and sensors are good enough to see what they need to see.
An AV’s several sensing systems must be able to not simply see an object but, in a split second, be able to interpret exactly what that object is. When cresting a hill, is that object ahead a large sign suspended from an overpass, or is it an 18-wheeler across the highway? Is the 2-wheel object ahead a bicycle or a motorcycle? For an AV to successfully and safely interact with that vehicle, it must accurately identify it. Although sharing a basic shape and chassis, bicycles and motorcycles have vastly different capabilities. Mistaking a motorcycle for a bicycle could have serious consequences. Software must not only ensure the several sensing systems work together to get the analysis correct, but it must do so in a fraction of a second.
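One simplified way to picture that fusion step, purely as a hypothetical sketch rather than any automaker's actual pipeline, is to have each sensing system report its own label and confidence for an object, then combine the votes. The sensor names, labels and confidence values below are illustrative assumptions.

```python
# Hypothetical sketch of multi-sensor object classification.
# Each sensing system (e.g., camera, radar, LIDAR) independently labels
# an object with a confidence; the fused answer is the label with the
# highest combined confidence across all sensors.
from collections import defaultdict

def fuse_classifications(readings):
    """readings: list of (label, confidence) pairs, one per sensor."""
    scores = defaultdict(float)
    for label, confidence in readings:
        scores[label] += confidence
    # Return the label with the highest total confidence.
    return max(scores, key=scores.get)

# Example: the camera leans "bicycle," but radar and LIDAR both
# report "motorcycle" with higher confidence, so the fused result
# is "motorcycle."
readings = [("bicycle", 0.55), ("motorcycle", 0.80), ("motorcycle", 0.75)]
print(fuse_classifications(readings))  # motorcycle
```

In a real AV, of course, this decision would have to complete in milliseconds and weigh far richer signals than a single confidence number per sensor.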
Currently, most autonomous decision-making is based on programming software with if-then scenarios. If the vehicle ahead suddenly brakes, then the AV should apply its brakes accordingly, and so forth. If the 2-wheel object ahead behaves in one way, then it’s a bicycle; if it behaves in another, then it’s a motorcycle. Reaching the point where we can trust the AV to make the correct decision every time will require hundreds of thousands of driving miles and testing in an unknown number of scenarios. It’s all going to require a lot of time. Moreover, there’s no way programming can account for every possible scenario. The only real solution is augmenting the if-then programming with artificial intelligence (AI) capable of learning and making decisions on its own. AI has come a long way during the past few years, but it’s not yet to the point where it can make intuitive decisions on the road. Oh, and there will need to be hundreds of thousands more test miles to validate AI as it develops.
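To make the if-then idea concrete, here is a deliberately toy sketch of a rule distinguishing a bicycle from a motorcycle by observed behavior. The thresholds and input signals are invented for illustration; no real AV uses rules this crude.

```python
# Hypothetical if-then rule in the spirit of the rule-based approach
# described above: classify a 2-wheel object by how it behaves.
# Speed and noise thresholds are illustrative assumptions only.
def classify_two_wheeler(speed_kmh, engine_noise_db):
    # A pedal-powered bicycle rarely sustains highway speeds or
    # produces engine-level noise, so either signal suggests a motorcycle.
    if speed_kmh > 45 or engine_noise_db > 70:
        return "motorcycle"
    return "bicycle"

print(classify_two_wheeler(speed_kmh=28, engine_noise_db=40))  # bicycle
print(classify_two_wheeler(speed_kmh=95, engine_noise_db=85))  # motorcycle
```

The brittleness is the point: every edge case (a racing cyclist at 50 km/h, a quiet electric motorcycle) demands yet another hand-written rule, which is exactly why the article argues that if-then programming alone cannot scale and must be augmented with learning systems.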
What It Means to You: When you hear carmakers and the other developers of self-driving cars talk about having an AV on the road by 2020 or 2025, don’t get too excited. These may be vehicles that can drive themselves in limited, controlled areas, but they aren’t going to be full-blown AVs capable of operating on their own everywhere. There’s still much to be done.