Google has devoted considerable time and resources to building self-driving cars, and the effort seems to be paying off: driverless vehicles are already circulating on the streets of Mountain View, California, and Austin, Texas.
While Google remains tight-lipped about its ultimate business strategy for the cars, the possibilities are endless. It could license the technology to automakers. It could start its own self-driving taxi service—a possibility that already freaks out Uber. Or it could build and sell its own vehicles, although that’s a much lower-margin business than software.
Before Google’s cars can hit the road in large numbers, though, they face a persistent problem: human drivers. According to The New York Times, self-driving cars obey traffic laws without question, but humans do not. That mismatch sometimes results in situations where the autonomous vehicle does everything right at an intersection or turnoff, only to collide with a human driver who thought it was okay to roll slowly through a stop sign.
Google’s vehicle sensors also have trouble with badly parked cars (which they misinterpret as about to pull into traffic) and with drivers who approach red lights too fast before stopping (the Google car will take evasive maneuvers, assuming the human driver intends to run the light).
“The real problem is that the car is too safe,” Donald Norman, director of the Design Lab at the University of California, San Diego, told the newspaper.
“Too safe,” of course, sounds like a problem that Google would like its cars to have. An autonomous vehicle programmed for aggression tends to make people think about things like Skynet, which is not a sentiment Google wants widely held if it hopes to sell its technology to automakers, or even build its own fleet.
For developers of all types of hardware and software, Google’s travails with its self-driving cars offer a useful reminder: no matter how flawlessly your product performs, you still have to account for some degree of human error.