Google Cars’ Biggest Enemy: Human Error


Google has devoted considerable time and resources to building self-driving cars, and the effort seems to be paying off, with driverless vehicles circulating the streets of Mountain View, California, and Austin, Texas.

While Google remains tight-lipped about its ultimate business strategy for the cars, the possibilities are endless. It could license the technology to automakers. It could start its own self-driving taxi service—a possibility that already freaks out Uber. Or it could build and sell its own vehicles, although that’s a much lower-margin business than software.

Before Google’s cars can hit the road in large numbers, though, they face an endemic problem: human drivers. According to The New York Times, self-driving cars obey traffic laws without question, but humans do not, which sometimes results in situations where the autonomous vehicle does everything right at an intersection or turnoff, only to collide with a human driver who thought it was okay to roll slowly through a stop sign.

Google’s vehicle sensors also have trouble with badly parked cars (which they misinterpret as about to pull into traffic) and with drivers who approach red lights too fast before stopping (the Google car takes evasive maneuvers, assuming the human intends to run the light).
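The Times piece doesn’t describe how those judgments are made, but they read like simple kinematic heuristics applied to tracked objects. Here is a minimal, purely hypothetical sketch in Python; every name, field, and threshold below is invented for illustration and is not Google’s actual code:

```python
from dataclasses import dataclass

# Hypothetical thresholds, invented for illustration; not Google's values.
MAX_STOPPED_SPEED = 0.5   # m/s: below this, treat the vehicle as stopped
LANE_INTRUSION_M = 0.4    # m: protrusion into our lane that counts as "badly parked"

@dataclass
class TrackedVehicle:
    speed_mps: float            # current speed
    decel_mps2: float           # observed deceleration (positive = braking)
    lane_overlap_m: float       # how far the vehicle protrudes into our lane
    dist_to_stop_line_m: float  # distance to the upcoming stop line

def may_pull_into_traffic(v: TrackedVehicle) -> bool:
    """A stopped car protruding into the lane gets treated as about to pull out."""
    return v.speed_mps < MAX_STOPPED_SPEED and v.lane_overlap_m > LANE_INTRUSION_M

def may_run_red_light(v: TrackedVehicle) -> bool:
    """If the observed braking won't stop the car by the line, assume it runs the light."""
    if v.speed_mps < MAX_STOPPED_SPEED:
        return False
    if v.decel_mps2 <= 0.0:
        return True  # approaching the light without braking at all
    # Stopping distance at the observed braking rate: v^2 / (2a)
    stopping_dist = v.speed_mps ** 2 / (2 * v.decel_mps2)
    return stopping_dist > v.dist_to_stop_line_m

if __name__ == "__main__":
    parked = TrackedVehicle(speed_mps=0.0, decel_mps2=0.0,
                            lane_overlap_m=0.6, dist_to_stop_line_m=50.0)
    late_braker = TrackedVehicle(speed_mps=15.0, decel_mps2=2.0,
                                 lane_overlap_m=0.0, dist_to_stop_line_m=20.0)
    print(may_pull_into_traffic(parked))   # True: flagged as a pull-out risk
    print(may_run_red_light(late_braker))  # True: needs ~56 m to stop, has 20 m
```

Both rules are individually sensible, which is the article’s point: run them against drivers who bend the rules and you get false alarms, because the badly parked car never actually pulls out, and the late braker stops after all.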

“The real problem is that the car is too safe,” Donald Norman, director of the Design Lab at the University of California, San Diego, told the newspaper.

“Too safe,” of course, sounds like a problem that Google would like its cars to have. An autonomous vehicle programmed for aggression tends to make people think about things like Skynet, which is not a sentiment Google wants widely held if it hopes to sell its technology to automakers, or even build its own fleet.

For developers of all types of hardware and software, Google’s travails with its self-driving cars provide an interesting reminder: No matter how efficiently you build your product, you’re going to have to take some degree of human error into account.


Comments

One Response to “Google Cars’ Biggest Enemy: Human Error”

September 02, 2015 at 12:31 pm, RobS said:

“The real problem is that the car is too safe.”
I disagree. As someone who never gets into car accidents, I’d say much of that record comes from recognizing that an evasive maneuver can sometimes look unsafe while the alternative (remaining “safe”) is worse. Statistically, there is a lot of leeway in driving. For example, if you go through a light that just turned red, there will probably be no accident 99% of the time, largely because most people respect the law against doing so.
Defensive drivers know that being a safe driver is not just about following the rules (which sometimes create bad situations) but also about understanding the situation: the conditions, the drivers around you, the environment, the likelihood that something will happen,* and so on.
Until there are no more humans driving, automated cars can only be truly safe by “acting” like the best human drivers.

* If you see children playing ball on a sidewalk, the likelihood that the ball will roll into the street and a child will follow it is much greater than if adults were playing ball on that same sidewalk.

Just a side note: laws and signage often create bad situations. I recall a sign in New Jersey indicating that Highway 1 was the next right. However, there was a hidden street in between that caused a lot of confusion: people would stop abruptly in the middle of a 45 mph road, thinking the side street they saw was the turn.
Also, if the speed limit is 65 and everyone is doing 75, an unmanned (or manned) car doing 65 becomes a hazard on the road. Sometimes, to be safe, you have to break the law because of the drivers around you, which is why police will rarely stop or ticket you for breaking the law if you’re driving safely and keeping pace with traffic.

