In a blog post on the Tesla website, the electric-car company revealed that a Model S owner was recently killed in a traffic accident while the vehicle’s Autopilot was active.
“This is the first known fatality in just over 130 million miles where Autopilot was activated,” the post read. “Among all vehicles in the US, there is a fatality every 94 million miles.”
The vehicle was reportedly on a divided highway when it struck a tractor trailer that was crossing the road perpendicular to the car. “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied,” the post added.
Despite the fatality, Tesla maintains an optimistic perspective on the future of autonomous driving. “As more real-world miles accumulate and the software logic accounts for increasingly rare events, the probability of injury will keep decreasing,” the post concluded. “Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert.”
The National Highway Traffic Safety Administration (NHTSA) is apparently investigating the incident. “NHTSA’s Office of Defects Investigation will examine the design and performance of the automated driving systems in use at the time of the crash,” the agency wrote in a statement. “During the Preliminary Evaluation, NHTSA will gather additional data regarding this incident and other information regarding the automated driving systems.”
Will this accident influence the evolution of autonomous-driving systems? It’s too soon to tell, although car manufacturers researching the technology will doubtless have to answer questions about whether they’re liable for whatever their vehicles do while under computer control. More accidents in the short term, of course, will lead to more questions.
If you’re a tech professional interested in helping develop the next generation of autonomous-driving solutions, be aware that safety will play an all-important role in ideation and design. That emphasis on safety, in turn, will raise some thorny programming and philosophical questions. For example, should your self-driving car be programmed to kill you if the need arises?