Last week, Nest pulled its ultra-sleek Nest Protect smoke detector from the market after stumbling upon a potentially dangerous glitch in its software.
“During recent laboratory testing of the Nest Protect smoke alarm, we observed a unique combination of circumstances that caused us to question whether the Nest Wave (a feature that enables you to turn off your alarm with a wave of the hand) could be unintentionally activated,” Nest CEO Tony Fadell wrote in an open letter to consumers. “This could delay an alarm going off if there was a real fire.” Nest (which was recently acquired by Google) will offer a full refund for any Nest Protect customer who requests one.
Nest Protect is considered a flagship device in the nascent “Internet of Things” movement, which seeks to load up everyday products with Internet-connected sensors, the better to feed data back to companies. By 2018, Machine-to-Machine (M2M) communications devices for healthcare monitoring and remote care will make up the largest part of the $7.9 billion spent on M2M devices and the Internet of Things, according to a January 2014 report from Ovum. Medical and emergency-services companies, many of which face enormous cost pressures, are some of the leading entities in this push—but as the Nest Protect situation demonstrates, hardware and software can often fail in unexpected ways. As healthcare organizations double down on wearable and implantable sensors, smartphone-based apps that measure everything from blood pressure to blood-sugar levels, and sophisticated monitoring software, the chances rise that something could go wrong at a crucial moment.
Medical staff may ignore data indicating a problem if a device tends to raise false alarms whenever the network connection drops or the batteries die. Conversely, hacked or misconfigured devices, software flaws, or sensors that fail to detect a problem may send misleadingly positive data that keeps ambulances in the garage when patients need help.
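One way to reduce both failure modes is to treat silence as suspicious: a monitor that distinguishes "no data" and "stale data" from "all clear" won't mistake a dead battery for a healthy patient. The sketch below illustrates that idea in Python; the function name, the 60-second freshness window, and the heart-rate bounds are all illustrative assumptions, not any vendor's actual logic.

```python
import time

# Assumed freshness window: readings older than this are treated as suspect.
STALE_AFTER_SECONDS = 60


def assess_reading(reading, now=None):
    """Classify the latest sensor reading instead of assuming all is well.

    `reading` is a dict like {"timestamp": <epoch secs>, "heart_rate": <bpm>},
    or None when the device never reported at all.
    """
    now = now if now is not None else time.time()
    if reading is None:
        return "ALERT: no data received"        # dead battery? lost link?
    if now - reading["timestamp"] > STALE_AFTER_SECONDS:
        return "ALERT: data stale"              # connection may be down
    if not 40 <= reading["heart_rate"] <= 160:  # illustrative bounds only
        return "ALERT: reading out of range"
    return "OK"


print(assess_reading(None))                                        # alert, not silence
print(assess_reading({"timestamp": time.time(), "heart_rate": 72}))  # OK
```

The key design choice is that the absence of data produces an explicit alert rather than defaulting to a reassuring status, which is exactly the trap a lost connection otherwise sets.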
In an April 3 report proposing regulatory guidelines for the use of in-hospital and mobile healthcare IT, the U.S. Food & Drug Administration warned that medical devices come with risks to patients from failures in design, operation, connectivity, and configuration. The FDA has regulated medical-related apps and devices it classifies as Medical Device Data Systems since 2011.
Proposed regulations would lighten the FDA’s supervisory load by loosening regulations on smartphone apps that monitor whether patients are taking their meds on time, for example, so the agency can focus more intently (in its words) on remote-monitoring systems “that could potentially pose greater risks to patients if they do not perform as intended.”
On the surface, the failure of telematics that keep cardiac surgeons up-to-date on a recovering patient's new pacemaker is a long way from a smoke alarm confused by a hand gesture. Nonetheless, both require humans to entrust some portion of their physical safety to a device that may or may not be reliable, and that could fail because of errors in networks, power supplies, or other environmental factors that have nothing to do with how it is designed, installed, or used.
That risk necessitates, at the very least, a system of fail-safes that ensure health-monitoring systems and the data they collect are secure “despite the weaknesses of common personal devices,” according to both medical researchers and the FDA’s existing risk/benefit guidelines for mobile-healthcare IT. So developers, take note: Even as the Internet of Things evolves into a revenue-driver, ultra-connected devices come with pitfalls, including (for infrastructure-critical hardware) concerns over safety. But those dangers do open up additional opportunities for IT security experts and others who specialize in plugging holes in hardware and software.
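One such fail-safe, sketched below, is authenticating every reading with a message authentication code so that tampered or corrupted data is rejected rather than trusted. This is a minimal illustration using Python's standard `hmac` module; the shared key, field names, and values are placeholders, and key provisioning and transport security are deliberately out of scope.

```python
import hashlib
import hmac
import json

# Placeholder per-device secret; in practice this would be provisioned
# securely, never hard-coded.
SECRET_KEY = b"per-device-shared-secret"


def sign_reading(reading: dict) -> str:
    """Compute an HMAC-SHA256 tag over a canonical JSON encoding."""
    payload = json.dumps(reading, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()


def verify_reading(reading: dict, tag: str) -> bool:
    """Constant-time check that the reading matches its tag."""
    return hmac.compare_digest(sign_reading(reading), tag)


reading = {"patient": "12345", "bp_systolic": 118}
tag = sign_reading(reading)
print(verify_reading(reading, tag))                      # genuine reading accepted
print(verify_reading(dict(reading, bp_systolic=90), tag))  # altered reading rejected
```

The point of the sketch is the posture, not the particular algorithm: data from a weak or compromised personal device should have to prove its integrity before anyone acts on it.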