Do A.I. and Apps Really Help Healthcare?

Electronic Healthcare

New findings from Google and Apple show that self-reporting and artificial intelligence (A.I.) challenge human doctors on both speed and accuracy when it comes to diagnosing patients and conducting studies. Even so, there are hurdles that may slow A.I.’s implementation in the healthcare arena.

Google obtained biopsy imagery from the Camelyon16 project, which asks participants to create a cancer-detecting algorithm. After scanning 400 slides, Google says its machine-learning platform was able to detect 92.4 percent of tumors. Compare that to earlier automation attempts, which detected only 82.7 percent of tumors, and to a panel of doctors, who agreed that a tumor was actually a tumor just 48 percent of the time.

Using gigapixel images, Google’s A.I. was able to find tumors as small as 100 x 100 pixels on images sized at 100,000 x 100,000 pixels. It even found tumors in two slides erroneously labeled ‘normal.’ In its whitepaper, Google writes that missed, inconclusive or delayed diagnoses may affect up to 20 percent of cases, and highlights its A.I. image-scanning algorithm as both faster and more accurate than human doctors.
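To put those figures in perspective, a quick back-of-the-envelope calculation (my own, not from Google's whitepaper) shows just how small a 100 x 100 pixel tumor is relative to a 100,000 x 100,000 pixel slide:

```python
# Rough scale comparison for gigapixel pathology slides (illustrative only).
slide_px = 100_000 * 100_000   # total pixels in one slide: 10 billion
tumor_px = 100 * 100           # pixels in the smallest detected tumor: 10,000

fraction = tumor_px / slide_px
patches = slide_px // tumor_px

print(f"Tumor covers {fraction:.7f} of the slide")     # one millionth
print(f"Equivalent to 1 in {patches:,} same-sized patches")
```

In other words, the algorithm is picking out a region that occupies one millionth of the image area, which helps explain why exhaustive human review of such slides is slow and error-prone.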


Released in 2015, ResearchKit is Apple’s method for crowdsourcing medical information. Studies can cover just about any illness, condition, or treatment that researchers want to track, and each study is conducted entirely by medical professionals.

In a recent study on asthma, Mount Sinai Hospital says ResearchKit data can be a reliable diagnostic source for doctors. With the platform, users self-report on a variety of topics having to do with their ailment, which could range from symptoms to treatment. Asthma was one of the original five ResearchKit topics that Apple announced at launch.

Of the 50,000 users who downloaded an app corresponding to the research, 7,600 enrolled in a six-month study. The survey queried users on how they were treating their asthma, but also took into consideration metadata on air quality and location. The study then compared the user-submitted data with air quality reports to see how users were caring for themselves (an example: study participants in Washington logged more severe symptoms during a rash of wildfires in the area).
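The kind of comparison the study describes can be sketched as a simple correlation between self-reported symptom scores and a local air-quality index. The numbers below are entirely made up for illustration; they are not Mount Sinai's data:

```python
# Sketch: correlate daily self-reported asthma severity with air quality.
# All values are hypothetical, invented for this example.
from statistics import mean

severity = [2, 3, 5, 7, 6, 3, 2]             # daily symptom scores (hypothetical)
aqi      = [40, 55, 120, 180, 150, 60, 45]   # matching AQI readings (hypothetical)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(severity, aqi)
print(f"correlation between symptoms and AQI: r = {r:.2f}")
```

A strong positive correlation in a real dataset would mirror the Washington wildfire example: worse air, worse self-reported symptoms.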


Problems with Programs

While there’s no argument that A.I. and self-reporting can work under the right circumstances, there are still some glaring issues.

Many ResearchKit studies are open to the public; that’s how you end up with 50,000 downloads for an app dedicated to asthma research. There are ways to narrow the scope of a study, such as using TestFlight or assigning username/password credentials to participants, but most studies are simply left in the App Store unchecked. The ResearchKit website invites the public to download and use apps related to studies involving concussions, asthma, hepatitis C and postpartum depression (among other topics).

That’s likely why Mount Sinai researchers opened their study with the line, “The feasibility of using mobile health applications to conduct observational clinical studies requires rigorous validation.” As they’re open to the public, ResearchKit apps are easy to manipulate if you have enough people willing to do so (but please don’t tumble down a ‘big pharma conspiracy theory’ rabbit hole).

Google’s A.I. is impressive, but not without its own possible shortcomings. The study used only biopsy slides that could be digitized as gigapixel images. While Google says “future work will focus on improvements utilizing larger datasets,” it doesn’t say whether it will focus efforts beyond the most pixel-dense imagery. If A.I. research remains limited to the highest-resolution scans, its usefulness in everyday clinical settings will be limited as well.

Neither platform is actually ‘better’ than doctors, either. Like so much bleeding-edge tech, Google’s and Apple’s efforts in healthcare change the dynamic for healthcare professionals, whose jobs are always in high demand. These technological efforts are meant to save doctors time in diagnosis and treatment, which may actually end up buying patients more time to live.


A.I., ResearchKit and Implementation

Conceptually, there are plenty of reasons to want to use these technologies. Self-reporting of treatment is a blind spot for doctors. As it stands, you report back every few weeks or months about treatment, leaving the doctor with nothing but a glimpse into your condition at the moment you’re in front of them. With ResearchKit, those same doctors can see granular information on your day-to-day condition, so long as you’re reporting responsibly.

A.I. has the opportunity to scan and analyze imagery faster than doctors, and can be a good set of ‘eyes’ for physicians well ahead of assigning treatment. Because A.I. is objective, its findings have no context, leaving doctors with cold, hard data to appreciate before deciding on a path forward.

At the same time, keep in mind that healthcare is also a business. IBM’s Watson A.I. engine is already hard at work trying to cure cancer, with researchers having invested tens of thousands of hours teaching it how to analyze clinical data. That’s a promising start, but Watson was also fired from one of its more notable roles.

The M.D. Anderson Cancer Center at the University of Texas spent more than four years and $62 million on a project dubbed the Oncology Expert Provider, $61 million of which didn’t have board approval. IBM received $40 million of that money.

Part of the issue was that Watson wasn’t able to read the hospital’s new medical record system, and was only used 12 times in four years. IBM has since canceled its work with the University of Texas, and says its system “is not ready for human investigational or clinical use, and its use in the treatment of patients is prohibited.” An auditor, examining procurement and compliance, said there were other, less costly bids the hospital may not have considered in the ramp-up to Watson implementation.

Payment was sometimes made in spite of Watson not delivering on contracted goals. Responding to the audit, university chancellor William McRaven said: “The research and development nature of the work inevitably led to goals and expectations that shifted over time, often making original contracts moot in terms of specified deliverables. To the extent that targets and expectations shifted over time, the incomplete documentary record makes it impossible to determine whether or not the revised milestones were met.”

This sort of incident doesn’t help IBM, Google, Apple or any other health-related tech service or company find a path forward for helping patients; it also raises questions of whether healthcare and business interests are properly aligned. Unfortunately, those most affected by medical conditions sometimes don’t have time to wait for regulators to iron out the wrinkles in such systemic problems, making technology even more critical as it becomes ready to branch out from research labs and into day-to-day use.