Rise of the Machines in Cancer and Other Areas of Health Care: How to Make AI Useful
December 23, 2019

Artificial intelligence (AI) is moving fast and breaking things everywhere. It is built into multinational supply chains, web-based services, cars, and, slowly, health care. Although the adoption of AI in health care has been bumpy, it continues unabated.
Last year, the U.S. Food and Drug Administration (FDA) gave premarket clearance to the WAVE Clinical Platform, an early-warning system that integrates real-time data to identify hospitalized patients at risk of vital sign instability. As the use of AI in the form of machine learning and algorithms in health care increases, an unanswered question looms large: How should policymakers regulate AI?
A new article from University of Pennsylvania scientists Ravi Parikh, MD, and Amol Navathe, MD, PhD, is a useful place to start. Both are faculty members and researchers at the Perelman School of Medicine and the Abramson Cancer Center’s new Penn Center for Cancer Care Innovation (PC3I). Both are also Fellows at Penn’s Leonard Davis Institute of Health Economics (LDI).
Writing in Science, they demystify and de-mythologize machine learning and AI and suggest some practical guidelines for regulating the rise of machine learning.
Algorithms are not new in health care. The first algorithms were essentially digital flow charts that relied on a small number of user-provided data points and binary (Yes/No) choices. In health care, these early algorithms served as physician facilitators—a series of rules-based decision trees and checklists designed to guide and standardize care.
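To make the contrast concrete, here is a minimal sketch of what such a rules-based tool might look like, written in Python. The screening rule, thresholds, and triage messages are invented for illustration and do not come from any clinical guideline or from the authors’ article.

```python
# A minimal sketch of an early rules-based clinical algorithm: a fixed
# decision tree over a handful of user-entered values. The thresholds
# and rules below are illustrative only, not drawn from any guideline.

def sepsis_screen(temp_c: float, heart_rate: int, suspected_infection: bool) -> str:
    """Return a triage suggestion from a few yes/no branch points."""
    if not suspected_infection:
        return "routine care"
    if temp_c >= 38.3 or temp_c <= 36.0:   # abnormal temperature branch
        if heart_rate > 90:                # tachycardia branch
            return "escalate: possible sepsis, notify physician"
        return "recheck vitals in 1 hour"
    return "continue monitoring"

print(sepsis_screen(temp_c=38.9, heart_rate=104, suspected_infection=True))
```

The entire logic is visible at a glance, which is exactly why these early tools were easy to audit and standardize.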
Over the past decade, however, exponential growth in computing power has enabled immediate predictions based on hundreds, even thousands, of variables. The simultaneous development of machine learning, in which computers automatically adjust their own predictive algorithms as new data arrive, allows those algorithms to be optimized in real time. As a result, the logic of new predictive analytics can sometimes appear opaque.
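That shift can be illustrated with a toy online-learning model: instead of a fixed flow chart, the algorithm revises its own weights with every new record it sees. Everything below, including the number of variables and the simulated data stream, is an assumption made for the sketch, not a real clinical model.

```python
# A hedged sketch of "optimizing an algorithm in real time": an online
# logistic-regression model that updates its weights as each new record
# arrives, rather than being re-fit in a batch. All data are simulated.

import numpy as np

rng = np.random.default_rng(0)
n_features = 1000              # "hundreds, even thousands" of variables
weights = np.zeros(n_features)
learning_rate = 0.01

def predict_risk(x: np.ndarray) -> float:
    """Logistic prediction: estimated probability of the adverse event."""
    return 1.0 / (1.0 + np.exp(-x @ weights))

def update(x: np.ndarray, outcome: int) -> None:
    """One stochastic-gradient step as each new outcome is observed."""
    global weights
    error = predict_risk(x) - outcome
    weights -= learning_rate * error * x

# Simulated stream of patient records arriving one at a time.
for _ in range(500):
    x = rng.normal(size=n_features)
    outcome = int(x[0] + 0.1 * rng.normal() > 0)   # hidden true signal
    update(x, outcome)

print(f"learned weight on the true signal: {weights[0]:.3f}")
```

Because the weights change continuously, no single snapshot of the model fully explains its behavior, which is one source of the opacity described above.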
Within health care, ambitious AI companies have gained FDA approval for products that mine electronic health record (EHR) data to predict adverse events and read diagnostic test results. As large health systems amass digital data in EHRs, some proponents believe AI will identify new drug interactions and risk factors for infections, and even diagnose cancers.
So far, that potential is only potential. A lax regulatory environment, one that does not demand evidence that new tools are actually useful, may be partly to blame. So, what can be done to spur meaningful innovation?
Navathe and Parikh provide several ideas. First, regulators should expect more than mathematical elegance from AI. This means predictive analytics must demonstrate impact on the delivery of health care and patient outcomes.
To unlock the clinical potential of AI, new tools need to be tied to real clinical decisions and interventions. Predicting a surgical complication isn’t helpful if the prediction can’t meaningfully change care. In addition, the measures used to evaluate AI should be relevant to patients, such as survival, hospital length of stay, or misdiagnosis. These recommendations would simply make the same demands of AI that we make of new drugs. Prediction alone is not synonymous with better health.
Second, regulators need to be careful about the benchmark against which an algorithm is compared. Many AI products seem impressive because they can mimic physicians, but mimicry alone does not mean the tool improves health systems. The relevant question is not simply whether the algorithm outperforms a physician, but whether a physician assisted by the algorithm outperforms either one alone. By measuring algorithms against existing standards of care, regulators and policymakers can identify the best combination of protocols, human practitioners, and artificial intelligence.
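One way to picture this evaluation design is a three-arm comparison against the same ground truth. The sketch below uses fabricated labels purely to show the bookkeeping; a real evaluation would use prospective data and patient-relevant outcomes.

```python
# A toy illustration of the comparison the authors call for: score the
# algorithm alone, the physician alone, and the physician assisted by
# the algorithm against identical ground-truth outcomes. All values here
# are made up for illustration.

ground_truth = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = disease present
algorithm    = [1, 0, 1, 0, 0, 1, 1, 0]   # algorithm's calls
physician    = [1, 0, 0, 1, 0, 0, 1, 1]   # unaided physician's calls
assisted     = [1, 0, 1, 1, 0, 0, 1, 0]   # physician + algorithm

def accuracy(calls, truth):
    return sum(c == t for c, t in zip(calls, truth)) / len(truth)

for label, calls in [("algorithm alone", algorithm),
                     ("physician alone", physician),
                     ("physician + algorithm", assisted)]:
    print(f"{label:>22}: {accuracy(calls, ground_truth):.2f}")
```

The point of the design is the third arm: it answers whether the tool helps the clinician, not merely whether it imitates one.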
Finally, algorithms are built on real-world data generated in a health care system with known biases and access disparities. This carries serious implications for regulators. Machine learning tools change over time and across new populations, so a systemic bias may surface only after an approved tool is brought to scale. As a result, AI should be monitored after deployment to prevent the entrenchment of algorithm-backed inequality.
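As a rough illustration of what such post-approval monitoring might involve, the sketch below tracks a deployed model’s error rate by patient subgroup and raises an alert when the gap exceeds a tolerance. The subgroups, records, and threshold are all hypothetical.

```python
# A sketch of post-approval monitoring: track the model's error rate by
# patient subgroup after deployment and flag a gap that exceeds a chosen
# tolerance. Subgroups, predictions, and the threshold are illustrative.

from collections import defaultdict

TOLERANCE = 0.10   # maximum acceptable error-rate gap between subgroups

def subgroup_error_rates(records):
    """records: iterable of (subgroup, predicted_label, true_label)."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        errors[group] += int(predicted != actual)
    return {g: errors[g] / totals[g] for g in totals}

deployed_results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 1, 1),
]

rates = subgroup_error_rates(deployed_results)
print("error rates by subgroup:", rates)
if max(rates.values()) - min(rates.values()) > TOLERANCE:
    print("ALERT: performance gap across subgroups exceeds tolerance")
```

Because drift and bias can emerge only at scale, this kind of check belongs to the post-market phase of regulation, not just premarket review.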
Silicon Valley tech giants relish disrupting incumbent industries. And in the United States, few systems are as entrenched as health care. But safely unlocking the potential of any innovation in health care—whether it be drugs, devices, or software—requires fine-tuned regulation. Artificial intelligence, while new and potentially revolutionary, is no different.