Media Spotlight

In a Viewpoint article in the Journal of the American Medical Association (JAMA), three University of Pennsylvania researchers affiliated with the Penn Center for Cancer Care Innovation (PC3I) discuss potential ways to counter or correct clinical algorithm bias.
Ravi Parikh, MD, MPP, Amol Navathe, MD, PhD, and Stephanie Teeple, an MD-PhD candidate, write in their conclusion:
“Because of its reliance on historical data, which are based on biased data generation or clinical practices, Artificial Intelligence can create or perpetuate biases that may worsen patient outcomes. However, by strategically deploying AI and carefully selecting underlying data, algorithm developers can mitigate AI bias.”
Passing biases on
In the body of their JAMA paper, entitled “Addressing Bias in Artificial Intelligence in Health Care,” the trio details a number of methods that might be used to counter algorithm bias in clinical AI settings. One widely recognized problem is that large swaths of the medical literature and various record collections serving as the “big data” on which algorithms are trained contain racial, gender, and other biases that can be passed on to the resulting models.
As the use of AI continues to spread throughout the diagnostic, decision-support, and therapeutic areas of health care, the question of how such “baked in” biases may affect the conclusions of decision-making software has become a serious concern for clinicians, hospital administrators, and insurance companies alike.
Importance of unbiased source data
Major national headlines have recently drawn attention to study findings showing how racial bias skews algorithmic systems already in wide clinical use, potentially affecting the care of millions of patients. Racial bias, however, is only one of many ways AI can go wrong when it is trained on flawed source data. The authors emphasize that a clear solution to this problem is to pay greater attention to selecting unbiased source data.
They also point out that the power of AI itself could be used to do things like identify real-time bias in physician decision making, such as “flagging a questionable opioid prescription at the end of a primary care clinician’s day, providing a needed check on this decision.”