Summary
Artificial intelligence (AI) and algorithmic decision-making systems, algorithms that analyze massive amounts of data and make predictions about the future, are increasingly affecting Americans' daily lives.

Bias in Medical and Public Health Tools

In 2019, a bombshell study found that a clinical algorithm many hospitals were using to decide which patients need care was showing racial bias: Black patients had to be deemed much sicker than white patients to be recommended for the same care. Acknowledging that this failure was the result of under-regulation, the FDA's new guidelines point to these tools as examples of products it will now regulate as medical devices. The FDA's approach to regulating drugs, which involves publicly shared data scrutinized by review panels for adverse effects and events, contrasts with its approach to regulating medical AI and algorithmic tools. Regulating medical AI presents a novel issue and will require considerations that differ from those applicable to the hardware devices the FDA is used to regulating. Perhaps more important than assessments after a device is developed is transparency during its development.
Show Notes
But there’s another frontier of AI and algorithms that should worry us greatly: the use of these systems in medical care and treatment.
Details about the tools’ development are largely unknown to clinicians and the public — a lack of transparency that threatens to automate and worsen racism in the health care system.
This happened because the algorithm had been trained on past data on health care spending, which reflects a history in which Black patients had less to spend on their health care compared to white patients, due to longstanding wealth and income disparities.
While this algorithm’s bias was eventually detected and corrected, the incident raises the question of how many more clinical and medical tools may be similarly discriminatory.
Tools Used in Health Care Can Escape Regulation

Some algorithms used in the clinical space are severely under-regulated in the U.S.
Source
https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism