AIs can predict a patient’s self-reported race from X-rays, CTs and mammograms – without that detail appearing anywhere in the data – with alarming implications.
You know not to expect anything good when you see “AI” and “race” in a headline.
Algorithms are notoriously prone to absorbing human bias from their training data and perpetuating it in their output: US law enforcement may be the best (i.e. worst) example, but healthcare algorithms are also a problem.
One widely used US healthcare allocation algorithm systematically assigned lower health risk scores to Black patients and referred them less often to personalised care programs – this turned out to be because less healthcare money was spent on Black patients, and the AI apparently equated lower spending with better health, when in fact both their health and their access to healthcare were a lot worse.
Diagnostic algorithms have shown a tendency to underdiagnose already underserved populations.
Now MIT researchers have found something weird and sinister in medical imaging: AIs can predict a patient’s self-reported race (white, Black or Asian) from X-rays, CTs and mammograms without that detail appearing anywhere in the data.
Not only is a patient’s race imperceptible to human radiologists looking at the same images, but the team could not identify the correlates the programs were relying on for their predictions.
The researchers, building on previous observations of this effect, tested a range of deep-learning models against multiple datasets. They adjusted for a range of possible confounders to do with anatomical features and health conditions that vary in prevalence across populations. They even altered the images using filters to remove the appearance of higher bone density (more common in Black people), and kept applying filters till the images were so degraded they were unrecognisable as medical images at all …
… but the machines still successfully predicted race.
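To make that experiment concrete, here is a minimal sketch in Python of what such a degradation probe could look like. It is not the researchers’ code: `predict_race` is a hypothetical stand-in for whatever trained classifier is being tested, and Gaussian blur is just one of many possible degrading filters.

```python
# A minimal sketch of a degradation probe, not the researchers' code.
# `predict_race` is a hypothetical stand-in for a trained classifier
# that maps an image to per-class probabilities.
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(image: np.ndarray, sigma: float) -> np.ndarray:
    """Low-pass filter the image; larger sigma strips more detail."""
    return image if sigma == 0 else gaussian_filter(image, sigma=sigma)

def probe(image: np.ndarray, predict_race) -> None:
    """Sweep filter strength and report the model's prediction each time."""
    for sigma in (0, 1, 2, 4, 8, 16):
        probs = predict_race(degrade(image, sigma))
        print(f"sigma={sigma:>2}: predicted race probabilities = {probs}")
```

The unsettling finding is that, in the study, predictions stayed accurate even at filter strengths that left the images unreadable to humans.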
This is alarming, the authors write, as it provides “a direct vector for the reproduction or exacerbation of the racial disparities that already exist in medical practice … our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging.”
The mysterious process behind the prediction, and the fact that it can’t easily be observed and compensated for by human radiologists, mean some bad decisions are bound to be made.
They recommend that any AIs used in medical imaging should have their performance audited for bias “and that medical imaging datasets should include the self-reported race of patients when possible to allow for further investigation and research into the human-hidden but model-decipherable information related to racial identity that these images appear to contain”.
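What might such an audit look like in practice? Here’s a minimal sketch, assuming a labelled validation set with hypothetical column names: compare the model’s diagnostic performance within each self-reported race group and flag any large gaps.

```python
# A rough sketch of a per-group bias audit, assuming a labelled
# validation set; the DataFrame layout and column names are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

def audit_by_group(df: pd.DataFrame) -> pd.Series:
    """Diagnostic AUC within each self-reported race group.

    Expects columns: 'self_reported_race' (group label),
    'label' (ground-truth finding) and 'model_score' (model output).
    Large gaps between groups flag a model that needs investigating.
    """
    return df.groupby("self_reported_race").apply(
        lambda g: roc_auc_score(g["label"], g["model_score"])
    )
```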
If your AI sees something you can’t, ask it (nicely) to email penny@medicalrepublic.com.au