Recommendations published to tackle bias in medical AI tech
- 20 December 2024
- A set of internationally agreed recommendations has been published with the aim of reducing the risk of bias in AI healthcare technologies
- The guidelines from the STANDING Together initiative were published in The Lancet Digital Health and NEJM AI on 18 December 2024
- They are intended to ensure that datasets used to train and test medical AI systems represent the diversity of the people that the technology will be used for
A set of internationally agreed recommendations has been published with the aim of reducing the risk of bias in AI healthcare technologies.
The guidelines from the STANDING Together initiative, published in The Lancet Digital Health and NEJM AI on 18 December 2024, are intended to ensure that datasets used to train and test medical AI systems represent the diversity of the people the technology will be used for.
STANDING Together is led by researchers at University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham; the research study involved more than 350 experts from 58 countries.
Dr Xiao Liu, associate professor of AI and digital health technologies at the University of Birmingham and chief investigator of the study, said: “Data is like a mirror, providing a reflection of reality. And when distorted, data can magnify societal biases.
“But trying to fix the data to fix the problem is like wiping the mirror to remove a stain on your shirt.
“To create lasting change in health equity, we must focus on fixing the source, not just the reflection.”
Although medical AI technologies may improve diagnosis and treatment for patients, studies have shown that medical AI can be biased, meaning that some individuals and communities may be left behind or harmed.
People in minority groups are particularly likely to be under-represented in datasets, so they may be disproportionately affected by AI bias.
The guidelines include:
- encouraging medical AI to be developed using appropriate healthcare datasets that properly represent everyone in society, including minoritised and underserved groups;
- helping anyone who publishes healthcare datasets to identify any biases or limitations in the data;
- enabling those developing medical AI technologies to assess whether a dataset is suitable for their purposes; and
- defining how AI technologies should be tested to identify whether they are biased, and so work less well for certain groups of people (a simple sketch of this kind of subgroup testing follows below).
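To make the last point concrete, here is a minimal, hypothetical sketch in Python of the kind of subgroup performance check the guidelines call for: evaluating a model's accuracy separately for each demographic group rather than only overall. The simulated data, group names, and the use of scikit-learn's `roc_auc_score` are illustrative assumptions, not part of the STANDING Together recommendations themselves.

```python
# Hypothetical illustration of subgroup bias testing: compare a model's
# discrimination (AUC) per demographic group, not just overall.
# All data below is simulated for the example.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated test set: true labels, model scores, and a subgroup attribute.
n = 1000
group = rng.choice(["group_a", "group_b"], size=n, p=[0.8, 0.2])
y_true = rng.integers(0, 2, size=n)

# Make the model artificially noisier (weaker) on the under-represented group.
noise = np.where(group == "group_b", 0.8, 0.2)
y_score = y_true + rng.normal(0.0, noise)

# An overall metric can hide a gap that per-group reporting reveals.
print(f"overall AUC: {roc_auc_score(y_true, y_score):.3f}")
for g in np.unique(group):
    mask = group == g
    auc = roc_auc_score(y_true[mask], y_score[mask])
    print(f"{g} (n={mask.sum()}): AUC = {auc:.3f}")
```

Run as written, the overall AUC looks reasonable while the smaller group's AUC is markedly lower, which is precisely the failure mode that testing only on aggregate data would miss.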
The two-year research study was conducted with collaborators from more than 30 institutions worldwide, including universities, regulators, patient groups and charities, and small and large health technology companies.
It has been funded by The Health Foundation and the NHS AI Lab, and supported by the National Institute for Health and Care Research.
Sir Jeremy Farrar, chief scientist at the World Health Organization, said: “Ensuring we have diverse, accessible and representative datasets to support the responsible development and testing of AI is a global priority.
“The STANDING Together recommendations are a major step forward in ensuring equity for AI in health.”
Researchers hope the recommendations will be helpful for regulatory agencies, health and care policy organisations, funding bodies, ethical review committees, universities, and government departments.
In addition to the recommendations, a commentary written by the STANDING Together patient representatives, published in Nature Medicine on 13 December 2024, highlights the importance of public participation in shaping medical AI research.