NHS in England to trial new approach to AI biases in healthcare

8 February 2022

The NHS in England is to trial a new approach to the ethical adoption of artificial intelligence (AI) in healthcare with the aim of eradicating biases.

Designed by the Ada Lovelace Institute, the Algorithmic Impact Assessment (AIA) will require researchers and developers to assess the possible risks and biases of AI systems for patients and the public before they can access NHS data.

As part of the trial, researchers and developers will also be encouraged to engage patients and healthcare professionals at an early stage of AI development, when there is greater flexibility to make adjustments and respond to concerns.

It is hoped this will lead to improvements in patient experience and the clinical integration of AI.

It is also anticipated that, in future, the AIA could increase transparency, accountability and legitimacy in the use of AI in healthcare.

Octavia Reeve, interim lead at the Ada Lovelace Institute, said: “Algorithmic impact assessments have the potential to create greater accountability for the design and deployment of AI systems in healthcare, which can in turn build public trust in the use of these systems, mitigate risks of harm to people and groups, and maximise their potential for benefit.

“We hope that this research will generate further considerations for the use of AIAs in other public and private-sector contexts.”

The Algorithmic Impact Assessment complements ongoing work by the ethics team at the NHS AI Lab to ensure that datasets used for training and testing AI systems are diverse and inclusive. The lab was first announced in 2019, with the government pledging £25 million to improve diagnostics and screening in the NHS.

Brhmie Balaram, head of AI research and ethics at the NHS AI Lab, added: “Building trust in the use of AI technologies for screening and diagnosis is fundamental if the NHS is to realise the benefits of AI. Through this pilot, we hope to demonstrate the value of supporting developers to meaningfully engage with patients and healthcare professionals much earlier in the process of bringing an AI system to market.

“The algorithmic impact assessment will prompt developers to explore and address the legal, social and ethical implications of their proposed AI systems as a condition of accessing NHS data. We anticipate that this will lead to improvements in AI systems and assure patients that their data is being used responsibly and for the public good.”

Related News

AI tool to help detect lung cancer deployed in Greater Manchester

AI that helps detect diseases such as lung cancer more quickly is being rolled out at seven trusts within the Greater Manchester Imaging Network.

Ambient voice technology to draft patient letters piloted for NHS use

Great Ormond Street Hospital for Children is leading a pan-London, 5,000-patient assessment of the use of ambient voice technology.

AI software improves odds of good maternity care by 69%, say researchers

Women are more likely to receive good care during pregnancy when AI and other clinical software tools are used, researchers have found.