NHS report recommends the deployment of AI educational materials for staff

  • 9 June 2022
The development and deployment of “educational pathways and materials” for healthcare staff on the use of AI is the main recommendation from an NHS report.

The ‘Understanding Healthcare Workers’ Confidence in AI’ report is the first of two to be released following the 2019 Topol Review, which recommended the use of digital technologies such as AI and robotics to achieve digital transformation.

The report, which was developed by Health Education England and NHS AI Lab, explores the confidence healthcare workers have in AI and what could build that confidence, with the aim of supporting the further implementation of AI within the NHS. It suggests that clinicians require training and education opportunities to help manage the gap between their opinion or intuition on a patient’s condition and the recommendations made by AI technology.

“The main recommendation of this report is therefore to develop and deploy educational pathways and materials for healthcare professionals at all career points and in all roles, to equip the workforce to confidently evaluate, adopt and use AI,” the report states.

“During clinical decision making, this would enable clinicians to determine appropriate confidence in AI-derived information and balance this with other sources of clinical information.”

According to the report, the NHS’s use of AI is accelerating: a Health Education England survey of 240 AI technologies found that 20 per cent were ready for large-scale deployment, with a further 40 per cent expected to be ready within three years.

Despite this, NHS staff lack the expert knowledge and familiarity with AI technology to confidently leverage it for clinical benefit, according to the report.

The report also identified regulation as key to improving NHS healthcare workers’ confidence in AI.

The report states: “Interviews for this research suggest that confidence in any AI technology or system used in health and care can be increased by establishing its trustworthiness. Increasing confidence in this way is desirable and requires a multifaceted approach including regulatory oversight, real-world evidence generation and robust implementation.”

On the topic of regulation and governance, the report highlighted the creation of the Multi-Agency Advice Service (MAAS), a cross-regulatory advisory service for both developers and adopters of AI. MAAS is being developed by the National Institute for Health and Care Excellence, the Medicines and Healthcare products Regulatory Agency, the Health Research Authority and the Care Quality Commission.

A second report, yet to be released, will explore in more depth what these suggested education and training pathways for NHS staff should look like.

