AI in healthcare: We are still finding our way

  • 29 August 2024

Clinicians need a culture of education and understanding to get the best out of AI and protect themselves and their patients from inappropriate information, writes Simon Noel

In the data-heavy and complex world of the NHS, the promise of a tool to help take some of the load off, cut through the noise, and make the best use of expensive resources is very appealing. But AI is not just one thing. There are many levels and types of function, and many different use cases, which arguably add to the fog of misinterpretation, introducing risk and potential harm.

For example, a recent study by researchers at the University of Massachusetts Amherst and healthcare AI company Mendel found ‘hallucinations’ in almost all the medical record summaries produced by two large language models. The authors warned that inconsistencies and incorrect reasoning in the summaries could potentially lead to medical professionals making mistakes in areas such as prescribing.

AI and ‘Mrs Jones’

What influence could AI have on the patient journey? Let’s consider the following possible scenario.

‘Mrs Jones’ is being investigated for suspected breast cancer. At the start of her patient journey, she has a mammogram, which is augmented using AI to detect abnormalities that may have been missed by a human reader. The outpatient appointment she is offered is organised and overseen by AI, leading to better use of resources. The technology is also used to select the most appropriate appointment based on Mrs Jones’ personal preferences and accessibility needs.

The outpatient appointment itself is optimised using ambient listening, where AI listens to the conversation between the patient and the clinician. This leads to the automated generation of notes, tasks, tests, and suggested prescriptions.

Ambient listening is further used during the admission and clerking process when Mrs Jones is admitted to hospital for surgery. Algorithmic rules and AI are also applied to her observational data to assess her status and risk of deterioration, giving a nuanced overview of patient status that draws on multiple parameters beyond the vital signs collected at the bedside (a simplified sketch of this kind of rule-based scoring follows the scenario).

Clinicians also use AI tools to help generate Mrs Jones’ inpatient notes. These notes are further interrogated by AI to generate clinical coding and to alert the clinician to potential risk to Mrs Jones’ welfare.
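The deterioration-risk element of this scenario typically rests on rule-based scoring of bedside observations, in the spirit of the NHS National Early Warning Score (NEWS2). The sketch below is a minimal illustration in Python: it covers only two vital signs with simplified bands, and the escalation comment is an assumption for illustration, not the full published NEWS2 specification.

```python
# Illustrative, simplified early-warning scoring in the spirit of NEWS2.
# Only two vital signs are covered and the escalation threshold is assumed;
# this is NOT the full published NEWS2 specification.

def band_score(value, bands):
    """Return the points for the first band whose inclusive range contains value."""
    for low, high, points in bands:
        if low <= value <= high:
            return points
    return 0

# Scoring bands as (inclusive low, inclusive high, points).
RESP_RATE_BANDS = [(0, 8, 3), (9, 11, 1), (12, 20, 0), (21, 24, 2), (25, 60, 3)]
HEART_RATE_BANDS = [(0, 40, 3), (41, 50, 1), (51, 90, 0),
                    (91, 110, 1), (111, 130, 2), (131, 220, 3)]

def deterioration_score(resp_rate, heart_rate):
    """Aggregate a rule-based risk score from two bedside observations."""
    return band_score(resp_rate, RESP_RATE_BANDS) + band_score(heart_rate, HEART_RATE_BANDS)

print(deterioration_score(resp_rate=22, heart_rate=115))  # 2 + 2 = 4 -> would trigger escalation
```

Logic like this is deterministic and auditable; the AI layer the scenario describes sits on top of it, drawing in parameters beyond these bedside observations.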

Losing the human touch

AI tools are being promoted to improve efficiency, reduce costs and improve the patient experience. Much of the functionality I describe in the ‘Mrs Jones’ patient journey (see box) has already been demonstrated.

Recent studies of how AI is perceived by clinical teams and NHS staff show there is generally a willingness to adopt the technology, but this is countered by uncertainty surrounding function, infrastructure and governance.

A recent investigation by the Health Foundation showed general support among NHS staff for the use of AI, but a degree of caution about how it is implemented and supervised. Staff also had concerns about the impersonal nature of AI workflows and the loss of the “human touch”, especially when patients interact with the technology directly.

In 2022, the NHS AI Lab and NHS England identified several roles needed to drive and to guide AI: the shaper (national governance), the driver (AI champions), the creator, the embedder (implementer), and the user. They underlined the need for education to enhance the understanding of AI at all levels.

AI offers a multitude of possible tools to help improve the management of care. But clinicians often do not understand the difference between systems that manage complex data to provide guidance based on finite information, and generative AI, which interprets instructions and generates content. There is also cognitive bias on the part of the clinician, which may lead them to disproportionately challenge, or disproportionately accept, AI guidance generated by data-driven rules.
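To make that distinction concrete, here is a minimal sketch. The first function is a deterministic, auditable rule over discrete data; the second hands free text to a generative model. The `call_llm` function and the creatinine threshold are hypothetical placeholders for illustration, not a real API or a clinical rule.

```python
# Sketch contrasting deterministic, rule-based guidance with generative AI.

def renal_dosing_alert(creatinine_umol_l: float, threshold: float = 120.0) -> bool:
    """Deterministic rule on discrete data: the same input always produces the
    same, auditable answer. (Threshold is illustrative, not clinical advice.)"""
    return creatinine_umol_l > threshold

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a generative model call. A real model
    returns fluent free text that may omit, reorder or invent details."""
    raise NotImplementedError("stand-in for a generative AI service")

def draft_discharge_summary(notes: str) -> str:
    """Generative step: output must be reviewed by a clinician, because
    fluency is no guarantee of faithfulness to the source notes."""
    return call_llm(f"Summarise these inpatient notes for a GP letter:\n{notes}")
```

The rule can be validated once and trusted to behave identically for every patient; the generative output must be reviewed every time it is produced.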

A culture of education and understanding is essential so clinicians can get the best out of AI and protect themselves and their patients from inappropriate information.

We also need to be mindful of other governance risks. ChatGPT and Gemini are common, free-to-use AI platforms, but they are intended for general consumption, not for the analysis and interpretation of complex medical data. If a clinician chooses to use these platforms to generate a clinical narrative or a clinical note, the output should be treated with great caution: the AI may misinterpret the input, perhaps even generating the ‘hallucinations’ mentioned earlier. This is quite apart from the risk of taking potentially sensitive patient data outside a core electronic record, or of copying and pasting the resulting output from one platform to another.

Garbage in, garbage out

I’m sure you are familiar with the adage ‘garbage in, garbage out’ when talking about digital systems and databases. The risks may be amplified when we ask AI to do some of the work for us. Algorithmic bias, and the difficulty of training an algorithm effectively, need to be acknowledged and factored into the risks of deployment; this applies whether the AI is working with discrete data or interpreting images in areas such as radiology.
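As a toy demonstration of the point, consider the following sketch using scikit-learn on synthetic data: a classifier trained on a cohort in which one subgroup is barely represented can look accurate overall, yet perform markedly worse on that subgroup. All data and numbers here are fabricated for illustration.

```python
# Toy illustration of 'garbage in, garbage out': a classifier trained on a
# skewed cohort underperforms on the underrepresented subgroup.
# All data is synthetic and all numbers are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n, shift):
    """Synthetic 'patients': one feature whose relationship to the outcome
    is offset by `shift` in this subgroup."""
    x = rng.normal(0.0, 1.0, size=(n, 1))
    y = (x[:, 0] + shift + rng.normal(0.0, 0.5, n) > 0).astype(int)
    return x, y

# Training data: 950 patients from group A, only 50 from group B.
xa, ya = make_cohort(950, shift=0.0)
xb, yb = make_cohort(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on balanced, unseen cohorts: group B fares visibly worse.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    xt, yt = make_cohort(1000, shift)
    print(f"{name} accuracy: {model.score(xt, yt):.2f}")
```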

Post-implementation, we need effective infrastructure and the long-term provision of people and services to monitor these systems and keep them working efficiently. Use of AI must be supported by education and training of the workforce, covering the implications and potential risks of the technology.

The Academy of Health Sciences released a position statement on healthcare AI in 2023 outlining four areas of consideration: improving the confidence and trust of end users, enhancing the capacity and capability for healthcare adoption, defining governance more clearly, and building a system ready for AI adoption.

AI capability should be viewed in the same way as any other digital healthcare function and supported effectively, both in its deployment and in the way we enable our staff to use it safely and successfully.

A long way to go

Against a backdrop of huge disparity in the way that essential digital systems are deployed across the NHS, effective use of AI will inevitably be subject to the same variation. It does have huge potential to improve the working lives of our clinical teams and the experience of patients and service users. And there’s no getting away from the fact that AI is here to stay.

But it is also becoming increasingly clear that we have a long way to go. We must continue to find ways to effectively support and exploit this vital tool. We owe it to our patients and service users to get the best out of AI so they can use the healthcare system with confidence.

Simon Noel is CNIO at Oxford University Hospitals NHS Foundation Trust and chair of Digital Health’s CNIO Advisory Panel.



1 Comment

  • Some important points are raised here. Many people assume that the best use of AI in healthcare is in ambient listening to write notes and summarise consultations. While there is value in this, I don’t believe it represents the greatest efficiency unlock and indeed comes with risks, such as hallucinations, that could be overlooked (assuming a responsible clinician would always review any summarised letter or text). Obvious efficiency gains include triaging (whether in radiology or patient check-ins) and coding. However, there are other significant opportunities to improve clinician efficiency by suggesting and initiating actions within the EPR that the clinician or administrator can then approve. This approach ensures that workflows are completed on time, to consistent standards, and saves significant time for those users as well.
