Health professionals cautioned about risks of misusing GenAI
12 February 2025

Digital Health Networks have issued a statement cautioning medical professionals about the risks of using generative AI to create clinical notes and records.
The position statement warns that AI tools such as ChatGPT have the potential to change the context and meaning of the content, misinterpret clinical findings and make transcription errors.
It also raises concerns about the information governance risks associated with the data sent across such platforms.
Euan McComiskie, health informatics lead at the Chartered Society of Physiotherapy, and member of Digital Health Networks’ Chief Nursing Information Officer (CNIO) Advisory Panel, said: “The concern is that we’re jumping too far too soon with AI, and we’re starting to think of how we can procure and implement these solutions without fully understanding exactly how that would work.
“So for example how are we exposing our very sensitive health and social care data to an AI tool? We need to be absolutely certain that it’s safe and secure.”
The statement follows research commissioned by The Health Foundation, published in July 2024, which found that 76% of NHS staff supported the use of AI to help with patient care and 81% favoured its use for administrative tasks.
McComiskie said that he has heard some “horror stories” about how medical professionals are using AI.
“My concern about using generative AI is that we know that these tools hallucinate, and to base any decisions upon what might be a hallucination starts a chain of really concerning events, so we want to try to stop clinicians using AI inappropriately, rather than stop them using it.
“We’re just worried that people are doing it without being aware of the potential risks, and that’s what we want to highlight to them.”
McComiskie also raised the issue of bias which can be inherent in AI algorithms.
“A lot of the health data that we have in the UK is based upon someone white, male, middle-aged, middle class, relatively health literate, university educated, with English as their first language, heterosexual and cisgendered.
“What about, for example, the over 50% of people who aren’t male?
“If we’re basing an AI tool and algorithm on an incomplete dataset, then we’re baking in the possibility of health inequalities.”
The position statement also warns against medical students relying on the use of generative AI for coursework.
“It’s been proven that GPT hallucinates academic references, so we wouldn’t want students over-relying on generative AI to generate coursework.
“I want them to use it as a source of information, but not as the single source of information, and they need to apply all of their academic critique to that, as they would with a published, peer-reviewed journal article, policy document or strategy document.
“We’re not saying absolutely do not use it. We’re just saying, make sure you’re using it for the right purposes.”
Meanwhile, research published by the General Medical Council in February 2025 found that doctors who use generative AI see benefits for their own efficiency and for patient care and feel confident managing its risks.