AI can help build sustainable services – but only if we mitigate its risks

1 July 2024

Concerns about AI should not stop progress. They should prompt us to think about how to apply such powerful processing, argue Rebecca Hughes and Paul Davies

In the next 10 to 15 years, AI technology can help health and care services shift from a reactive model of care to a proactive, preventative one, building a more sustainable service. This is possible because of the vast amounts of personal health data captured by smartphones, wearables and other IT systems.

Central to this automation is AI that can mine large datasets to uncover hidden patterns, trends, preferences and other useful signals that inform better decisions. For example, AI-powered smartphone apps allow people to monitor their blood pressure at home, arming them with the knowledge to self-manage their conditions. Game-changing technology is being rolled out to every NHS radiotherapy department in England to help locate cancer cells 2.5 times faster.

Errors scaled faster

While AI is exciting, it is not without risks. Think of it this way: an error in an AI system is amplified faster and further than a mistake made by a single clinician. To uphold the safety of AI solutions, we must make conscious decisions about every aspect of data. HD Labs, who specialise in data orchestration for the public sector, know that a rush to embrace innovative technology can bring risk. This is why they apply the triple-aim framework of health and care to every algorithm they develop and every application of automation: the health and wellbeing of people; the quality of services provided; and the sustainable and efficient use of resources.

Without this approach, unintended consequences can emerge, such as the much-cited example of bias that arose when algorithms to spot melanoma were trained largely on images of white patients.
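One practical mitigation is to audit the demographic make-up of training data before any model is built. The short Python sketch below illustrates the idea under stated assumptions: the skin-tone labels (Fitzpatrick-style groupings), the field name and the 10 per cent floor are illustrative, not a prescribed method.

from collections import Counter

def audit_representation(records, attribute="skin_tone", floor=0.10):
    """Flag any group whose share of the training set falls below the floor.

    Note: this only checks groups present in the data; groups that are
    absent entirely must be checked against a reference list.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return [group for group, n in counts.items() if n / total < floor]

# Hypothetical metadata for a melanoma training set: 900 images of lighter
# skin tones (Fitzpatrick I-II) and only 50 of darker tones (V-VI).
training_set = [{"skin_tone": "I-II"}] * 900 + [{"skin_tone": "V-VI"}] * 50
print(audit_representation(training_set))  # ['V-VI'] -> under-represented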

Standards drive change

We often treat AI as a mine of information – but how can we ensure the information it surfaces is meaningful and can be used effectively? AI is an enabler of digital change, but we cannot grasp its full potential without information standards – they are the driver of that change. By defining what information should be recorded and shared by AI technologies in health and care, standards ensure clinicians have the right data in front of them for informed decision-making and effective delivery of care.

The Professional Record Standards Body's information standards help ensure the accuracy and reliability of data inputs and AI-driven outputs. They advance interoperability by facilitating safe and seamless data exchange between systems, ensuring that AI tools can function effectively across different platforms and care settings. Importantly, recording and sharing standardised information also reduces the burden of data interpretation that often falls on clinicians, while cutting the risk of errors and duplication and supporting overall clinical safety.
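To make this concrete, the sketch below shows how a home blood pressure reading might be captured as a standardised, machine-readable record, loosely modelled on HL7 FHIR's Observation resource. The LOINC codes are real; the validation helper is a hypothetical simplification of conformance tooling, not a PRSB mechanism. Because the structure, codes and units are agreed in advance, any conforming system can consume the reading without bespoke interpretation.

def make_bp_observation(patient_id, systolic, diastolic, taken_at):
    """Build a FHIR-style record of a home blood pressure reading.

    LOINC 85354-9 is the blood pressure panel; 8480-6 and 8462-4 are the
    systolic and diastolic components.
    """
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org", "code": "85354-9"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": taken_at,
        "component": [
            {"code": {"coding": [{"system": "http://loinc.org", "code": "8480-6"}]},
             "valueQuantity": {"value": systolic, "unit": "mmHg"}},
            {"code": {"coding": [{"system": "http://loinc.org", "code": "8462-4"}]},
             "valueQuantity": {"value": diastolic, "unit": "mmHg"}},
        ],
    }

def validate_observation(obs):
    """Return a list of problems; an empty list means the record conforms
    to this simplified profile and can be exchanged safely."""
    problems = [f"missing required field: {field}"
                for field in ("resourceType", "status", "code", "subject")
                if field not in obs]
    for comp in obs.get("component", []):
        if comp.get("valueQuantity", {}).get("unit") != "mmHg":
            problems.append("blood pressure must be recorded in mmHg")
    return problems

reading = make_bp_observation("123", systolic=128, diastolic=82,
                              taken_at="2024-07-01T08:30:00Z")
assert validate_observation(reading) == []  # conforms; safe to exchange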

Navigate safely

There is huge optimism about automation and AI. The data available today offers the potential for far greater personalisation and prediction than was previously possible, transforming how the system delivers everything from diagnostics to treatment. But confidence in automation is matched in equal measure by caution. That concern shouldn't stop progress; it should prompt us to stop and think about how we apply such powerful processing.

Let's not forget why we're embracing this technology in the first place – to help deliver better care outcomes for people. Marrying the use of standards with AI can help us navigate the rapidly changing world of data and deliver digital advances in the most effective and safest way possible.

Rebecca Hughes is director of partner solutions at the Professional Record Standards Body.

Paul Davies is the founder and CEO of HD Labs and a member of the BSI committee (BS30440) that has developed a Validation Framework for the use of AI within healthcare.
