AI and data: Let’s get the basics right for patients and staff
14 February 2024
As AI systems become ‘pervasive’, the risks and opportunities increase. Digital leaders must put people first at all stages of design and implementation, writes Simon Noel
Over the last 20 years of working in digital, I have encountered many types of clinical decision support system. Most have operated at a basic level, applying algorithmic rules to discrete data to present the user with guidance in particular scenarios.
The systems included real-time guidance within critical care workflows, guidance to promote appropriate blood transfusion, admission rules for targeted assessments and care plans, as well as VTE (venous thromboembolism) risk alerts. The common thread is that, despite providing real-time clinical support, they were not AI, but they still needed timely, accurate data and appropriate coding. The number-crunching algorithmic rules behind these systems are relatively simple; true AI and machine learning are far more complex, and potentially pervasive as their scope of impact evolves over time.
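To make the distinction concrete, the sketch below shows the character of such a rule-based alert. It is a minimal, hypothetical example (the field names, thresholds, and wording are invented for illustration, not taken from any real system): fixed rules applied to discrete, coded data, with no learning involved.

```python
# Hypothetical sketch of a rule-based VTE alert: fixed algorithmic rules
# over discrete, coded data. Field names and thresholds are illustrative.

def vte_risk_alert(patient: dict) -> str | None:
    """Return an alert if the patient meets simple, predetermined risk rules."""
    risk_factors = sum([
        patient.get("age_years", 0) >= 60,       # discrete data point
        bool(patient.get("reduced_mobility")),   # coded flag
        bool(patient.get("active_cancer")),
        bool(patient.get("previous_vte")),
    ])
    if risk_factors >= 2:
        return "VTE risk: consider thromboprophylaxis per local guidance."
    return None  # the rules never change unless someone rewrites them


print(vte_risk_alert({"age_years": 72, "reduced_mobility": True}))
# -> "VTE risk: consider thromboprophylaxis per local guidance."
```

Unlike a learned model, this behaviour is completely predictable and auditable, which is exactly why timely, accurate coded data matters: there is no model to smooth over a missing or miscoded flag.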
AI is offering me suggestions as I write this article; I see it on my phone and in the recommendations for what I should watch on my streaming services. These systems learn from what I do every day and are not bound by a predetermined set of discrete data.
ChatGPT and Google's Bard have given AI a public voice and raised its profile, and they are useful toys. But the appropriateness of these platforms in frontline healthcare, and especially for role-specific guidance, is questionable – particularly when you consider that the free version of ChatGPT draws on a dataset from early 2022, and healthcare moves on quickly. Let us also not forget last year's high-profile AI Safety Summit at Bletchley Park, which examined the general safety of AI as its use expands and touches all facets of our lives.
Making sense of the data
There is enormous potential for AI to make sense of the complex environment in which we work. However, we are also working with a technological environment in which AI was not fully considered when many of our systems were deployed, and this can be challenging. There may be many obstacles to accessing, or making sense of, the data to hand.
For example, an application may use AI-based natural language processing to analyse unstructured text in clinical notes, with the extracted data then analysed further using AI. What is the consequence of this compound processing? We need assurance that bias or processing failures at one stage do not undermine the reliability of the final results. This should also help guide how we profile future deployments of digital systems.
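As a hypothetical illustration of why that assurance matters, the sketch below chains two deliberately trivial stand-ins for AI stages. Note that the naive extractor misreads "ex-smoker" as current smoking; the downstream stage has no way of knowing its input was wrong, so the error simply propagates.

```python
# Hypothetical two-stage pipeline: stage one extracts structured findings
# from free text; stage two makes a judgement from those findings. Both
# stages are trivial stand-ins for AI models; the point is that errors or
# bias introduced at stage one are inherited, invisibly, by stage two.

def extract_findings(note: str) -> dict:
    """Stage one: naive 'NLP' over an unstructured clinical note."""
    text = note.lower()
    return {
        # Flaw by design: "ex-smoker" also matches, a stage-one error.
        "smoking": "smok" in text,
        "breathlessness": "breathless" in text or "dyspnoea" in text,
    }


def assess(findings: dict) -> str:
    """Stage two: downstream analysis that trusts stage one's output."""
    return "flag for review" if sum(findings.values()) >= 2 else "no flag"


note = "Patient reports breathlessness on exertion. Ex-smoker, quit in 2010."
print(assess(extract_findings(note)))  # -> "flag for review", on faulty input
```

Scaled up to real models and real notes, the structure is the same: each stage's output becomes the next stage's unquestioned ground truth, so assurance has to cover the pipeline as a whole, not each component in isolation.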
The place of data, and how it influences what we do, must be understood from the inception of design and implementation. That understanding must sit alongside an appreciation of the impact, both positive and negative, that system deployment will have on our working environment, and of how it affects us professionally: our skills, our capability, and our professional identity.
Accessible at the point of need
Clinical systems have not always been deployed with data extraction, or what we do with those data, as a primary goal. The gold standard should be for system design to account for effective data collection and use, with a view to expanding data availability and system functionality over time.
We also need to be aware of issues that influence accessibility and use, such as health inequalities among service users and staff, and of how we can prepare our workforce for the future. National guidance, such as What Good Looks Like, highlights the importance of enabling staff and service users. But it is essential that we get the basics right so that our clinical digital environments are ready to enable accessibility at the point of need; the hardware and basic infrastructure should not get in the way.
We cannot separate informatics from AI, but we need an environment in which users can work with technology confidently, with an understanding of the full scope of factors affecting healthcare technology. AI needs to be part of that awareness, covering not just the risks but also the opportunities.
We need a comprehensive, structured approach to systems design, build, deployment, user engagement, and training. This will allow for the optimum collection of appropriate data and reduce the design and documentation burden. Users also need to be given the skills to use and understand digital tools and data effectively. Where this cannot be done, we should make the right thing the easy thing to do, so that the technology is not seen as a barrier.
Every data point has a face
Incorrect data, and the AI built upon it, have the greatest impact on patients and staff at the point of care, or of missed care. This can be a consequence of point-of-care guidance, of a wider organisational process, or of research that affects a patient subgroup. We need to remember this at all stages of health record design, deployment, and utilisation. These data represent our staff and service users, so the systems we develop must be effective, accessible, inclusive, and free from bias.
Simon Noel is CNIO at Oxford University Hospitals NHS Foundation Trust. He is also chair of Digital Health’s CNIO Advisory Panel.