Looking into the use of artificial intelligence in healthcare

24 February 2022

In his latest column for Digital Health, Andrew Davies, digital health lead at the Association of British HealthTech Industries (ABHI), explores the use of artificial intelligence (AI) in healthcare.

Most of us will have seen films or read sci-fi books about malevolent robots taking over the world, although usually, in best Hollywood style, humanity wins out in the end. This is the scary end of AI: the out-of-control robot that is autonomous and bent on world domination. Of course, the reality of the situation today is far from this, but that is not to say that AI cannot cause harm if deployed carelessly.

Its deployment, therefore, has been subject to much research, debate and angst.

Stanford University is undertaking a long-term project into the use of AI and has highlighted that “we should not be waiting for AI tools to become mainstream before making sure they are ethical”.

The reality of our lives now is that AI is all around us (type ‘AI’ into the search bar at the top of this page and you will get over 1,500 results from this website alone) and its use is going to become even more prevalent. In a recent survey by Deloitte, 61% of respondents said AI will substantially transform their industry in the next three to five years. This all chimes with a report from 2019 by NHSX that indicated a three-to-five-year horizon for a range of AI products to be market ready, which brings us to now.

This increase in the use of AI is also being seen in health applications. A recent survey conducted by Health Education England and Unity Insights highlighted a range of AI deployments, with diagnosis (34%) being the largest, followed by ‘Automation/Service efficiency’ (29%), with ‘P4 Medicine’ and ‘Remote monitoring’ technologies at 17% and 14% respectively.

Addressing ethics

As AI algorithms play an increasing role in critical health decision making, for example in risk scoring and population management, we need to address the ethics.

A report by the Alan Turing Institute said: “AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.”

There are a number of broad-brush policies and guidance documents already published that help us with the values and principles. For example, the EU’s High-Level Expert Group on AI has published an assessment list for trustworthy AI, and more recently, and more specifically in health, the World Health Organization published “Ethics and Governance of Artificial Intelligence for Health”, which outlines six key principles:

  • Protecting human autonomy
  • Promoting human well-being and safety and the public interest
  • Ensuring transparency, explainability and intelligibility
  • Fostering responsibility and accountability
  • Ensuring inclusiveness and equity
  • Promoting AI that is responsive and sustainable

A recent report from the Ada Lovelace Institute, “Algorithmic Impact Assessment” (AIA), highlighted that there are over 80 AI ethics guides and guidelines available. With so many high-level documents in circulation, it is easy to see why ABHI’s members tell us they need practical tools that can be used to ensure an ethical and, importantly, agreed approach to development and implementation.

The work from the Ada Lovelace Institute starts to provide a framework for how developers can surface, document and address potential ethical issues. The benefits claimed by the AIA process include a clearer framework for meeting NHS expectations, synergies with established regulatory risk classification and improved access to patient input.

Such benefits would, of course, be welcome, and the approach is being piloted as part of the National Covid Chest Imaging Database and the National Medical Imaging Platform.

This approach to ‘algorithmovigilance’ is a claimed ‘world first’ and “demonstrates the UK is at the forefront of adopting new technologies in a way that is ethical and patient-centred”. It will be interesting to see the results from an NHS setting and particularly how well such an approach can be implemented at scale – a challenge raised in the report.

The process outlined risks being overly burdensome, not least on the NHS itself, and any evaluation of the pilot will need to consider how this approach can be adopted as routine for AI implementations. The evaluation needs to consider not just the use of the AIA itself but the wider context of algorithm assurance, such as the audits and transparency registers that the report highlights.

Threats to privacy

The potential power of AI technology creates a novel set of ethical challenges, including threats to privacy and confidentiality, informed consent, and patient autonomy.

We need to address possible risks in the development of algorithms, ensuring mechanisms are in place so that we can harness the undoubted potential of AI to deliver enhanced health services and patient outcomes.

To do this we need to create a framework of regulation, guidance and governance that provides reassurance to citizens, users and patients that the technology they are using has been developed and implemented in an ethical manner.

The other development in this area that could support a robust framework is the MHRA’s planned work on software and AI as a medical device, with the dual aims of protecting patients and the public whilst supporting innovation.

It is interesting that the MHRA is not tackling AI as a separate issue but is instead treating it as part of the larger software as a medical device (SaMD) category, and this approach of avoiding ‘AI exceptionalism’ resonates with industry.

It will also be interesting to see how the approach for AI, which is centred very much on the specifics of a given algorithm, merges with some of the emerging thinking outlined in the TIGRR report around more agile, outcomes-focused regulation, which is starting to filter through into mainstream government thinking and policy.
