Imperial College scientists create AI system to help treat sepsis

  • 26 October 2018

Scientists at Imperial College London claim to have developed an artificial intelligence (AI) system that could help treat sepsis.

After analysing around 100,000 records of US patients and every single doctor’s decision affecting them over a 15-year period, the system – called AI Clinician – was able to ‘learn’ the best treatment strategy.
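
The study behind the system framed sepsis care as a sequential decision problem and used reinforcement learning to derive a treatment policy from the historical records. As a rough illustration of that general idea (not the published method’s actual configuration), the following minimal sketch shows a toy tabular Q-learner; the state bins, action names and reward values here are all invented for illustration:

```python
# Minimal sketch (illustration only): a tabular Q-learner that maps coarse
# patient states to treatment actions, in the spirit of the reinforcement-
# learning framing reported for AI Clinician. State bins, action names and
# rewards are hypothetical, not the study's actual configuration.
from collections import defaultdict

GAMMA = 0.99  # discount factor for future outcomes
ALPHA = 0.1   # learning rate

# Q[state][action] -> learned estimate of the long-term value of an action
Q = defaultdict(lambda: defaultdict(float))

def discretise(vitals):
    """Bin raw vitals (e.g. blood pressure, heart rate) into a coarse state."""
    bp_bin = min(int(vitals["mean_bp"]) // 20, 5)
    hr_bin = min(int(vitals["heart_rate"]) // 30, 5)
    return (bp_bin, hr_bin)

def update(state, action, reward, next_state):
    """One off-policy Q-learning update from a single recorded transition."""
    best_next = max(Q[next_state].values(), default=0.0)
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

def recommend(vitals, actions):
    """Recommend the action with the highest learned value for this state."""
    state = discretise(vitals)
    return max(actions, key=lambda a: Q[state][a])

# Replay one recorded step; reward 1.0 stands in for eventual survival.
update(state=(2, 3), action="fluids_low", reward=1.0, next_state=(3, 2))
print(recommend({"mean_bp": 55, "heart_rate": 110},
                ["fluids_low", "fluids_high"]))  # -> fluids_low
```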

The findings, published in the journal Nature Medicine, suggested that the AI system’s recommendation either matched, or was better than, the human doctors’ decision 98 per cent of the time.

The team from Imperial hope the clinical tool could be used alongside medical professionals, helping doctors decide on the best treatment strategy for patients.

Dr Aldo Faisal, senior author from the Department of Bioengineering and the Department of Computing at Imperial, said: “Sepsis is one of the biggest killers in the UK – and claims six million lives worldwide – so we desperately need new tools at our disposal to help patients.

“At Imperial, we believe that AI for healthcare is the solution. Our new AI system was able to analyse a patient’s data – such as blood pressure and heart rate – and decide the best treatment strategy.

“We found that when the doctor’s treatment decision matched what the AI system recommended, they had a better chance of survival.”
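
The comparison behind that claim is retrospective: recorded episodes are grouped by whether the clinician’s action matched the AI recommendation, and survival rates are compared across the groups. A toy sketch of such a check, with entirely hypothetical field names and data:

```python
# Toy sketch of the retrospective comparison described above: group recorded
# episodes by clinician/AI agreement and compare survival rates. All field
# names and data are hypothetical.
def survival_by_agreement(episodes):
    """episodes: dicts with 'doctor_action', 'ai_action' and 'survived' (0/1)."""
    groups = {"matched": [], "differed": []}
    for ep in episodes:
        key = "matched" if ep["doctor_action"] == ep["ai_action"] else "differed"
        groups[key].append(ep["survived"])
    # Survival rate per group; None where a group has no episodes.
    return {k: (sum(v) / len(v) if v else None) for k, v in groups.items()}

print(survival_by_agreement([
    {"doctor_action": "fluids_low", "ai_action": "fluids_low", "survived": 1},
    {"doctor_action": "vaso_high", "ai_action": "fluids_low", "survived": 0},
]))  # -> {'matched': 1.0, 'differed': 0.0}
```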

The Imperial team is now hoping to trial AI Clinician in intensive care units in the UK.

Sepsis, also known as blood poisoning, is a potentially fatal complication of an infection.

It can cause a drastic drop in blood pressure, which can leave organs deprived of blood flow and oxygen, and can ultimately lead to multiple organ failure and death.

Health Minister Lord O’Shaughnessy said: “Sepsis is a devastating condition which claims far too many lives in the UK. We need to be better at spotting the signs early and artificial intelligence has the potential to do this quickly and more effectively than humans – supporting doctors so they can spend more time with patients.

“We’re already making steps to improve diagnosis with our new sepsis tool, but we must also embrace any new technology solutions that can improve patient care and save lives.”

The use of AI to help clinicians has become a hot topic in the health tech space.

Recently, a team of scientists from Anglia Ruskin University developed an app that uses AI to help spot tuberculosis (TB).

Meanwhile, researchers from Moorfields Eye Hospital NHS Foundation Trust, DeepMind and University College London (UCL) claim to have created a machine learning system to identify eye diseases from scans, and recommend appropriate referrals.

1 Comment

  • Sepsis is difficult and rapidly changing. Most patients with fever etc. have something less serious, and the ‘sepsis’ diagnosis often only becomes apparent after a very rapid deterioration. Who would take clinical responsibility for the AI decision if it all goes pear-shaped?
    Not, I hope, the doctor in charge of the patient. He/she would be caught in a cleft stick – follow their own judgement, or do what the AI says? It would be difficult to explain to the coroner if they ignored the AI decision and it turned out to be correct, but equally they would probably be held responsible if they thought the AI was wrong yet were scared of following their instinct – because if that went wrong, they would still be held to blame. Damned if you do, and damned if you don’t.

    The AI should give probabilities as well as a yes/no answer, and the system should be certified, with its providers having to take responsibility for any incorrect AI decisions. Are they confident enough in their own creation? If not, why should anyone else be?
