Another view: of artificial intelligence

15 December 2017

I’ve seen a lot about artificial intelligence in the news recently. Stories about the ethics of self-driving cars: whether they would let you die rather than allow others to come to harm, and whether a passenger would be happy at such a prospect.

Now people are starting to say that AI will replace doctors. I’m not convinced it’s quite as imminent a prospect as some people think, but it’s worth exploring a few issues.

OK, perhaps as a doctor I might have a vested interest in saying not everything I do can be done by a computer, but I do think they have a role. And perhaps I have a vested interest in them working. I might retire in 10-15 years and there don’t seem to be any doctors coming after me to look after me in my old age. I’d rather have a good AI than nothing.

But, interestingly, no-one seems to have talked about the ethics of AI in medicine. If people are worried that cars will take a utilitarian view, what about AI doctors? Will they concentrate on helping the greatest number, or on doing the most good? Will they ignore a lot of children and old people and concentrate on getting the employed person back to work to pay more taxes? Some of you may have come across the Oregon health experiment; an AI might make different decisions, but would they be better or worse?

Will they ignore smokers, people who are overweight, and those who drink or don’t follow health advice? Will they prioritise based on quality-adjusted life years (QALYs)?

Understanding subtleties

Will AI cope with the subtleties of what patients really present with, which is often different to what they say the issue is? While an AI might be good at reading an ECG and saying there is nothing wrong with it, will it pick up the fact that the patient is worried because his dad died of a heart-related problem at a similar age?

Perhaps a truly advanced AI will, but all the information I’ve seen about AI so far is about systems making good decisions based on the data they are given – not interpreting and understanding the complexities of human communication. People might be confusing AI with machines that can pass the Turing test.

There could be a role

If they are good at making rapid, reliable, evidence-based decisions, then I think there is a big role for AI in helping doctors and making us more efficient, productive, safe and effective – and, dare I say it, more satisfied and less stressed.

A huge amount of what we do is interpreting large quantities of data based on our knowledge and experience. Having an AI colleague supporting us is perhaps something to be embraced, particularly in these days of workforce shortages.

NHS England is currently running a big campaign on releasing capacity for general practice, and I recently spoke at one of their events about some of the things my local GP federation is up to. I really like their top 10 high impact actions concept. A lot of people are concentrating on the “diverting patients away from GPs” action. However, another talks about improving the efficiency of processes and how this could make GPs’ lives better: things like GPs learning to touch type, or using speech recognition to speed things up.

While these are good, there are lots of other ways of letting me work faster, and I think AI has a role. If we look at some of the things I do regularly that are currently slowed down by technology, we will find plenty of examples.

Prescribing warnings

These are currently a real pain: almost every time you prescribe anything you are shown a whole load of warnings. The problem is twofold. First, most of the warnings don’t apply or aren’t important; second, the sheer fatigue of seeing so many warnings means you start ignoring them, no matter how big a red box they appear in.

An AI could intelligently show me only warnings I need to see, or intelligently suggest alternatives that might be better or more suitable.
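
To make that concrete, here is a minimal sketch of the kind of filter I mean – the warning fields, the severity override and the 0.2 threshold are all invented for illustration, not taken from any real prescribing system:

    # Toy warning filter: suppress alerts that clinicians historically
    # ignore, unless they are severe. All fields and numbers are illustrative.
    from dataclasses import dataclass

    @dataclass
    class PrescribingWarning:
        drug: str
        message: str
        severity: str                  # "minor", "moderate" or "severe"
        historical_action_rate: float  # fraction of GPs who acted on it before

    def worth_showing(warning, threshold=0.2):
        return warning.severity == "severe" or warning.historical_action_rate >= threshold

    alerts = [
        PrescribingWarning("aspirin", "interaction with ibuprofen", "moderate", 0.05),
        PrescribingWarning("methotrexate", "interaction with trimethoprim", "severe", 0.90),
    ]
    for w in alerts:
        if worth_showing(w):
            print("SHOW:", w.drug, "-", w.message)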

Blood results

A huge task every day for every doctor is doing their blood results. This means reviewing the results of all the tests they have ordered on people, plus tests ordered by colleagues who aren’t in, and routine tests done for drug or disease monitoring.

There are loads of them. Most are effectively normal, but the computer flags them as abnormal because one indicator is just a fraction out of range; experience has taught me when to ignore these. Sometimes an abnormal result is expected, or it’s an improvement on what was there before. An intelligent system would know which to ignore and which not to. It might also spot underlying trends that are too subtle for me, and it could do all this quickly and reliably rather than leaving it until last because other things take priority.
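
As a sketch of what “intelligent” could mean here – the reference range, the 10% margin and the trend test are assumptions for illustration, not clinical rules:

    # Toy triage of blood results: flag only values well outside the range,
    # and separately flag steady trends that a single glance might miss.
    def actionable(value, low, high, margin=0.10):
        span = high - low
        return value < low - margin * span or value > high + margin * span

    def steady_fall(history, points=3):
        recent = history[-points:]
        return len(recent) == points and all(
            a > b for a, b in zip(recent, recent[1:]))

    print(actionable(58, low=60, high=120))  # False: a fraction out of range
    print(steady_fall([92, 84, 77, 69]))     # True: an eGFR trend worth a look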

Letters

I read anything from 20 to 140 letters a day. These might be discharge summaries or outpatient letters, or just part of the endless stream of admin and paperwork about people which the NHS creates in its infinite wisdom but which adds no value. Trying to reduce the stuff I don’t need to see is a huge piece of work. One partner in my practice felt it wasn’t economical to employ someone to do the sorting for us, and another always wants to see anything on his patients, no matter how trivial or irrelevant.

As well as reading the letters there are multiple actions that arise from them. Some patients might need blood tests or appointments arranging. Some need a change in medication or a new referral. For some it’s just a new diagnosis that needs coding.

Some doctors do all of this themselves; some pass the letters on to helpers. Whichever way you do it, it can be laborious, costly and prone to error. While there are automated ways of grabbing data from letters, pretty much every letter is reread by a coding clerk after a GP has seen it.

AI could automate this. It could learn who likes to see what, filter out the stuff that doesn’t need to be seen and highlight the stuff that does. It could code and extract data from letters, saving time and money.
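
A minimal sketch of that kind of triage – the categories, keyword rules and per-GP preferences below are all invented for illustration:

    # Toy letter triage: classify each letter, then route it according to
    # each GP's preferences; None means "file it without showing anyone".
    PREFERENCES = {
        "dr_a": {"discharge_summary", "outpatient_letter"},
        "dr_b": {"discharge_summary", "outpatient_letter", "admin"},  # sees everything
    }
    KEYWORDS = {
        "discharge_summary": ("discharged", "admission"),
        "outpatient_letter": ("clinic", "outpatient"),
        "admin": ("invoice", "survey", "audit"),
    }

    def classify(text):
        lowered = text.lower()
        for category, words in KEYWORDS.items():
            if any(word in lowered for word in words):
                return category
        return "admin"

    def route(text, gp):
        category = classify(text)
        return category if category in PREFERENCES[gp] else None

    print(route("Patient was discharged on 3 March", "dr_a"))  # discharge_summary
    print(route("Annual audit survey attached", "dr_a"))       # None: filtered out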

Personal assistant

On my iPhone I find it easier to say “Siri, set an alarm for 6.30pm tomorrow” than to go into the menus and do it manually. Could we use a form of AI to take recurrent tasks from me? Could I say “I need a blood form for FBC, renal and HbA1c” rather than requesting it manually?

And could AI help my note-taking? Could it annotate a consultation for me, pulling out notes rather than recording it word for word?

Could it know that when someone says they are thirsty and peeing a lot, I’m going to test for sugar and HbA1c, and pre-fill the form for me?
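
A toy version of both ideas might look like this – the test aliases and the symptom-to-test mapping are made up for illustration, not clinical rules:

    # Toy intent parsing: map a spoken request, or a symptom note, to a
    # pre-filled test request. Both mappings are illustrative only.
    import re

    TEST_ALIASES = {"fbc": "Full blood count", "renal": "Renal profile",
                    "hba1c": "HbA1c"}
    SYMPTOM_TESTS = {("thirsty", "peeing a lot"): ["Blood glucose", "HbA1c"]}

    def parse_request(utterance):
        lowered = utterance.lower()
        return [name for alias, name in TEST_ALIASES.items()
                if re.search(r"\b" + alias + r"\b", lowered)]

    def suggest_tests(note):
        lowered = note.lower()
        for symptoms, tests in SYMPTOM_TESTS.items():
            if all(s in lowered for s in symptoms):
                return tests
        return []

    print(parse_request("I need a blood form for FBC, renal and HbA1c"))
    print(suggest_tests("Says he is thirsty and peeing a lot at night"))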

Maybe they could take over…

Could AI start creating a differential diagnosis on screen, prompting me to ask more questions or home in on things? I saw an expert system that claimed to get the right musculoskeletal diagnosis 99% of the time based on patients’ answers to questions. What if it could do that by listening in?
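
As a toy illustration of the question-prompting idea – the feature table and the two candidate diagnoses are invented, nothing like a validated system:

    # Toy differential narrowing: prune candidates on each answer, and pick
    # a feature that still splits the remainder as the next question to ask.
    DIFFERENTIAL = {
        "mechanical back pain":   {"worse_on_movement": True,  "night_pain": False},
        "inflammatory back pain": {"worse_on_movement": False, "night_pain": True},
    }

    def refine(candidates, feature, answer):
        return {dx: f for dx, f in candidates.items() if f.get(feature) == answer}

    def next_question(candidates):
        for feature in sorted({k for f in candidates.values() for k in f}):
            if len({f.get(feature) for f in candidates.values()}) > 1:
                return feature
        return None

    print(next_question(DIFFERENTIAL))                      # night_pain
    print(list(refine(DIFFERENTIAL, "night_pain", True)))   # ['inflammatory back pain']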

Once we get to this point, maybe we really are at the stage at which AI could take over from doctors.

Comments

  • With the development of this sort of computing power, I feel we are on the cusp of something very powerful (note – not good/bad/indifferent; that remains to be seen!). The volume of data being thrown at clinicians has grown exponentially (more patients x more tests x more data per test to interpret).
    For all the reasons Neil mentions, clinicians need the sort of help AI can potentially deliver to make sense of it all.
    The issue of risk is interesting here – there is notable risk in the current service (clinicians can no longer know enough about lots of conditions/patients). Transferring risk (perhaps) to AI programmers/developers may or may not increase or decrease the overall risk for patients.
    What might be possible, though, is to help filter out some of the simpler things that hoover up huge swathes of clinicians’ time. This would free some time to see more patients/spend time on more complex cases. Over time, machines will increase their repertoire.

  • My interpretation of what you are describing is a rules engine.

    Especially for the results: it would be “if this rule applies, then do this”. As a developer (/supplier) I can provide a rules engine (once Care Connect and GP Connect give me that capability) which runs these rules, but I’d see you having to authorise the rules (and maybe build them).

    Similar logic could be applied to warnings. I can see that warning overload is litigation-proof for EPR suppliers, but it’s a clinical risk. You need a rules engine and to be able to authorise the rules.

    As for letters, the same could apply, but we need to move away from ‘paper’ formats (traditional paper, or electronic paper such as PDFs) to something like FHIR documents or openEHR. Hopefully that’s transfer of care, but my app’s rules engine still needs access to the data via Care Connect systems.
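
    Roughly the shape I mean – a toy sketch with invented rule definitions and result fields, rather than a real Care Connect API:

        # Toy rules engine: each rule is a predicate plus an action, and a
        # clinician has to authorise a rule before it is allowed to fire.
        RULES = [
            {"name": "repeat FBC in 3 months",
             "when": lambda r: r["test"] == "FBC" and r["marginal"],
             "authorised": True},
            {"name": "urgent GP review",
             "when": lambda r: r["test"] == "potassium" and r["value"] > 6.0,
             "authorised": True},
        ]

        def run_rules(result):
            return [rule["name"] for rule in RULES
                    if rule["authorised"] and rule["when"](result)]

        print(run_rules({"test": "potassium", "value": 6.3, "marginal": False}))
        # ['urgent GP review']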

    • Sorry, replied to the wrong post!

      I guess my admittedly limited understanding of AI is that it would learn the “rules” from watching what I and my colleagues do. So 30,000 GPs doing 10-50 results each every day might produce a crowdsourced level of intelligence about which results need actioning and which don’t, without me having to write the rules explicitly. Writing the rules down might be very difficult, and in two different patients I might do two different things based on their other problems, history etc.

      Of course, what would be even better is if the AI tracked the patients, saw which ignored results were incorrectly ignored, and added that learning to the set.

      I may be misunderstanding AIs, of course…
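
      Something like this toy sketch, perhaps – learning from logged decisions rather than hand-written rules (the feature columns and the scikit-learn usage are my guesses, not a design):

          # Fit a classifier on logged (result features, GP action) pairs
          # pooled across practices; it then predicts whether a new result
          # needs actioning. Feature columns are invented for illustration.
          from sklearn.linear_model import LogisticRegression

          # columns: [distance outside range, on relevant drug, previously abnormal]
          X = [[0.02, 0, 0], [0.30, 1, 1], [0.01, 0, 1], [0.45, 1, 0]]
          y = [0, 1, 0, 1]  # 1 = a GP took action on this result

          model = LogisticRegression().fit(X, y)
          print(model.predict([[0.03, 0, 0]]))  # expect [0]: file without action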

      • Similar to my understanding, but I’m a ‘little’ rusty on the details.

        Would presume this would involve feeding in the current medical record and other ‘genomes’ (social, etc) and starting to train it? Maybe guided by a few rules to start with?

        • That’s what I was thinking.

          • There’s also the question of how to train machine learning on huge data sets so that it works reliably. Care.Data didn’t go down too well, but that’s the scale of data required to train such systems.

    • My knowledge of AI is limited to an exposure to the concept some 30 years ago at university, but it strikes me that a key part is “artificial”. What if the rules the machine comes up with do not make sense – do you just blindly accept “the machine must know what it’s doing”, or reject them? A case in point – the recent highly successful Go machine came up with a move which the human experts thought was poor and likely to lose the game, but it went on to win.

      • I’ve long held the view that there might be early markers in a patient’s blood test results that are too subtle to be picked up by a human, particularly one who may not look at all the trend data (e.g. in eGFRs), and that a computer might be more alarmed by a drop or change than me, and have reasons.

  • Could AI start creating a differential diagnosis on screen? Prompting me to ask more questions or home in on things?

    Yes – see:
    https://spiral.imperial.ac.uk:8443/handle/10044/1/43811

    Nice review of the potential of AI, Neil.

    • And you would also need to feed in a doctor genome to get the answer that doctor wanted. So if Doctor Smith was reviewing the AI’s results it would be tailored to Doctor Smith.

  • Can I leave another reference for the Oregon experiment that talks more about healthcare rationing than the wiki article? http://journalofethics.ama-assn.org/2011/04/pfor1-1104.html
