New DeepMind AI ‘spots breast cancer better than clinicians’
- 2 January 2020
A newly developed artificial intelligence (AI) model is able to spot breast cancer better than a clinician, new research has suggested.
Google DeepMind, in partnership with Cancer Research UK Imperial Centre, Northwestern University and Royal Surrey County Hospital, has developed the model which can spot cancer in breast screening mammograms in a bid to improve health outcomes and ease pressure on overstretched radiology services.
Initial findings, published by the technology giant in the journal Nature, suggest the AI can identify the disease with greater accuracy, fewer false positives and fewer false negatives.
The model, trained on de-identified data of 76,000 women in the UK and more than 15,000 women in the US, reportedly lowered false positive results by 1.2% and false negatives by 2.7% in the UK, but is yet to be tested in clinical studies.
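For readers unfamiliar with the metrics being reported, the sketch below shows how false positive and false negative rates are derived from screening outcomes. This is purely illustrative, not DeepMind's code, and the cohort numbers are hypothetical; the article's figures refer to absolute reductions in these rates.

```python
# Illustrative sketch (not DeepMind's code): computing false positive
# and false negative rates from the four screening outcome counts.
def error_rates(tp, fp, tn, fn):
    """Return (false positive rate, false negative rate)."""
    fpr = fp / (fp + tn)  # healthy patients wrongly flagged for recall
    fnr = fn / (fn + tp)  # cancers the screen failed to spot
    return fpr, fnr

# Hypothetical counts for a cohort of 1,000 screens
fpr, fnr = error_rates(tp=45, fp=57, tn=893, fn=5)
print(f"FPR: {fpr:.1%}, FNR: {fnr:.1%}")  # FPR: 6.0%, FNR: 10.0%
```

A model that "lowers false positives by 1.2%" would, in these terms, reduce the FPR by 1.2 percentage points relative to the human readers it was compared against.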
When tested, the AI system processed only the latest available mammogram of a patient, whereas clinicians had access to patient histories and prior mammograms to make an informed screening decision.
Dr Dominic King, the health lead for Google DeepMind, said: “Our team is really proud of these research findings, which suggest that we are on our way to developing a tool that can help clinicians spot breast cancer with greater accuracy.
“Further testing, clinical validation and regulatory approvals are required before this could start making a difference for patients, but we’re committed to working with our partners towards this goal.”
Breast cancer is the most common women’s cancer globally, yet 20% of screening mammograms fail to spot the disease, instead returning a false negative.
Coupled with a shortage of senior radiologists causing lengthy delays – figures from the Royal College of Radiologists in 2018 show the UK needs another 1,004 full-time radiologists to meet demand – diseases like breast cancer are increasingly likely to be misdiagnosed.
DeepMind hopes the use of new technologies like AI will provide the key to spotting cancer early and easing burden on clinicians.
Matthew Gould, CEO of NHSX, said: “This research is an exciting step in bringing the benefits of artificial intelligence research to patients worldwide.
“I’m proud that the UK – and the NHS – is at the forefront of this. Breast cancer screening is a key part of our prevention programme and we look forward to seeing how this cutting edge research can become part of everyday clinical practice in the NHS.”
A recent review into the UK’s screening programmes found IT systems “cannot support the safe running of programmes” and need to be upgraded “urgently”.
Professor Sir Mike Richards found breast cancer screening programmes were often given “low priority” by NHS trusts and, in some cases, management systems have not been updated since the 2017 WannaCry cyber-attack.
But he said new technologies, like artificial intelligence (AI), have the potential to ease growing pressure on the NHS workforce and are likely to bring benefits to healthcare in the coming years. Currently, each mammogram is independently evaluated by two radiographers; Sir Mike found that AI has the potential to ease current workforce strains by taking on the job of one of those readers.
The Government recently pledged £250m for a National Artificial Intelligence Lab to improve diagnostics and screening in the NHS, including developing treatments for cancer.
2 Comments
Mammograms should be abolished for NUMEROUS scientifically solid reasons despite the misleading concept of better diagnosis pushed by the highly profiteering cancer industry (which google is part of) – read the books: ‘Mammography Screening: Truth, Lies and Controversy’ by Peter Gotzsche and ‘The Mammogram Myth’ by Rolf Hefti.
Journal article is behind a paywall, but I believe it outperformed solo radiologists, not when compared with dual reporting, as per the UK screening programme.
At what point do the ‘powers that be’ deem it suitable to REPLACE radiologists/reporting radiographers on the screening programme, thus freeing up valuable clinician time for general radiology?
Who carries the can if it gets it wrong? Can it ‘explain’ its ‘reasoning’?
Comments are closed.