“Inexcusable” mistakes made in Google DeepMind and Royal Free NHS data deal – study

  • 16 March 2017
A controversial deal between Google subsidiary DeepMind and a London NHS trust has been heavily criticised in an academic paper for its lack of transparency.

Published Thursday in Health and Technology, the paper said “inexcusable” mistakes were made when DeepMind Health and the Royal Free London NHS Foundation Trust signed an agreement in 2015 to develop the acute kidney injury alert app, Streams.

DeepMind and Royal Free have quickly hit back in a joint statement that claims the research contains major errors.

“This paper completely misrepresents the reality of how the NHS uses technology to process data. It makes a series of significant factual and analytical errors, assuming that this kind of data agreement is unprecedented.”

In April 2016, the agreement hit the headlines when the New Scientist reported that DeepMind had been given access to five years’ worth of data, covering 1.6 million patients, most of whom had not had an acute kidney injury.

The scale of the data transfer has been criticised by privacy groups and has led to a still-ongoing Information Commissioner’s Office inquiry.

But Royal Free and DeepMind have argued that the data is being used for “direct care”, with the trust retaining control of the data in an arrangement identical to many contracts between NHS trusts and clinical IT system suppliers.

However, the Health and Technology paper stated that “the failure on both sides to engage in any conversation with patients and citizens is inexcusable”.

“Patients should not be hearing about these things only when they become front page scandals.”

The paper was written by Julia Powles, a Cambridge University academic, and Hal Hodson, the journalist involved in the New Scientist’s coverage of the deal last year.

In response to the joint statement from DeepMind and Royal Free, the authors said the accusations of factual inaccuracy and analytical error were unsubstantiated, and invited the parties to respond on the record in an open forum.

The paper continues, “the public’s situation is analogous to being interrogated through a one-way mirror: Google can see us, but we cannot see it”.

The academic research queries the “direct care” justification that both Royal Free and DeepMind have used to explain not obtaining patient consent, as the dataset contains patients who have never been tested or treated for kidney injury.

“The position that Royal Free and DeepMind assert—that the company is preventing, investigating or treating kidney disease in every patient—seems difficult to sustain on any reasonable interpretation of direct patient care.”

The authors label the controversy a “cautionary tale and a call to attention” for other public bodies.

But the ongoing criticism of the deal also appears to have done little to dampen enthusiasm for the partnership. In November 2016, Royal Free and DeepMind signed a five-year agreement to further develop Streams.

Both parties claim the app is already producing successful results by alerting clinicians to deteriorating patients, and that “the feedback from clinicians has been overwhelmingly positive”.

DeepMind has also formed similar partnerships with other trusts, including University College London Hospitals NHS Foundation Trust, Moorfields Eye Hospital NHS Foundation Trust, and Imperial College Healthcare NHS Trust.

To address transparency concerns, DeepMind announced earlier this month that it was building an audit infrastructure so trusts could track in real time how their data was used.

At the time, Mustafa Suleyman, DeepMind’s co-founder, told Digital Health News that the technology “should bring a level of transparency and oversight to the use and processing of health data that will improve the level of trust and accountability”.

The company is also overseen by a panel of unpaid Independent Reviewers, who have yet to publish their first annual report into the company.

Powles and Hodson are currently working on a second paper examining the revised Royal Free agreement and the ongoing regulatory investigations.

 

 
