Tom’s digital disruptors: testing times for apps

  • 10 November 2015
The NHS Apps Library: its replacement has questions to answer

When the NHS Apps Library shut down in the middle of last month, The Register said farewell in its own, inimitable fashion – with a headline that read “shonky securo-nightmare NHS apps library finally binned.”

Otherwise, the library, which was once held up as a great advance in delivering new digital services to patients, passed with barely a whimper.

NHS England simply claimed that a “period of testing” for the service had come to an end; and that it had always been meant as a pilot site to “review and recommend apps against a defined set of criteria”.

Wherever the truth lay between these different interpretations, the decision inevitably led to confusion – and concern about how any future model for NHS app endorsement is going to function.

Work on a new process has already included the launch of a small library of digital tools for mental health. But there are some key things NHS England and its partners would probably do well to bear in mind so as not to see it succumb to the same mistakes.

How do we know what works?

Perhaps the biggest issue is to make sure that any healthcare app that carries a recommendation from the NHS actually helps to support a person’s health.

As obvious as this may seem, NHS England’s review process did not require developers to evaluate the effectiveness of any app, only to say they were safe, relevant and compliant with the Data Protection Act.

Diarmaid Crean, Public Health England’s deputy director for digital, who is helping to lead work on the new endorsement model, admitted at EHI Live 2015 that there had been problems.

“The apps library – once hit by volume – they had to drop the bar for the approval process to such a low standard that they were actually allowing many apps onto the apps library that didn't get a very thorough appraisal.”

This issue of effectiveness came to public attention with the publication last month of ‘App-based psychological interventions: friend or foe?’ in the BMJ’s ‘Evidence-Based Mental Health’ journal.

This article, which was widely picked up by the national press, said it was impossible to determine the true clinical value of more than 85% of mental health apps in the library.

The conclusion was based on a review of the 14 apps dedicated to the management of depression and anxiety.

Just four provided evidence of patient-reported outcomes to substantiate their claims of effectiveness, and just two applied validated mental health metrics, such as the Generalized Anxiety Disorder 7 and the Patient Health Questionnaire 9.

Speaking to Digital Health News, one of the study’s authors, Simon Leigh – a health economist at Lifecode Solutions – said the lack of evidence “threw me back quite a bit”.

“In other areas of the NHS we put so much effort into evidence-based medicine – gathering clinical data, gathering cost data,” he said. “It’s a bit of a dangerous game to recommend apps that potentially may not do anything or, in some instances, may actually do some harm.”

The efficacy of apps in the library was also mentioned by Phil Booth, founder of healthcare data security campaign group medConfidential.

“They basically just said at least it won’t kill you. That’s not acceptable; that’s like a phase I clinical trial. We need to be doing better than a phase I clinical trial when we are exposing something to the general population.”

An effective intervention

It’s hard to imagine the rigorous recommendation process that medicines go through being applied to apps – if only because their developers don’t have access to the billions of pounds and years of research available to pharma companies. But it’s clear that more needs to be done to make sure the public aren’t downloading ineffective or even harmful products.

The issue is being considered as part of the National Information Board’s workstream on app endorsement, which says that clinical effectiveness will feature in the later stages of the four-stage evaluation process, while acknowledging that new research methods may be needed in the area.

Public Health England’s Crean confirmed at EHI Live that assessment of an app's effectiveness will be part of this process.

This begins with self-assessment, before moving onto a form of crowd sourcing, and then – for a small number of apps that have demonstrated a high level of quality and potential – an independent evaluation by an official body.

Another encouraging sign is that Public Health England is working alongside the National Institute for Health and Care Excellence, which is responsible for assessing the cost-effectiveness of drugs and medical devices for the NHS.

Booth, though, is keen for the Medicines and Healthcare products Regulatory Agency to be involved in the process as well, as it is the competent body to assess medical devices.

“The single biggest thing that they could do to show us they have learned lessons from the app library is to take NHS England out of it entirely and give it to the MHRA,” he said. “That would be a signal at least they are taking it seriously enough. Anything less than that and I fear we will just get a rerun [of the NHS Apps Library].”

Keeping data private

The other burning issue that needs to be addressed in any replacement is keeping personal health data private and secure. Again, this might seem obvious.

But a study by researchers from Imperial College London and France’s Ecole Polytechnique CNRS, published in BMC Medicine, found that of the 35 apps in the library that sent identifying information over the internet, 23 did so without encryption.

Four apps were found to be sending both identifying and health information without encryption during the review, which assessed 79 apps in the library during July 2013.

Kit Huckvale, one of the researchers behind the study, told Digital Health News that these issues were more likely to be oversights than anything malicious, but it was still a major issue to address.

“Although we couldn’t follow what happened to data once it hit third-party servers, we found no evidence of malicious intent in the initial handling of data. These apps were leaky, but not clearly creepy.”

Booth has also done his own work with medConfidential on the privacy of apps, uncovering serious data problems with several products, including mental health support tool Kvetch and Spanish-owned doctor search service Doctoralia, both of which were removed from the service in July.

He argues this has left NHS England with a mountain to climb when it comes to public trust in any new endorsement model for apps. That “shonky” headline won’t be forgotten in a hurry. And lots of people will be looking out for any new “securo-nightmares.” 

A fair trial?

One of the most challenging questions posed by Booth is just what happens to the people who have downloaded and may still be using apps that were recommended in the library but may have data concerns or not be effective.

It is not an easy question to answer for NHS England. It says the Health Apps Library gathered around 12,000 hits per month, but that it does “not have apps download figures as they were hosted by external provider sites”. 

For Booth, this is an unacceptable state of affairs, considering that the library is now being billed as a ‘pilot’. “If you’ve got an apps library, and you’re endorsing hundreds of apps as part of a pilot, you need to scream that in big red letters on every page – or people are going to think this is a full-on endorsement by the NHS.

“If this genuinely was a trial, a clinical trial of apps on the population, where is the research protocol? Where is all the documentation? Did they actively present this as such? No they didn’t.”

An ambitious target

Health secretary Jeremy Hunt announced in September this year that he had an ambition for 15% of all NHS patients to routinely access NHS advice, services and medical records through apps by the end of the next financial year.

It’s an ambitious target and, while the work done by Crean and his colleagues does seem to be a positive step forward, it’s hard to see just how it can be achieved, given the gap in knowledge as to what apps actually work, how easy it will be to make them secure, and how they can be patched and updated.

Then there is the regulatory burden that closing these gaps could impose on what is necessarily a fairly small and agile industry.

Crean himself said his team has always acknowledged the new endorsement model has only a 50% chance of success. The next big tests will come when diabetes pilots are run this year, and when the end-to-end process for an accredited app is released sometime before March 2016.

Tom Meek

Thomas Meek is a reporter at Digital Health News.

He joined Digital Health News in February 2015 after spending several years writing about the pharmaceutical industry and healthcare communications, where he developed his interest in using new technologies to support patient care and education. He has a degree in journalism from the University of Stirling.

Find him on Twitter at @DHTomMeek
