And statistics

  • 16 December 2009

Dr Foster and the Care Quality Commission found themselves embroiled in a massive row over how to rate hospitals recently, when Dr Foster's annual Hospital Guide came up with different results from the CQC's Annual Health Check, which had apparently failed to spot problems at Basildon. Daloni Carlisle delves into the fraught world of data collection and interpretation.

When Dr Foster published its 2009 Hospital Guide recently, the editors’ introductory letter said they were hoping it would “spark an interesting debate”.

It's fair to say that they achieved this aim. Quite spectacularly, Dr Foster found itself pitted against the Care Quality Commission and a range of hospital trusts who disagreed with the guide's findings in print, online and broadcast media. No-one came out of this smelling of roses.

The consequences of timing

The row all started with a coincidence. On 27 November, the CQC and Monitor, the foundation trust regulator, announced serious concerns about the quality of care at Basildon and Thurrock University Hospitals NHS Foundation Trust. The trust's previous rating had given no hint of the problems uncovered by a spot check by CQC inspectors.

Dr Foster’s Hospital Guide came out on 29 November, containing a new patient safety score based on a range of indicators including standardised mortality rates. It listed 12 English NHS trusts that were “significantly underperforming” and ranked Basildon the worst in the country.

But nine of Dr Foster’s dirty dozen had been rated good or excellent in the CQC’s Annual Health Check for their overall performance. The inevitable question was: why the difference? Surely, said the commentariat, a good hospital is a good hospital; so how can there be this discrepancy?

The CQC faced calls for a complete overhaul of its rating system, with newspaper columnists heaping contempt on its use of self-declaration by trusts.

Dr Foster, meanwhile, was roundly condemned by trusts who said its data was wrong and the analysis unreproducible. There was even a threat of legal action. The CQC's chair, Baroness Young (who has since resigned), told the BBC that some of Dr Foster's data was "very legitimate" but some was "quite alarmist".

Both the CQC and Dr Foster have been left battered by this. Roger Taylor, research director at Dr Foster, admits to having been surprised by the strength of reaction to the Hospital Guide. He says: "The story has been presented as CQC versus Dr Foster, which is certainly not what we wanted."

Richard Hamblin, director of intelligence at CQC, says: "I do not think Dr Foster deserves to be slated, but equally I certainly do not think that the Good Hospital Guide invalidates the 2008-09 Annual Health Check."

There’s method in the methodology

First some specifics. Dr Foster has been accused of failing to publish its methodology in full and of failing to publish the weighting given to the different elements that made up the safety score.

Taylor denies both, saying it is all there on the website, although he admits Dr Foster has since clarified the weighting. "All the factors are weighted the same and we had thought that was clear, but it seems it was not," he says.

“As an organisation, we come into this area from the point of view of public transparency,” he adds. “All our data is on the website and anyone with some mathematical knowledge can reproduce our findings. From a commercial point of view, this is a nightmare and our sales force [selling Dr Foster’s tools into the NHS] keep asking us to stop publishing it.”
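
To illustrate what equal weighting means in practice, here is a minimal sketch of the kind of calculation involved: standardise each indicator so it sits on a comparable scale, then take a straight average. The indicator names and figures below are invented, and the z-score standardisation is an assumption for illustration, not Dr Foster's published method.

```python
# A minimal, illustrative sketch of an equally weighted composite
# score. Indicator names, figures and the z-score standardisation are
# assumptions for illustration, not Dr Foster's published method.
from statistics import mean, stdev

# Hypothetical indicator values for five trusts (lower = better).
indicators = {
    "hsmr":                     [98.0, 104.5, 91.2, 112.3, 100.1],
    "deaths_in_low_risk_cases": [0.8, 1.3, 0.6, 1.9, 1.0],
    "safety_incident_rate":     [4.1, 5.2, 3.8, 6.7, 4.5],
}

def z_scores(values):
    """Standardise so each indicator contributes on the same scale."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

# Equal weighting: average the standardised indicators, nothing more.
standardised = [z_scores(vals) for vals in indicators.values()]
composite = [mean(trust_vals) for trust_vals in zip(*standardised)]

for trust, score in enumerate(composite, start=1):
    print(f"Trust {trust}: composite score {score:+.2f}")
```

With every factor weighted the same, the whole calculation is a handful of lines, which is Taylor's point about reproducibility.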

On the CQC side, the criticism has been that Basildon, like all trusts, completed its own declaration for the Annual Health Check. It was not until a spot check that the problems came to light.

Hamblin points out that the CQC validates all self-declarations against a huge array of data. Basildon's declaration did not diverge from that data enough to trigger concerns.
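
Loosely pictured, that validation is a cross-check: compare what a trust declares against what the independent indicators suggest, and raise a concern only once disagreement passes some threshold. The sketch below is purely illustrative, with invented standards and an invented cut-off; the CQC's real process is far richer.

```python
# Purely illustrative cross-check of a self-declaration against
# independent data. Standards, values and the cut-off are invented.
declared_compliant = {"infection_control": True, "staffing": True, "mortality": True}
data_supports      = {"infection_control": True, "staffing": False, "mortality": True}

# Count the standards where the trust says "fine" but the data disagrees.
disagreements = sum(
    declared_compliant[std] and not data_supports[std] for std in declared_compliant
)

CONCERN_THRESHOLD = 2  # invented: how much disagreement triggers a concern
print("Trigger concern:", disagreements >= CONCERN_THRESHOLD)  # False: one mismatch
```

On this logic a trust can sit just under the threshold and pass unremarked, which is, in effect, what happened at Basildon.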

Yet the row has highlighted some interesting differences between the methodologies used by Dr Foster and the CQC to do with aggregation, thresholds and league tables.

Hamblin says: “There are some similarities between the two and some of the same criticisms could be levelled at either of us. But where we part company with Dr Foster is on league tables. Their methodology requires them to create a bottom group.”

He points out that Basildon was bottom of Dr Foster's league by a long way, with a marked difference between it and the 11 other hospitals condemned as worst performing.

CQC, by contrast, uses an absolutist approach in which trusts are rated by whether they have reached a threshold. “All trusts could be rated excellent under our system,” says Hamblin.

Then there is the question of the data itself and what the variation might show. Take Hospital Standardised Mortality Ratios (HSMRs). Hamblin says: "Variations can relate to four things: coding variation and errors, local variation that is not susceptible to risk adjustment, bad luck and quality of care. It is some combination of these that will lead to the observed variation."
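
For readers unfamiliar with the measure: an HSMR is, at heart, observed deaths divided by the deaths expected after case-mix risk adjustment, scaled so that 100 means "as expected". A minimal sketch, with invented patient-level risks:

```python
# The basic HSMR arithmetic: observed deaths over expected deaths,
# scaled to 100. The patient risk figures here are invented.

def hsmr(observed_deaths: int, predicted_risks: list[float]) -> float:
    """HSMR = 100 * observed / expected, where 'expected' is the sum of
    each patient's modelled probability of death (the risk adjustment)."""
    expected = sum(predicted_risks)
    return 100 * observed_deaths / expected

# 1,000 admissions with a modelled death risk of 5% each -> ~50 expected.
risks = [0.05] * 1000
print(f"{hsmr(60, risks):.1f}")  # ~120.0: more deaths than the case mix predicts
print(f"{hsmr(40, risks):.1f}")  # ~80.0: fewer deaths than the case mix predicts
```

Hamblin's point is that a figure above 100 does not, on its own, say which of his four explanations is at work.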

So, should that variation be used to produce a league table or to prompt questions? Hamblin says for CQC it is the latter – and points to the CQC’s system of alerting hospitals when their HSMR varies from the expected. This has shown that in over 70% of cases, there is an explanation other than poor quality of care for a trust’s high HSMR.
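
The alerting approach can be sketched as a simple statistical control check: flag a trust only when its observed deaths sit well beyond the variation chance alone could produce around the expected count. The Poisson assumption and three-sigma cut-off below are illustrative simplifications, not the actual alerting rule.

```python
# Illustrative control-chart check for an HSMR alert. The Poisson
# assumption and the three-sigma cut-off are simplifications, not the
# CQC's actual alerting rule.
import math

def hsmr_alert(observed: int, expected: float, sigmas: float = 3.0) -> bool:
    """Flag when observed deaths exceed expected by more than `sigmas`
    standard deviations (the sd of a Poisson count is sqrt(expected))."""
    return observed > expected + sigmas * math.sqrt(expected)

print(hsmr_alert(observed=60, expected=50.0))  # False: within chance variation
print(hsmr_alert(observed=75, expected=50.0))  # True: a prompt for questions
```

The alert is the start of an inquiry, not a verdict, consistent with the finding that over 70% of high HSMRs turn out to have an explanation other than poor care.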

Taylor responds: "We absolutely understand the point about league tables." Dr Foster has tried to overcome some of the difficulties of ranking hospitals with very similar scores by using bands. But when it has tried to publish just the bands, people have demanded to see the individual scores. In addition, he points out that thresholds are arbitrary and can lead to a hospital missing a rating by a very small margin.
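
To make the contrast concrete, here is a toy comparison of the three approaches in play: a league table, which must produce a bottom trust however close the scores; bands, which group similar scores together; and an absolute threshold, which every trust could in principle pass. All scores and cut-offs below are invented.

```python
# Toy contrast between league tables, bands and absolute thresholds.
# All scores and cut-offs are invented for illustration.
scores = {"Trust A": 82, "Trust B": 79, "Trust C": 78, "Trust D": 45}

# League table: someone is always bottom, however close the scores are.
table = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print("League table:", table)  # Trust D is bottom, and by a long way

# Bands: trusts with very similar scores land in the same band.
def band(score):
    return "high" if score >= 75 else "medium" if score >= 50 else "low"
print("Bands:", {trust: band(s) for trust, s in scores.items()})

# Absolute threshold: in principle, every trust could pass.
THRESHOLD = 75
print("Meets threshold:", {trust: s >= THRESHOLD for trust, s in scores.items()})
# Move the (arbitrary) threshold to 79 and Trust C fails by a single
# point: Taylor's objection to thresholds in a nutshell.
```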

Finally, aggregation: what data are you measuring, how do you choose to measure it, and how do you choose to put it together? Do you focus on outcomes, structures or processes? The answers to these questions may give very different ratings and push trusts in very different directions (think waiting lists and gaming here).

As Hamblin says: "Fundamentally, this is a problem of trying to boil an organisation with a budget of half a billion pounds and thousands of staff down to one word."

A question of trust

Both the CQC and Dr Foster want their data to be used by trusts to improve care and by the public to choose it, and both say there is room for each of them. Taylor says: "I think this row really shows that no single measure can be completely right and we need a plurality in the market place."

But this does prompt the question: who should the public believe? Hamblin answers obliquely: "I think people should look in depth, below the top-line figures and what is being said. What they should not believe is the ill-informed commentariat, where some of the reporting of this has been factually inaccurate and frankly bizarre."
