Enter the CCG on stats use and abuse

  • 30 June 2014
Just as everybody thinks they are a better-than-average driver, so everybody seems to think that they too can ‘do statistics’. So I’m going to be blunt. Statistics isn’t for the amateur: it’s potentially dangerous – like surgery.

In truth, I’d be happy if only those with statistical qualifications were allowed to use or publish statistics.

Every graph or table ought to display a quality mark to show that it has been properly prepared, together with the registration number of the statistician concerned.

This would certify the accuracy of the analysis and, if it proved erroneous, make it possible to identify who was responsible.

Sadly, the NHS seems to have a large cohort of staff without even the vaguest idea of how to use statistics properly.

In statistics, logic and context are everything

And now for the worst abuse of NHS statistics I’ve ever encountered, though fortunately it was annoying rather than life-threatening.

When Choose and Book was first introduced, its uptake was far lower than the Department of Health had hoped. Some geographical areas had high usage; in others it was much lower.

I was told that my own uptake was 38% – not good for the co-chair of the primary care trust’s C&B committee! Shouldn’t I be named, shamed, and blamed?

Neighbouring practices were getting similar scores (and also attracting criticism from their PCTs) despite trying extremely hard. As a result, they were disconsolate, and thinking of giving up on C&B entirely.

But these figures didn’t tell the whole story. They were perfectly accurate: of all the referrals I’d made to secondary care, less than 40% were electronic, and the same was true for my disheartened colleagues.

However, they only went so far. Once we had excluded all those clinics that couldn’t be booked electronically, our scores went up to more than 90% – quite a different picture.

The same thing happened up and down the country. Often the (very real) low uptake of C&B wasn’t the fault of GPs, it was the result of secondary care providers not making their clinics electronically bookable. GPs were simply unable to use C&B to refer patients into them.

It is, of course, perfectly acceptable to measure the national uptake of C&B as a percentage of all clinic appointments.

But to find out how assiduous an individual GP has been, the denominator must change so his or her score is measured as a percentage of those clinics that GP could have booked electronically.

Put like this, it’s obvious. It soon became known nationally as ‘the denominator problem’. What was the point in naming, blaming and shaming GPs if the problem lay, not with the GP, but with the hospital?
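
To make the arithmetic concrete, here is a minimal sketch. The 38% figure comes from my own case above; the assumption that only 40 of every 100 referrals went to electronically bookable clinics is invented for illustration, not real data.

```python
# The denominator problem in miniature.
# 38 electronic referrals out of 100 is the 38% figure quoted above;
# the split between bookable and non-bookable clinics is an invented
# illustration.

total_referrals = 100
electronic_referrals = 38
bookable_referrals = 40  # referrals to clinics that accepted C&B

# Naive score: electronic referrals as a share of ALL referrals.
naive_uptake = electronic_referrals / total_referrals
print(f"Naive uptake: {naive_uptake:.0%}")  # 38% - 'name and shame'

# Fair score: electronic referrals as a share of the referrals that
# could actually have been made electronically.
fair_uptake = electronic_referrals / bookable_referrals
print(f"Fair uptake:  {fair_uptake:.0%}")   # 95% - a different picture
```

Same numerator, different denominator – and a GP goes from apparent laggard to near-perfect adopter.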

Learning from mistakes?

Once NHS organisations were alerted to the denominator problem, you might have thought things would change nationwide. Not a bit of it.

Although the DH had always (rightly) warned that figures for the overall use of C&B shouldn’t be used to performance-manage clinicians and units, this didn’t stop individual NHS managers and strategic health authorities from repeatedly berating PCTs and practices “for their poor use of C&B”.

Even when the true mathematical situation was explained to them, some still continued. Indeed, one SHA had to be ‘paid a visit’ by a member of the national C&B team before it could be persuaded to stop pressurising PCTs and practices on the basis of the crude uptake rates.

Overall, it was an appalling episode that demonstrated the true statistical abilities of so many in the NHS.

As I said earlier, statistics is a potentially dangerous tool that should only be handled by those who know how to use it safely. It can be treacherous in the hands of the novice – and even worse when used by someone with a little (but not much) training.

This should not be happening in the modern NHS. But here are a few more of the appalling things statistically inept NHS workers have done:

  • Averaging averages (statistically invalid when the groups are different sizes, because each average needs weighting by its group’s size – see the sketch after this list)
  • Failing to recognise that percentages are misleading when small numbers are involved (also shown in the sketch below)
  • Not understanding that in any group, however good, someone has to be below average (unless everyone has identical measurements, of course)
  • Turning subtle scores into simplistic red-amber-green (‘RAG’) ratings (acceptable with large numbers, but potentially misleading with small numbers, and even worse when there are no exception codes)
  • Insisting on continuous year-on-year improvement, which is logically impossible once performance approaches its ceiling.
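
Two of these traps are easy to demonstrate with toy numbers. Everything in the sketch below is invented for illustration; the practice sizes and referral counts are assumptions, not NHS data.

```python
# Trap 1: averaging averages. Two practices report their uptake rates.
big   = {"electronic": 900, "referrals": 1000}  # 90% uptake
small = {"electronic": 1,   "referrals": 10}    # 10% uptake

# The unweighted 'average of averages' treats both practices as equal...
avg_of_avgs = (900 / 1000 + 1 / 10) / 2
print(f"Average of averages: {avg_of_avgs:.0%}")  # 50%

# ...but the pooled figure weights each practice by its workload.
pooled = (900 + 1) / (1000 + 10)
print(f"Pooled uptake:       {pooled:.0%}")       # 89%

# Trap 2: percentages on small numbers. One extra electronic referral
# moves the ten-referral practice by ten percentage points...
print(f"Small practice: {1/10:.0%} -> {2/10:.0%}")          # 10% -> 20%
# ...while the same single referral barely registers at the big one.
print(f"Big practice:   {900/1000:.0%} -> {901/1000:.0%}")  # 90% -> 90%
```

The ‘right’ answer depends on the question: the pooled figure describes the whole system, while the unweighted mean quietly gives a ten-patient practice the same voice as a thousand-patient one.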

Need I go on? No wonder the NHS has so little reliable information if this is the off-hand way in which it treats data when standards, morale, reputations, and lives are at stake.

This is for you

Fortunately, not all leaders in the NHS behave in this manner. The health service has some excellent people who know their way around statistics, and who are bold enough to stand up against their inappropriate use.

Nor is this a problem just for managers. Anyone can fall into these traps – clinicians and non-clinicians alike.

Increasingly, clinical leads in both clinical commissioning groups and hospitals are a conduit for performance data, so everyone will benefit if these clinicians pay due care and attention to the statistics and dashboards they sign off.

It’s for precisely these reasons that I am so keen to have statistics as a core ability for all chief clinical information officers and clinical informatics leads – something I’ve been promoting since the inception of the EHI CCIO Leaders Network.

It’s also why our CCG has an Informatics Manifesto. Among other things, it pledges that the CCG:

  • Will not judge individuals or organisations using metrics over which they have little or no control
  • Will not introduce any metric to rank practices and/or clinicians without full exception coding
  • Will always set targets and make comparisons on a truly like-for-like basis
  • Will not ask for year-on-year improvement once a sensible upper target has been reached. High quality will be acknowledged and no further improvement will be required.

It isn’t rocket science, but it does need care – and statistical acumen.

Dr John Lockley

Dr John Lockley is clinical lead for informatics at Bedfordshire Clinical Commissioning Group and a part-time GP.
