Enter the CCG on positive and negative numbers

4 June 2014
The ‘bundle of twigs’ fallacy is the erroneous belief that multiple pieces of evidence, each independently weak, provide strong evidence when bundled together.

In the early 1990s, I underwent personality profiling. This consisted of about 13 multiple-choice questions, and took less than ten minutes to complete.

And then the analysis came back. It was pin-sharp, and could have been written by someone who had known me closely throughout my entire life.

To say that I was astounded would be an understatement. To this day I remain in awe of a system that could produce so much information from such a minute input. Doubtless the whole mechanism had been iteratively refined — to near-perfection.

Ever since, this model of information acquisition has been my ultimate goal: to gather all data necessary, using the minimum number of questions, while producing an output that is fiercely accurate and unbelievably useful. This is what I would term ‘Type A’ data acquisition.

Brilliance and its opposite

Then, of course, there is the opposite, neatly exemplified by what happened many years ago in our then primary care group.

In its wisdom, the PCG had decided that it would be helpful to have additional information on the management of patients with cardiovascular disease. So it requested practices to supply this extra data.

Some weeks later, we received the results – 19 pages of immaculately produced histograms showing how each practice had performed in various areas.

One graph displayed the PCG’s calculation for: “What percentage of your patients with cardiovascular disease who are smokers received ‘stop smoking counselling’ in the past year?”

We did awfully well: we’d been scored at 200%. (However, we weren’t top: another practice had scored 500%.)

The subsequent locality meeting was — as they say — “interesting”. We told the administrators concerned precisely what we thought of their statistical abilities.

Somewhat shamefacedly, they agreed that the maximum score really should have been 100% – then pointed out brightly that all was not lost, because the exercise had generated a significant amount of extra data.

At this point one of the GPs present growled: “Yes, but we don’t know how much of it we can trust.” Precisely.

This is ‘Type B’ data collection – intrusive, ineptly analysed, with all the wrong inferences, and above all, unreliable.
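As for that 200% score: a minimal sketch (with entirely invented data) shows how a ‘percentage’ can sail past 100 when the numerator counts counselling codes entered rather than distinct patients counselled, so that anyone coded twice is counted twice.

```python
# Hypothetical illustration: how a "percentage" can exceed 100%.
# Each tuple is (patient_id, date of 'stop smoking counselling' code).
counselling_codes = [
    (101, "2013-02-14"), (101, "2013-09-30"),   # same patient coded twice
    (102, "2013-05-01"), (102, "2013-11-12"),
    (103, "2013-07-07"), (103, "2014-01-20"),
]
smokers_with_cvd = 3  # denominator: distinct eligible patients

# Naive numerator: count code entries, not patients.
naive = len(counselling_codes) / smokers_with_cvd * 100
# Correct numerator: count distinct patients with at least one code.
correct = len({pid for pid, _ in counselling_codes}) / smokers_with_cvd * 100

print(f"naive score:   {naive:.0f}%")    # 200% -- nonsense
print(f"correct score: {correct:.0f}%")  # 100% is the ceiling
```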

The appliance of science

What is not always appreciated is that the over-enthusiastic collection of medical data, especially data that isn’t relevant, is actually very costly.

It’s not just the wages of the administrative staff processing it that add up, but the uncounted time of the clinicians and practice managers who have to assemble the data in the first place.

Performing a physically intrusive, detailed (and therefore costly) data collection will not be justifiable unless its conclusions are correctly analysed, accurate and relevant.

The bundle of twigs fallacy

There’s another important principle that needs bringing in here – the ‘bundle of twigs’ fallacy. This is the erroneous belief that multiple pieces of evidence, each independently suspect or weak, provide strong evidence when bundled together.

Or to put it another way, ‘big data’ doesn’t become good just by being big: it is only good if its individual components are strong. 
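A toy simulation (my own illustration, with invented numbers) makes the point: averaging many weak but unbiased readings converges on the truth, whereas averaging many readings that all share the same flaw converges, ever more confidently, on the wrong answer.

```python
import random

random.seed(1)
TRUE_VALUE = 10.0

def weak_but_unbiased():
    # Noisy but, on average, correct.
    return TRUE_VALUE + random.gauss(0, 5)

def weak_and_biased():
    # Noisy AND systematically wrong (e.g. consistently miscoded records).
    return TRUE_VALUE + 3 + random.gauss(0, 5)

for n in (10, 1_000, 100_000):
    unbiased = sum(weak_but_unbiased() for _ in range(n)) / n
    biased = sum(weak_and_biased() for _ in range(n)) / n
    print(f"n={n:>6}: unbiased mean={unbiased:6.2f}  biased mean={biased:6.2f}")
# The unbiased bundle homes in on 10; the biased one settles near 13,
# and adding more twigs only makes us more certain of the wrong number.
```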

Proponents of care.data, take note. The quality of the information gleaned from primary care computers will depend on the accuracy and completeness of coding of the primary care record — and this can be variable (to say the least) in areas unrelated to the Quality and Outcomes Framework.

Sampling bias

Then there is the matter of sampling. The immutable law of statistics is that any conclusion drawn from a sample is only as good as the relationship between the sample chosen and the total population.

The classic cautionary tale here is the truly enormous straw poll run by the Literary Digest in 1936 to predict the result of the US presidential election. Around ten million ballots were posted out, carefully spread across every state, to addresses drawn largely from telephone directories, car registration lists and the magazine’s own subscribers.

This poll predicted a landslide Republican victory: in the event Roosevelt, the Democrat, won — also by a landslide. How was this possible? Simple – it was the sampling technique.

The poll was conducted during the Great Depression: those able to afford a phone, a car or a magazine subscription at that time were much more likely to be Republican voters.

The conclusion? Watch like a hawk for hidden bias.
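The same trap is easy to reproduce in a few lines (all proportions invented): if the characteristic you sample on, here owning a telephone, is correlated with the answer you want, the poll can be enormous and still call the result wrongly.

```python
import random

random.seed(2)

# Invented population: 56% support candidate A overall,
# but telephone owners (30% of people) lean heavily the other way.
def make_voter():
    has_phone = random.random() < 0.30
    p_support_a = 0.35 if has_phone else 0.65
    return has_phone, random.random() < p_support_a

population = [make_voter() for _ in range(100_000)]
truth = sum(votes_a for _, votes_a in population) / len(population)

# A huge poll, but drawn only from telephone owners.
phone_owners = [v for v in population if v[0]]
poll = random.sample(phone_owners, 20_000)
polled = sum(votes_a for _, votes_a in poll) / len(poll)

print(f"true support for A:   {truth:.1%}")   # ~56%
print(f"phone-only poll says: {polled:.1%}")  # ~35% -- huge sample, wrong call
```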

How many of those filling in the ‘Friends and Family’ test, or giving feedback on a practice website, are motivated to do so purely because they have an axe to grind?

Patients who have gone away satisfied are much less likely to post their comments. Data sources like these inevitably contain an in-built bias which needs to be acknowledged.

And even if the sample size is huge, if it is a biased sample it is still useless. (It’s the ‘bundle of twigs’ fallacy all over again.)
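Self-selected feedback behaves the same way. A rough sketch with invented figures: if dissatisfied patients are five times as likely to post a comment as satisfied ones, the unhappy can look three or four times as numerous in the feedback as they really are in the practice.

```python
import random

random.seed(3)

# Invented figures: 90% of patients leave satisfied, but a dissatisfied
# patient is five times as likely to post feedback as a satisfied one.
patients = ["satisfied" if random.random() < 0.90 else "unhappy"
            for _ in range(100_000)]

def leaves_feedback(mood):
    return random.random() < (0.10 if mood == "unhappy" else 0.02)

feedback = [mood for mood in patients if leaves_feedback(mood)]

print(f"unhappy in the practice: {patients.count('unhappy') / len(patients):.0%}")  # ~10%
print(f"unhappy in the feedback: {feedback.count('unhappy') / len(feedback):.0%}")  # ~36%
```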

Einstein’s dictum

We can all too easily be blinded by numbers, especially those expressed with apparent precision. It sounds good to be able to say that ‘the average height of a class of medical students is 5ft 8.713 inches’ – but if we only measure each person to the nearest half-inch, those three decimal places are totally spurious.
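A quick check with invented numbers shows why. For a class of thirty whose heights vary by a typical three inches, the mean is only pinned down to roughly half an inch, so everything after the first decimal place is noise, however many digits the calculator prints.

```python
import random
import statistics

random.seed(4)

# Invented class: 30 students, true heights roughly Normal(68.7 in, SD 3 in),
# each recorded only to the nearest half-inch.
true_heights = [random.gauss(68.7, 3.0) for _ in range(30)]
recorded = [round(h * 2) / 2 for h in true_heights]

mean = statistics.mean(recorded)
sem = statistics.stdev(recorded) / len(recorded) ** 0.5  # standard error of the mean

print(f"reported mean:  {mean:.3f} in")           # looks impressively precise
print(f"real precision: +/- {sem:.2f} in or so")  # ~0.5 in: the extra decimals are spurious
```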

But there’s a deeper problem with numbers – the unspoken (and often unchallenged) assumption that just because we can measure something (such as raw immunisation rates), this measurement is automatically a good proxy for the attribute we really want to assess (such as ‘practice quality’).  

It may not be – in this example, because of the skewing effects of patient choice.
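A small illustration of that skewing (numbers invented): both practices below offer immunisation to every eligible patient, so their behaviour is identical, yet their raw rates diverge purely because of how many patients decline.

```python
# Invented example: identical practice behaviour, different patient choices.
def raw_immunisation_rate(eligible, decliners):
    # Both practices offer the jab to every eligible patient;
    # the only difference is how many patients say no.
    return (eligible - decliners) / eligible * 100

practice_a = raw_immunisation_rate(eligible=1_000, decliners=50)
practice_b = raw_immunisation_rate(eligible=1_000, decliners=250)

print(f"Practice A: {practice_a:.0f}%")  # 95%
print(f"Practice B: {practice_b:.0f}%")  # 75% -- same 'quality', very different proxy
```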

We need to remember the dictum commonly attributed to Einstein: “Not everything that can be measured matters, and not everything that matters can be measured.”

Are your figures a good proxy for what you are actually trying to measure? If not, don’t use them. Bad information is worse than no information at all – because people will rely on it, especially if it has apparently precise numbers attached.

And when people rely on bad information, even worse conclusions and actions are likely to follow. 

Dr John Lockley

Dr John Lockley is clinical lead for informatics at Bedfordshire Clinical Commissioning Group and a part-time GP.
