Another view: Neil Paul
- 13 March 2012
My clinical commissioning group is part of the Aqua programme. This is a membership programme set up in the North West to spread ideas for improving the quality and productivity of healthcare.
One of its tenets is risk scoring your population to target your interventions. This is based on the fairly well-known premise that there is a pyramid of use of health services.
A few at the top use the service a lot, but as you go down the pyramid people use it less. Interestingly, in today’s newspaper there is an item about some individuals using A&E more than 100 times a year.
The idea is that if you target the people who aren’t yet at the top you can prevent them getting there. The hope is that the resource you use in prevention is less than the resource they use if they do get there.
Because, of course, you need money to invest in this. You can’t stop doing the work that is already going on for those at the top.
The emperor is naked, surely?
So, we have been looking at risk scoring and we have seen presentations from several companies that claim to do this. I keep wondering if I’m missing the point – or am I the only one who thinks the emperor is naked?
The companies all seem to have a secret algorithm that they think is going to make them rich. One told us that they had sold their wares to tens if not hundreds of CCGs already.
At around 40p a patient (although, to be fair, this may include more than just the use of the formula) it seems a lot of money for a random number generator – across a registered population of, say, 200,000 that is £80,000 a year.
Why the NHS hasn’t got its own risk calculator for free use, I’m not sure. The much-hyped PARR+ tool is intended for secondary care rather than primary care, and it’s getting a bit old now. But why we haven’t developed something else, I don’t know.
There is a useful Nuffield Trust document that tries to teach CCGs how to pick the right tool to use. But while it talks about checking the accuracy figures quoted, it doesn’t really mention any of the points below.
At least one company we have seen said that its algorithm was based on work from the Johns Hopkins University in the USA. I’m puzzled. If this is public, published work, why aren’t we using it for free?
Perhaps I’m naïve and they have published the results of a secret algorithm rather than the algorithm itself, and we are paying to use that?
But even if that is the case, why are CCGs spending so much on it when there isn’t any evidence it applies to a British population in a British healthcare system?
And generating work
Although I like the basic idea of targeting the soon-to-be ill, even the salespeople tell you that these scores give false positives and false negatives.
In fact, they admit you may need to review three to five people to find one that has a preventable problem – and you will still miss some.
Another way of looking at this is that it’s a huge extra burden of work, most of it pointless, that may miss the people you need to see – and we are paying to be able to do it.
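To put rough numbers on that burden: suppose the tool flags 2,000 patients and the vendors’ own one-in-three-to-five figure holds. The sketch below is purely illustrative – the flag count, hit rate and sensitivity are assumptions, not figures from any supplier.

```python
# Illustrative workload arithmetic for a risk-stratification exercise.
# All inputs are assumptions for the sake of the example, not vendor figures.

flagged = 2000        # patients the tool flags as "soon to be high risk"
hit_rate = 1 / 4      # roughly one preventable problem per 3-5 reviews
sensitivity = 0.7     # assume the tool still misses 30% of the true cases

useful_reviews = flagged * hit_rate               # reviews that find something preventable
wasted_reviews = flagged - useful_reviews         # reviews that find nothing actionable
missed_cases = useful_reviews / sensitivity - useful_reviews  # true cases never flagged

print(f"Reviews that find a preventable problem: {useful_reviews:.0f}")
print(f"Reviews that find nothing:               {wasted_reviews:.0f}")
print(f"Preventable cases the tool never flags:  {missed_cases:.0f}")
```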
The systems in question seem to generate an individual risk for each patient. So I’m sat in my surgery and a patient walks in. My computer beeps and tells me that I have in front of me a soon-to-be-high-risk person.
If they are willing to engage, I review their situation and try to maximise their treatment. But wasn’t I doing this anyway?
Perhaps the patient being told they are high risk gives them an extra incentive to follow my advice? But then, given how badly people understand risk scores, do we worry a lot of them needlessly?
And unbelievably expensive
What a risk score per patient doesn’t do is identify pathways or areas that need work. To gain this understanding, we need to do a piece of work looking at the people with high risk scores to identify common themes.
But hang on a minute: surely the weighted factors in the algorithm are the themes we need to address? If the algorithm wasn’t secret, we could see them and address them more easily. Basically, if having severe COPD puts your score up, don’t we just need to look at COPD pathways?
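If the score really were a transparent weighted sum, the themes would fall straight out of the weights – no separate trawl of the high scorers needed. A minimal sketch, with factor names and weights invented purely for illustration:

```python
# Hypothetical weights from a transparent, linear-style risk score.
# A big weight on a factor means that factor is itself the theme to address.
weights = {
    "severe_copd": 2.4,
    "heart_failure": 1.9,
    "ten_plus_repeat_medications": 1.3,
    "age_over_85": 1.1,
    "recent_a_and_e_attendance": 0.9,
    "deranged_renal_function": 0.7,
}

# Rank the factors by weight: the top of the list tells you which pathways
# (e.g. COPD) to look at first.
for factor, weight in sorted(weights.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{factor:>30}  {weight}")
```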
The Hawthorne effect says that whenever you investigate something, you change it. If we assume the algorithms were correct in the first place, then the moment we use them to intervene we change the risks of the people looked at – else there is no point intervening.
So the algorithms won’t work a second time. Or worse: if the algorithms were based on following up a population that didn’t have any intervention, we will intervene, reduce a person’s risk, and then compare the end-of-year result to the initial score and say it was inaccurate in the first place.
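A toy simulation makes that second point concrete. Here the score is assumed to be perfectly accurate, the intervention is assumed to halve the risk of the top 10%, and every number is invented – the point is only that the year-end comparison makes a correct score look wrong.

```python
import random

random.seed(0)

# Toy population: each patient has a "true" admission risk which the
# (hypothetically perfect) score predicts exactly.
predicted_risk = [random.uniform(0.02, 0.6) for _ in range(10_000)]

# Intervene on the top 10% by score, halving their true risk.
cutoff = sorted(predicted_risk, reverse=True)[len(predicted_risk) // 10]
true_risk = [p / 2 if p >= cutoff else p for p in predicted_risk]

# Simulate end-of-year admissions for the intervened group and compare
# with what the score originally predicted for them.
intervened = [(pred, random.random() < actual)
              for pred, actual in zip(predicted_risk, true_risk)
              if pred >= cutoff]

mean_predicted = sum(pred for pred, _ in intervened) / len(intervened)
observed_rate = sum(admitted for _, admitted in intervened) / len(intervened)

print(f"Predicted admission rate in intervened group: {mean_predicted:.2f}")
print(f"Observed admission rate after intervention:   {observed_rate:.2f}")
# The score was right, but judged against post-intervention outcomes it now
# looks as though it overestimated risk by roughly a factor of two.
```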
And I really just can’t get over how expensive these systems are. Let’s think this through. Who is likely to get ill next year who isn’t already?
The elderly, as they get older; people with existing chronic diseases (some diseases more than others, and the more of them the more likely); people on lots of drugs; people with deranged blood tests; and people who are presenting more and more often.
I’ll happily charge you 39p a patient to identify these patients for you. I’ll give you a guarantee that at least some of them will be correct and that I’ll miss some. Any takers?
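In that spirit, a back-of-the-envelope version of that 39p scorer might look something like the sketch below. The field names, weights and example patient are all made up – which is rather the point: something this crude already captures most of the obvious signal.

```python
# A deliberately crude risk scorer built only from the factors listed above.
# Every field name and weight here is hypothetical.

def crude_risk_score(patient: dict) -> float:
    score = 0.0
    score += max(0, patient["age"] - 65) * 0.1         # the elderly, as they get older
    score += len(patient["chronic_diseases"]) * 2.0    # the more conditions, the more likely
    score += patient["repeat_medications"] * 0.5       # people on lots of drugs
    score += patient["abnormal_blood_tests"] * 1.0     # deranged blood tests
    score += patient["contacts_last_year"] * 0.3       # presenting more and more often
    return score

example_patient = {
    "age": 82,
    "chronic_diseases": ["COPD", "heart failure"],
    "repeat_medications": 9,
    "abnormal_blood_tests": 2,
    "contacts_last_year": 14,
}

print(crude_risk_score(example_patient))  # higher score = flag for review
```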
Dr Neil Paul is a full-time partner at Sandbach GPs, a large (21,000-patient) practice in semi-rural Cheshire. Until recently, he was on the PEC of Central and East Cheshire Primary Care Trust, with responsibility for Urgent Care and IT.
He is now on a journey into the unknown. He is on the board of his local consortium, one of many on a pilot leadership programme, and looking at provider opportunities. He recently set up a successful primary care clinical trials unit and is involved in several exciting IT projects.