Septic shock
- 13 May 2013
Sir Bruce Keogh, the medical director of the NHS, is currently assessing standards of care at 14 hospitals where patients are dying at a significantly higher rate than expected.
The move follows the conclusion of the public inquiry into Mid Staffordshire NHS Foundation Trust, which found that warning signs of poor care, including high mortality rates, were ignored for too long.
High mortality rates do not prove that there is a problem with the care being given to patients. There are many reasons why the data may give an incomplete or distorted picture of what is happening.
Despite this, since Dr Foster first started publishing hospital standardised mortality ratios (HSMRs), there has been growing acceptance of their value as a tool for providing insight into variations in quality of care.
The new assurance framework for clinical commissioning groups includes the expectation that commissioners take note of hospital mortality rates. But for the system to work, it must be sure of the quality of the information it is using.
Action on HSMR
Events at the Royal Bolton Hospital over recent months, including an external investigation of the trust’s management of septic patients, have called that into question.
The story begins in 2005. At the time, the hospital had a significantly high HSMR and had begun a programme of work to address this.
One of its first steps was a review of the care of patients with broken hips. At the time, close to one in four of these patients were dying in Bolton, compared to a national rate of 15%. Improvements to care saw mortality fall back into line with the national average over subsequent years.
Around that time, a programme of work to address the care of patients with sepsis was also introduced. Half of all patients with sepsis at the hospital were dying compared to a national rate of about a third.
As the hospital introduced new processes of care, the mortality rate here also began to fall, although it remained above the national rate.
Despite these successes, the overall mortality ratio at the trust also remained stubbornly high. In 2003-4 it had been 24% higher than the national rate. By 2008-9 it was still 24% higher than the national rate.
A focus on sepsis
In early 2010, a new series of measures was introduced to further address mortality, including a clinical review of patients who had died in hospital.
Such reviews are now common practice in hospitals. Ideally, they assess the care provided to the patient and identify whether there were any failures. They may also find that the patient notes had not been accurately recorded or that the information was wrongly coded.
In Bolton, this process had a dramatic effect on the way data was coded on the official record used for mortality monitoring. Within a year, there was a five-fold increase in the number of patients being coded as having septicaemia – from 150 a year to 850.
Septicaemia is a condition that carries a very high risk of death. Nationally about a third of patients die. As the number of patients coded with septicaemia grew, the number that were expected to die also grew and, with that, the HSMR finally began to fall.
The change was welcomed at the trust. Indeed, the trust was sure that its clinical changes were making a difference to patients. But was it being misled by the data?
The trust quoted a fall in the crude mortality rate across the whole hospital as evidence of its success. But if this fall is compared with the fall across its old strategic health authority area, or the rest of the NHS, it is no different from the average improvement happening elsewhere.
That is not to detract from the success of the work in improving care. But it is important to understand that the overall impact is no more than is happening throughout the NHS.
Real change or statistics?
The fall in HSMR was driven not by a change in the actual number of deaths but by a rise in the number expected to occur, as a result of the much larger number of patients with high-risk diagnoses.
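To see the arithmetic behind this, consider a simplified sketch. The HSMR is, in essence, 100 times the observed number of deaths divided by the expected number, where the expected number is built up from the modelled risk attached to each patient's recorded diagnosis. The figures below are purely illustrative – they are not Bolton's actual casemix or risk model – but they show how recoding patients into a higher-risk diagnosis group raises expected deaths, and so lowers the ratio, without a single fewer death actually occurring.

```python
# Illustrative only: toy figures, not the trust's real casemix or risk model.
# HSMR = 100 * observed deaths / expected deaths, where expected deaths are
# the sum of each patient's assumed risk of death for their coded diagnosis.

def hsmr(observed_deaths, expected_deaths):
    return 100 * observed_deaths / expected_deaths

# Assumed, simplified risk of death per diagnosis group.
risk = {"septicaemia": 0.33, "other": 0.05}

def expected(casemix):
    # casemix maps diagnosis group -> number of patients coded to it
    return sum(n * risk[dx] for dx, n in casemix.items())

observed = 500  # the actual number of deaths; unchanged in both scenarios

# Before the coding change: 150 patients a year coded as septicaemia.
before = {"septicaemia": 150, "other": 9850}
# After: 850 coded as septicaemia, with 700 fewer in the lower-risk group.
after = {"septicaemia": 850, "other": 9150}

print(round(hsmr(observed, expected(before))))  # about 92 with these toy numbers
print(round(hsmr(observed, expected(after))))   # about 68: the ratio falls, the deaths do not
```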
By 2011-12, the way that patients in Bolton were being coded for septicaemia was so out of line with the rest of the country that it was undermining any comparisons that could be made between the mortality rate at the trust and the rest of the NHS.
Towards the end of 2012, Dr Foster identified these peculiarities in the coding at the trust and raised them with the hospital and its local commissioners.
This led to an audit of case notes which, earlier this year, identified variations between the trust’s coding and national rules.
This was then followed by an investigation that concluded Bolton was, indeed, applying a different definition of septicaemia in its coding to that which was being used elsewhere.
The difficulty had arisen in part because of the closer involvement of clinicians in decisions about how to code patients.
In the coding dictionary, septicaemia or sepsis as a primary diagnosis should not be used for infections of specific organs, such as respiratory infections. But in clinical language, it is common to refer to such patients as being ‘septic’.
Doctors in Bolton used the term in this way, helping to prompt the increase in coding for a primary diagnosis of sepsis.
Not fiddled, but still wrong
The key headline from the investigation was that no-one had intentionally distorted coding in order to manipulate mortality rates. That is an important conclusion.
But just as important is the fact that the coding had gone wrong and had gone wrong in a way that did impact mortality rates.
The investigation left one question unanswered. How much impact did the coding changes have on the reported mortality rates at Bolton?
Antony Sumara, who has taken over as chief executive at the hospital, is now taking steps to answer exactly that question and put the trust’s data recording back onto a sound footing.
The incident raises broader questions for the management of hospitals. Firstly, there is the question of assuring the accuracy of data. Coding regulations and coding standards in the UK are good.
The Dr Foster Global Comparators project brings together and compares coded hospital data from nine countries. It provides an overview of differences in coding standards across different health economies, and the UK compares well.
But as greater weight is put on measures of outcome that are sensitive to the accuracy of data recording, it will be necessary to improve the external assurance around quality of data.
The other question that these events raise is the degree to which clinical and management leaders within the NHS understand how best to interpret and respond to information about outcomes.
The investigation report into Bolton called for management functions to be separated from the auditing of clinical standards, so that the latter remains properly independent.
That is right and, indeed, points to the wider issue of how well the directors and managers of NHS organisations understand outcome indicators – and the difficulty they sometimes face in identifying the appropriate responses to high mortality rates.
It is a good thing that Sir Bruce Keogh is looking closely at hospitals with high mortality rates. But much of his time will be wasted if he fails to take the steps necessary to ensure that he is working with accurate data.