Another View: On learning from our mistakes
- 2 April 2019
A book about system and organisation performance has got Neil Paul thinking – not least about why healthcare IT never learns from its mistakes, and about how we can change that reality.
I’m currently reading Matthew Syed’s book Black Box Thinking. It’s about system and organisation performance: why some systems, by learning from their mistakes, deliver high-quality results that keep improving over time, and why others don’t.
I highly recommend it. The first section of the book is dedicated to comparing the airline industry with medicine, and it makes for uncomfortable reading at times. Syed’s thesis is that the airline industry has developed a culture that openly admits mistakes, discusses them and learns from them, while medicine hides them, is slow to learn and implement change and, as a result, harms patients.
While he focuses on clinical care, I wonder whether this blind spot extends to healthcare IT as well. In the past couple of weeks we’ve had the Ethiopian Airlines crash. At the time of writing, one theory is that the aircraft’s computer software is at fault, and most countries have grounded the Boeing 737 Max 8. The cost and inconvenience of doing so doesn’t appear to be a problem; perhaps because it’s not the government picking up the bill? Yet we never close healthcare operations because of poor IT, despite patients probably suffering harm, or even dying, as a result.
Tolerating the problem
My surgery’s computers have recently been upgraded, and it’s been a huge improvement. They come on, stay on, and didn’t crash once in the week following the upgrade. Previously, they would crash at least once a day and took 18 minutes to get from reset to login screen; this usually happened during surgery, with patients waiting. Yet no-one from IT appeared particularly bothered by this; certainly not to the point of stopping everything until it was fixed.
I could go on about how we tolerate bad IT. I could talk about how our laptops don’t work half the time when we visit care homes, meaning we can’t record notes contemporaneously and have to waste time typing them up later. I could reflect on how poor hospital departments are at sharing information and results speedily. But I’ve said it all before.
In contrast, Syed says, airlines learn from their mistakes. First, they take all the data they have collected, including the information from black boxes. Next, they share it. They fund an independent, no-blame authority to investigate, establish the causes of problems and recommend solutions. Then they share the conclusions, and people listen and implement them.
It’s interesting to note that, in America at least, the evidence uncovered in these investigations is apparently inadmissible in court. In healthcare, could fear of being sued and having one’s reputation and livelihood ruined be part of the reason for a lack of openness and learning?
Perhaps, but it seems a misguided fear. Syed gives the example of Virginia Mason Hospital, where being open and transparent has actually reduced complaints and litigation and massively reduced insurance premiums. I wonder: now that the government is, for the first time, providing some form of indemnity for GPs and therefore picking up the legal bills, might a no-blame, open culture become more attractive?
Variations within
Syed explains that there is variation not only between industries in their ability to learn, but within them as well. This is apparently related to the speed of feedback. He cites experienced ICU nurses who, having watched so many patients deteriorate in front of them, can sometimes out-predict the analysers on who is deteriorating. He contrasts this with radiologists reporting mammograms who, he says, rarely find out whether they were right, since the time from report to diagnosis can be months.
Radiology, like a lot of medical specialities, started as an apprenticeship. When I was a junior, you worked long hours and built up a history of the cases you had seen, developing a knowledge of what you had done and whether it had worked. Busy jobs were sometimes prized for the experience they gave, while outpatients was always harder in that you rarely saw the same patient twice.
When juniors come to general practice from hospital, they can sometimes be stunned to find they are seeing the same patient day after day or week after week. It does mean, though, that they get plenty of regular feedback on how things are going.
Train a human as we’d train a computer?
I’ve been a GP for 20 years now. In that time, I must have read thousands of letters from consultants concerning both my patients and those cared for by my practice colleagues. The letters tend to list why an individual was referred and what was done and found. That means that, when I see a patient, I not only have my memory of what I’ve done in similar situations – I also have the memory of what consultants have done in those situations, and what the outcomes were.
In addition, we run a peer-to-peer referral system locally. A panel of 10 GPs reviews all the outpatient referrals made each month and offers feedback on their quality and appropriateness. Over the last six months, I’ve read thousands of referrals this way too.
It seems to me this is similar to how you would train an AI: feed it a large set of inputs paired with the outputs that followed. Perhaps, if we want to learn from our mistakes like other industries do, we need to think about training humans in the same way we’re aiming to train computers.
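To make the analogy concrete, here is a minimal sketch of what “feeding it inputs and outputs” means in supervised learning. Everything in it is invented for illustration: the features, the toy data and the choice of scikit-learn’s LogisticRegression as the learner; any model that fits input-output pairs would make the same point.

```python
# A minimal sketch of learning from paired inputs and outcomes.
# The features and data below are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each past referral: [patient_age, symptom_duration_weeks, red_flag_present]
X_past = [
    [67, 12, 1],
    [34,  2, 0],
    [58,  8, 1],
    [25,  1, 0],
    [71, 20, 1],
    [40,  3, 0],
]
# What actually happened: 1 = referral led to a significant finding, 0 = it didn't
y_outcomes = [1, 0, 1, 0, 1, 0]

# "Training" is just repeated exposure to inputs paired with outcomes:
# the fast feedback loop Syed argues clinicians often lack.
model = LogisticRegression()
model.fit(X_past, y_outcomes)

# Faced with a new case, the model predicts from its accumulated experience.
new_case = [[62, 10, 1]]
print(model.predict(new_case))        # predicted outcome for the new case
print(model.predict_proba(new_case))  # and how confident the model is
```

The point isn’t the algorithm: it’s that the model only improves because every input comes back with its outcome attached, which is exactly the feedback loop the consultants’ letters and the referral panel provide for humans.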