Special report: Voice Recognition
Feature
Voices that care
Voice recognition is steadily becoming accepted as a way to interact with electronic patient record systems. Now, suppliers and trusts are thinking about how natural language processing can be used to make sure that EPRs deliver quality benefits. Lawrie Jones reports.
In December, Great Ormond Street Hospital for Children NHS Foundation Trust released an electronic patient record tender. Voice recognition was a key component.
The tender confirms a long-term trend: for some trusts at least, voice recognition is no longer confined to the pathology department, or to speeding up the production of the letters on which the government has set targets.
It also underlines the extent to which trusts have realised that they need good infrastructure and good user interfaces to support their EPRs: the days when a core software system could be rolled out to a couple of PCs per ward are going, if not quite gone.
But there is a further point of interest in the tender documentation: it makes clear that the success of the EPR will be contingent on “cultural change across the whole organisation.” This kind of recognition is vital, says James Kippenberger, managing director at BigHand. “Many organisations begin by defining the technology they want, rather than the challenge they want to solve,” he says.
Drop the pilot
Voice recognition companies feel that a lack of strategic thinking has held them back in the past. Kippenberger argues that some trusts have tried voice recognition pilots, failed to invest in the necessary culture change, and then given up.
“Running a pilot is the worst way to introduce voice recognition to the NHS,” he says. “Ongoing drive from senior management and senior clinicians is critical.”
Andrew Whiteley, managing director of Lexacom, says something similar, describing a situation in which acute trusts have invested in capacity, rather than capability. “We need to work out why our customers want voice recognition,” he says. “You have to have something that clinicians learn to trust and believe in.”
“We’ve all got to be creative”, adds Sarah Fisher at Nuance Healthcare. “We want vendors and suppliers to be described as partners.”
James Stapleton, head of operations at G2 Speech, agrees. “In some trusts, they’re just thrown some technology,” he says, adding that this can lead to frustration. “Organisations need to ensure that the workflow shifts,” he argues. “If they can do that, voice recognition can work.”
Barts: integrating voice recognition with Millennium
Sarah Jensen, chief information officer at Barts Health NHS Trust, agrees that voice recognition is now becoming a key component of an EPR implementation, and that this shifts the focus of voice projects onto change management. “All IT projects are really organisational change projects,” she says.
Jensen is leading Barts’ adoption of a single voice recognition system from G2 Speech across the five hospitals it runs as a result of a series of mergers, and its integration with a single instance of Cerner Millennium.
Jensen argues that it’s the role of the CIO to lead this transformation, but stresses that “all IT projects should start with a conversation about the technology being an enabler”, with the key challenge being to communicate the benefits to clinicians.
At Barts, one of the benefits will be speedier communications. In the past, clinic letters could take up to five days to process. With the new system, patients can leave the clinic with the letter in their hands.
“The clinician can dictate the letter, authorise it and sign it off. Once endorsed and completed, the system will automatically trigger this letter to be sent to the GP,” Jensen adds.
All the information is uploaded to an integrated health information exchange system, which is accessible by all of the organisations working in the local health community.
While Jensen cautions that there are still barriers to overcome, she describes the trust’s approach to voice recognition as one part of a five-year digital plan to be paperless by 2020. “This is the technology we strategically envision getting us there,” she emphasises.
Plymouth: thinking beyond faster results
Plymouth Hospitals’ specimen dissection department deals with a large volume of samples every day. With each sample potentially coming from a patient with cancer, speed of diagnosis is important.
The department has used Nuance’s voice recognition software for the past five years. Dr Dean Harmse, consultant histopathologist and cytopathologist at Plymouth Hospitals NHS Trust, describes how initial concerns about accuracy were quickly allayed, and how his team has embraced the software.
“From a pathologist’s point of view, the system is quick to use and time neutral. The benefits come from the independent working,” he says. It’s also delivering time savings, not least because it is integrated with the trust’s lab management system.
“We currently save 7.5 man hours a day,” Harmse adds. “My description report is added straight into the lab management system. I’m done with the cases, and the essential information is available immediately for the clinician.”
Despite this, Harmse becomes animated when describing the possibility of voice recognition software not only saving time but improving clinical quality. “A slightly more intelligent system would be very handy to help prevent and avoid errors in medicine,” he says.
Natural language processing: the next frontier
This is something that suppliers are keen to embrace. Nuance’s Fisher says the company hopes that natural language processing can help to improve the quality of reporting.
Clinical notes, discharge reports and EPRs all include an enormous amount of unstructured data, which systems currently struggle to comprehend. NLP technologies will increasingly help machines to make sense of this unstructured content.
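To make the idea concrete, the sketch below is a purely illustrative, much-simplified stand-in for clinical NLP, not any supplier’s product: it assumes a hypothetical keyword-to-code table, whereas real systems rely on trained language models and full terminologies such as SNOMED CT.

```python
# Illustrative only: a naive keyword lookup standing in for clinical NLP.
# Real systems use trained models and full terminologies (e.g. SNOMED CT).

# Hypothetical mini-dictionary mapping phrases to made-up structured concepts.
CONCEPTS = {
    "adenocarcinoma": {"category": "diagnosis", "code": "DEMO-001"},
    "margins clear": {"category": "finding", "code": "DEMO-002"},
    "chest pain": {"category": "symptom", "code": "DEMO-003"},
}


def extract_concepts(dictation: str) -> list[dict]:
    """Return structured concepts found in free-text dictation."""
    text = dictation.lower()
    return [
        {"phrase": phrase, **concept}
        for phrase, concept in CONCEPTS.items()
        if phrase in text
    ]


if __name__ == "__main__":
    note = "Sections show adenocarcinoma of the colon; resection margins clear."
    for hit in extract_concepts(note):
        print(hit)
```

Even this toy version shows the shift the suppliers are describing: free-text dictation goes in, and structured, machine-readable concepts come out.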
In a recent report, the King’s Fund think-tank identified NLP (which it groups under machine learning) as one of eight technologies that could fundamentally change health and care. Harmse makes these big claims more concrete. “From a pathology point of view, if I make a diagnosis of cancer the system could be trained to identify further molecular tests,” he argues.
While Harmse is keen to explore the benefits of NLP systems, he is clear that they must benefit the clinician; something that vendors say they recognise. “The system is an aid to the consultant, and not the computer taking the decision for them,” says G2’s Stapleton. “At all times, the clinician is in control.”
NLP could also have benefits for patient coding and performance management. Computer-assisted coding systems analyse healthcare forms and documents, producing medical codes for specific phrases and terms.
CACS can use the information contained within forms, including the unstructured content, to generate suggested codes. “We are working on being able to automate coding into the system,” Fisher says, adding that she can see a future of integrated systems in which the disease is coded, the clinician notified, the patient letter created and the EPR updated in one coherent set of steps.
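In skeleton form, that chain of steps might look something like the hedged sketch below. Every function here is hypothetical and stands in for integrations that a real EPR, coding engine and messaging stack would provide.

```python
# Hypothetical skeleton of an integrated coding workflow; each function below
# is a placeholder for a real EPR, coding or messaging integration.

def suggest_codes(report_text: str) -> list[str]:
    # A computer-assisted coding engine would sit here; this is a placeholder.
    # C18.9 is the ICD-10 code for malignant neoplasm of colon, unspecified.
    return ["C18.9"] if "adenocarcinoma of the colon" in report_text.lower() else []


def notify_clinician(codes: list[str]) -> None:
    print(f"Clinician asked to confirm suggested codes: {codes}")


def create_patient_letter(report_text: str) -> str:
    return f"Dear patient,\n\nYour results have been reviewed.\n\n{report_text}"


def update_epr(codes: list[str], letter: str) -> None:
    print(f"EPR updated with codes {codes} and letter ({len(letter)} characters).")


def run_workflow(report_text: str) -> None:
    """Disease coded, clinician notified, letter created, EPR updated."""
    codes = suggest_codes(report_text)
    notify_clinician(codes)  # the clinician stays in control of the final codes
    letter = create_patient_letter(report_text)
    update_epr(codes, letter)


if __name__ == "__main__":
    run_workflow("Histology confirms adenocarcinoma of the colon.")
```

The point of the sketch is the ordering, not the detail: the coding engine only suggests, the clinician confirms, and the downstream letter and EPR update follow automatically.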
Part of the paperless picture
The technology that excites Harmse is currently being developed by all voice recognition providers and can already be seen in use in the USA and Holland.
Having worked with voice recognition software for over five years, Harmse is positive about its impact. He’s also clear about the increasing role it can play in the way he and his team work. “Personally, I think speech recognition is the way forward.”