Usually the purpose of technology is to make life easier for human beings. Unfortunately, as hospital adoption of electronic health records (EHRs) climbed from 10% to 80% over the last 10 years, life got harder for nurses. Several studies have found that nurses spend between a quarter and nearly half of their time on documentation. A number of companies are now hoping to make charting easier with speech-to-text technology.
How charting got harder for clinicians with the rise of EHRs:
So few health systems still use paper charts that many clinicians practicing today may not remember the issues with them. The primary goal of moving from paper to electronic records was to prevent gaps in patient care. According to one source, an audit of paper records at five large medical facilities found that 5% to 20% were incomplete.
The thought was that electronic records would reduce those gaps and support coordination of care. Unfortunately, they also added quite a bit of work for clinicians and have become a leading cause of clinician burnout. One study found that nurses struggle to meet timely documentation requirements while also providing patient care. The expectation that nurses complete documentation at the bedside compounds that problem.
Common complaints from both nurses and physicians include the difficulty of navigating the many tabs and forms they must fill out, as well as the time it takes to capture everything about a patient in clinical notes.
As a stop-gap, some systems hire medical scribes – people who transcribe notes for providers (primarily physicians, so this does not solve the issue for nurses). However, this is a costly solution to only part of the problem. Additionally, not all patients are comfortable having another person present for their conversation with a provider.
All of this negatively impacts both clinicians and patients.
The impact of documentation on quality of life and patient care:
Beyond contributing to burnout, documentation burden affects clinicians’ quality of life and patient care. Nurses described feeling torn between documentation and interacting with patients. This drove many to avoid documenting at the bedside and instead to document at the nurse’s station.
For physicians, one vendor found that 88% of doctors reported feeling severely stressed by time spent on clinical documentation. The same survey found that completing documentation requires an average of 4,000 clicks per shift.
This all raises the question of whether new technology can solve a problem previous technology created. The makers of speech-to-text applications believe the answer is ‘yes.’
What is speech-to-text technology?
Speech-to-text or voice-to-text technology is mostly what it sounds like: technology that listens to a person’s voice and transcribes what they are saying into text. It can help to improve completeness of documentation since there is no typing or clicking required. It can also increase patient engagement since clinicians can focus on conversation with the patient.
While this technology has been around for a while, recent advances in artificial intelligence are making it more useful in the medical setting. Technology that can learn over time and understand a specific clinician’s voice could become as accurate as a medical transcriptionist.
There are two types of speech-to-text technology:
Back-end speech recognition: This is where speech is recorded and translated into text. It produces a draft document that a medical transcriptionist or provider can proof-read. Some healthcare organizations prefer this approach since there is a quality check before information goes into the EHR.
Front-end speech recognition: This is where the software converts speech into text in real time, with no medical transcriptionist. As anyone who has been misunderstood by Siri or Alexa can imagine, this method can produce errors. However, the software also learns over time, so it can improve. Some healthcare organizations may use this approach for personal notes that do not go into the medical record, or for short documentation like ‘normal findings.’
There are companies offering different versions of these set-ups. There is clearly a push to make this technology as hands-off as possible for the clinician.
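To make the difference between the two approaches concrete, here is a hypothetical sketch in Python. Every function in it is an illustrative stand-in, not any vendor’s actual API; the point is only where (or whether) a human quality check happens before text reaches the EHR.

```python
# Hypothetical sketch contrasting back-end and front-end speech recognition.
# Every function here is an illustrative stand-in, not a real vendor API.

def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text engine running after the encounter."""
    return "Patient reports improvement in shortness of breath."

def human_review(draft: str) -> str:
    """Stand-in for a transcriptionist or provider proofreading the draft."""
    return draft  # corrections would be applied here before sign-off

def write_to_ehr(text: str) -> None:
    """Stand-in for posting finished text into the medical record."""
    print("EHR note:", text)

def back_end_recognition(audio: bytes) -> None:
    # Back-end: recorded speech becomes a draft, a human checks it,
    # and only then does the text enter the EHR.
    draft = transcribe(audio)
    write_to_ehr(human_review(draft))

def front_end_recognition(spoken_phrases) -> None:
    # Front-end: text goes into the record in real time as the clinician
    # speaks, with no transcriptionist in the loop.
    for phrase in spoken_phrases:
        write_to_ehr(phrase)

back_end_recognition(b"...recorded dictation...")
front_end_recognition(["Lungs clear to auscultation.", "No acute distress."])
```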
Companies piloting speech-to-text technology with clinical charting:
There are a number of companies stepping forward to offer ways of making charting easier with speech-to-text technology. Here are examples of one smaller company and one large company entering this space.
Nuance:
Nuance specializes in voice recognition clinical documentation software for clinicians. Their solutions integrate with well-known EHR platforms like Allscripts, Cerner, and McKesson. One example of a Nuance application is Dragon Medical One. Clinicians can speak their clinical notes into an app on their smartphone, which then transcribes them into the medical record. The software includes medical vocabularies and uses artificial intelligence to improve transcription accuracy.
Nuance found that after using their solutions, clinicians saved 2.5 hours of documentation time for every hour dictated. Providers were also able to dictate notes three times faster than typing the same note.
Amazon:
As if Amazon did not offer enough services, they are now offering a product called Amazon Transcribe Medical – a “HIPAA-eligible machine learning automatic speech recognition service.”
This service lets developers add speech-to-text capabilities to medical systems. It uses a streaming API that can integrate with any voice-enabled application. This means a clinician could speak into their EHR, the EHR would send that audio to Amazon Transcribe Medical, and the EHR would receive the transcribed text back. This happens in real time, so the clinician does not need to wait for a document to come back from a transcription service.
Amazon’s focus at the moment is primary care, and the idea is that the service can scale across a large number of healthcare providers. Amazon advertises that the service does not need to be told where punctuation should be placed; it picks that up naturally from the clinician’s voice. It also does not require experience with machine learning or with dictating notes.
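As a rough illustration of how a developer might call the service, here is a minimal sketch using the AWS SDK for Python (boto3) and the batch variant of Transcribe Medical, which transcribes a recorded audio file stored in S3; the real-time flow described above uses the service’s separate streaming API instead. The bucket, file, and job names below are placeholders.

```python
import time
import boto3

# Minimal sketch: submit a recorded dictation to Amazon Transcribe Medical
# (batch variant). The bucket, file, and job names are placeholders.
transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_medical_transcription_job(
    MedicalTranscriptionJobName="example-dictation-001",
    LanguageCode="en-US",
    MediaFormat="wav",
    Media={"MediaFileUri": "s3://example-bucket/dictations/visit-note.wav"},
    OutputBucketName="example-bucket",   # transcript JSON is written back to S3
    Specialty="PRIMARYCARE",
    Type="DICTATION",                    # single-speaker dictation, vs. "CONVERSATION"
)

# Poll until the job finishes; the EHR (or any caller) can then fetch the
# transcript JSON from the output bucket and drop the text into the note.
while True:
    job = transcribe.get_medical_transcription_job(
        MedicalTranscriptionJobName="example-dictation-001"
    )["MedicalTranscriptionJob"]
    if job["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(5)

if job["TranscriptionJobStatus"] == "COMPLETED":
    print("Transcript stored at:", job["Transcript"]["TranscriptFileUri"])
```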
While this all sounds great, the real question is whether it actually works.
Is charting easier with speech-to-text technology?
One example of a healthcare organization making charting easier with speech-to-text technology is Sangre de Cristo Community Care in Colorado, a home care organization that also provides hospice care. They use this technology in their EMR, MatrixCare, and so far the reviews are positive. For their clinicians, the bulk of charting is writing out clinical notes. With this new technology from a company named nVoq, clinicians can tap the microphone and speak their note.
The technology includes industry-relevant terms for home health care and hospice. The agency’s head of compliance reports seeing a more complete picture of patients since the technology was enabled, and believes speech-to-text is allowing clinicians to capture patient information while it is fresh in their minds.
The agency also found the length of patient notes increased 45%, while documentation time decreased 50%. If other healthcare organizations follow this example, manual charting will sound as old-fashioned as paper charts.
Key Takeaways:
While electronic health records have made charting more complete, they have also inadvertently created more work for clinicians. Nurses and physicians report spending excessive amounts of time just entering documentation. That is where speech-to-text technology offers hope for making charting easier.
This technology captures what a person says and turns it into text. Many companies are piloting it, from large players like Amazon to smaller companies like Nuance, for which it is a core focus. The primary goal is to reduce the documentation burden on clinicians who are already stretched thin.
Initial examples of healthcare organizations using this technology are promising. Hopefully more organizations take advantage of this opportunity to improve clinicians’ work experience.