Say What?
February 02, 2012
by Kristen Fischer, DOTmed News
Dialogue on the changing face of physician dictation and transcription solutions
The days of hand-written notes from doctors are long gone, and even typing up notes from patient visits is becoming a thing of the past. Physicians are embracing high-tech dictation and transcription solutions to document patient consultations.
These digital solutions are part of the push for hospitals and medical offices to meet electronic medical record (EMR) and electronic health record (EHR) requirements under the health care reform package. While hospitals and practices scramble to meet requirements and avoid government penalties, the way patient data is documented is evolving. The newest trends let doctors dictate information on patient visits using computers, iPads and even smartphones.
Although no policy mandates that practices and hospitals use electronic dictation and transcription solutions, there are incentives for them to adopt digital technologies. The Obama Administration allotted approximately $40 billion to help facilities transition to electronic record keeping. Over the past few years, many facilities have cashed in on those financial incentives and deployed systems that meet “meaningful use” standards, taking the carrot now rather than waiting for the stick of monetary penalties that loom for those who fall short of the requirements.
Keith Belton, Senior Director of Nuance’s Healthcare Solutions Marketing, estimates that roughly 20 to 30 percent of medical practices are using a fully functional EMR.
“The notion of going from handwriting your notes to dictating and getting those notes into the EMR is really what we focus on,” he says. And there are a variety of ways to do so, depending on how involved the doctor wants to be with EMR technology.
Systems that work with doctors
The problem for many practices is that adopting an EMR system forces medical professionals to take on an IT role, a huge challenge for smaller practices that likely don’t have dedicated IT staff. Older systems require physicians to cycle through drop-down menus to record details about a patient visit, either by typing or by using speech recognition software, which makes it hard to tell the story of what’s happening with a patient, Belton notes.
“These doctors aren’t typists and they don’t want to sit there and click through screens,” Belton says.
Some doctors still prefer to type notes into a computer interface, while others prefer speaking into a device. But dictation is becoming the wave of the future, and there are generally two options that use speech recognition technology: navigate by speaking through a speech-ready system and generate your own reports (better for more technologically savvy docs), or dictate material and have it reviewed, rather than written from scratch, by a medical transcriptionist (ideal for old-school docs who do not want to deal with a computer at all).
Nuance, a leading speech recognition technology provider, has introduced two systems that work for physicians whether or not they are technology-savvy, and that let them meet meaningful use requirements.
Speak and transcribe electronically—sans the transcriptionist
Dragon Medical lets the doctor talk into a microphone and navigate by voice to different sections of the EMR. For example, the doctor can record what happened during a patient consultation, but also move to other areas of the platform to write a prescription or look up test results. AnyModal Conversational Documentation Services (CDS) and ChartLogic’s PrecisionVoice are similar. They both let the clinician use dictation to move between EMR prompts, and also let the doctor use his or her voice to access other functions to order tests, create referral letters or write prescriptions within the EMR.
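To make the voice-navigation idea concrete, here is a minimal sketch, in Python, of how a recognized phrase might be routed either to a chart section or to an action such as ordering a test. The command list and handler functions are hypothetical illustrations, not part of Dragon Medical, AnyModal CDS or PrecisionVoice.

```python
# Minimal sketch of voice-driven EMR navigation: a recognized phrase is
# matched against known commands and dispatched; anything else is treated
# as free-text dictation into the current section. All names here are
# hypothetical, for illustration only.
def open_section(name: str) -> None:
    print(f"Opening chart section: {name}")

def order_test(name: str) -> None:
    print(f"Ordering test: {name}")

def start_prescription() -> None:
    print("Starting a new prescription")

COMMANDS = {
    "go to history of present illness": lambda: open_section("History of Present Illness"),
    "order a chest x-ray": lambda: order_test("chest x-ray"),
    "write a prescription": start_prescription,
}

def handle_utterance(recognized_text: str) -> None:
    action = COMMANDS.get(recognized_text.lower().strip())
    if action:
        action()
    else:
        # Anything that is not a command becomes dictated narrative.
        print(f"Dictating into the current section: {recognized_text!r}")

handle_utterance("Go to history of present illness")
handle_utterance("Patient reports improved blood pressure control.")
```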
“We’ve found that our PrecisionVoice engine is an easier, more familiar charting method than a point-and-click charting method that is common amongst the EMR systems in the marketplace today,” says Brad Melis, ChartLogic’s Founder and Executive Vice President.
Belton says there are approximately 800,000 doctors in the U.S., and about 200,000 of them use Dragon, which can be integrated with other popular EMR systems such as Allscripts, Epic and Meditech.
Traditional transcription with a tech edge
The other platform Nuance offers is its Health Information Management (HIM) system, which is ideal for less technologically savvy physicians who use dictation but don’t want to review their own reports and prefer a traditional transcriptionist. The doctor dictates over the phone or via the computer, and the dictation is run through speech recognition software. By the time the transcriptionist gets it, a draft is already visible electronically, so the transcriptionist can simply edit it rather than typing it up from scratch. The completed file is then uploaded into the EMR to await the physician’s signature, ensuring it meets his or her requirements.
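As a rough illustration of that back-end workflow, the sketch below walks a dictation through the three stages described above: speech recognition produces a draft, a transcriptionist edits it rather than typing it verbatim, and the finished report is queued in the EMR for the physician’s signature. The class and function names are invented for the example; they are not Nuance’s API.

```python
# Hypothetical sketch of the back-end dictation workflow: recognize, edit,
# then upload for signature. None of these names correspond to real
# Nuance components.
from dataclasses import dataclass

@dataclass
class DictationReport:
    physician: str
    text: str = ""
    status: str = "recorded"  # recorded -> drafted -> edited -> pending_signature

def recognize_speech(audio: bytes) -> str:
    # Stand-in for the speech recognition engine; returns a rough draft.
    return "pt seen today for follow-up of hypertension, bp improved"

def transcriptionist_edit(draft: str) -> str:
    # The transcriptionist corrects the draft instead of typing from scratch.
    return draft.replace("pt", "Patient").replace("bp", "blood pressure")

def upload_to_emr(report: DictationReport) -> None:
    # Hand the finished document to the EMR, flagged for physician sign-off.
    report.status = "pending_signature"
    print(f"Report for Dr. {report.physician} awaiting signature: {report.text!r}")

report = DictationReport(physician="Smith")
report.text = recognize_speech(b"...recorded audio...")
report.status = "drafted"
report.text = transcriptionist_edit(report.text)
report.status = "edited"
upload_to_emr(report)
```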
“The transcriptionist is no longer typing word for word [with this system],” Belton notes. “We’re driving down the cost of transcription using speech recognition.”
Belton says there will always be a need for transcription, but this option improves transcriptionist productivity and speeds turnaround time for reports. Approximately 200,000 physicians currently use this system, which is also compatible with popular EMR platforms, helping facilities meet meaningful use standards.
“You’re always going to have doctors that simply want to transcribe…they don’t want to be editing,” he adds.
Belton predicts that the majority of doctors will eventually use real-time, front-end speech platforms. But even five years down the road, 20 to 30 percent will still be using more traditional methods that feed into EMRs.
For those seeking a more traditional system that incorporates speech recognition while still utilizing a transcriptionist, the MedQuist-M*Modal DocQment myWAY may be the product they need. It incorporates real-time speech recognition and physician self-editing, so doctors can choose the method that works best for them: review and edit speech-recognized documents, similar to Nuance’s Dragon, or send a speech-recognized document to a transcriptionist, like Nuance’s HIM. MedQuist-M*Modal’s DocQscribe is a Web-based transcription and editing platform that transcriptionists use to edit and finalize reports.
On the horizon: going mobile, cloud computing
There are about 100 existing EMRs that have licensed Nuance’s solutions into their mobile platforms using the Healthcare Development Platform, a cloud-based development system. M*Modal-MedQuist, the product of two separate entities that merged this past summer, is launching mobile systems so physicians with a smartphone or tablet can use it to dictate, transcribe and sync data with their EMR.
“Physicians are very mobile folks,” Belton says. “There’s no question that there will be more work done on mobile.” Not all doctors see patients in one location; many private practice doctors need to be able to document patient visits whether they are in their office or in a hospital.
Using tablets and smartphones to dictate and transcribe patient data is a growing trend. Dragon Medical Mobile Recorder lets clinicians dictate information at the point-of-care with an iPhone. The dictations are then securely uploaded to Nuance’s background speech recognition platforms for rapid document turnaround.
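A rough sketch of what that handoff might look like appears below. The endpoint URL, header names and function are invented for illustration and are not Nuance’s actual mobile interface.

```python
# Illustrative only: a point-of-care recording is posted over HTTPS to a
# back-end recognition service for background processing. The endpoint and
# headers are hypothetical.
import urllib.request

def upload_dictation(audio: bytes, physician_id: str) -> None:
    request = urllib.request.Request(
        "https://dictation.example.com/upload",  # hypothetical endpoint
        data=audio,
        headers={"Content-Type": "audio/wav", "X-Physician-Id": physician_id},
        method="POST",
    )
    # A production system would authenticate the request and encrypt the
    # audio end to end; this simply shows the handoff to background
    # speech recognition.
    with urllib.request.urlopen(request) as response:
        print("Upload status:", response.status)

# Example usage (not executed here):
# upload_dictation(open("visit.wav", "rb").read(), "dr-smith")
```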
There is a wealth of other applications that let physicians retrieve information using mobile devices—those are nothing new in the grand scheme of medical technology, though new configurations and variations continue to emerge. One of the latest apps to incorporate speech recognition is Nuance’s PowerScribe 360 Mobile App. It gives radiologists access to radiology reports using their iPhones, and also lets them dictate a search query and receive results from sources such as Google.
Meanwhile, AnyModal CDS is currently available on the iPad. Tom Mitchell, M*Modal-MedQuist’s marketing manager, says the company is working to make EMRs accessible on smartphones so physicians can dictate information, have it transcribed and sync it to the EMR in the medical facility.
Speech recognition and context technologies advance
The field of speech recognition software in general continues to improve and some new solutions are in the works. Nuance is piloting a Clinical Language Understanding technology that improves accuracy during dictation, and enables data to be mined from text. The technology enhances the accuracy of transcription when it comes to complex medical terminology, and is due out this year.
Competitor M*Modal-MedQuist has already unveiled its Speech Understanding technology.
“It’s a little more in-depth than simple speech recognition,” explains Mitchell. He says the Speech Understanding technology enables the system to accurately pick up words regardless of a physician’s accent and makes documents searchable. Coupled with the Natural Language Understanding tool, it has revolutionized speech recognition to ensure dictation is more accurate and includes properly encoded information.
“Speech recognition has been around for a long time, but the second generation of this, Speech Understanding, will drive innovation with the ability to derive greater understanding out of the spoken word from any input or capture device,” Mitchell adds. “Natural Language Understanding takes this a step further and instantly converts the narrative into a structured clinical document that enables collaboration and eliminates inefficiency in clinical and administrative workflow.”
“The speech recognition engines have improved with additional processing power and other technology improvements,” ChartLogic’s Melis says. “Many have moved beyond simple phonetics to natural language processing, which can analyze vast amounts of data and determine the most likely combinations of words and phrases that people use in any given context. This has allowed the computer to be smarter in determining the contextual difference between the phrases ‘ice cream’ and ‘I scream,’” he notes.
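To make Melis’s “ice cream” versus “I scream” point concrete, here is a toy sketch of context-based scoring: each candidate phrase is scored against the words around it, and the more likely combination wins. The tiny corpus and bigram counting are made up for the example and bear no resemblance to a production language model.

```python
# Toy illustration of context-based disambiguation between phrases that
# sound alike. The corpus and scoring are invented for this example.
from collections import Counter

corpus = (
    "the patient would like some ice cream after the procedure "
    "i scream when the alarm goes off"
).split()

# Count how often each pair of adjacent words occurs in the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))

def score(phrase: str, previous_word: str) -> int:
    # Favor phrases whose first word commonly follows the preceding word,
    # and whose own words commonly occur together.
    words = phrase.lower().split()
    return bigrams[(previous_word, words[0])] + bigrams[tuple(words)]

candidates = ["ice cream", "I scream"]
previous_word = "some"  # "...would like some ___"
best = max(candidates, key=lambda c: score(c, previous_word))
print(best)  # -> "ice cream"
```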
Melis believes context will become the most important factor in speech recognition.
“This allows us to understand that you want to order a patient’s MRI, not insert text into the chart indicating that you may order an MRI (later),” he explains.
Mitchell says M*Modal-MedQuist plans to roll out more mobile applications that seamlessly integrate with EMRs to “input and capture the patient’s story from any location,” and will pair better speech recognition with mobile technology, two aspects of EMRs that are advancing in tandem and growing in demand.
Other industry players are expected to unveil similar technologies, but one theme will be common to their offerings. “The idea of mobility is key,” Mitchell says.