Say what?

by Kristen Fischer, DOTmed News | February 02, 2012
From the January/February 2012 issue of HealthCare Business News magazine


Meanwhile, AnyModal CDS is currently available on the iPad. Tom Mitchell, M*Modal-MedQuist’s marketing manager, said the company is working to make EMRs accessible on smartphones so physicians can dictate information, have it transcribed, and sync it to the EMR at the medical facility.

Speech recognition and context technologies advance
Speech recognition software in general continues to improve, and new solutions are in the works. Nuance is piloting a Clinical Language Understanding technology that improves accuracy during dictation and enables data to be mined from the resulting text. The technology is designed to transcribe complex medical terminology more accurately and is due out this year.

Competitor M*Modal-MedQuist has already unveiled its Speech Understanding technology.

“It’s a little more in-depth than simple speech recognition,” Mitchell explains. He says the Speech Understanding technology enables the system to accurately pick up words regardless of a physician’s accent and makes documents searchable. Coupled with the Natural Language Understanding tool, it aims to make dictation more accurate and to ensure the resulting document includes properly encoded information.

“Speech recognition has been around for a long time, but the second generation of this, Speech Understanding, will drive innovation with the ability to derive greater understanding out of the spoken word from any input or capture device,” Mitchell adds. “Natural Language Understanding takes this a step further and instantly converts the narrative into a structured clinical document that enables collaboration and eliminates inefficiency in clinical and administrative workflow.”

“The speech recognition engines have improved with additional processing power and other technology improvements,” ChartLogic’s Melis says. “Many have moved beyond simple phonetics to natural language processing, which can analyze vast amounts of data and determine the most likely combinations of words and phrases that people use in any given context. This has allowed the computer to be smarter in determining the contextual difference between the phrases ‘ice cream’ and ‘I scream,’” he notes.
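
To make the “ice cream” versus “I scream” distinction concrete, here is a minimal sketch of the idea Melis describes: a toy bigram language model that scores acoustically identical candidates by their surrounding context. It is purely illustrative (Python, with a made-up mini-corpus) and does not represent ChartLogic’s or any vendor’s actual engine.

    from collections import Counter

    # Hypothetical in-domain text standing in for a real training corpus.
    corpus = (
        "the patient ate ice cream after surgery . "
        "i scream when the alarm sounds . "
        "she ordered ice cream for dessert ."
    ).split()

    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)

    def score(words):
        """Product of add-one-smoothed bigram probabilities."""
        p = 1.0
        for prev, cur in zip(words, words[1:]):
            p *= (bigrams[(prev, cur)] + 1) / (unigrams[prev] + len(unigrams))
        return p

    # Both candidates sound alike; the preceding words decide the winner.
    context = ["she", "ordered"]
    for candidate in (["ice", "cream"], ["i", "scream"]):
        print(candidate, score(context + candidate))

Run on this tiny corpus, “ice cream” scores higher after “she ordered,” which is exactly the kind of contextual judgment Melis says the newer engines make.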

Melis believes context will become the most important factor in speech recognition.

“This allows us to understand that you want to order a patient’s MRI, not insert text into the chart indicating that you may order an MRI (later),” he explains.
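
To illustrate the distinction Melis draws between placing an order and merely noting one, here is a hedged, toy sketch of intent detection. Rule-based keyword matching stands in for the trained natural language understanding models real products use, and every keyword and field name here is invented for illustration.

    import re

    # Hedging words suggest narrative, not an actionable order (illustrative list).
    HEDGES = re.compile(r"\b(may|might|consider|possibly|later)\b", re.I)
    ORDER = re.compile(r"\border (?:a |an )?(?P<study>MRI|CT|X-ray)\b", re.I)

    def interpret(utterance):
        m = ORDER.search(utterance)
        if m and not HEDGES.search(utterance):
            # Unhedged order: emit a structured action instead of chart text.
            return {"action": "place_order", "study": m.group("study")}
        # Otherwise, keep the dictation as narrative text in the chart.
        return {"action": "insert_text", "text": utterance}

    print(interpret("Order an MRI of the lumbar spine"))
    print(interpret("We may order an MRI later if symptoms persist"))

The first utterance produces a structured order; the second stays in the chart as text, mirroring the behavior Melis describes.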
