
Measures used for CMS programs paint inaccurate picture of quality of care: JAMA editorial

by Lauren Dubinsky, Senior Reporter | February 13, 2015
The measures that CMS uses to decide how much hospitals get paid or penalized do not give an accurate representation of their overall quality of care, according to a new Northwestern Medicine editorial published in JAMA.

Patient Safety Indicator 90 (PSI-90) tracks the occurrence of eight different adverse events including blood clots, pressure ulcers, hip fractures and infections. It was initially intended to quantify those events and assist hospitals in making their own quality improvement efforts. It now makes up about a third of the CMS Hospital-Acquired Condition (HAC) Reduction Program and Hospital Value-Based Purchasing (VBP) Program scores.

"Pay-for-performance policies should be rewarding hospitals that do an excellent job of identifying these conditions before they cause harm to patients, but currently these measures do the opposite," noted Dr. Gary A. Noskin, senior vice president and chief medical officer at Northwestern Memorial and professor of medicine at the Feinberg School.

The editorial cites a recent JAMA study that found that hospitals with high rates of postoperative blood clots were often the ones that screened patients for clots the most. That surveillance bias creates the misleading impression that more vigilant, high-quality hospitals are delivering poor care.

According to the editorial, PSI-90 measures also cause individual complications to be counted multiple times by the Hospital VBP program and HAC Reduction Program. In addition, they do not accurately differentiate between dangerous complications and those that cause little or no harm, and do not account for differences in hospitals' patient populations.

But even with all of the potential issues, the authors believe that PSI-90 can still be used effectively within the programs if the measures are "properly constructed." The adverse events being measured are essential targets for quality improvement, but surveillance bias, redundancy and the lack of clinical relevance have to be ironed out before the measures can drive real improvement, according to the editorial.
