
The Rank Tank

December 30, 2021
Are public health-care ratings an indicator of quality or of popularity?

Each summer, U.S. News & World Report releases its list of the best hospitals in the nation, but this year, centers eagerly awaiting the results had to wait a little longer: The publisher postponed the public release of its annual hospital rankings by one week after discovering errors in the data used to compile the report.1

The errors were attributed to a new methodology used in the 2017-2018 rankings that was intended to "allow [the publisher] to better assess hospital care," according to correspondence between U.S. News & World Report and participating hospitals. The changes "are complex to implement, and we discovered errors late in the process," the publisher explained. "We are confident that once correctly implemented, these changes will benefit our shared goal of providing patients with better information about their health care."

The rankings were eventually released August 8, 2017, with few surprises about which hospitals landed a top-10 spot, but this year's blunder raised questions about the validity and value of such ranking systems.2

For instance, how much credence do patients and health-care professionals give to these rankings? Do they offer realistic evaluations of a health-care facility's level of care, or are they simply a marketing tool? How are they developed? And, finally, what should physicians and other health-care professionals do with this information – embrace it or ignore it?

ASH Clinical News sought a clearer picture of how public ranking systems can advance – and impede – the common goal of better medical care.

Ranking Roundup

The health-care rankings field is crowded with both consumer and federal agencies offering recommendations. The U.S. News & World Report rankings are arguably the most established, but consumers can also consult rankings from The Leapfrog Group, Healthgrades, Consumer Reports, and even Yelp.

For more information about how individual clinicians are ranked, see "How Do You Rank?" from our July 2016 issue.

The Centers for Medicare and Medicaid Services' (CMS') Hospital Compare website rates more than 4,000 Medicare-certified and Veterans Affairs hospitals on a scale of one to five stars to help users decide where to access health care – and to encourage hospitals to improve the quality of the care they provide.3

CMS' Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey uses standardized measurements of patients' perspectives on hospital care "to allow objective and meaningful comparisons between hospitals on domains that are important to consumers."4 These items complement the data hospitals already collect to "support improvements in quality-related activities."

Each ranking entity has its own ratings protocol, so the simple question, "How are the rankings developed?" has a complex answer.

U.S. News & World Report offers multiple ranking lists: "Best Hospitals Specialty," "Best Hospitals Procedures and Conditions Ratings," and "Best Hospitals Honor Roll," to name a few, along with rankings for regional and children's hospitals. The methodology for the 2017-2018 "Best Hospitals Specialty" rankings is laid out in a publicly available, 137-page PDF.5 A cancer center's score is based on four weighted domains (a brief sketch of how such weights combine follows the list):

  • structure (resources that a center makes available to patients, such as nurse staffing; 30%)
  • process/expert opinion (a hospital's reputation, based on the average number of nominations in physician surveys; 27.5%)
  • outcomes (including 30-day mortality; 37.5%)
  • patient safety (instances in which patients may be avoidably harmed or put at risk, drawn from the Health Services Cost Review Commission all-payer database; 5%)
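
To make the weighting concrete, here is a minimal sketch, in Python, of how a weighted composite score of this kind can be computed. The weights mirror the list above, but the domain scores and the example hospital are invented for illustration; U.S. News & World Report's actual normalization, risk-adjustment, and scoring steps are far more involved and are described in its methodology PDF.

```python
# Minimal sketch of a weighted composite hospital score.
# The weights mirror the cancer-specialty breakdown above; the
# per-domain scores (0-100 scale) are invented for illustration.

WEIGHTS = {
    "structure": 0.300,           # resources available to patients
    "process_reputation": 0.275,  # physician-survey nominations
    "outcomes": 0.375,            # e.g., 30-day mortality
    "patient_safety": 0.050,      # avoidable-harm indicators
}

def composite_score(domain_scores: dict) -> float:
    """Weighted sum of per-domain scores, each on a 0-100 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%
    return sum(WEIGHTS[d] * domain_scores[d] for d in WEIGHTS)

# Hypothetical hospital: strong outcomes, middling reputation.
example = {
    "structure": 85.0,
    "process_reputation": 40.0,
    "outcomes": 92.0,
    "patient_safety": 70.0,
}
print(f"Composite: {composite_score(example):.1f}")  # Composite: 74.5
```

Even this toy version shows why clinicians quoted below find composite ratings hard to act on: the single number conceals which domain drove the result. Here, a hospital with a middling reputation still scores well because outcomes carry the largest weight.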

Consumer Reports explains its ranking system in a comparatively slim, 38-page document; its rankings reflect scores in three domains: patient outcomes, patient experiences, and hospital practices.6 Leapfrog collects its data through a proprietary survey,7 while most other ranking entities mine historical Medicare data and apply their own ranking methods.

"You need a PhD to read the methodology documents; I have one, and I still don't understand it," joked Susan Moffatt-Bruce, MD, PhD, MBA, a cardiothoracic surgeon and chief quality and patient safety officer at The Ohio State University Wexner Medical Center in Columbus.

Even when ranking systems are transparent about their formulas, the process seems "to take place in a black box, and when it's a composite score, it's difficult for a hospital to [know where to] take action to improve that score," said Barry Rosen, MD, a general surgeon at Advanced Surgical Care of Northern Illinois and vice president of medical management at Advocate Good Shepherd Hospital, both in Barrington, Illinois.

"I'm coming at this from the perspective of a physician and a hospital administrator," he added. "As [the latter], we care a great deal about these scores, but many of these companies won't share their raw data unless you enter a consulting engagement with them, and that can be quite costly."

The mystery surrounding the scores also limits their usefulness for patients. "Rankings are not necessarily informative to patients because the ranking entities take so many factors and aggregate them to create a ‘star' rating," noted Charles Dinerstein, MD, a senior medical fellow at the American Council on Science and Health, where he has written extensively on the pitfalls of ranking systems. "What's interesting and valuable to patients may very well be lost in the aggregation."

According to a 2008 Health Affairs study comparing five nationwide ranking services and one state service, the potential for disagreement among ranking services "appears likely to confuse, rather than inform, consumers."8 Michael Rothberg, MD, MPH, vice chair for research at Cleveland Clinic's Medicine Institute and director of its Center for Value-Based Care Research, and co-authors found that each service assessed different measures of structure, process, and outcomes and used inconsistent reporting periods and patient definitions. "Consequently, they failed to agree on hospital rankings within any diagnosis, even when using the same metric (such as mortality)," the authors concluded.

The report offered suggestions for improving public reporting and ranking systems, such as a more active role for hospitals in designing "measures that can be efficiently collected and that represent the true quality of care offered," citing the Hospital Quality Alliance (which develops and implements the aforementioned CAHPS survey) and the American College of Surgeons' National Surgical Quality Improvement Program as good examples of active participants. They also called on rating services to solve the problems of "risk adjustment and random variation" with standardized methodology and reporting periods to level the playing field for providers.

Ranking Redux?

The methodology changes that produced errors in the 2017-2018 U.S. News & World Report data were related to the "Patient Safety Score" measure and the "Best Hospitals Specialty" rankings: removing two safety indicators for rare adverse events, adjusting survival scores to account for patients' socioeconomic status, and using a different Medicare data set to exclude patients who were transferred to other hospitals but had not been recorded as transfers in hospital-submitted data. When it announced the errors, U.S. News & World Report acknowledged that analysts incorrectly adjusted mortality rates for hospitals with high rates (top quartile) or low rates (bottom quartile) of transfer cases.9 In all data-driven specialties, certain transfer cases were incorrectly excluded from the mortality calculations, while certain procedural cases were incorrectly included. There were also errors in calculating poverty-adjusted mortality rates.

Before eventually releasing the report on August 8, 2017, U.S. News & World Report undertook "a comprehensive review of the rankings and ratings analyses, including a careful review of each step of the mortality analyses" to correct the uncovered errors.

Ben Harder, chief of health analysis at U.S. News & World Report, told ASH Clinical News that the errors "were limited to the calculation of risk-adjusted survival. While we made several methodologic improvements this year, we did not overhaul the rankings methodology as a whole."

Some clinicians, however, would argue that that's exactly what the ranking services need.

The Role of Reputation

U.S. News & World Report ranks hospitals and cancer centers, and the cancer ranking is driven primarily by objective data. "We include a variety of diagnoses in our rankings of cancer centers," Mr. Harder explained, but there is no additional breakdown of the data based on whether a center treats primarily solid-tumor or hematologic malignancies. "We do, of course, risk-adjust for diagnosis-related groups and severity of illness, along with other patient factors, such as age and socioeconomic status."

In other words, cancer is one of a dozen specialty areas (out of 16 total) in which ranking is determined primarily by objective data rather than by reputation, as measured by physician survey responses.

In four specialty areas (ophthalmology, psychiatry, rehabilitation, and rheumatology), "ranking is determined entirely by reputation, based on responses from three years of surveys of physician specialists," according to U.S. News & World Report.10

This "reputation-only" ranking has rankled some critics of the U.S. News & World Report data. In a 2017 study in the American Journal of Medical Quality, Dr. Moffatt-Bruce and colleagues evaluated the relationships between the 2015 U.S. News & World Report scores and the subjective and objective components that eventually determine a hospital's ranking in U.S. News & World Report's 2015 "Best Hospitals" list.11

They found that, despite methodology changes introduced in 2010 to reduce the weight of a center's reputation score, the Best Hospital rankings are "disproportionately influenced by the subjective reputation measure." The researchers expressed concern that "allowing such a subjective measure of care to influence the determination of America's best hospitals can affect the provision of care, introduce gaming in health care, and lead to misinformed consumer decision making."

In response, Mr. Harder indicated that reputation counts for "at most, 27.5 percent" of a hospital's score in each data-driven adult specialty, and that percentage is the lowest it has ever been in U.S. News & World Report rankings history. "A majority of each hospital's score is determined by objective data," he stated. "At 42.5 percent in most specialties, outcomes measures far outweigh reputation. And outcomes matter most to patients.

"While they posed an interesting research question, [Dr. Moffatt-Bruce and researchers'] data do not support the conclusion they asserted," he said, commenting on the study. "Specifically, they measured the relationship between reputation score and overall U.S. News score (a 0-100 score), but they attempted to draw a conclusion about the relationship between reputation score and U.S. News rank, which is a different variable and is considerably less closely correlated to reputation." He added that he encouraged the group to repeat the study "after aligning the methods with the research question."

Although Dr. Moffatt-Bruce said the group will re-run the analysis using updated U.S. News & World Report data, she's not convinced that the outcomes will be any different. "We looked at rank and overall score," she said, "and we still saw that reputation greatly influenced the outcome."

The ranking entities undoubtedly have noble intentions – keeping the public and health-care stakeholders informed – Dr. Moffatt-Bruce said, but "this is an arduous feat, considering the vastness of the American health-care system and how many data points they are trying to pull together."

She praised U.S. News & World Report for being "thoughtful" about trying to truly account for outcomes, patient safety, and other elements when weighing the data, but, despite those efforts, reputation still matters – at least anecdotally. "If you were to bring 100 people in a room, they would probably all say [the rankings] are about reputation," she said. "They couldn't even tell you what the other components of U.S. News & World Report are."

Putting Rankings into Practice

Data-collection methodology, data analysis, and data validity aside, who are these rankings really for?

Dr. Dinerstein said he sees the rankings primarily as a marketing tool for hospitals and the ranking entities themselves. "U.S. News & World Report uses it to sell their magazines and their logo, but as a physician, I have never understood how marketing the hospital or the health system drives business to them," he said. "My experience has been that patients go to the hospital I want them to go to, or, if it's an emergency, to the closest one."

Dr. Moffatt-Bruce offered the theory that rankings appeal to a few audiences: "One is the patient, although I think [the rankings] confuse, rather than help, patients because of the volume of information. The second is the payers that are sending patients our way. The third audience is other physicians or faculty who want to get a sense of an institution before joining it."

Dr. Rosen sees the target audience of the rankings as the hospitals – and that's not necessarily a bad thing. For example, ranking regional hospitals encourages health-care systems to be more open with their outcomes data. "The large hospitals typically earning a spot in the U.S. News rankings recognize that they need to share with the public more community-based information, rather than national information, because most people aren't going to get on a plane to get their health care in a different city," he explained. "That certainly is a step in the right direction."

Though most ranking systems are consumer-focused, the physicians who spoke with ASH Clinical News questioned how patients would use them. Having access to the information without a context for understanding it is misleading to the patient, Dr. Moffatt-Bruce said. "I'm going to buy my vacuum cleaner based on Consumer Reports, and I'm also going to look there for guidance about my health care? It feels uncomfortable."

Dr. Dinerstein believes that patients use the aggregated rankings to get an idea of the general health-care landscape, but that they tend to turn to neighbors, friends, and family for their anecdotal experiences at specific institutions or with specific providers.

Dr. Rosen added that a hospital's ranking doesn't tell patients enough about individual physicians, and he'd like to see ranking groups move away from hospital-specific data to physician-specific data. "You can get more information about what refrigerator you want to buy than about what physician you want to see," he said. Despite claims of intricate data analysis, he noted, "most rankings are reputation-based, and they are mostly derived from subjective data. When we start moving toward more physician-specific, objective data – looking at processes of care and outcomes of care – it [will] be to our patients' greatest benefit."

You've Been Ranked – Now What?

Every clinician who spoke with ASH Clinical News emphasized one major point: Physicians are not in a position to bury their heads in the sand and ignore the rankings. Instead, they need to be more proactive with rankings, on behalf of their institutions and with their patients.

For instance, physicians may have to defend their institution's lower ranking to a patient who's looking to transfer to a higher-ranked hospital. But, Dr. Dinerstein noted, there are most likely other factors at play besides an assigned number on a list. "I would have a one-on-one conversation to try to determine what the patient is most interested in and concerned about and address that, rather than continuing to look at those aggregated ratings," he explained. "Remember, you are only as good as your last case. So, if you have a horror story, that's the story that gets repeated."

Middle-of-the-pack hospitals and centers can benefit most from the rankings, according to Dr. Moffatt-Bruce. The institutions that make regular appearances at the top of the lists will likely always jostle for the same top spots. Rather than waiting for the rankings and bemoaning the results, Dr. Moffatt-Bruce strongly encouraged physicians to get involved in outcomes-based data collection, either through their own institutions or through their professional specialty societies, both at the state level and nationally.

"The onus is on faculty, practitioners, surgeons, and physicians to engage and inform," she stressed. "We're all going to be graded, but it's better that we inform how we want to be assessed, rather than just complaining about it."

Drs. Dinerstein and Rosen agreed that physicians need to take the reins in the ranking realm.

No practitioner is an island, and affiliation with a low-ranked hospital may reflect negatively on a physician's own practice. "There is a disconnect between physician and hospital," Dr. Dinerstein said. "Many private-practice physicians don't realize they're part of an ecology. Physicians should look closely at the rankings so they can compare their perception of the hospital with what other people are saying."

He also advised his colleagues to pay more attention to the outcomes and ranking data that CMS makes publicly available. Keeping track of those data is not second nature to physicians, but it should be, he said. "It does impact us, and the sooner we use those tools, the better it's going to be for us."—By Shalmali Pal


References

  1. STAT News. U.S. News postpones release of hospital rankings due to data errors. Accessed September 1, 2017, from https://www.statnews.com/2017/07/17/us-news-hospital-rankings/.
  2. U.S. News & World Report. U.S. News announces 2017-18 Best Hospitals. Accessed September 1, 2017, from https://www.usnews.com/info/blogs/press-room/articles/2017-08-08/us-news-announces-2017-18-best-hospitals.
  3. Medicare.gov. What is Hospital Compare? Accessed September 1, 2017, from https://www.medicare.gov/hospitalcompare/about/what-is-HOS.html.
  4. Hospital Consumer Assessment of Healthcare Providers and Systems. Accessed September 2, 2017, from http://www.hcahpsonline.org/home.aspx#background/.
  5. U.S. News & World Report. Methodology updates for Best Hospitals 2017-18. Accessed August 31, 2017, from http://health.usnews.com/health-news/blogs/second-opinion/articles/2017-07-05/methodology-updates-for-best-hospitals-2017-18.
  6. Consumer Reports. How we rate hospitals. Accessed August 31, 2017, from https://www.consumerreports.org/cro/2012/10/how-we-rate-hospitals/index.htm.
  7. The Leapfrog Group. Survey content. Accessed August 31, 2017, from http://www.leapfroggroup.org/ratings-reports/survey-content.
  8. Rothberg MB, Morsi E, Benjamin EM, et al. Choosing the best hospital: the limitations of public quality reporting. Health Aff. 2008;27:1680-7.
  9. U.S. News & World Report. What we corrected in the embargoed 2017-18 Best Hospitals Rankings. Accessed September 1, 2017, from http://health.usnews.com/health-news/blogs/second-opinion/articles/2017-07-26/what-we-corrected-in-the-embargoed-2017-18-best-hospitals-rankings.
  10. U.S. News & World Report. FAQ: How and why we rank and rate hospitals. Accessed September 2, 2017, from http://health.usnews.com/health-care/
