Strategic Planning: Demonstrating Value and Report Cards of Key Performance Measures



Introduction





The field of Hospital Medicine has enjoyed tremendous growth over the past decade. This growth has been driven in part by manpower needs arising from resident duty-hour restrictions and the declining availability of primary care physicians to oversee inpatient care, but the widespread adoption of Hospital Medicine practice over the last ten years has also been fueled by the concomitant growth of the health care quality movement. The compelling need for improvement in the quality of care delivery in U.S. hospitals, heralded in two seminal Institute of Medicine reports (To Err Is Human in 2000 and Crossing the Quality Chasm in 2001), created an important platform upon which hospitalists could offer value to hospitals, patients, and referring primary care providers as a new field of inpatient specialists with both the clinical and operational expertise needed to achieve optimal outcomes in hospital-based care.






The definition of “value” in Hospital Medicine has evolved over the last 10 years. The need to demonstrate “value” to key stakeholders (hospitals, patients, and referring primary care physicians) has existed since the inception of the field, given that a significant proportion of hospitalist groups rely in some measure upon institutional fiscal support to exist. Hospitals, in particular, have been keenly interested in understanding the “return on investment” for their financial commitments to hospitalist groups. Early studies demonstrated that hospitalist-driven care was associated with reduced lengths of stay and enhanced adherence to payor-defined “core measures” of performance. Fiscal value was a primary driver of early adoption of Hospital Medicine practices: lower lengths of stay for medical inpatients implied greater capacity for inpatient volume growth in high-margin specialties, and adherence to payor-defined performance measures meant hospitals could qualify for incentive payments (or avoid disincentive penalties) related to their quality of care delivery. As public reporting of hospital performance evolved, however, there has been increasing focus on clinical outcomes and patient satisfaction survey results as valid measures by which to compare hospitals; such attention has quickly translated into new domains by which the “value” of hospitalist-driven care can be assessed.






Performance Assessment for Hospitalists



Despite a number of studies designed to assess the quality of hospitalist-driven care, there remains thus far a relative paucity of compelling evidence that hospitalist care is necessarily more likely to result in improvements in meaningful outcomes such as mortality, readmission rates, or the quality of patients’ hospital experience. In an increasingly financially constrained, and in some regions increasingly competitive, operating environment, it may therefore be incumbent upon individual hospitalist groups to be able to demonstrate the value of their work in order to deliver an anticipated “return on investment” to sponsoring institutions through specific measures derived from their groups’ own practice and measured at their own institutions.



A useful framework for assessing value may be found in the Institute of Medicine’s 2001 Crossing the Quality Chasm report, in which six “aims” for quality health care were defined: safe, efficient, effective, equitable, timely, and patient-centered. The various domains of hospitalist practice can be mapped to this framework, and in many cases a variety of established metrics are readily available to most institutions to assess actual performance within each of these aims.



In considering the concept of performance assessment for hospitalists, hospital leaders should determine how process- and outcome-based metrics might be applied: to individual hospitalists, to a given hospitalist group, or to the hospital as a whole. The applicability of measurement at the level of the individual hospitalist depends on the nature of staffing and the shared management of each patient over the course of the typical hospital admission. In many shift-based models, a number of consecutive hospitalists might be involved in the care of a single patient over the course of one hospitalization. If an ACE inhibitor has not been prescribed during a hospitalization for a patient with chronic systolic heart failure and no contraindications, with whom does accountability lie: the admitting hospitalist, the daily rounding hospitalist who saw the patient early in the admission, or the discharging hospitalist? In many instances, metrics applied at the group level might prove most useful by identifying opportunities for group directors to implement group-level interventions to address suboptimal outcomes. Metrics at the hospital level might apply for those groups whose members have achieved positions of departmental or hospital leadership, with the expectation that hospital-wide practice would improve once standards and innovations introduced and implemented by hospitalist staff are ultimately adopted by all services within a given hospital.
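To make the attribution problem concrete, the following is a minimal sketch (in Python, using hypothetical admission records and illustrative field names, not any particular institution's data model) of aggregating a single process measure at the group level, where responsibility shared across several hospitalists makes individual-level attribution ambiguous:

from collections import defaultdict

# Hypothetical admission records: each admission lists the hospitalists
# involved and whether a given process measure (eg, ACE inhibitor
# prescribed for chronic systolic heart failure) was ultimately met.
admissions = [
    {"group": "HM-A", "hospitalists": ["Lee", "Patel", "Kim"], "measure_met": True},
    {"group": "HM-A", "hospitalists": ["Patel"], "measure_met": False},
    {"group": "HM-B", "hospitalists": ["Ortiz", "Chen"], "measure_met": True},
]

# Because several hospitalists may share one admission, individual-level
# attribution is ambiguous; aggregating at the group level sidesteps that.
met = defaultdict(int)
total = defaultdict(int)
for adm in admissions:
    total[adm["group"]] += 1
    met[adm["group"]] += int(adm["measure_met"])

for group in sorted(total):
    print(f"{group}: {met[group]}/{total[group]} admissions compliant")

A group director reviewing such output could then target group-level interventions (order sets, discharge checklists) rather than attempting to assign individual blame.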



Another important consideration in determining the value of a Hospital Medicine group is clarifying whether there is an expectation for clinical processes and outcomes to improve only for patients admitted under hospitalists’ care, or whether hospitalists are expected to contribute to system-wide transformation to improve outcomes across services for all admitted patients. In many academic medical centers, hospitalists play a central role in both undergraduate and graduate medical education, with the ability to provide individual instruction and mentorship as well as potentially contributing to curriculum and/or administrative leadership to improve educational programs overall at a given institution. In both clinical and educational impacts, therefore, hospitalists might be expected to be responsible for (and accountable to) outcomes at both the individual patient/learner level as well as system-wide performance.




Practice Point





  • As public reporting of hospital performance evolved, there has been increasing focus on clinical outcomes and patient satisfaction survey results as valid measures by which to compare hospitals; such attention has quickly translated into new domains by which the “value” of hospitalist-driven care can be assessed. In an increasingly financially constrained, and in some regions increasingly competitive, operating environment, it may therefore be incumbent upon individual hospitalist groups to be able to demonstrate the value of their work in order to deliver an anticipated “return on investment” to sponsoring institutions through specific measures derived from their groups’ own practice and measured at their own institutions.



There remains active investigation and debate as to which data actually represent meaningful metrics of hospital-based practice. Hospitalist groups are often largely reliant on existing hospital-level administrative databases, generally designed around payor-defined incentives and requirements for participation. Such data have traditionally emphasized process-based metrics (eg, whether pneumococcal vaccination was administered to appropriate at-risk populations) but have recently expanded to include well-defined clinical outcomes such as disease-specific mortality and readmission rates. Whether process-based metrics alone sufficiently or accurately reflect the quality of care delivery has come under increasing scrutiny, and concurrent focus on associated clinical outcomes helps to ensure that process-based performance predictably correlates with actual patient outcomes. Given the resources typically required to collect and report such metrics, most hospitalist groups will by necessity rely on data abstracted from existing hospital-based administrative measures for their own reporting and quality assurance purposes. The utility of such data, however, depends in part on the particular hospitalist group’s clinical scope of practice, as standard disease-specific reporting may or may not coincide with those conditions primarily managed by that group. For example, if the majority of heart attack patients are admitted to a cardiologist-led subspecialty service, hospital-level metrics of compliance with aspirin on arrival may not reasonably reflect hospitalist practice at that institution. Depending on the extent of quality assurance programs at a given institution, acquisition of more detailed information regarding practice performance typically requires the design and application of dedicated reporting instruments focused on specific outcomes. Given the often prohibitive administrative resources needed for such data collection, this chapter primarily focuses on strategies to assess hospitalist performance derived largely from data that might reasonably be expected to be collected by hospitals for purposes of payor and/or regulatory compliance.
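The aspirin-on-arrival example can be illustrated with a short sketch (hypothetical discharge abstracts; the field names are illustrative, not a real administrative schema) showing how a hospital-wide compliance rate can diverge from the rate attributable to the hospitalist service:

# Hypothetical discharge abstracts from a hospital administrative database.
discharges = [
    {"dx": "AMI", "service": "cardiology", "aspirin_on_arrival": True},
    {"dx": "AMI", "service": "cardiology", "aspirin_on_arrival": True},
    {"dx": "AMI", "service": "cardiology", "aspirin_on_arrival": True},
    {"dx": "AMI", "service": "hospitalist", "aspirin_on_arrival": False},
]

def compliance(records, dx, measure, service=None):
    """Compliance rate for one diagnosis, optionally restricted to one service."""
    cohort = [r for r in records
              if r["dx"] == dx and (service is None or r["service"] == service)]
    return sum(r[measure] for r in cohort) / len(cohort) if cohort else None

print(compliance(discharges, "AMI", "aspirin_on_arrival"))                 # 0.75 hospital-wide
print(compliance(discharges, "AMI", "aspirin_on_arrival", "hospitalist"))  # 0.0 hospitalist cohort

With only one hospitalist AMI admission in this toy dataset, the hospital-wide figure says almost nothing about hospitalist practice, which is precisely the denominator problem described above.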



The issue of risk adjustment may also be important, especially if data are intended to offer valid comparison with either prior performance (eg, has length of stay declined because of better physician performance or because of declining acuity?) or with outside hospital/hospitalist groups (ie, “our patients tend to be sicker than theirs…”). Such risk adjustment may or may not be straightforward at an institutional level, but would typically be applied to publicly reported data when comparing outcomes across institutions. Risk adjustment for comparison of outcomes within a given institution over time might be accomplished using the case-mix index, diagnosis-related group (DRG)-specific measurements, or more formal assessment of acuity and comorbidity (using established tools such as the Charlson score).
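As one concrete illustration of the last approach, here is a minimal sketch of a Charlson comorbidity score, using a subset of the classic 1987 weights; a full implementation would map ICD codes to conditions and typically add an age adjustment:

# Classic (1987) Charlson weights for a subset of comorbid conditions.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "copd": 1,
    "diabetes": 1,
    "hemiplegia": 2,
    "moderate_severe_renal_disease": 2,
    "any_malignancy": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
    "aids": 6,
}

def charlson_score(comorbidities):
    """Sum the weights of a patient's documented comorbid conditions."""
    return sum(CHARLSON_WEIGHTS.get(c, 0) for c in comorbidities)

print(charlson_score(["congestive_heart_failure", "diabetes"]))          # 2
print(charlson_score(["metastatic_solid_tumor", "copd", "hemiplegia"]))  # 9

Comparing mean scores across two time periods offers a rough check on whether a falling length of stay reflects better performance or simply declining acuity.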






Six Aims for Health Care as Described by the Institute of Medicine





Safety



Patient safety may be defined as “the avoidance, prevention, and amelioration of adverse outcomes or injuries stemming from the processes of health care.” The complexity and fast pace of the inpatient environment present major challenges to ensuring a “safe” episode of care (free of iatrogenic infection, injury, or error), and the hospitalist, by virtue of his/her clinical focus, systems expertise, team leadership, and committed time within the operational environment of the hospital ward, is in many ways ideally suited to serve as a champion of patient safety for his/her own patients as well as for the hospital in general.



As safety science continues to develop within health care, a number of adverse outcomes have been recognized as avoidable through reliable practice of evidence-based prevention measures. Such outcomes include infections acquired as a consequence of hospital-based procedures or exposures; falls related to debility, delirium, concurrent illness, or medication effects; thromboembolic events; as well as less frequently encountered but potentially devastating incidents such as wrong-site surgery, transfusion of mismatched blood type, or patient suicide. Such events are included on the National Quality Forum’s Serious Reportable Events list, which may be used by hospitals as a basis for reporting outcomes to regulatory and oversight bodies. The safety performance of a given Hospital Medicine group can thus be assessed through the frequency of safety “failures” involving patients admitted to that group.
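Because raw counts of safety failures are not comparable across groups or time periods with different census sizes, such frequencies are conventionally normalized to a denominator of exposure. A minimal sketch, using hypothetical numbers, follows:

def events_per_1000_patient_days(event_count, patient_days):
    """Normalize a raw safety-event count to a rate per 1,000 patient-days."""
    return 1000 * event_count / patient_days

# Hypothetical quarter: 4 reportable falls over 2,600 patient-days of census.
rate = events_per_1000_patient_days(4, 2600)
print(f"{rate:.2f} falls per 1,000 patient-days")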



An important consideration when reviewing safety performance metrics, however, is the likelihood of a historical bias toward “under-reporting” of safety-related adverse events at most institutions. Should hospitalist quality improvement efforts ultimately succeed in fostering greater transparency and better reporting of adverse events, the rate of reported serious adverse events may paradoxically rise at first, regardless of the actual frequency of such events before and after the introduction of the hospitalist group.






Effectiveness



The Institute of Medicine’s second aim for health care quality improvement is effectiveness, described as the provision of services based on scientific knowledge. In the context of the practice of Hospital Medicine, this could be interpreted to mean the provision of evidence-based medicine as demonstrated by adherence to consensus-based clinical guidelines.



In 2003, the Centers for Medicare and Medicaid Services (CMS) began the Reporting Hospital Quality Data for Annual Payment Update (RHQDAPU) program, which based an annual incentive payment on the public reporting of a set of quality of care measures, including process of care measures, outcome of care measures, and survey data on patients’ perspectives of care. These measures were designed by the Hospital Quality Alliance (HQA), a collaboration among CMS, provider organizations, and participating acute care hospitals nationwide. The data are publicly available through the Hospital Compare website (http://www.hospitalcompare.hhs.gov); as of Fiscal Year 2009, 96% of hospitals participated in the reporting program and received the full incentive payment update for Fiscal Year 2010. The process of care measures reported to CMS are derived from evidence-based clinical guidelines for several of the most common diagnoses requiring inpatient care among Medicare and Medicaid recipients. Examples of process of care measures specifically relevant to a hospitalist’s common clinical practice include prescription of an angiotensin converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) for left ventricular systolic dysfunction, pneumococcal and influenza vaccination for pneumonia, and appropriate antibiotic selection for pneumonia. Relevant outcome of care measures include the 30-day risk-adjusted mortality rates following hospitalizations for pneumonia and heart failure. These indexes of quality may be appropriate surrogate measures of effective clinical care, and the HQA data are often referenced as the existing national standard of quality. As nearly all acute care hospitals already acquire and report these measures both publicly and to CMS, evaluating the clinical effectiveness of hospitalists or a hospitalist group within an institution through these metrics would not place an undue additional burden on the administrative resources available to the group performing a self-evaluation by this method.
