Project Background

STUDY INTRODUCTION

In 2001, the Institute of Medicine (IOM) released the report Crossing the Quality Chasm: A New Health System for the 21st Century. Highly critical of the U.S. healthcare system, the IOM argued that current systems of care cannot provide Americans with the quality healthcare they need, want and deserve. Simply trying harder will not work. If we want safer, high-quality care, we need intense and far-reaching efforts at all levels of the organization to fundamentally redesign, or transform, systems of care (IOM, 2001).

The Quality Chasm report proposed six aims for improvement focused on the patient experience: health care should be safe, effective, patient-centered, timely, efficient, and equitable. However, the report did not prescribe how the new healthcare delivery system should meet those aims, arguing instead that the details of major system redesign are better left to local efforts (IOM, 2001).

Today the six IOM aims are widely recognized, and many healthcare systems are striving to make major changes in their organizations. Few, however, have succeeded in making substantial transformations to achieve those aims. The call for dramatic redesign stands in contrast with the usual approach to improvement that most healthcare systems undertake. Many systems have used quality improvement tools and techniques for years, and many have achieved short-term improvements in targeted areas through hard work and focused attention. However, improvement strategies have yielded disappointing results in producing lasting system change (Beer, Eisenstat, & Spector, 1990; Reinertsen, Pugh, & Bisognano, 2005; Repenning & Sterman, 2001; Solberg et al., 2000). Too often, improvement success is limited to particular projects, fails to spread throughout the organization (Rondeau & Wagar, 2002), and does not lead to sustained performance after the goal has been met or when attention moves on to other problems (Berlowitz, Young, Brandeis, Kader, & Anderson, 2001; Donaldson & Mohr, 2001). Transformational change, by contrast, is pervasive, involving not only structures and processes but also the very nature of the healthcare organization as reflected in its culture and values (NHS, 2006).

The question, then, is how healthcare systems can transform to provide consistently safe, high-quality care for patients as envisioned in the Quality Chasm. We addressed that question by identifying factors critical to successful system redesign, or transformation, from the experiences of twelve healthcare systems striving to provide superior – and in some cases, perfect – care to their patients. The model presented here represents our conclusions. It is based on analyses from the national evaluation of the Pursuing Perfection (P2) Program, a major initiative of the Robert Wood Johnson Foundation (RWJF). Created in 2001 to respond to the challenges outlined in the Quality Chasm report, P2 was directed toward helping hospitals and physician organizations achieve dramatic improvements in patient outcomes by pursuing perfection in all of their major care processes. RWJF was supported in this initiative by the Institute for Healthcare Improvement (IHI), which served as the national program office for P2 and provided guidance and technical assistance to grantee healthcare systems.

This model offers an understanding of how organizations move from short-term or isolated performance improvements to sustained, organization-wide, highly reliable, evidence-based improvements in patient care (VanDeusen Lukas, 2007). The elements that we present as critical to successful transformation have been studied before – some quite extensively. Our contribution lies in bringing them together, and in some cases extending their conceptual basis, to show how they behave and interact in healthcare systems striving for perfect care.

There are many theories of organizational change and improvement, and extensive research on the challenges of organizational transformation. The objective of the P2 evaluation was to identify, describe and understand the factors that contributed to – or impeded – the healthcare systems' ability to achieve their transformational goals. No single theory seemed adequate to explain the complex phenomena that we were to evaluate. Therefore, consistent with literature that highlights the value of using multiple theories and disciplinary perspectives in studying organizational change (Greenhalgh, Robert, Macfarlane, Bate, & Kyriakidou, 2004; Grol, Bosch, Hulscher, Eccles, & Wensing, 2007; Poole & Van de Ven, 2004b), the P2 evaluation drew constructs from multiple theories. In designing the evaluation, the initial conceptual framework was based in research on microsystem effectiveness (especially the concepts of communication, coordination, organizational culture, and management support and involvement) (Nelson et al., 2002) and on diffusion of innovation at the organizational level (Rogers, 1995). The initial design reflected IHI's intervention strategy in each P2 system: to focus first on achieving perfect patient care in two specific clinical areas, then expand to five areas, and finally expand to full organizational transformation. In addition, we used the IOM Quality Chasm framework of six aims for patient care (IOM, 2001) and the Malcolm Baldrige National Quality Program guidelines (Baldrige National Quality Program, 2005) as frames of reference. Most systems explicitly used the IOM aims; several adopted the Baldrige criteria, and others were considering adoption. At the same time, the data collection strategy was designed, as described in the Methods section below, to capture important system experiences, dynamics and learning that were not necessarily emphasized in the original frameworks. What we report here are the factors, across theories, that emerged as most important in the systems that we studied.

STUDY METHODS

Using a mixed-methods evaluation design, a multi-disciplinary team conducted comparative case studies in 12 healthcare systems over three and one-half years.

Study Sites: The participating healthcare systems included seven systems that received RWJF funding (P2 systems) and five systems initially selected to provide a basis for distinguishing the effects of P2 participation from other improvement efforts in the healthcare environment (expanded-study systems). The 12 systems included single hospitals, multi-hospital systems, integrated delivery systems and health plans in all regions of the United States. The seven P2 systems were selected competitively by RWJF; each received $2.4 million in implementation funding over four years in addition to ongoing support from IHI. The five expanded-study systems were selected by the evaluation team to exemplify healthcare organizations of different sizes and complexity, all with strong, long-standing commitments to improvement and high-quality care. Two of these systems were selected because they initially received P2 planning grants but were not selected for implementation funding. The other three were selected because they were recognized, through public ratings and professional networks, as high-performing organizations with reputations for focusing on patient care quality improvement. Although the expanded-study systems did not participate in the implementation phase of P2, their leaders, to varying degrees, independently participated in IHI forums and learning groups other than P2.

Data Sources: The primary data source for the analyses reported here was semi-structured interviews: we conducted more than 750 interview sessions in the 12 systems over the 3.5-year study period (2002–2005). We visited each system up to seven times, conducting between 5 and 21 interview sessions per visit. At each site visit, we interviewed individuals in specified positions across the organization to obtain multiple perspectives on the organization and the changes underway. These interviews included senior leaders (CEO and clinical executive staff); senior quality improvement manager(s) and staff; and members of the interdisciplinary quality improvement project teams (e.g., middle managers, improvement staff, physicians, nurses and other frontline staff). On at least two visits to each system, we interviewed representative frontline physicians and nurses affected by improvement initiatives, in addition to those participating in the improvement project teams. We also interviewed managers responsible for information technology, human resources, customer service and other business functions as those functions related closely to the organizational transformation. Many interview sessions had multiple participants, ranging from two (for example, when the senior quality manager sat in on the interview with the CEO) to a large group when we met with a full improvement team. Outside of the improvement teams, individuals were interviewed with their peers. We recognized the drawbacks of group interviews, but given the project's broad scope and limited resources, we accepted the tradeoff: group sessions let us talk with more people than individual interviews alone would have allowed. Interviews were conducted by two- or three-person teams using semi-structured guides and lasted one to two hours. Seven members of the core evaluation team participated in the interview process, rotating interview team membership so that each team member visited all systems while at least one core team member provided continuity from one visit to the next in each system. In each interview, one team member took detailed notes, which were subsequently converted to full visit transcripts.

To augment interview data, we reviewed materials provided by the systems such as strategic plans; improvement team workplans and presentations; team and organizational performance measures; and communication materials. These materials were used to clarify, expand and document information gathered in the interviews.

Analytic Approach: We conducted analyses as longitudinal comparative case studies, using an explanation-building analytic strategy based on analysis of interview transcripts to build, test and refine our conceptual model over time. After the first three waves of interviews, we coded and sorted the site visit transcripts into descriptive meta-matrices organized by domains from the original conceptual model and by new themes that emerged empirically from the site visits. Consistent with Miles and Huberman's guidelines for comparative case studies, we first created individual site matrices in order to analyze each case individually before seeking cross-site explanations, and then cycled back and forth between analytic strategies aimed at understanding case dynamics and those aimed at understanding the effects of key variables (Miles & Huberman, 1994). For each emerging domain, we added questions to the protocols for subsequent rounds of interviews. Following each round, documented team discussions of transcripts and interview findings further defined and refined the domains. This process was iterative, following Denzin's interpretive synthesis approach of collecting multiple instances and inspecting them for essential elements, rather than Yin's replication strategy of developing a theory by studying one case in depth and then successively testing it in other sites (Miles & Huberman, 1994). Using this approach, as we gained a deeper understanding of each site's approach to improvement and transformation over time, we were able to validate themes, domains and interactions between elements. As an important step in the analytic process, we further refined the model by presenting it to the study systems in several iterations for input, feedback, revision and validation.
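To make the meta-matrix structure concrete, the short sketch below (in Python) shows one way such a matrix can be represented: rows are study sites, columns are coding domains, and each cell collects coded interview excerpts, supporting both the within-case and cross-site views described above. This is purely illustrative; the site names, domain labels, and excerpts are hypothetical placeholders, not study data, and the study itself did not necessarily use software of this kind.

    from collections import defaultdict

    # Descriptive meta-matrix: meta_matrix[site][domain] -> list of coded excerpts.
    # All site names, domain labels, and excerpt text below are hypothetical.
    meta_matrix = defaultdict(lambda: defaultdict(list))

    def code_excerpt(site, domain, excerpt):
        """File a transcript excerpt into the cell for (site, domain)."""
        meta_matrix[site][domain].append(excerpt)

    code_excerpt("site_01", "leadership", "CEO chairs the monthly improvement review...")
    code_excerpt("site_01", "alignment", "Unit goals cascade from the strategic plan...")
    code_excerpt("site_02", "leadership", "Board reviews the quality dashboard quarterly...")

    # Within-case analysis: all coded material for a single site.
    site_profile = meta_matrix["site_01"]

    # Cross-site analysis: one domain compared across all sites.
    leadership_across_sites = {
        site: cells["leadership"] for site, cells in meta_matrix.items()
    }

The same structure supports cycling between the two analytic strategies: reading one row yields a holistic picture of a single case, while reading one column compares a key variable across cases.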

Finally, as the basis for a summary rating of model presence in each system, we created from the qualitative analyses a short profile of each system on each model element. Each of the seven core members of the evaluation team reviewed the profiles and independently rated each system on each element on a 1-to-5 scale (1 = no or negligible evidence of the element present; 5 = fully present, i.e., what we think the element should look like at its best). The ratings across team members were consistent, with consistency defined as all ratings falling on one point or two adjacent points of the scale. We aggregated scores across elements and averaged them across raters to create a summary score of the extent of model presence for each site.
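As a concrete illustration of this scoring scheme, the following sketch (Python; three hypothetical raters stand in for the team's seven, and all rater, site, and element names are invented) shows the two computations described above: the consistency check, in which all raters' scores for a given system and element must fall on one point or two adjacent points of the 1-5 scale, and the summary score, formed by totaling each rater's element scores for a site and then averaging those totals across raters.

    import statistics

    # Hypothetical ratings: ratings[rater][site][element] on the 1-5 scale.
    RATERS = ["rater_1", "rater_2", "rater_3"]
    ELEMENTS = ["leadership", "alignment", "improvement_initiatives"]

    ratings = {
        "rater_1": {"site_01": {"leadership": 4, "alignment": 3, "improvement_initiatives": 4}},
        "rater_2": {"site_01": {"leadership": 4, "alignment": 4, "improvement_initiatives": 4}},
        "rater_3": {"site_01": {"leadership": 5, "alignment": 3, "improvement_initiatives": 4}},
    }

    def is_consistent(scores):
        """Consistent = all ratings fall on one point or two adjacent points."""
        return max(scores) - min(scores) <= 1

    def summary_score(site):
        """Total each rater's element scores for a site, then average across raters."""
        totals = [sum(ratings[r][site][e] for e in ELEMENTS) for r in RATERS]
        return statistics.mean(totals)

    for element in ELEMENTS:
        scores = [ratings[r]["site_01"][element] for r in RATERS]
        print(element, "consistent:", is_consistent(scores))

    print("site_01 summary score:", summary_score("site_01"))

In this toy example every element passes the adjacency check (scores differ by at most one point), and the summary score is the mean of the three raters' totals, mirroring the aggregate-then-average procedure described above.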