Introduction
The continuous improvement of quality and safety is paramount in the healthcare sector.1, 2 Quality health care is fundamentally about “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge”3 (p. 1161). The seminal report from the Institute of Medicine (IOM), To Err Is Human,4 highlighted that most medical errors are not due to individual incompetence but rather stem from systemic and procedural flaws. Inefficient and inconsistent processes, fluctuating patient demographics, diverse insurance landscapes, variations in provider training and experience, and a myriad of other elements contribute to the inherent complexity of health care delivery. In light of these challenges, the IOM underscored that the healthcare industry’s performance is often below its potential and defined six crucial aims for healthcare systems: effectiveness, safety, patient-centeredness, timeliness, efficiency, and equity.2 Effectiveness and safety are particularly addressed through process-of-care measures, which evaluate whether healthcare providers are implementing procedures known to achieve desired outcomes and avoid harmful practices. Ultimately, the objectives of healthcare quality measurement are to ascertain the impact of care on health outcomes and to evaluate the extent to which healthcare adheres to evidence-based processes or professional consensus, while also respecting patient preferences.
Given that system or process failures are the root causes of errors,5 it is essential to employ various process-improvement methodologies. These techniques help pinpoint inefficiencies, ineffective care practices, and preventable errors, thereby facilitating system-wide improvements. Each method involves performance assessment and the application of findings to drive change. This chapter will explore key quality improvement strategies and quality management tools and techniques in health care, including Failure Modes and Effects Analysis (FMEA), Plan-Do-Study-Act (PDSA), Six Sigma, Lean methodologies, and Root Cause Analysis (RCA), all of which are instrumental in enhancing healthcare quality and safety.
Measures and Benchmarks in Healthcare Quality
Measuring the impact of quality improvement initiatives is crucial to determine their effectiveness. This measurement helps ascertain if improvement efforts (1) are driving positive change in the intended primary outcomes, (2) are causing unintended consequences in other areas of the system, and (3) necessitate further adjustments to bring processes within acceptable parameters6 (p. 735). The underlying principle of quality improvement measurement is that superior performance reflects high-quality practice and that comparing performance across providers and organizations fosters a drive for excellence. Recent years have seen a significant increase in the measurement and reporting of healthcare system and process performance.1, 7–9 While public reporting of quality metrics can highlight areas needing improvement and establish benchmarks at national, state, or local levels,10, 11 some healthcare providers have expressed concerns about the public release of comparative performance data.12 Furthermore, consumers, another key audience for these reports, often struggle to interpret the data effectively, limiting their use in making informed decisions about healthcare quality.13–15
The inherent complexity of healthcare systems, service delivery, the unpredictable nature of health conditions, and the specialized roles and interdependencies among healthcare professionals16–19 all contribute to the challenges in measuring quality. One significant hurdle is the variability in attribution linked to high-level cognitive processes, discretionary decision-making, problem-solving, and accumulated experience within healthcare settings.20–22 Another measurement challenge involves distinguishing between near misses with potential for harm and adverse events that are either isolated incidents or indicative of systemic risks.23
Organizations such as the Agency for Healthcare Research and Quality (AHRQ), the National Quality Forum, and The Joint Commission advocate for the use of reliable and valid quality and patient safety measures to drive healthcare improvement. A wealth of valuable measures applicable across various care settings and processes can be found on platforms like AHRQ’s National Quality Measures Clearinghouse (http://www.qualitymeasures.ahrq.gov) and the National Quality Forum’s website (http://www.qualityforum.org). The development of these measures typically involves a rigorous process, including assessing the scientific strength of evidence from peer-reviewed literature, evaluating measure validity and reliability, determining optimal measure application (e.g., risk adjustment needs), and conducting thorough measure testing.24, 25
Quality and safety measures are essential for tracking the progress of quality improvement initiatives against external benchmarks. Healthcare benchmarking is defined as a continuous and collaborative process of measuring and comparing key work process outcomes against top performers26 to evaluate organizational performance. Two primary benchmarking types are used in healthcare quality assessment. Internal benchmarking identifies and compares best practices within a single organization and tracks practice changes over time. Data from internal benchmarking can be visualized using control charts with statistically defined upper and lower control limits. However, internal benchmarking alone may not reflect best practices across the broader healthcare landscape. Competitive or external benchmarking involves comparing performance data between different organizations to assess performance and identify successful improvement strategies implemented elsewhere. Comparative data is accessible through national reports such as AHRQ’s annual National Health Care Quality Report1 and National Healthcare Disparities Report,9 as well as from proprietary benchmarking firms and groups, such as the American Nurses Association’s National Database of Nursing Quality Indicators.
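To make the internal-benchmarking idea concrete, a control chart's limits can be computed from an organization's own run of periodic measurements. The sketch below uses the common Shewhart convention of three standard deviations around the center line; the monthly infection rates are illustrative numbers only, not data from any source cited here.

```python
from statistics import mean, stdev

def control_limits(samples, k=3.0):
    """Return (lower, center, upper) control limits for a run of
    periodic measurements, using the common mean +/- k*sigma rule."""
    center = mean(samples)
    spread = stdev(samples)
    lower = max(0.0, center - k * spread)  # a rate cannot fall below zero
    upper = center + k * spread
    return lower, center, upper

# Hypothetical monthly infection rates per 1,000 patient-days
rates = [2.1, 1.8, 2.4, 2.0, 1.9, 2.3, 2.2, 1.7, 2.5, 2.0, 1.9, 2.1]

lcl, cl, ucl = control_limits(rates)
# Points outside the limits signal special-cause variation worth investigating
out_of_control = [r for r in rates if r < lcl or r > ucl]
```

In this illustrative series every point falls within the limits, so the process would be judged stable; a point beyond either limit would prompt investigation rather than routine monitoring.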
Quality Improvement Strategies in Healthcare
Over four decades ago, Donabedian27 proposed a framework for evaluating healthcare quality based on structure, processes, and outcomes. Structure measures assess the availability, accessibility, and quality of resources, such as health insurance coverage, hospital bed capacity, and the number of nurses with advanced certifications. Process measures evaluate the delivery of healthcare services by clinicians and providers, for example, adherence to clinical guidelines for managing diabetic patients. Outcome measures reflect the ultimate results of healthcare interventions and can be influenced by environmental and behavioral factors. Examples of outcome measures include mortality rates, patient satisfaction scores, and improvements in overall health status.
Two decades later, healthcare leaders adopted methodologies from Deming’s work28 in revitalizing manufacturing industries in post-World War II Japan. Deming, considered the father of Total Quality Management (TQM), emphasized “constancy of purpose” and the systematic analysis and measurement of process steps in relation to capacity and outcomes. The TQM model is an organizational approach that integrates organizational management, teamwork, defined processes, systems thinking, and change management to foster an environment conducive to continuous improvement. This approach underscores the necessity of organization-wide commitment to quality and improvement for achieving optimal results.29
In healthcare, continuous quality improvement (CQI) is often used interchangeably with TQM. CQI serves as a framework for refining clinical practices30 and is based on the premise that every process and interaction presents opportunities for enhancement.31 Many hospital-based quality assurance (QA) programs typically concentrate on issues identified by regulatory or accreditation bodies, such as documentation audits, oversight committee reviews, and credentialing process evaluations.32 Several other strategies have been proposed for enhancing clinical practice. For example, Horn and colleagues introduced clinical practice improvement (CPI) as a “multidimensional outcomes methodology that has direct application to the clinical management of individual patients”33 (p. 160). CPI, a clinician-led approach, aims for a comprehensive understanding of healthcare delivery complexities, utilizing team collaboration, defining objectives, collecting data, assessing findings, and translating these insights into practical changes. Across these models, management and clinician commitment and active involvement are consistently identified as crucial for successful change implementation.34–36 Other quality improvement strategies emphasize the importance of management’s belief in the project, clear communication of purpose, and staff empowerment.37
Over the past two decades, quality improvement methods have increasingly focused on “identifying processes with suboptimal outcomes, measuring key performance indicators, employing rigorous analysis to develop new approaches, integrating redesigned processes, and reassessing performance to confirm improvement success”38 (p. 9). Beyond TQM, other prominent quality improvement frameworks have emerged, including ISO 9000, Zero Defects, Six Sigma, Baldrige, and the Toyota Production System/Lean Production.6, 39, 40
Quality improvement is defined as “systematic, data-guided activities designed to bring about immediate improvement in health care delivery in particular settings”41 (p. 667). A quality improvement strategy is “any intervention aimed at reducing the quality gap for a group of patients representative of those encountered in routine practice”38 (p. 13). Shojania and colleagues38 have developed a taxonomy of quality improvement strategies (see Table 1), suggesting that the choice of strategy and methodology depends on the specific nature of the quality improvement project. Additional quality improvement strategies and quality management tools and techniques in health care are accessible through AHRQ’s quality tools website (www.qualitytools.ahrq.gov) and patient safety website (www.patientsafety.gov).
Table 1
Taxonomy of Quality Improvement Strategies With Examples of Substrategies
Quality improvement projects and strategies differ fundamentally from research. While research seeks to evaluate and solve problems to produce broadly applicable results, quality improvement projects may involve smaller sample sizes, frequent intervention adjustments, and the rapid adoption of strategies that show promise.6 In a review comparing quality improvement and research, Reinhardt and Ray42 proposed four distinguishing criteria: (1) quality improvement applies research findings to practice, whereas research develops new interventions; (2) quality improvement poses minimal risk to participants, unlike research which may involve participant risk; (3) the primary audience for quality improvement is the organization itself, with findings potentially specific to that context, while research aims for generalizable conclusions across similar settings; and (4) quality improvement data is organization-specific, whereas research data is often collected from multiple organizations.
The limited volume of scientific literature on health services has historically slowed the adoption of quality improvement methods in healthcare,43, 44 but this is changing as more rigorous studies emerge. A quality improvement project increasingly resembles research when it involves practice changes, affects patient outcomes, employs randomization or blinding, and exposes patients to additional risks for the purpose of generalizability.45–47 Regardless of its classification as research, projects involving human subjects must prioritize ethical considerations, ensuring participant respect, informed consent, and scientific validity.41, 46, 48
Plan-Do-Study-Act (PDSA) Cycle
The Plan-Do-Study-Act (PDSA) model is a powerful tool for quality improvement projects and studies aimed at achieving positive changes in healthcare processes and outcomes. This methodology, extensively promoted by the Institute for Healthcare Improvement, facilitates rapid cycle improvement.31, 49 A key characteristic of the PDSA model is its iterative nature: changes are tested through small, frequent cycles of change and assessment rather than large, slow implementations,50 before system-wide changes are adopted.31, 51
PDSA cycles are designed to establish a functional or causal link between process changes (specifically in behaviors and capabilities) and desired outcomes. Langley and colleagues51 suggest addressing three core questions before initiating PDSA cycles: (1) What is the project’s goal? (2) How will we know if the goal is achieved? and (3) What actions will be taken to reach the goal? The PDSA cycle begins by defining the problem’s nature and scope, identifying potential and necessary changes, planning specific changes, determining stakeholder involvement, selecting metrics to assess change impact, and defining the target area for the strategy. Next, the planned change is implemented, and data and information are collected. The implementation results are then analyzed and interpreted by reviewing key measurements indicative of success or failure. Finally, action is taken based on the results, either by fully implementing the change or by iterating the process with further refinements.51
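The iterative logic described above can be sketched as a simple loop. The phase functions, the 90-percent aim, and the compliance-rate numbers below are hypothetical stand-ins for illustration; they are not part of the model as described by Langley and colleagues.

```python
def run_pdsa(plan, do, study, act, max_cycles=10):
    """Minimal sketch of iterative PDSA: repeat small cycles until the
    Study phase reports that the aim has been met."""
    for cycle in range(1, max_cycles + 1):
        change = plan(cycle)      # Plan: define a small test of change
        result = do(change)       # Do: implement it and collect data
        aim_met = study(result)   # Study: compare results against the aim
        act(change, aim_met)      # Act: adopt, adapt, or abandon
        if aim_met:
            return cycle
    return None  # aim not reached within the allotted cycles

# Toy example: each cycle is expected to raise a hypothetical
# compliance rate by about 5 percentage points toward a 90% aim.
state = {"rate": 70.0}
adopted = []

def plan(cycle):
    return 5.0  # the change to test this cycle

def do(change):
    state["rate"] += change
    return state["rate"]

def study(result):
    return result >= 90.0

def act(change, aim_met):
    if aim_met:
        adopted.append(change)  # adopt the change once the aim is met

cycles_needed = run_pdsa(plan, do, study, act)
```

The point of the sketch is the control flow: each pass through the loop is a complete Plan-Do-Study-Act cycle, and full adoption happens only after a cycle demonstrates that the aim has been met.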
Six Sigma Methodology
Six Sigma, initially developed as a business management strategy, focuses on enhancing, designing, and monitoring processes to minimize waste, optimize satisfaction, and improve financial stability.52 Process performance, or process capability, is the metric used to measure improvement, comparing baseline capability (pre-improvement) with post-improvement capability after piloting potential solutions.53 Six Sigma employs two primary measurement methods. The first method involves inspecting process outcomes, counting defects, calculating defects per million opportunities (DPMO), and using statistical tables to convert DPMO to a sigma (σ) metric. This approach is suitable for pre-analytical and post-analytical processes (pre-test and post-test phases). The second method uses process variation estimates to predict performance by calculating a σ metric based on defined tolerance limits and observed process variation. This method is well-suited for analytical processes where precision and accuracy can be experimentally determined.
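The first measurement method — counting defects and converting DPMO to a sigma metric — can be sketched in a few lines. Rather than a lookup table, this sketch inverts the normal distribution directly and applies the conventional 1.5-sigma shift built into published sigma tables; the specimen-labeling counts are hypothetical.

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Convert DPMO to a short-term sigma metric.

    The conventional 1.5-sigma shift is included, so 3.4 DPMO
    corresponds to roughly six sigma."""
    long_term_z = NormalDist().inv_cdf(1 - dpmo_value / 1_000_000)
    return long_term_z + shift

# Hypothetical example: 150 mislabeled specimens across 10,000
# accessions, with 3 labeling opportunities per specimen
d = dpmo(defects=150, units=10_000, opportunities_per_unit=3)
s = sigma_level(d)  # about 4.1 sigma
```

A process at roughly 4 sigma, as here, still produces thousands of defects per million opportunities, which is why Six Sigma projects treat the sigma metric as a baseline to improve rather than a pass/fail grade.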
A core component of Six Sigma is the Define, Measure, Analyze, Improve, and Control (DMAIC) approach, a structured, disciplined, and rigorous five-phase process.53, 54 The DMAIC cycle begins with project identification, review of historical data, and defining the scope of expectations. Next, continuous total quality performance standards are selected, performance objectives are defined, and sources of variability are identified. As the project is implemented, data is collected to assess the effectiveness of process changes. To support analysis, validated measures are developed to determine the capability of the new process.
Six Sigma and PDSA are interconnected methodologies. DMAIC builds upon Shewhart’s Plan, Do, Check, Act cycle.55 The key elements of Six Sigma align with PDSA phases as follows: the ‘Plan’ phase of PDSA corresponds to defining core processes, key customers, and customer requirements in Six Sigma; the ‘Do’ phase aligns with measuring performance; the ‘Study’ phase corresponds to analysis; and the ‘Act’ phase aligns with improving and integrating solutions.56
Toyota Production System/Lean Methodology
The Toyota Production System, originally applied in Toyota’s automotive manufacturing,57 led to the development of the Lean Production System, or simply Lean methodology. While overlapping with Six Sigma, Lean distinguishes itself by focusing on customer needs and process improvement through the elimination of non-value-added activities (waste). Lean methodology involves maximizing value-added activities in an optimal sequence to ensure continuous operations.58 Root cause analysis is a critical component of Lean, used to investigate errors, enhance quality, and prevent recurrence.
Healthcare settings, including pathology laboratories, pharmacies,59–61 and blood banks,61 are increasingly adopting Lean principles to improve patient care effectiveness and reduce costs. Reviews of Lean implementation projects in healthcare show that organizations have improved patient safety and care quality by systematically defining problems, employing root cause analysis, setting clear goals, eliminating ambiguity and workarounds, and clarifying responsibilities. Project teams using Lean develop action plans to improve, simplify, and redesign work processes.59, 60 According to Spear, the Toyota Production System method clarifies “which patient gets which procedure (output); who does which aspect of the job (responsibility); exactly which signals are used to indicate that the work should begin (connection); and precisely how each step is carried out”60 (p. 84).
Successful Lean implementation in healthcare involves eliminating unnecessary daily activities linked to “overcomplicated processes, workarounds, and rework”59 (p. 234), engaging frontline staff throughout the process, and rigorously tracking problems during the problem-solving phase.
Root Cause Analysis (RCA)
Root cause analysis (RCA), widely used in engineering62 and similar to the critical incident technique,63 is a structured investigation and problem-solving approach focused on identifying the fundamental causes of events, including near misses. The Joint Commission mandates RCA for all sentinel events, requiring organizations to develop and implement action plans based on RCA findings to mitigate future event risks and monitor improvement effectiveness.64
RCA is a technique used to identify trends and assess risks whenever human error is suspected,65 grounded in the understanding that systemic factors, rather than individual actions, are typically the root causes of most problems.2, 4 The critical incident technique, a similar approach, involves collecting data on the causes and actions leading to an event after it has occurred.63
RCA is a reactive assessment initiated after an event, retrospectively outlining the sequence of events, identifying causal factors, and pinpointing root causes to comprehensively understand the event.66 Due to its intensive nature, RCA is ideally conducted by a multidisciplinary team trained in the methodology to triangulate findings and enhance validity.67 Aggregate RCA, used by the Veterans Affairs (VA) Health System, aims to optimize staff time by conducting multiple simultaneous RCAs focusing on trend assessment rather than in-depth case analysis.68
Using a qualitative approach, RCA aims to uncover the underlying causes of errors by examining contributing factors, including enabling factors (e.g., inadequate training), latent conditions (e.g., failure to verify patient ID), and situational factors (e.g., patients with the same name) that contribute to adverse events (e.g., medication errors). The investigation involves asking key questions, such as what happened, why it happened, what immediate factors contributed, why those factors were present, and what underlying systems and processes are implicated. Answering these questions helps identify ineffective safety barriers and systemic issues to prevent similar problems. Examining events immediately preceding the event in question is also crucial, as remote factors may also contribute.68
The final step in traditional RCA is developing system and process improvement recommendations based on investigation findings.68 This step is critical, as literature reviews suggest that RCA alone may not be sufficient to improve patient safety without subsequent action implementation.69 Aggregate RCA, a non-traditional approach used by the VA, involves simultaneous reviews of multiple cases within event categories to enhance efficiency.68, 70
Given the wide range of adverse event types and root causes, it is important to differentiate between system and process factors without assigning individual blame. It is widely recognized that errors rarely stem from irresponsibility, personal neglect, or intentional harm,71 a view supported by the IOM.4, 72 However, categorization systems for individual errors, such as the Taxonomy of Error Root Cause Analysis of Practice Responsibility (TERCAP), which focuses on factors like “lack of attentiveness, lack of agency/fiduciary concern, inappropriate judgment, lack of intervention on the patient’s behalf, lack of prevention, missed or mistaken MD/healthcare provider’s orders, and documentation error”73 (p. 512), may divert attention from systemic and process factors that can be modified through targeted interventions. Even individual factors can often be addressed through improved education, training, and the implementation of forcing functions that make errors less likely.
Failure Modes and Effects Analysis (FMEA)
Errors are inevitable and unpredictable. Failure modes and effects analysis (FMEA) is a proactive evaluation technique used to identify and mitigate potential failures, problems, and errors in a system, design, process, or service before they occur.74–76 Originally developed for the U.S. military and utilized by NASA, FMEA predicts and evaluates potential failures and unrecognized hazards (e.g., probabilistic events) and proactively identifies process steps for failure reduction or elimination.77 The goal of FMEA is to prevent errors by identifying all potential failure modes, estimating their probability and consequences, and implementing preventive actions. In healthcare, FMEA focuses on the care system and involves multidisciplinary teams to evaluate processes from a quality improvement perspective.
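In classic industrial FMEA, the estimate of each failure mode's probability and consequences is often condensed into a risk priority number (RPN): the product of severity, occurrence, and detectability ratings. The sketch below follows that industrial convention; the failure modes and ratings are hypothetical, and healthcare adaptations such as HFMEA use a different scoring scheme.

```python
def risk_priority_number(severity, occurrence, detection):
    """Classic FMEA risk priority number. Each factor is rated 1-10;
    a higher detection rating means the failure is harder to detect."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("each rating must be between 1 and 10")
    return severity * occurrence * detection

# Hypothetical failure modes in a medication-dispensing process,
# each as (description, severity, occurrence, detection)
failure_modes = [
    ("wrong drug selected", 9, 3, 4),
    ("dose miscalculated", 8, 4, 3),
    ("label misread", 6, 5, 2),
]

# Rank failure modes so the team addresses the highest-risk ones first
ranked = sorted(
    failure_modes,
    key=lambda fm: risk_priority_number(*fm[1:]),
    reverse=True,
)
```

Ranking by RPN gives the multidisciplinary team an explicit, shared prioritization of where preventive actions should be directed first.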
FMEA can assess alternative processes or procedures and monitor changes over time, requiring well-defined measures to objectively track process effectiveness. In 2001, The Joint Commission mandated that accredited healthcare providers conduct proactive risk management activities to identify and predict system weaknesses and implement changes to minimize patient harm in one or two high-priority areas annually.78
Health Failure Modes and Effects Analysis (HFMEA)
Developed by the VA’s National Center for Patient Safety, Health Failure Modes and Effects Analysis (HFMEA) is a risk assessment tool involving five steps: (1) define the topic; (2) assemble a multidisciplinary team; (3) develop a process map, numbering each step and substep; (4) conduct a hazard analysis (identify failure mode causes, score each using a hazard scoring matrix, and use decision tree analysis);79 and (5) develop actions and desired outcomes. Hazard analysis involves listing all possible failure modes for each process step, determining if further action is needed, and listing causes for each failure mode requiring action. Post-hazard analysis, crucial steps include defining necessary actions and outcome measures, specifying what will be eliminated or controlled, and assigning responsibility for each new action.79
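The hazard analysis in step 4 can be sketched as follows. The 4×4 severity-by-probability matrix and the action threshold of 8 reflect common descriptions of the VA hazard scoring matrix, but the category names and threshold here should be checked against the published tool before use.

```python
# Assumed 1-4 ratings, following common descriptions of the VA matrix
SEVERITY = {"minor": 1, "moderate": 2, "major": 3, "catastrophic": 4}
PROBABILITY = {"remote": 1, "uncommon": 2, "occasional": 3, "frequent": 4}

def hazard_score(severity, probability):
    """Hazard score for one failure-mode cause: severity x probability."""
    return SEVERITY[severity] * PROBABILITY[probability]

def needs_action(severity, probability, threshold=8):
    """A failure-mode cause proceeds to decision-tree analysis when its
    hazard score meets the assumed threshold."""
    return hazard_score(severity, probability) >= threshold

# Example: a catastrophic-but-remote cause scores 4 and would be screened
# out at this threshold, while a major-and-occasional cause scores 9 and
# would proceed to decision-tree analysis.
```

The scoring step is what lets the multidisciplinary team triage a long list of possible failure modes, reserving the labor-intensive decision-tree analysis for the causes that score above the threshold.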
Research Evidence on Quality Improvement Implementation
An analysis of fifty studies and quality improvement projects revealed several recurring themes related to implementing quality management tools and techniques in health care. These themes were categorized by the type of quality method used (FMEA, RCA, Six Sigma, Lean, and PDSA) and included: (1) prerequisites for implementing quality improvement strategies, (2) insights from evaluating change intervention impacts, and (3) current knowledge on using quality improvement tools in healthcare.
Prerequisites for Implementing Quality Improvement Strategies
Strong and visible leadership support,80–83 active involvement,81, 84 consistent commitment to continuous quality improvement,85, 86 and tangible presence,87 both in communication and action,86 are essential for driving significant changes. Substantial commitment from hospital boards is also necessary.86, 88 Resource demands associated with process changes necessitate senior leadership to: (1) ensure adequate financial resources87–89 by allocating funds for training, innovative technologies,90 and equipment;91 (2) enable key personnel to dedicate sufficient time to change processes,85, 88, 89 providing administrative support;90 (3) support time-intensive projects by allowing adequate time for success;86, 92 and (4) emphasize safety as a top organizational priority, reinforcing expectations even during project delays or when initial results are not immediately apparent.87 Senior leaders must also understand the impact of high-level decisions on workflows and staff time,88 especially during practice changes, and integrate quality improvement into system-wide leadership development programs.88 Leadership should prioritize patient safety in all meetings and strategies,85, 86 establish formal processes for setting annual patient safety goals, and hold themselves accountable for patient safety outcomes.85
Despite strong leadership, organizational resistance to quality improvement efforts may arise due to past failed change initiatives,93 lack of organization-wide commitment,94 poor interdepartmental relationships, and ineffective communication.89 However, these barriers can be mitigated by fostering a culture that embraces change,95 actively institutionalizing a safety and quality improvement culture,90 and demonstrating organizational commitment to change. Creating a non-punitive culture of change is a gradual process,61, 90 sometimes requiring legal department involvement to shift focus from individual blame to systemic issues.96 Staff involvement in process improvement is enhanced when cost savings are realized and job security is protected despite efficiency gains.84
Successful improvement processes require engaging97 and involving all stakeholders, fostering an understanding that resource investments in quality improvement can yield returns through efficiency gains and reduced adverse events.86 Stakeholders are crucial for: (1) prioritizing safe practices through consensus building,86, 98 focusing on clinically significant issues impacting daily practice and patient safety; (2) developing solutions addressing fundamental interdisciplinary communication and teamwork, essential for a safety culture; and (3) learning from successes in other hospitals.86 Successful rapid-cycle collaboratives involve stakeholders in subject selection, objective definition, role and expectation setting, team motivation, and data analysis utilization.86 It is also vital to consider diverse stakeholder perspectives,97 manage variations in opinion,99 and secure buy-in through early stakeholder involvement, feedback solicitation,100 and support for critical process changes.101
Effective communication and information sharing with stakeholders and staff are critical for clarifying quality initiative purpose and strategy;101 establishing open communication channels across disciplines and leadership levels to voice concerns and observations;88 ensuring patient and family inclusion in dialogues; fostering a sense of shared responsibility for patient safety; disseminating lessons from root cause analyses; and highlighting patient safety stories and celebrating successes to gain attention and buy-in.85 However, some staff may resist system changes based on data, despite efforts to keep everyone informed.89
Motivated80 and empowered teams are essential for successful strategy implementation. Multidisciplinary teams reviewing data and leading change offer numerous advantages.91 These teams should include the right staff,91, 92 peers,102 and stakeholders from senior management to frontline staff, with strong leadership support.85, 86 Specific stakeholders (e.g., nurses and physicians) must be involved81 and supported to champion and implement changes and solve departmental problems.59 Given that quality initiatives often require significant shifts in daily clinical work,86 considering frontline staff attitudes and willingness to adopt improvements is crucial.59, 88, 104
Other key success factors include implementing adaptable protocols,93 tailored to patient needs and unit-specific contexts based on experience, training, and culture.88 Defining and testing diverse approaches is important, as different paths can converge to the same outcome.81 Mechanisms to enhance staff buy-in include highlighting error types and causes,102 engaging staff in work assessment and waste identification,59 providing insights into project feasibility and measurable impact,105 and presenting evidence-based changes.100 Physician leadership106 or active involvement86 is particularly critical, especially when physician behaviors contribute to inefficiencies.84 Physician champions can advocate for patient safety and integrate it into leadership and medical management strategies.85
Team leaders and team composition are also vital. Leaders who foster strong relationships offline are essential for team success.83, 93 Dedicated team leaders with sufficient time commitment are needed.84 While leader roles vary, co-leadership by a physician and administrator has been effective.83 Visible champions are crucial for initiative visibility.100 Multidisciplinary teams should understand the numerous steps in quality improvement and potential error points, enabling prioritization of critical areas and reducing subjectivity in analysis. Diverse professional perspectives within teams facilitate step identification, barrier anticipation, idea generation, and robust discussions, promoting team building.100, 107 FMEA/HFMEA minimizes group biases by leveraging multidisciplinary team diversity and focusing on structured goal outlines.107, 108
Teams need preparation and enablement through ongoing education, weekly debriefings, problem review, principle application,84 and continuous monitoring and feedback.92, 95 Staff80, 95, 101, 104 and leadership80 education on current problems, quality improvement tools, planned interventions, and project updates is key.92 Training is an ongoing process91 addressing skill deficits82 and adapting to lessons learned and data analysis during project implementation.109 Training needs extend to senior staff and leadership.105 Consultants or facilitators with advanced quality improvement expertise can support teams lacking experience.106 Community-hospital interface models coupled with education programs can also be beneficial.97
Teamwork processes enhance interdepartmental relationships.89 Effective team building,110 rapid-cycle (PDSA) model implementation, frequent meetings, and monthly outcome data analysis are crucial.86 Team effectiveness relies on teamwork, communication, information transfer, interdepartmental coordination, and organizational culture changes.86 However, competing workloads can hinder team member engagement.97 Improved understanding of team roles is a significant project outcome, fostering continued practice development.97 Team motivation is sustained through progress sharing and achievement celebration.87
Teamwork benefits include increased knowledge scope, improved interdisciplinary communication, and enhanced problem understanding.111 Proactive teams integrate technical processes and organizational relationships,83 collaborating to understand situations, define problems, pathways, tasks, and connections, and develop multidisciplinary action plans.59 Teamwork can be challenging, time-consuming,111 and prone to conflicts when individual preferences clash,97 delaying consensus. Team members must learn to navigate group dynamics, peer confrontation, conflict resolution, and address detrimental behaviors.111
Insights from Evaluating Change Intervention Impacts
As Berwick112 suggested, successful quality improvement initiatives emphasize simplification;96, 104 standardization;104 stratification for effect analysis; improved auditory communication; communication support against authority gradients;96 proper default utilization; cautious automation;96 affordance and natural mapping (designing processes and equipment for intuitive use); respecting vigilance and attention limits;96 and encouraging near-miss, error, and hazard reporting.96 Policy and procedure revisions and standardizations have effectively made new processes easier than old ones, reducing human error from vigilance and attention limitations.78, 80–82, 90–92, 94, 96, 102, 103, 113, 114
Simplification and standardization are effective forcing functions, reducing reliance on individual decision-making. Standardizing medication ordering and administration protocols78, 87, 101, 103, 106–108, 109, 114–116 has improved patient outcomes, nurse efficiency, and effectiveness.103, 106, 108, 109, 114–116 Standardized blood product ordering forms have also been beneficial.94 Standardized metrics and tools for pain assessment and management have improved pain management.80, 93, 100, 117 Across these initiatives, simplification and standardization were effective strategies.
Information technology (IT) offers benefits through checks, defaults, and automation, enhancing quality and reducing errors by embedding forcing functions.96, 106 Redundancy, such as double-checking, mitigates human error, leveraging the skills of two practitioners,61, 101 effectively reducing dosing errors.78 IT has been used to: (1) automate processes to decrease human error opportunities;61 (2) standardize medication concentrations78 and dosing using computer-enabled calculations,115, 116 standardized protocols,101 and order clarity;116 (3) provide alerts and reminders for quality care; (4) improve medication safety through bar coding and computerized provider order entry; and (5) track performance via database integration and indicator monitoring. Workflow and procedure revisions are necessary to align with technology advancements.78 Technology investment demonstrates organizational commitment to improvement,85 though resource limitations for data collection can hinder initiative analysis and evaluation.93, 97
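The embedded forcing functions described above, such as computer-enabled dose calculations with standardized limits, can be sketched in a few lines. This is a minimal illustration: the drug names, the mg/kg limits, and the `check_dose` helper are hypothetical, not clinical values or any vendor's API.

```python
# Minimal sketch of an order-entry dose check acting as a forcing function:
# an order outside the configured range is rejected rather than merely flagged.
# Drug names and limits below are invented for illustration only.

DOSE_LIMITS_MG_PER_KG = {          # hypothetical per-dose limits
    "heparin": (50.0, 100.0),
    "gentamicin": (1.0, 2.5),
}

def check_dose(drug: str, dose_mg: float, weight_kg: float) -> tuple[bool, str]:
    """Return (accepted, message) for a proposed weight-based dose."""
    if drug not in DOSE_LIMITS_MG_PER_KG:
        return False, f"{drug}: no dosing rule on file; order requires pharmacist review"
    low, high = DOSE_LIMITS_MG_PER_KG[drug]
    per_kg = dose_mg / weight_kg
    if per_kg < low:
        return False, f"{drug}: {per_kg:.2f} mg/kg below minimum {low} mg/kg"
    if per_kg > high:
        return False, f"{drug}: {per_kg:.2f} mg/kg exceeds maximum {high} mg/kg"
    return True, f"{drug}: {per_kg:.2f} mg/kg within range"

print(check_dose("heparin", 5600, 70))   # 80 mg/kg -> accepted
print(check_dose("heparin", 14000, 70))  # 200 mg/kg -> rejected
```

The design point is that the check returns a hard rejection, not just an alert, which is what distinguishes a forcing function from a reminder.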
Data and information are crucial for understanding error and near-error root causes,99 adverse event magnitude,106 performance tracking,84, 118 and initiative impact assessment.61 Near-miss, error, and hazard reporting should be encouraged.96 Error reporting is often low and influenced by organizational culture,106 potentially biased, skewing results.102 Organizations not prioritizing safety culture may underreport errors and near misses (see Chapter 35. “Evidence Reporting and Disclosure”). Data analysis is critical, and staff may benefit from training in effective analysis and presentation.106 Transparent feedback of findings82 brings patient safety to the forefront.107 Lack of data hinders statistical analysis115 and cost-benefit assessments.108 Multi-organizational collaborations should utilize common databases.98
Measures and benchmarks enhance data interpretation. Repeated measurements monitor progress,118 especially with clear success metrics.83 Measures can engage clinicians, particularly physicians. Objective, broader measures mark progress and drive action and celebration.106 Demonstrating links between care process changes and outcomes is crucial when using care process measures.61
Multiple measures and improved documentation facilitate impact assessment on patient outcomes.93 Hospital administrators should encourage initiative evaluations focusing on patient outcomes, satisfaction, and cost-effectiveness.114 Realistic goals, not unattainable 100% change targets,119 and comparisons to state, regional, and national benchmarks61, 88 enhance outcome assessments.
Initiative cost is a crucial factor, even when adverse effects necessitate rapid change.106 Feasible changes with minimal practice disruption are preferred.99 Replicability across units or sites is important.99 Standardizing processes improves replicability but may incur costs.106 Rapid resolution of small problems facilitates system-wide replication.84, 106 Cost-effective recommendations are implemented quickly.93, 107 Cost reductions and shorter patient stays have been reported,103 though data verification is needed. Change costs can be offset by return on investment or reduced liability from patient risk reduction.61
Staff education is essential for initiative success. Pain management initiatives demonstrated that staff education on guidelines and protocols improved understanding, assessment, documentation, patient/family satisfaction, and pain management.80, 93 Staff nurse education on IV site care and central line assessment improved patient satisfaction, reduced complications, and lowered costs.109
Despite these benefits, initiatives encountered numerous implementation challenges.
Despite these challenges, perseverance and focus are crucial, as implementing new processes can be difficult,84, 100 but the rewards of quality improvement are worthwhile.84 Quality improvement is time-consuming, resource-intensive (time, money, energy),94 and involves trial and error.91 Celebrating successes is vital.84
Sustaining changes post-implementation is a key objective.105 Quality improvement should be integral to organization-wide, ongoing improvement processes. Factors influencing success include bedside-friendly practice changes;82 simple communication strategies;88 maximized project visibility to sustain momentum;100 safety culture establishment; and infrastructure strengthening.121 However, there are differing views on whether to spread change steps through process redesign or solely rely on best practice adaptation.106 Enthusiasm for change can be generated through internal and external collaboration103 and healthy competition. Collaboratives can promote evidence-based practice, rapid-cycle improvement, and consensus on best practices.86, 98
Current Knowledge on Using Quality Improvement Tools in Healthcare
Quality management tools and techniques in health care are valuable for defining and assessing healthcare problems, prioritizing quality and safety issues,99 and focusing on systems,98 not individuals. These tools address errors, rising costs,88 and aim to change provider practices.117 Many initiatives use multiple tools, starting with RCA and then applying Six Sigma, Lean, or PDSA for process change implementation. Pretesting/pilot testing is common.92, 99 Specific tool advantages include:
Root Cause Analysis (RCA): Useful for assessing reported errors/incidents, differentiating active and latent errors, identifying policy/procedure changes, and suggesting system improvements, including risk communication enhancement.82, 96, 102, 105
Six Sigma/Toyota Production System (Lean): Successfully decreases defects/variations59, 61, 81 and operating costs81 and improves outcomes across healthcare settings and processes.61, 88 Six Sigma clearly differentiates between variation causes and process outcome measures.61 It makes workarounds and rework difficult by targeting pre-implementation process root causes.59, 88 Team proficiency and effectiveness increase with experience.84 Effective use requires leadership time and resource commitment, yielding improved patient safety, lower costs, and increased job satisfaction.84 Six Sigma is valuable for problem-solving, continuous improvement, clear problem communication, implementation guidance, and objective result presentation.59
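Six Sigma's defect focus is usually communicated through two standard metrics: defects per million opportunities (DPMO) and the corresponding sigma level. A minimal sketch of that arithmetic, with invented counts (the medication-error scenario is an assumption for illustration):

```python
# Sketch: translating defect counts into the DPMO and sigma-level metrics that
# Six Sigma projects use to compare baseline vs. post-change performance.
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """Conventional short-term sigma level (includes the customary 1.5-sigma shift)."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# e.g., 12 medication-administration defects in 800 doses,
# with 5 error opportunities counted per dose (invented numbers)
d = dpmo(defects=12, units=800, opportunities_per_unit=5)
print(f"DPMO = {d:.0f}, sigma level = {sigma_level(d):.2f}")
```

Comparing the sigma level before and after a change gives teams the kind of objective result presentation the method is valued for.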
Plan-Do-Study-Act (PDSA): Widely used for gradual initiative implementation and iterative improvement. Rapid-cycle PDSA starts with piloting a single new process, examining results, problem-solving, adjusting, and initiating subsequent cycles. Gradual implementation through small, rapid cycles is more effective than large, slow changes,80 allowing early process adjustments and focused attention to key details.87, 119, 122 PDSA effectiveness is enhanced by training on PDSA cycles, feedback on baseline measurements,118 regular meetings,120 and collaboration with others, including patients and families,80 to achieve common goals.87 Challenges include difficulty in rapid-cycle change, data collection, and run chart construction,86 with some suggesting simpler rules may be more effective in complex systems.93
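The run-chart analysis that PDSA teams found challenging can be quite simple in its basic form: plot each cycle's measurement against the baseline median and look for non-random patterns, such as a sustained run of points on one side of the median. A minimal sketch, with invented monthly error counts and an assumed six-point run rule:

```python
# Sketch of a simple between-cycle run-chart check for a rapid-cycle PDSA team:
# flag a "shift" when six or more consecutive points fall on one side of the
# baseline median. The monthly counts are invented for illustration.
from statistics import median

baseline = [14, 12, 15, 13, 14, 16]      # monthly errors before the change
post_change = [11, 9, 8, 9, 7, 8, 9]     # monthly errors after the change
center = median(baseline)

def longest_run_below(values, center_line):
    """Length of the longest run of consecutive points below the center line."""
    longest = current = 0
    for v in values:
        current = current + 1 if v < center_line else 0
        longest = max(longest, current)
    return longest

run = longest_run_below(post_change, center)
print(f"baseline median = {center}, longest run below = {run}")
if run >= 6:
    print("Shift detected: evidence the change affected the process")
```

A check this small can be rerun at the end of every cycle, which fits the rapid-cycle emphasis on early adjustment.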
Failure Modes and Effects Analysis (FMEA): Used prospectively and retrospectively to avoid events and improve care quality.123 FMEA prospectively identifies potential failure areas94 and retrospectively characterizes process safety by identifying potential failures from staff perspectives.94 Process flowcharts focus teams and ensure shared understanding.94 FMEA data prioritizes improvement strategies, benchmarks improvement efforts,116 informs practice change diffusion,115 and enhances team ability to drive change across services and departments.124 FMEA facilitates systematic error management, crucial in complex processes and settings, relying on multidisciplinary approaches, integrated incident/error reporting, decision support, standardized terminology, and caregiver education.116
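FMEA's prioritization step is conventionally done by rating each failure mode for severity, occurrence, and detectability and multiplying the ratings into a risk priority number (RPN). A minimal sketch of that scoring; the failure modes and ratings below are invented for illustration:

```python
# Sketch of FMEA scoring: each potential failure mode is rated 1-10 for
# severity, occurrence, and detectability, and the risk priority number
# (RPN = S x O x D) ranks where to focus improvement effort first.

failure_modes = [
    # (description, severity, occurrence, detectability: 10 = hardest to detect)
    ("wrong-patient label on specimen", 9, 3, 7),
    ("IV pump programmed with wrong rate", 8, 4, 5),
    ("delayed critical lab result callback", 6, 5, 3),
]

ranked = sorted(
    ((s * o * d, desc) for desc, s, o, d in failure_modes), reverse=True
)
for rpn, desc in ranked:
    print(f"RPN {rpn:4d}  {desc}")
```

The ranked list is what lets a multidisciplinary team agree on which failure mode to redesign first, and re-scoring after the change benchmarks the improvement.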
Health Failure Modes and Effects Analysis (HFMEA): Provides detailed analysis of smaller and larger processes, leading to specific recommendations. HFMEA is a valid tool for proactive hospital risk analysis, thoroughly assessing vulnerabilities (failure modes) before adverse events occur.108 It identifies the multifactorial nature of errors108 and potential error risks,111 but is time-consuming.107 HFMEA minimizes group biases through multidisciplinary teams78, 108, 115 and facilitates teamwork via a step-by-step process,107 requiring a paradigm shift for many.111
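HFMEA's step-by-step process replaces the 1-10 FMEA scales with coarser ratings: severity and probability are each rated 1-4 and multiplied into a hazard score, and decision-tree questions (single-point weakness, existing effective control) then determine whether a failure mode proceeds to action. A minimal sketch of that logic; the threshold and example calls are illustrative assumptions, not the VA's published worksheet:

```python
# Sketch of HFMEA-style hazard scoring: severity x probability (each 1-4)
# yields a hazard score, and decision-tree logic keeps a failure mode for
# action if the score is high or it is an uncontrolled single-point weakness.
# Threshold of 8 is an assumed cutoff for illustration.

def needs_action(severity: int, probability: int,
                 single_point_weakness: bool, effective_control: bool) -> bool:
    hazard_score = severity * probability          # ranges 1..16
    if hazard_score >= 8:
        return True
    return single_point_weakness and not effective_control

print(needs_action(4, 2, False, True))    # score 8 -> proceed
print(needs_action(2, 2, True, False))    # low score, but uncontrolled
                                          # single-point weakness -> proceed
print(needs_action(2, 2, False, True))    # low score, controlled -> stop
```

Encoding the decision tree this way shows why HFMEA is both thorough and time-consuming: every failure mode passes through the same structured questions.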
Evidence-Based Practice Implications for Quality Improvement
Several themes emerged from successful quality improvement initiatives that nurses can apply to guide their efforts. The strength of these practice implications is based on the methodological rigor and generalizability of the assessed strategies and projects:
- Leadership Commitment and Support: Strong leadership commitment and support are indispensable. Leaders must empower staff, be actively involved, and continuously promote quality improvement. Without senior leadership commitment, even well-designed projects are likely to fail. Quality initiative and improvement champions are needed throughout the organization, particularly in leadership roles and within project teams.
- Culture of Safety and Improvement: Cultivating a culture that values and rewards improvement is vital. This culture should support a quality infrastructure with the necessary resources and human capital for successful quality enhancement.
- Stakeholder Involvement: Quality improvement teams must include the right stakeholders, ensuring diverse perspectives and buy-in.
- Multidisciplinary Teams and Strategies: Given healthcare complexity, multidisciplinary teams and strategies are essential. Teams from participating centers/units should collaborate closely, utilizing effective communication methods like face-to-face meetings, conference calls, and dedicated email lists. Trained facilitators and expert faculty can provide valuable guidance during change implementation.
- Problem and Root Cause Understanding: Teams and stakeholders must thoroughly understand the problem and its root causes, establishing a consensus on problem definition. A clearly defined and universally agreed-upon metric is crucial, as vital as data validity itself.
- Proven, Methodologically Sound Approach: Employ a proven, methodologically sound approach, focusing on clear models, terms, and processes rather than being overwhelmed by quality improvement jargon. Many quality management tools and techniques in health care are interrelated; using a combination of tools is often more effective than relying on a single method.
- Standardized Care Processes: Standardizing care processes and ensuring universal adherence to these standards can enhance efficiency, effectiveness, and improve organizational and patient outcomes.
- Evidence-Based Practice Integration: Evidence-based practice should be integrated to facilitate ongoing quality improvement efforts, ensuring interventions are grounded in best available evidence.
- Flexible Implementation Plans: Implementation plans must be flexible and adaptable to accommodate necessary changes as they arise during the improvement process.
- Multiple Improvement Purposes: Quality improvement efforts can serve multiple purposes, including redesigning care processes for efficiency and effectiveness, improving customer satisfaction, enhancing patient outcomes, and improving organizational climate.
- Appropriate Technology Utilization: Strategic use of technology can enhance team functioning, foster collaboration, reduce human error, and improve patient safety.
- Sufficient Resource Allocation: Initiatives require sufficient resources, including dedicated staff time, financial support, and necessary tools and technologies.
- Continuous Data Collection, Analysis, and Results Communication: Continuously collect and analyze data and communicate results on critical indicators across the organization. The primary goal of quality assessment and monitoring is to use findings to evaluate performance and identify areas for further improvement.
- Time and Perseverance: Recognize that change takes time and requires sustained focus and perseverance. Quality improvement is an ongoing journey, not a quick fix.
Research Implications for Quality Improvement
Assessing quality improvement in healthcare is complex and dynamic. The body of knowledge in this area is growing, albeit slowly, partly due to ongoing debates about whether quality improvement initiatives qualify as research requiring methodological rigor for publication. While various quality management tools and techniques in health care have been used since Donabedian’s 1966 publication,27 Six Sigma and similar methodologies have only recently been applied and published in healthcare quality improvement, often focusing on isolated system components, hindering organizational learning and generalizability. Despite the long-standing importance of quality improvement, driven by external entities like CMS and The Joint Commission, numerous organizational quality improvement efforts may remain unpublished and thus not captured in reviews, potentially not meeting peer-reviewed publication criteria. Researchers, leaders, and clinicians need to define generalizable and publishable quality improvement work to advance knowledge in this field.
While many quality improvement projects report clinical, functional, patient, and staff satisfaction outcomes, cost and utilization outcomes measurement remains crucial, especially when variation occurs. Several key questions warrant further research:
- How can quality improvement efforts effectively balance the needs of patients, insurers, regulators, and staff to achieve success?
- What are the most effective methods for prioritizing improvement areas and addressing the competing needs of diverse stakeholders?
- What is the acceptable threshold of variation required to achieve consistently desired results in healthcare processes?
- How can bottom-up approaches to changing clinical practice succeed in the absence of supportive senior leadership or organizational cultures that resist change?
Researchers planning quality improvement initiatives or research should utilize conceptual models, such as the quality tools discussed, to guide their work. To enhance the generalizability of empirical findings, increasing sample sizes through collaborations across organizations and providers is essential. Further research is needed to determine which tools, used alone or in combination, are most effective. Mixed methods research, including non-research methodologies, may offer a more comprehensive understanding of quality improvement science. Understanding how tailored implementation interventions impact process and patient outcomes, and identifying the most effective cross-intervention steps, are also critical research areas. Finally, more research is needed to determine which strategies or combinations of strategies work best for whom, in what contexts, why they succeed in some settings and fail in others, and the underlying mechanisms driving their effectiveness.
Conclusions
Regardless of the specific method acronym (e.g., TQM, CQI) or tool (e.g., FMEA, Six Sigma), the essence of quality improvement lies in a dynamic process often employing multiple quality management tools and techniques in health care. Successful quality improvement hinges on five essential elements: fostering and sustaining a culture of change and safety, developing a clear and shared understanding of the problem, engaging key stakeholders, testing change strategies, and continuously monitoring performance and reporting findings to maintain improvements.
Search Strategy
To identify quality improvement efforts for this systematic review, PubMed and CINAHL databases were searched from 1997 to the present. Keywords and terms included: “Failure Modes and Effects Analysis/FMEA,” “Root Cause Analysis/RCA,” “Six Sigma,” “Toyota Production System/Lean,” and “Plan Do Study Act/PDSA.” This search yielded 438 articles. Inclusion criteria were: reported processes involving nursing, projects/research using FMEA, RCA, Six Sigma, Lean, or PDSA methods, qualitative and quantitative analyses, and reporting patient outcomes. Exclusions were projects/research not involving nursing teams, lacking sufficient process and outcome descriptions, nursing not directly involved in patient/study outcomes, or settings in developing countries. Findings from included projects/research were grouped into common themes related to applied quality improvement.
Evidence Table
Quality Methods Evidence Table
References
-
National Healthcare Quality Report. Rockville, MD: Agency for Healthcare Research and Quality; 2006. [Accessed March 16, 2008]. http://www.ahrq.gov/qual/nhqr06/nhqr06.htm.
-
Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001. pp. 164–80. [PubMed: 25057539]
-
Lohr KN, Schroeder SA. A strategy for quality assurance in Medicare. N Engl J Med. 1990;322:1161–71. [PubMed: 2406600]
-
Institute of Medicine. To err is human: building a safer health system. Washington, DC: National Academy Press; 1999.
-
McNally MK, Page MA, Sunderland VB. Failure mode and effects analysis in improving a drug distribution system. Am J Health Syst Pharm. 1997;54:171–7. [PubMed: 9117805]
-
Varkey P, Peller K, Resar RK. Basics of quality improvement in health care. Mayo Clin Proc. 2007;82(6):735–9. [PubMed: 17550754]
-
Marshall M, Shekelle P, Davies H, et al. Public reporting on quality in the United States and the United Kingdom. Health Aff. 2003;22(3):134–48. [PubMed: 12757278]
-
Loeb J. The current state of performance measurement in healthcare. Int J Qual Health Care. 2004;16(Suppl 1):i5–9. [PubMed: 15059982]
-
National Healthcare Disparities Report. Rockville, MD: Agency for Healthcare Research and Quality; 2006. [Accessed March 16, 2008]. Available at: http://www.ahrq.gov/qual/nhdr06/nhdr06.htm.
-
Schoen C, Davis K, How SKH, et al. U.S. health system performance: a national scorecard. Health Affairs. 2006:w457–75. [PubMed: 16987933]
-
Wakefield DS, Hendryx MS, Uden-Holman T, et al. Comparing providers’ performance: problems in making the ‘report card’ analogy fit. J Healthc Qual. 1996;18(6):4–10. [PubMed: 10162089]
-
Marshall M, Shekelle PG, Leatherman S, et al. The public release of performance data: what do we expect to gain, a review of the evidence. JAMA. 2000;283:1866–74. [PubMed: 10770149]
-
Schneider EC, Lieberman T. Publicly disclosed information about the quality of health care: response of the U.S. public. Qual Health Care. 2001;10:96–103. [PMC free article: PMC1757976] [PubMed: 11389318]
-
Hibbard JH, Harris-Kojetin L, Mullin P, et al. Increasing the impact of health plan report cards by addressing consumers’ concerns. Health Affairs. 2000 Sept/Oct;19:138–43. [PubMed: 10992661]
-
Bentley JM, Nash DB. How Pennsylvania hospitals have responded to publicly released reports on coronary artery bypass graft surgery. Jt Comm J Qual Improv. 1998;24(1):40–9. [PubMed: 9494873]
-
Ferlie E, Fitzgerald L, Wood M, et al. The nonspread of innovations: the mediating role of professionals. Acad Manage J. 2005;48(1):117–34.
-
Glouberman S, Mintzberg H. Managing the care of health and the cure of disease– part I: differentiation. Health Care Manage Rev. 2001;26(1):56–9. [PubMed: 11233354]
-
Degeling P, Kennedy J, Hill M. Mediating the cultural boundaries between medicine, nursing and management—the central challenge in hospital reform. Health Serv Manage Res. 2001;14(1):36–48. [PubMed: 11246783]
-
Gaba DM. Structural and organizational issues in patient safety: a comparison of health care to other high-hazard industries. Calif Manage Rev. 2000;43(1):83–102.
-
Lee JL, Change ML, Pearson ML, et al. Does what nurses do affect clinical outcomes for hospitalized patients? A review of the literature. Health Serv Res. 1999;29(11):39–45. [PMC free article: PMC1089070] [PubMed: 10591270]
-
Taylor C. Problem solving in clinical nursing practice. J Adv Nurs. 1997;26:329–36. [PubMed: 9292367]
-
Benner P. From novice to expert: power and excellence in nursing practice. Menlo Park, CA: Addison-Wesley Publishing Company; 1984.
-
March JG, Sproull LS, Tamuz M. Learning from samples of one or fewer. Organization Science. 1991;2(1):1–13.
-
McGlynn EA, Asch SM. Developing a clinical performance measure. Am J Prev Med. 1998;14(3s):14–21. [PubMed: 9566932]
-
McGlynn EA. Choosing and evaluating clinical performance measures. Jt Comm J Qual Improv. 1998;24(9):470–9. [PubMed: 9770637]
-
Gift RG, Mosel D. Benchmarking in health care. Chicago, IL: American Hospital Publishing, Inc.; 1994. p. 5.
-
Donabedian A. Evaluating quality of medical care. Milbank Q. 1966;44:166–206. [PubMed: 5338568]
-
Deming WE. Out of the Crisis. Cambridge, MA: Massachusetts Institute of Technology Center for Advanced Engineering Study; 1986.
-
Berwick DM, Godfrey AB, Roessner J. Curing health care. San Francisco, CA: Jossey-Bass; 2002.
-
Wallin L, Bostrom AM, Wikblad K, et al. Sustainability in changing clinical practice promotes evidence-based nursing care. J Adv Nurs. 2003;41(5):509–18. [PubMed: 12603576]
-
Berwick DM. Developing and testing changes in delivery of care. Ann Intern Med. 1998;128:651–6. [PubMed: 9537939]
-
Chassin MR. Quality of Care–part 3: Improving the quality of care. N Engl J Med. 1996:1060–3. [PubMed: 8793935]
-
Horn SD, Hickey JV, Carrol TL, et al. Can evidence-based medicine and outcomes research contribute to error reduction? In: Rosenthal MM, Sutcliffe KN, editors. Medical error: what do we know? What do we do? San Francisco, CA: Jossey-Bass; 2002. pp. 157–73.
-
Joss R. What makes for successful TQM in the NHS? Int J Health Care Qual Assur. 1994;7(7):4–9. [PubMed: 10140850]
-
Nwabueze U, Kanji GK. The implementation of total quality management in the NHS: how to avoid failure. Total Quality Management. 1997;8(5):265–80.
-
Jackson S. Successfully implementing total quality management tools within healthcare: what are the key actions? Int J Health Care Qual Assur. 2001;14(4):157–63.
-
Rago WV. Struggles in transformation: a study in TQM, leadership and organizational culture in a government agency. Public Adm Rev. 1996;56(3)
-
Shojania KG, McDonald KM, Wachter RM, et al. Closing the quality gap: a critical analysis of quality improvement strategies, Volume 1–Series Overview and Methodology Technical Review 9 (Contract No 290-02-0017 to the Stanford University–UCSF Evidence-based Practice Center). Rockville, MD: Agency for Healthcare Research and Quality; Aug, 2004. AHRQ Publication No. 04-0051–1. [PubMed: 20734525]
-
Furman C, Caplan R. Applying the Toyota production system: using a patient safety alert system to reduce error. Jt Comm J Qual Patient Saf. 2007;33(7):376–86. [PubMed: 17711139]
-
Womack JP, Jones DT. Lean thinking. New York: Simon and Schuster; 1996.
-
Lynn J, Baily MA, Bottrell M, et al. The ethics of using quality improvement methods in health care. Ann Intern Med. 2007;146:666–73. [PubMed: 17438310]
-
Reinhardt AC, Ray LN. Differentiating quality improvement from research. Appl Nurs Res. 2003;16(1):2–8. [PubMed: 12624857]
-
Blumenthal D, Kilo CM. A report card on continuous quality improvement. Milbank Q. 1998;76(4):625–48. [PMC free article: PMC2751093] [PubMed: 9879305]
-
Shortell SM, Bennet CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76(4):593–624. [PMC free article: PMC2751103] [PubMed: 9879304]
-
Lynn J. When does quality improvement count as research? Human subject protection and theories of knowledge. Qual Saf Health Care. 2004;13:67–70. [PMC free article: PMC1758070] [PubMed: 14757803]
-
Bellin E, Dubler NN. The quality improvement-research divide and the need for external oversight. Am J Public Health. 2001;91:1512–7. [PMC free article: PMC1446813] [PubMed: 11527790]
-
Choo V. Thin line between research and audit. Lancet. 1998;352:1481–6. [PubMed: 9717915]
-
Harrington L. Quality improvement, research, and the institutional review board. J Healthc Qual. 2007;29(3):4–9. [PubMed: 17708327]
-
Berwick DM. Eleven worthy aims for clinical leadership of health care reform. JAMA. 1994;272(10):797–802. [PubMed: 8078145]
-
Berwick DM. Improvement, trust, and the healthcare workforce. Qual Saf Health Care. 2003;12:2–6. [PMC free article: PMC1758027] [PubMed: 14645761]
-
Langley JG, Nolan KM, Nolan TW, et al. The improvement guide: a practical approach to enhancing organizational performance. New York: Jossey-Bass; 1996.
-
Pande PS, Newman RP, Cavanaugh RR. The Six Sigma way. New York: McGraw-Hill; 2000.
-
Barry R, Murcko AC, Brubaker CE. The Six Sigma book for healthcare: improving outcomes by reducing errors. Chicago, IL: Health Administration Press; 2003.
-
Lanham B, Maxson-Cooper P. Is Six Sigma the answer for nursing to reduce medical errors and enhance patient safety? Nurs Econ. 2003;21(1):39–41. [PubMed: 12632719]
-
Shewhart WA. Statistical method from the viewpoint of quality control. Washington, DC: U.S. Department of Agriculture; 1986. p. 45.
-
Pande PS, Newman RP, Cavanagh RR. The Six Sigma way: team field book. New York: McGraw-Hill; 2002.
-
Sahney VK. Generating management research on improving quality. Health Care Manage Rev. 2003;28(4):335–47. [PubMed: 14682675]
-
Endsley S, Magill MK, Godfrey MM. Creating a lean practice. Fam Pract Manag. 2006;13:34–8. [PubMed: 16671348]
-
Printezis A, Gopalakrishnan M. Current pulse: can a production system reduce medical errors in health care? Q Manage Health Care. 2007;16(3):226–38. [PubMed: 17627218]
-
Spear SJ. Fixing health care from the inside, today. Harv Bus Rev. 2005;83(9):78–91. 158. [PubMed: 16171213]
-
Johnstone PA, Hendrickson JA, Dernbach AJ, et al. Ancillary services in the health care industry: is Six Sigma reasonable? Q Manage Health Care. 2003;12(1):53–63. [PubMed: 12593375]
-
Reason J. Human Error. New York: Cambridge University Press; 1990.
-
Kemppainen JK. The critical incident technique and nursing care quality research. J Adv Nurs. 2000;32(5):1264–71. [PubMed: 11115012]
-
Joint Commission. 2003 hospital accreditation standards. Oakbrook Terrace, IL: Joint Commission Resources; 2003.
-
Bogner M. Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum Associates; 1994.
-
Rooney JJ, Vanden Heuvel LN. Root cause analysis for beginners. Qual Progress. 2004 July; [Accessed on January 5, 2008]; Available at: www.asq.org.
-
Giacomini MK, Cook DJ. Users’ guides to the medical literature: XXIII. Qualitative research in health care. Are the results of the study valid? Evidence-Based Medicine Working Group. JAMA. 2000;284:357–62. [PubMed: 10891968]
-
Joint Commission. Using aggregate root cause analysis to improve patient safety. Jt Comm J Qual Patient Saf. 2003;29(8):434–9. [PubMed: 12953608]
-
Wald H, Shojania K. Root cause analysis. In: Shojania K, Duncan B, McDonald KM, et al., editors. Making health care safer: a critical analysis of patient safety practices. Evidence Report/Technology Assessment No. 43. Rockville, MD: AHRQ; 2001. AHRQ Publication Number: 01–058. [PMC free article: PMC4781305] [PubMed: 11510252]
-
Bagian JP, Gosbee J, Lee CZ, et al. The Veterans Affairs root cause analysis system in action. Jt Comm J Qual Improv. 2002;28(10):531–45. [PubMed: 12369156]
-
Leape LL. Error in medicine. JAMA. 1994;272:1851–7. [PubMed: 7503827]
-
Institute of Medicine. Keeping Patients Safe: Transforming the Work Environment of Nurses. Washington, DC: National Academy Press; 2004.
-
Benner P, Sheets V, Uris P, et al. Individual, practice, and system causes of errors in nursing: a taxonomy. JONA. 2002;32(10):509–23. [PubMed: 12394596]
-
Spath PL, Hickey P. Home study programme: using failure mode and effects analysis to improve patient safety. AORN J. 2003;78:16–21. [PubMed: 12885066]
-
Croteau RJ, Schyve PM. Proactively error-proofing health care processes. In: Spath PL, editor. Error reduction in health care: a systems approach to improving patient safety. Chicago, IL: AHA Press; 2000. pp. 179–98.
-
Williams E, Talley R. The use of failure mode effect and criticality analysis in a medication error subcommittee. Hosp Pharm. 1994;29:331–6. 339. [PubMed: 10133462]
-
Reiling GJ, Knutzen BL, Stoecklein M. FMEA–the cure for medical errors. Qual Progress. 2003;36(8):67–71.
-
Adachi W, Lodolce AE. Use of failure mode and effects analysis in improving safety of IV drug administration. Am J Health Syst Pharm. 2005;62:917–20. [PubMed: 15851497]
-
DeRosier J, Stalhandske E, Bagian JP, et al. Using health care failure mode and effect analysis: the VA National Center for Patient Safety’s Prospective Risk Analysis System. J Qual Improv. 2002;28(5):248–67. [PubMed: 12053459]
-
Buhr GT, White HK. Management in the nursing home: a pilot study. J Am Med Dir Assoc. 2006;7:246–53. [PubMed: 16698513]
-
Guinane CS, Davis NH. The science of Six Sigma in hospitals. Am Heart Hosp J. 2004 Winter;:42–8. [PubMed: 15604839]
-
Mills PD, Neily J, Luan D, et al. Using aggregate root cause analysis to reduce falls and related injuries. Jt Comm J Qual Patient Saf. 2005;31(1):21–31. [PubMed: 15691207]
-
Pronovost PJ, Morlock L, Davis RO, et al. Using online and offline change models to improve ICU access and revenues. J Qual Improv. 2000;26(1):5–17. [PubMed: 10677818]
-
Thompson J, Wieck KL, Warner A. What perioperative and emerging workforce nurses want in a manager. AORN J. 2003;78(2):246–9, 258, passim. [PubMed: 12940425]
-
Willeumier D. Advocate health care: a systemwide approach to quality and safety. Jt Comm J Qual Patient Saf. 2004;30(10):559–66. [PubMed: 15518360]
-
Leape LL, Rogers G, Hanna D, et al. Developing and implementing new safe practices: voluntary adoption through statewide collaboratives. Qual Saf Health Care. 2006;15:289–95. [PMC free article: PMC2564013] [PubMed: 16885255]
-
Smith DS, Haig K. Reduction of adverse drug events and medication errors in a community hospital setting. Nurs Clin North Am. 2005;40(1):25–32. [PubMed: 15733944]
-
Jimmerson C, Weber D, Sobek DK. Reducing waste and errors: piloting lean principles at Intermountain Healthcare. J Qual Patient Saf. 2005;31(5):249–57. [PubMed: 15960015]
-
Docimo AB, Pronovost PJ, Davis RO, et al. Using the online and offline change model to improve efficiency for fast-track patients in an emergency department. J Qual Improv. 2000;26(9):503–14. [PubMed: 10983291]
Gowdy M, Godfrey S. Using tools to assess and prevent inpatient falls. Jt Comm J Qual Patient Saf. 2003;29(7):363–8. [PubMed: 12856558]
Germaine J. Six Sigma plan delivers stellar results. Mater Manag Health Care. 2007:20–6. [PubMed: 17506407]
Semple D, Dalessio L. Improving telemetry alarm response to noncritical alarms using a failure mode and effects analysis. J Healthc Qual. 2004;26(5):Web Exclusive: W5-13–W5-19.
Erdek MA, Pronovost PJ. Improving assessment and treatment of pain in the critically ill. Int J Qual Health Care. 2004;16(1):59–64. [PubMed: 15020561]
Burgmeier J. Failure mode and effect analysis: an application in reducing risk in blood transfusion. J Qual Improv. 2002;28(6):331–9. [PubMed: 12066625]
Mutter M. One hospital’s journey toward reducing medication errors. Jt Comm J Qual Patient Saf. 2003;29(6):279–88. [PubMed: 14564746]
Rex JH, Turnbull JE, Allen SJ, et al. Systematic root cause analysis of adverse drug events in a tertiary referral hospital. J Qual Improv. 2000;26(10):563–75. [PubMed: 11042820]
Bolch D, Johnston JB, Giles LC, et al. Hospital to home: an integrated approach to discharge planning in a rural South Australian town. Aust J Rural Health. 2005;13:91–6. [PubMed: 15804332]
Horbar JD, Plsek PE, Leahy K. NIC/Q 2000: establishing habits for improvement in neonatal intensive care units. Pediatrics. 2003;111:e397–410. [PubMed: 12671159]
Singh R, Singh A, Servoss JT, et al. Prioritizing threats to patient safety in rural primary care. Inform Prim Care. 2007;15(4):221–9.
Dunbar AE, Sharek PJ, Mickas NA, et al. Implementation and case-study results of potentially better practices to improve pain management of neonates. Pediatrics. 2006;118(Supplement 2):S87–94. [PubMed: 17079628]
Weir VL. Best-practice protocols: preventing adverse drug events. Nurs Manage. 2005;36(9):24–30. [PubMed: 16155492]
Plews-Ogan ML, Nadkarni MM, Forren S, et al. Patient safety in the ambulatory setting. A clinician-based approach. J Gen Intern Med. 2004;19(7):719–25. [PMC free article: PMC1492477] [PubMed: 15209584]
Baird RW. Quality improvement efforts in the intensive care unit: development of a new heparin protocol. BUMC Proceedings. 2001;14:294–6. [PMC free article: PMC1305833] [PubMed: 16369633]
Luther KM, Maguire L, Mazabob J, et al. Engaging nurses in patient safety. Crit Care Nurs Clin N Am. 2002;14(4):341–6. [PubMed: 12400624]
Middleton S, Chapman B, Griffiths R, et al. Reviewing recommendations of root cause analyses. Aust Health Rev. 2007;31(2):288–95. [PubMed: 17470051]
Farbstein K, Clough J. Improving medication safety across a multihospital system. J Qual Improv. 2001;27(3):123–37. [PubMed: 11242719]
Esmail R, Cummings C, Dersch D, et al. Using healthcare failure mode and effect analysis tool to review the process of ordering and administrating potassium chloride and potassium phosphate. Healthc Q. 2005;8:73–80. [PubMed: 16334076]
van Tilburg CM, Leistikow IP, Rademaker CMA, et al. Health care failure mode and effect analysis: a useful proactive risk analysis in a pediatric oncology ward. Qual Saf Health Care. 2006;15:58–64. [PMC free article: PMC2564000] [PubMed: 16456212]
Eisenberg P, Painter JD. Intravascular therapy process improvement in a multihospital system: don’t get stuck with substandard care. Clin Nurse Spec. 2002:182–6. [PubMed: 12172487]
Singh R, Servoss T, Kalsman M, et al. Estimating impacts on safety caused by the introduction of electronic medical records in primary care. Inform Prim Care. 2004;12:235–41. [PubMed: 15808025]
Papastrat K, Wallace S. Teaching baccalaureate nursing students to prevent medication errors using a problem-based learning approach. J Nurs Educ. 2003;42(10):459–64. [PubMed: 14577733]
Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med. 1989;320(1):53–6. [PubMed: 2909878]
Pexton C, Young D. Reducing surgical site infections through Six Sigma and change management. Patient Safety Qual Healthc [e-Newsletter]. 2004. [Accessed November 14, 2007]. Available at: www.psqh.com/julsep04/pextonyoung.html.
Salvador A, Davies B, Fung KFK, et al. Program evaluation of hospital-based antenatal home care for high-risk women. Hosp Q. 2003;6(3):67–73. [PubMed: 12846147]
Apkon M, Leonard J, Probst L, et al. Design of a safer approach to intravenous drug infusions: failure mode and effects analysis. Qual Saf Health Care. 2004;13:265–71. [PMC free article: PMC1743853] [PubMed: 15289629]
Kim GR, Chen AR, Arceci RJ, et al. Computerized order entry and failure modes and effects analysis. Arch Pediatr Adolesc Med. 2006;160:495–8. [PubMed: 16651491]
Horner JK, Hanson LC, Wood D, et al. Using quality improvement to address pain management practices in nursing homes. J Pain Symptom Manage. 2005;30(3):271–7. [PubMed: 16183011]
van Tiel FH, Elenbaas TW, Voskuilen BM, et al. Plan-do-study-act cycles as an instrument for improvement of compliance with infection control measures in care of patients after cardiothoracic surgery. J Hosp Infect. 2006;62:64–70. [PubMed: 16309783]
Dodds S, Chamberlain C, Williamson GR, et al. Modernising chronic obstructive pulmonary disease admissions to improve patient care: local outcomes from implementing the Ideal Design of Emergency Access project. Accid Emerg Nurs. 2006 Jul;14(3):141–7. [PubMed: 16762552]
Warburton RN, Parke B, Church W, et al. Identification of seniors at risk: process evaluation of a screening and referral program for patients aged > 75 in a community hospital emergency department. Int J Health Care Qual Assur. 2004;17(6):339–48. [PubMed: 15552389]
Nowinski CV, Mullner RM. Patient safety: solutions in managed care organizations? Q Manage Health Care. 2006;15(3):130–6. [PubMed: 16849984]
Wojciechowski E, Cichowski K. A case review: designing a new patient education system. The Internet J Adv Nurs Practice. 2007;8(2).
Gering J, Schmitt B, Coe A, et al. Taking a patient safety approach to an integration of two hospitals. Jt Comm J Qual Patient Saf. 2005;31(5):258–66. [PubMed: 15960016]
Day S, Dalto J, Fox J, et al. Failure mode and effects analysis as a performance improvement tool in trauma. J Trauma Nurs. 2006;13(3):111–7. [PubMed: 17052091]
Johnson T, Currie G, Keill P, et al. New York-Presbyterian hospital: translating innovation into practice. Jt Comm J Qual Patient Saf. 2005;31(10):554–60. [PubMed: 16294667]
Aldarrab A. Application of lean Six Sigma for patients presenting with ST-elevation myocardial infarction: the Hamilton Health Sciences experience. Healthc Q. 2006;9(1):56–60. [PubMed: 16548435]