Key Messages
  • Quality Improvement (QI) is a continuous structured process involving all members of the healthcare team to bring about change to optimise patient outcomes, improve healthcare system performance and enhance professional development.

  • Healthcare delivery systems are complex dynamic environments in a constant state of flux. The key to identifying beneficial change is measurement.

  • To determine whether a change leads to an improvement, QI teams must test it in the real work setting—by planning it, trying it, observing the results, and acting on what is learned.

  • QI team members across clinical, managerial and information technology professions need to work together, communicating effectively, to analyse, interpret and evaluate clinical, functional, satisfaction and cost outcomes and so achieve sustainable healthcare improvements.

Stakeholders, quality improvement, key performance indicators, toolkits, dashboards—all terms with differing degrees of relevance and importance to the many members of today’s diverse healthcare team: the medical, nursing and allied clinical and non-clinical health professionals who contribute to the patient journey through our healthcare system. To move on from good intentions and aspirations about improving quality of care, we must move beyond our own areas of comfort and expertise and examine care processes, cooperating across professional disciplines at local, regional and national levels in a continuous attempt to improve what we do and how we do it. In this chapter we aim to demystify quality improvement (QI) tools and language.

Many of today’s QI methods were developed in industry in Japan in the 1940s and 1950s, pioneered there by the US experts W Edwards Deming, Joseph Juran and Armand Feigenbaum and the Japanese expert Kaoru Ishikawa [1]. Each of these QI pioneers recognised the contribution of every individual worker to the organisation’s goals, the importance of empowering all staff, and the need for each member of staff to take responsibility for quality improvement and to continually improve what they do.

In the late 1980s, the National Demonstration Project on Quality Improvement in Health Care (NDP) was launched in the US to explore the application of modern quality improvement methods to health care. “Improving Health Care Quality” courses were added to the Harvard School of Public Health curriculum, and the NDP launched its first national forum, which became the National Forum on Quality Improvement in Health Care [2]. The Institute for Healthcare Improvement (IHI) was founded around this time by Dr Don Berwick and colleagues in Boston, who were committed to adapting these same principles for use in healthcare [3–5]. Over the past 25 years, the IHI has collaborated nationally in the US and internationally, in the UK with the NHS and the National Institute for Clinical Excellence (NICE), and with several other European health institutions, as well as promoting QI methods throughout the developing world. In collaboration with the British Medical Journal, the Quality and Safety in Health Care journal was launched in 2002.

Definition and Dimensions of Quality

Quality: the following definition, from the US Institute of Medicine (IOM), is often used:

“Quality is the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge” [6].

The IOM has identified six dimensions of healthcare quality. These state that healthcare must be:

  • safe—avoiding harm to patients from care that is intended to help them

  • effective—providing services to patients that are based on evidence and produce a clear benefit

  • patient-centred—establishing a partnership with patients to ensure care respects their needs and preferences

  • timely—reducing waits and sometimes harmful delays

  • efficient—avoiding waste

  • equitable—providing care that doesn’t vary in quality because of a patient’s characteristics

Essentially, QI in healthcare can be translated as providing a structured approach, or method, that focuses on changing how we provide care, with the patient at the centre of what we do, to deliver a better outcome for the patient while achieving better performance in the healthcare system and, ideally, making that system a better place for all involved.

The IHI has adapted these six dimensions of quality into a ‘no needless’ framework [7, 8], which aspires to promote:

  • no needless deaths

  • no needless pain or suffering

  • no helplessness in those served or serving

  • no unwanted waiting

  • no waste

  • no one left out.

To help achieve these improvement aims, the IOM, in its blueprint for QI in healthcare in the twenty-first century, formulated a set of ten simple rules, or general principles, to inform efforts to redesign the health system [6]. These rules are:

  1. Care is based on continuous healing relationships.

  2. Care is customized according to patient needs and values.

  3. The patient is the source of control.

  4. Knowledge is shared and information flows freely.

  5. Decision making is evidence-based.

  6. Safety is a system property.

  7. Transparency is necessary.

  8. Needs are anticipated.

  9. Waste is continuously decreased.

  10. Cooperation among clinicians is a priority.

It is all very well to have such grand aspirations to make things “better”; it is another matter entirely to try to put this into practice and to demonstrate objectively, in a quantifiable manner, that improvement is happening across patient outcomes in our healthcare system.

Measurement of Quality

All improvement requires change, but not all change results in improvement [9]. The key to identifying beneficial change is measurement. The major components of measurement are: (1) determining and defining key indicators; (2) collecting an appropriate amount of data; and (3) analysing and interpreting these data [10]. Individual measurements from any process will exhibit variation. Measurement data from healthcare processes display natural variation, which can be modelled using a variety of statistical distributions. Distinguishing between natural “common cause” variation and significant “special cause” variation is key both to knowing how to proceed with improvement and to knowing whether or not a change has resulted in real improvement.

Shewhart developed a relatively simple statistical tool, the control chart (Fig. 6.1), to aid in distinguishing between common and special cause variation [11]. A control chart consists of two parts: (1) a series of measurements plotted in time order, and (2) the control chart “template”, which consists of three horizontal lines: the centre line (typically the mean), the upper control limit (UCL), and the lower control limit (LCL). Where to draw the UCL and LCL is important in control chart construction. Shewhart and others recommend control limits set at ±3 standard deviations, which detect meaningful changes in process performance while achieving a rational balance between two types of risk. If the limits are set too narrow, there is a high risk of a “type I error”: mistakenly inferring that special cause variation exists when, in fact, a predictable extreme value is being observed of the kind expected periodically from common cause variation. If the limits are set too wide, there is a high risk of a “type II error”, analogous to a false negative laboratory test. Control charts can help QI teams to decide on the correct improvement strategy: whether to search for special causes (if the process is out of control) or to work on more fundamental process improvements and redesign (if the process is in control). The charts can also be used as a simple monitoring aid to assure that improvements are sustained over time [12].

Fig. 6.1 Control chart. (Courtesy: Damber Shrestha, Department of Neonatal Paediatrics, KEM Hospital for Women, Perth)
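To make the control-limit arithmetic concrete, the following is a minimal sketch (in Python, with hypothetical data) of an individuals-type Shewhart chart, in which sigma is estimated from the average moving range; charts for counts or proportions would use limits appropriate to those distributions.

```python
import statistics

def control_limits(values):
    """Return (centre, LCL, UCL) for an individuals chart.

    Sigma is estimated from the average moving range divided by the
    d2 constant for subgroups of size 2 (1.128)."""
    centre = statistics.mean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma = statistics.mean(moving_ranges) / 1.128
    return centre, centre - 3 * sigma, centre + 3 * sigma

def special_causes(values):
    """Flag points beyond the 3-sigma limits (the simplest special-cause rule)."""
    _, lcl, ucl = control_limits(values)
    return [(i, v) for i, v in enumerate(values) if v < lcl or v > ucl]

# Hypothetical monthly measurements of some process indicator
rates = [12.1, 11.4, 13.0, 12.6, 11.9, 12.3, 12.0, 11.7,
         12.8, 12.2, 11.6, 12.4, 19.5, 12.1, 12.5]
print(control_limits(rates))   # centre line and 3-sigma limits
print(special_causes(rates))   # the spike at index 12 is flagged
```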

Clinical Value Compass

The Clinical Value Compass framework (Fig. 6.2) places patients, both as individuals and as a population, at the centre of what we measure. It has us examine not just the traditional clinical outcomes we are familiar with, such as mortality and key morbidities, but also the measures that matter to patients and their families: functional outcome, satisfaction and cost, allowing assessment of the value of our service [13]. Value can be considered a measure of quality, defined as the outcomes achieved (clinical, functional and satisfaction) relative to the cost of achieving them over a defined period of time. The strength of the Clinical Value Compass is that it encourages us to look at outcomes in all directions: clinical outcomes of interest to medical and nursing professionals, functional and satisfaction outcomes that may matter more to patients and their families (especially in the medium to longer term), and costs, which healthcare managers will wish to measure in addition to the other domains to ensure value within the healthcare system.

Fig. 6.2 Clinical value compass framework
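This relationship is often written informally as a ratio; a common shorthand for value in healthcare rather than a formula prescribed by the Value Compass itself:

$$\text{Value} = \frac{\text{Outcomes (clinical, functional, satisfaction)}}{\text{Cost over the same defined period}}$$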

In terms of driving changes and improvement, we need to measure outcome data to quantify how patients are doing across the key clinical, functional, satisfaction and cost domains. Two of the main drivers towards improvement are, first, comparing our current outcomes against our previous results to see whether we are doing better or worse than before (time trend analyses, e.g. the process control charts of Fig. 6.1) and, second, comparing our own service and patient outcomes against our peers and colleagues, i.e. benchmarking our outcomes against others or against wider international reference standards.

In discussing the measurement of processes and outcomes within healthcare, it is important to be clear about the language and definitions used, particularly given the diversity of the modern multidisciplinary healthcare team. Any system can be simply defined as multiple parts working together for a common purpose or goal. A healthcare system can then be defined as the organisation of people, institutions, and resources to deliver healthcare services that meet the health needs of target populations. Within healthcare systems, a process can be defined as a series of connected steps or actions to achieve an outcome. Performance measurement is the use of both process and outcome measures to understand a healthcare system’s organizational performance and effect positive change to improve care [14]. A key performance indicator (KPI) is any quantifiable measure, tied to organizational goals, used to evaluate performance over a designated time period and to determine whether the practice, hospital, or other accountable organization is meeting predefined targets [15]. Many healthcare systems use dashboards: performance monitoring systems that provide data on structure, process, and outcome variables [16]. A dashboard within a healthcare setting typically includes:

  a. reports on a selection of performance indicators (feedback);

  b. comparison of performance to established ideal levels (benchmarking);

  c. alerts when performance is sub-optimal to trigger action (warning or signal).

Similar to the dashboard in our car, an organizational dashboard provides a visual display of how various components or systems within the organization are functioning.
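As a hedged illustration of the three dashboard functions listed above, the sketch below (Python; all KPI names, values and targets are hypothetical) reports each indicator, compares it with its target, and raises an alert when performance is sub-optimal:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    value: float              # observed performance this period (feedback)
    target: float             # established ideal level (benchmarking)
    higher_is_better: bool = True

    def on_target(self) -> bool:
        return (self.value >= self.target if self.higher_is_better
                else self.value <= self.target)

def dashboard_report(kpis):
    for kpi in kpis:
        status = "OK" if kpi.on_target() else "ALERT: sub-optimal, action needed"
        print(f"{kpi.name}: {kpi.value} (target {kpi.target}) -> {status}")

# Hypothetical indicators for a neonatal unit
dashboard_report([
    KPI("Hand hygiene compliance (%)", 92.0, 95.0),
    KPI("CLABSI rate per 1000 line-days", 1.1, 1.5, higher_is_better=False),
])
```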

Appropriate benchmarks are necessary to determine how performance compares against desired goals and objectives and against others. Benchmarking is the process through which best practice is identified and continuous quality improvement pursued through comparison and sharing [17]. However, because centres may vary with respect to their population case-mix, risk adjustment is essential if comparisons are to be fair and valid. The term “case-mix” reflects the fact that, within a patient population, individual patients may have a range of risks, and that the aggregate outcome reflects the aggregate risks [18]. Risk adjustment is the process of sorting patients in each comparison group into different levels of risk and then making comparisons separately for each level; the aim is to permit fair comparisons between groups.

“Common cause” and “special cause” variation are seen in every area of medical practice. Potential sources of the variation seen in both interventions and outcomes include case-mix, chance, and differences in the quality or effectiveness of care. When benchmarking outcomes, if differences due to case-mix and chance can be minimised through risk adjustment, then the residual variation may provide useful information about the quality of care provided.

Benchmarking and risk adjustment require strict definition of each specific outcome. Each risk factor is measured and weighted accordingly. Severity of illness scores attempt to measure illness severity and assist in adjusting for case-mix between populations. For example, the main illness severity scores in use in neonatal medicine are CRIB (Clinical Risk Index for Babies) [19] and SNAPPE-II (Score for Neonatal Acute Physiology—Perinatal Extension II) [20]. Like illness severity scores in adult critical care medicine, both rely on physiology-based items from bedside vital signs and laboratory tests to quantify illness severity. Each scores derangements from physiological norms: the greater the derangement, the greater the likelihood of adverse outcome, with a composite severity score derived from a weighted sum of derangements across all organ systems. Combining these physiological derangements with other risk factors, including birth weight, gestational age, low Apgar scores and the presence or absence of severe congenital abnormalities, yields an illness severity score with an overall risk of mortality. A recognised disadvantage of both CRIB and SNAPPE-II is that they rely on physiological variables measured after admission to the neonatal intensive care unit (NICU). Because these variables may be influenced by the treatments provided after admission, the scores are not independent of the effectiveness or quality of care provided [21].
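The following sketch illustrates only the general weighted-sum structure described above. The items, thresholds and weights are entirely hypothetical and are not those of CRIB or SNAPPE-II; the published, validated scores must be used in practice.

```python
# Illustrative sketch of a generic physiology-based severity score.
# All items, thresholds and point weights below are HYPOTHETICAL.

def derangement_points(value, low, high, points):
    """Award points when a vital sign or lab value falls outside its norm."""
    return points if (value < low or value > high) else 0

def severity_score(obs):
    score = 0
    score += derangement_points(obs["mean_bp_mmHg"], 30, 60, 3)       # hypothetical
    score += derangement_points(obs["temperature_C"], 36.0, 37.5, 2)  # hypothetical
    score += derangement_points(obs["pH"], 7.25, 7.45, 4)             # hypothetical
    if obs["birth_weight_g"] < 1000:  # other risk factors also carry weight
        score += 5
    return score

print(severity_score({"mean_bp_mmHg": 25, "temperature_C": 36.5,
                      "pH": 7.18, "birth_weight_g": 850}))
```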

Within perinatal–neonatal medicine, the Vermont Oxford Network (VON) was established in 1988 as a non-profit voluntary collaboration of healthcare professionals dedicated to improving the quality and safety of medical care for newborn infants and their families [22]. It now comprises over 950 neonatal units around the world. VON facilitates benchmarking and comparison by using strictly defined data definitions within clearly defined patient populations, together with case-mix risk adjustment. To adjust for risk, VON uses a multivariable risk adjustment model designed to capture important factors related to patient risk [22]. The model is used to calculate an expected number of cases for each specific outcome of interest based on the case-mix seen at each hospital. Measures of interest can then be created for each hospital. One such measure is the ratio of the number of observed to expected cases (O/E), called the standardized mortality or morbidity ratio (SMR; Fig. 6.3). This measure and its confidence intervals are corrected, or “shrunken”, using methods that recognize that some of the observed variation is random noise caused by chance. The shrunken values are more stable estimates because they are adjusted for imprecise estimates and filter out random variation. This VON risk adjustment model performed as well as the SNAPPE-II score in a study of more than 10,000 infants [23, 24].

Fig. 6.3 Standardised Mortality and Morbidity Ratios (SMR). (Annual Quality Management Report. Burlington, VT: Vermont Oxford Network, 2012)

The standardised mortality/morbidity ratio (SMR) is the ratio of observed to predicted mortality or morbidity at each centre, i.e. SMR = observed rate / predicted rate. The SMR indicates whether a centre has more or fewer deaths than would be expected based on the characteristics of the infants treated there. If the upper bound of the SMR’s confidence interval is less than 1, the centre has significantly fewer deaths than expected. If the lower bound is greater than 1, the centre has significantly more deaths than expected. If the interval includes 1, the number of deaths observed is not significantly different from the number expected, based on the characteristics of the infants treated.
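These interpretation rules can be illustrated with a small sketch that computes an O/E ratio and an approximate 95% confidence interval, here using Byar’s approximation for a Poisson-distributed observed count. The counts are hypothetical, and VON’s published shrunken estimates use more sophisticated methods than this.

```python
def smr_with_ci(observed, expected, z=1.96):
    """O/E ratio with an approximate CI (Byar's method); assumes observed > 0."""
    smr = observed / expected
    lower = observed * (1 - 1 / (9 * observed)
                        - z / (3 * observed ** 0.5)) ** 3 / expected
    upper = (observed + 1) * (1 - 1 / (9 * (observed + 1))
                              + z / (3 * (observed + 1) ** 0.5)) ** 3 / expected
    return smr, lower, upper

# Hypothetical centre: 14 deaths observed, 20.0 expected from the risk model
smr, lo, hi = smr_with_ci(observed=14, expected=20.0)
if hi < 1:
    print(f"SMR {smr:.2f} ({lo:.2f}-{hi:.2f}): significantly fewer deaths than expected")
elif lo > 1:
    print(f"SMR {smr:.2f} ({lo:.2f}-{hi:.2f}): significantly more deaths than expected")
else:
    print(f"SMR {smr:.2f} ({lo:.2f}-{hi:.2f}): not significantly different from expected")
```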

A graphical representation of several standardised mortality and morbidity ratios (pneumothorax, chronic lung disease, necrotising enterocolitis, bacterial infections, mortality), as reported by VON to participating centres as key clinical performance indicators for a neonatal unit, is shown in Fig. 6.3.

Comparison and Benchmarking of Several Centres

Comparison and benchmarking of several centres (perhaps regional networks or national collaborations) can be represented by a combination of bar charts and box-and-whisker plots. In Fig. 6.4, mortality (or any other key performance indicator) is represented as two charts placed side by side. The left chart provides bars with the data for the individual centres within a regional or national collaborative group, while the right side shows the overall distribution in the form of one or two boxplots. A boxplot is a graphical representation of the distribution of a set of observations: a rectangular box with a pair of whiskers extending from its ends. The whiskers represent the extremes of the data (minimum and maximum), while the box represents the central portion of the distribution. The top edge of the box is the 75th percentile of the distribution and the bottom edge the 25th percentile; by definition, 25% of the centres have event proportions at or below the 25th percentile and 25% at or above the 75th percentile. The box therefore contains the middle 50% (from the 25th to the 75th percentile) of the hospital proportions for each group. The line across the middle of the box is the median (50th percentile): half of the centres lie at or below this line and half above it. Finally, the cross represents the mean value across all of the hospitals.

Fig. 6.4 Mortality Bar Chart with Box-Plot and Whiskers. (Annual Group Report. Burlington, VT: Vermont Oxford Network, 2012)
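The summary statistics behind such a boxplot are straightforward to compute; below is a minimal sketch using hypothetical centre-level mortality proportions.

```python
import statistics

def boxplot_summary(proportions):
    """Five-number summary plus the mean, as drawn in a box-and-whisker plot."""
    s = sorted(proportions)
    q1, median, q3 = statistics.quantiles(s, n=4)  # 25th, 50th, 75th percentiles
    return {"min": s[0], "q1": q1, "median": median,
            "q3": q3, "max": s[-1], "mean": statistics.mean(s)}

# Hypothetical mortality proportions for eight centres in a network
centre_mortality = [0.08, 0.11, 0.09, 0.15, 0.07, 0.12, 0.10, 0.13]
print(boxplot_summary(centre_mortality))
```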

Even when a comparison is appropriately risk-adjusted, there are important cautions about interpretation, including the source of the reference (benchmark) population, sample size, and biases from incomplete risk adjustment [18].

Plan-Do-Study-Act Cycle

The Plan-Do-Study-Act (PDSA) cycle (Fig. 6.5) is part of the IHI Model for Improvement, a framework for accelerating quality improvement [25]. Once a team has set an aim, established its membership, and developed measures to determine whether a change leads to an improvement, the next step is to test a change in the real work setting. The PDSA cycle is shorthand for testing a change: planning it, trying it, observing the results, and acting on what is learned [26]. This is the scientific method, used for action-oriented learning.

Fig. 6.5 Plan-Do-Study-Act Cycle

The Pareto principle (the 80/20 rule), named after the economist Vilfredo Pareto, states that for many phenomena 20% of invested input is responsible for 80% of the results obtained. The point of the Pareto principle is to recognise that most things in life are not distributed evenly. In focusing on quality improvement in healthcare settings, we should allocate time, resources and effort to those issues that drive important patient outcomes, that are readily quantifiable and, most importantly, that are at least partly under the control of clinician behaviour and thus modifiable, so that improvement is possible. Furthermore, we should not try to change everything at once, recognising that not all changes lead to improvement; rather, repeated small changes and evaluations can lead to significant improvement in care over time (Fig. 6.6).

Fig. 6.6 Repeated use of the ‘Plan-Do-Study-Act’ cycle
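Returning to the Pareto principle above: a simple Pareto analysis can be sketched in a few lines, ranking contributing causes by frequency and identifying the “vital few” that account for roughly 80% of events. The incident categories and counts below are hypothetical.

```python
def pareto_subset(cause_counts, threshold=0.80):
    """Return the smallest set of causes covering at least `threshold` of events."""
    total = sum(cause_counts.values())
    cumulative, vital_few = 0, []
    for cause, count in sorted(cause_counts.items(), key=lambda kv: -kv[1]):
        cumulative += count
        vital_few.append(cause)
        if cumulative / total >= threshold:
            break
    return vital_few

# Hypothetical incident counts from a unit's reporting system
incidents = {"medication error": 42, "line infection": 30, "missed handover": 9,
             "equipment fault": 6, "documentation gap": 5, "other": 8}
print(pareto_subset(incidents))  # the few categories worth targeting first
```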

As QI collaboratives were developing nationally and internationally, within perinatal–neonatal medicine VON has, since 1995, sponsored a series of intensive QI collaboratives in which multidisciplinary teams of healthcare professionals and families work together, under the guidance of expert faculty, to identify, test, and implement potentially better practices designed to improve the quality of neonatal care [27]. In these internet-based international collaboratives, participants are encouraged to develop four key habits for improvement, which form the basis of each collaborative (Fig. 6.7). These key habits emphasize:

Fig. 6.7 The four key habits of the Vermont Oxford Network

  • Change

  • Collaborative learning

  • Evidence based practice

  • Systems thinking.

An example of a key clinical performance indicator used by many NICUs within these collaboratives to initiate quality improvement efforts has been a targeted reduction in central line associated bloodstream infections [28–34]. This SMART objective (Specific, Measurable, Achievable, Realistic and Timed), pursued through well recognised and validated interventions including hand hygiene and care bundles for central line placement and care, is a patient-centred KPI with demonstrable improvement across all four domains of the value compass: a clinical reduction in morbidity; improvement in functional outcome, given the association between infection and cerebral white matter injury and its impact on longer term neurodevelopmental outcome; parental satisfaction, particularly if the infant establishes feeds sooner; and a reduction in length of inpatient stay, with its impact on overall patient costs.
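As an illustration, this KPI is conventionally expressed as an infection rate per 1000 central line-days; a minimal sketch with hypothetical counts:

```python
def clabsi_rate(infections, line_days):
    """Central line-associated bloodstream infections per 1000 line-days."""
    return 1000 * infections / line_days

# Hypothetical counts before and after introducing a central line care bundle
baseline = clabsi_rate(infections=8, line_days=2100)
post_bundle = clabsi_rate(infections=3, line_days=2250)
print(f"Baseline: {baseline:.2f} vs post-bundle: {post_bundle:.2f} per 1000 line-days")
```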

Future Directions

At a national level, there are many external influences that can be used to promote improvements in quality. These include professional requirements (e.g. evidence of continuing professional development), centralised government initiatives across primary and secondary care, and economic drivers such as “money following the patient”.

At a local level, to put data collection and benchmarking into action, emphasis must shift from simple clinical audit and data collection towards a quality improvement approach. Rather than an endpoint in themselves, data should be seen as a resource that can show that a change is needed and that an improvement has been made. Quality improvement projects usually focus on the actual process of care at a local level. Involving patients and families and all members of the multidisciplinary healthcare team allows identification of specific local problems, rather than concentrating on high-level outcomes laid down by higher management or national regulatory authorities trying to enforce change through a top-down, “command and control” model. The intrinsically local efforts used by effective quality improvement projects allow more targeted solutions to be developed, and local ownership of the solution should enhance the sustainability of any change project [35].

Improved information technology and clinical data management systems can make data collection for measurement easier, but with the rapid expansion of health information technology we run the risk of moving from insufficient data, based on manual paper-derived collection, to data overload. For IT and electronic patient records to be properly harnessed for QI, data must be accurate, timely, relevant (to the questions being asked), directed (at the right people), analysed appropriately and visualised in a way that makes sense to all members of the QI team [36, 37]. In the field of neonatal medicine in the UK, collaboration between the National Neonatal Audit Programme and the newly established Neonatal Data Analysis Unit has created the National Neonatal Database, whose data collection and secure storage are greatly facilitated by electronic data capture, across more than 96% of all neonatal units in the UK, using a uniform technical platform (Badger.net). This database is available for local, regional and national projects, supporting healthcare commissioning, service development, quality improvement and neonatal research [38].

Summary

“In God we trust, all others must bring data.” (W. Edwards Deming)

Data is the foundation upon which all quality improvement is built: it describes how well any healthcare system is working and separates what is thought to be happening from what is really happening. As outlined previously, all improvement requires change, but not all change results in improvement. Measurement and accurate data establish whether changes lead to improvement, help ensure that any achieved improvements are sustained, and permit benchmarking of performance locally, regionally and nationally.

QI teams that monitor and improve both resources (inputs) and the activities carried out (processes) will be most successful in improving quality of care (outputs/outcomes). Assessing what is done (what care is provided) and how it is done (when, where, and by whom care is delivered) together, in a collaborative manner that uses the knowledge, skills, experience and perspectives of all the different individuals within the team, leads to the best and most sustained improvements [39].

True transformation in care requires not just data in real time but also clinical leadership: engaging the skills and enthusiasm of all members of the multidisciplinary team at the frontline of service delivery, involving families and, especially, engaging senior medical staff as partners for change. The Medical Leadership Competency Framework, now incorporated into the online NHS Leadership Academy and adopted by all the medical royal colleges, highlights that “improving services” is a fundamental part of clinical leadership [40].

Against a background of increasing demands and dwindling resources, complex healthcare systems need professionals and leaders across clinical, managerial and supporting IT disciplines who have the expertise and commitment to continually improve the quality of service they provide. QI is now a core component of our daily work: in modern healthcare delivery, it is not enough simply to turn up, do the day job and go home again. To make sustained QI integral to care, flexible, practical, clinically relevant and adaptable measures are required, so that QI is easy and non-threatening: a voluntary process that harnesses the intrinsic motivation to make things better for patients, the same motivation that brings the healthcare team to work each day.