Understanding and improving patient safety in healthcare has been a focus in the United States since the early 1990s. Despite more than 20 years of effort, harm from healthcare remains high, leading to over 400,000 deaths and over $1 trillion in costs in the United States annually. Much work remains to be done to understand the risks and mitigation strategies for care in the ambulatory setting and in the patient’s home. Errors in healthcare, as in other industries, are primarily due to faulty systems, processes, and conditions that lead people to make mistakes or fail to prevent them. Improving safety in healthcare requires a framework that addresses such topics as leadership, governance, teamwork and communication, culture, effective error-prevention strategies embedded in care systems, and patient/family engagement in care, care design, and organizational structures. Improving safety must be embedded in an organization’s approach to patient care rather than treated as a set of safety improvement projects. A wide range of publicly available tools can be used by organizations to improve safety. We do not yet have effective strategies to address patient safety across the entire continuum of care from the home, to the clinic, and to the hospital. Eliminating harm will require multiple groups acting in concert across the entire spectrum of healthcare.

Introduction to Patient Safety

History of the Patient Safety Movement

Many consider the publication of the Harvard Medical Practice Study in the New England Journal of Medicine in 1991 the true beginning of the modern patient safety movement [1]. The study examined 30,000 records from hospitals in New York State in 1984 and found that 3.7% of all hospitalizations included an adverse event caused by medical treatment. More than two-thirds of the adverse events were considered preventable. Extrapolating to the approximately 2.7 million discharges in New York State in 1984, the authors estimated there were 98,609 adverse events, including 13,451 deaths. The total cost of these adverse events was estimated at $4 billion [2].

Public attention became focused on medical error after a series of publicly reported events, including the death of Betsy Lehman, a Boston Globe health reporter, from a fourfold chemotherapy overdose at Dana-Farber Cancer Institute in 1994; the death of Ben Kolb at Martin Memorial Hospital from a tenfold overdose of epinephrine during routine surgery in 1996; the death of Jose Eric Martinez at Memorial Hermann Hospital in Texas from a tenfold overdose of digoxin that was almost intercepted three different times; and multiple episodes of wrong-site surgery [3,4,5]. Multiple newly formed and existing organizations began to focus on error prevention and improving the quality of care, including the Institute for Healthcare Improvement (IHI), the Institute for Safe Medication Practices (ISMP), the Joint Commission on Accreditation of Healthcare Organizations (JCAHO, now The Joint Commission, TJC), the National Patient Safety Foundation (NPSF), and the National Quality Forum.

By 1999, the IOM report To Err Is Human reported that there were as many as 98,000 preventable deaths per year in US hospitals, far exceeding the number of deaths from motor vehicle accidents, breast cancer, or AIDS [6]. The total costs of medical error, including lost income and disability, were estimated at $17 to $29 billion. The IOM report laid out a comprehensive strategy involving government, healthcare providers, industry, and consumers to reduce preventable medical errors. A key conclusion of the report was that the majority of medical errors do not result from individual recklessness but “are caused by faulty systems, processes, and conditions that lead people to make mistakes or fail to prevent them.” In 2001, Congress appropriated $50 million annually to the Agency for Healthcare Research and Quality (AHRQ) for patient safety research but cut that funding only 3 years later. Hospitals began to focus on changes to improve the safety of their patients, and research in error prevention and patient safety grew steadily.

Current State of Patient Safety

Despite all of these efforts and the focus on improving safety, harm from healthcare that is intended to help remains all too common. Medical errors are estimated to be the third leading cause of death in the United States, with over 400,000 deaths annually and costs of over $1 trillion [7,8,9]. Recent studies show harm in 13–33% of all hospital admissions, with 44–63% of events categorized as preventable, 2% leading to permanent injury, and 1.5% resulting in death [10,11,12]. Similar rates have been reported in pediatrics [13].

Pediatric-Specific Patient Safety Risks

Pediatric healthcare has unique risks. Pediatric patients may be particularly vulnerable to medication errors because of the need for weight-based dosing with weights ranging from <1 kg to >100 kg, medications formulated and packaged primarily for adult dosing, the lack of pediatric-specific indications, healthcare settings built primarily around the needs of adults, and immature hepatic and renal function in newborns [14]. Compared to adults, children have a higher risk of inpatient potential adverse drug events and a higher rate of prescription errors [15, 16]. The widespread use of liquid medications, which require conversion from ingredient amounts (in mg) to volumes (in mL) and a choice among multiple concentrations, also increases risk.
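To make that conversion arithmetic concrete, the short Python sketch below works through a weight-based order and its conversion to a volume. It is illustrative only: the per-kilogram dose and the two stocked concentrations are hypothetical values, not clinical guidance.

# Illustrative sketch of weight-based dosing arithmetic; the values
# are hypothetical examples, not clinical guidance.

def dose_mg(weight_kg: float, dose_mg_per_kg: float) -> float:
    """Weight-based dose in mg."""
    return weight_kg * dose_mg_per_kg

def volume_ml(total_mg: float, concentration_mg_per_ml: float) -> float:
    """Convert an mg dose to the volume of a liquid formulation."""
    return total_mg / concentration_mg_per_ml

weight = 12.0  # kg; pediatric weights span <1 kg to >100 kg
ordered_mg = dose_mg(weight, dose_mg_per_kg=0.1)  # 1.2 mg

# The same order measured against two stocked concentrations:
print(volume_ml(ordered_mg, concentration_mg_per_ml=1.0))   # 1.2 mL
print(volume_ml(ordered_mg, concentration_mg_per_ml=10.0))  # 0.12 mL

# Drawing up 1.2 mL of the 10 mg/mL stock would deliver 12 mg, a
# tenfold overdose, which is why the choice among concentrations is a
# recognized pediatric risk point.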

Many pediatric patients have limited communication skills. If the wrong wristband is placed on a child, can he or she speak up? Some pediatric early warning systems therefore assign an extra risk point to a child without a parent or other adult at the bedside who can speak up on the child’s behalf [17]. The wide range of normal vital signs in pediatrics, where a heart rate of 50 can be the appropriate resting heart rate for a teenage athlete or an indication for cardiopulmonary resuscitation in a newborn, can make the recognition of abnormal vital signs difficult. Chlorhexidine gluconate, a common antibacterial solution used to prevent infections, can cause harm in preterm infants [18].
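The difficulty with age-dependent reference ranges can be shown in a minimal Python sketch: the same heart rate is classified differently depending on age. The age bands and limits below are simplified, hypothetical values chosen for illustration, not validated clinical reference ranges.

# Hypothetical, simplified age bands; not clinical reference ranges.
NORMAL_HR_BY_AGE = [
    # (max_age_years, low_bpm, high_bpm)
    (1 / 12, 100, 180),  # newborn
    (1, 90, 160),
    (5, 80, 140),
    (12, 70, 120),
    (18, 50, 100),       # a resting HR of 50 can be normal in a teen athlete
]

def classify_heart_rate(age_years: float, hr_bpm: int) -> str:
    for max_age, low, high in NORMAL_HR_BY_AGE:
        if age_years <= max_age:
            return "normal" if low <= hr_bpm <= high else "abnormal"
    return "unknown age band"

print(classify_heart_rate(0.05, 50))  # abnormal: an emergency in a newborn
print(classify_heart_rate(16, 50))    # normal: resting teenage athlete

A single fixed threshold would either miss true emergencies in newborns or generate constant false alarms in adolescents, which is why pediatric early warning systems must encode age-specific logic.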

Pediatric Hematology/Oncology Patient Safety Risks

Pediatric hematology/oncology has additional safety risks. The narrow therapeutic index of many chemotherapy drugs magnifies the impact of any dosing error in both adult and pediatric oncology. In one study of 310 pediatric chemotherapy errors, 85% reached the patient and 16% required additional monitoring or a therapeutic intervention [19]. Although clinical research studies have led to many major advances in pediatric cancer outcomes, the studies themselves may create safety risks, with conflicting information about chemotherapy agents both between protocols and within the same protocol [20]. Despite standardization of the protocol document layout and road maps by the Children’s Oncology Group (COG), treatment regimens remain highly complex, with varying dosing rules, modifications for specific disease characteristics, and modifications for specific side effects that may differ across treatment phases. Advances in precision medicine are creating a virtual explosion in the amount of information that must be synthesized to make optimal treatment decisions, and the complexity of the reports increases the risk of misinterpretation.

But the risks are not associated only with the use of chemotherapy. Since most children do not have an extensive past medical history, a risk of bleeding may first be unmasked by routine pediatric surgical care. Challenges with venous access in young children with cancer or blood disorders lead to the increased use of implanted central venous catheters, both in the hospital and at home, with the attendant risk of infection. Fixed tablet sizes for many key medications make it difficult to achieve child-appropriate dosing. In short, the potential safety risks in pediatric hematology/oncology are many.

Patient Safety Across the Care Continuum

The vast majority of patient safety research has focused on the inpatient setting. The limited research in the ambulatory setting has generally focused on such issues as medication safety, diagnostic errors, office-based surgery and anesthesia, and communication. The generalizability of the results is in question given the limited number of research sites, usually only in primary care and often with electronic health records; intervention research has been remarkably rare [21]. Still less is known about safety in the patient’s home, where care is delivered either by trained healthcare professionals or, more frequently, by the patient and/or the patient’s family. Implementation of line care bundles to prevent central line-associated bloodstream infections (CLABSI) has been effective in reducing CLABSI in pediatric hematology/oncology inpatients [22]. However, the vast majority of days at risk for a CLABSI occur when the patient is at home with the central line. These infections are twice as common in outpatients and cost $35,000 per episode, but little attention was focused on ambulatory prevention until the ongoing efforts of the Children’s Hospital Association Childhood Cancer and Blood Disorders Network [23, 24]. Line removal, another key prevention strategy in the inpatient setting, is rarely appropriate for pediatric hematology/oncology patients whose care depends on a central line for months to years at a time. Medication errors in the outpatient and home setting are more common in pediatric than in adult visits, with at least one medication error detected at each pediatric visit [25]. Little is known about how to prevent such errors.
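Because days of exposure, not just admissions, drive central line infection risk, surveillance conventionally expresses CLABSI burden as a rate per 1,000 central line days. The Python sketch below uses made-up counts to show how home line days can dominate the total number of infections even when the daily risk at home is lower.

# Illustrative arithmetic only; the counts are made-up examples, while
# the per-1,000-line-day rate is the standard surveillance metric.

def clabsi_rate_per_1000(infections: int, line_days: int) -> float:
    return 1000 * infections / line_days

# A long-term central line accrues far more line days at home than in
# the hospital, so home exposure can dominate the total burden even at
# a lower rate.
print(clabsi_rate_per_1000(infections=4, line_days=2_000))    # inpatient: 2.0
print(clabsi_rate_per_1000(infections=12, line_days=20_000))  # home: 0.6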

Errors in Healthcare

Error Definition

Errors in healthcare are common, although not all lead to harm. The model developed for medication error can be applied more broadly to healthcare errors in general [26].

  • Medical error is the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim. Medical error results from either an act of commission (doing something wrong) or an act of omission (failing to do the right thing) and may lead to an undesirable outcome or a significant potential for an undesirable outcome. Most medical errors neither cause harm nor have significant potential to cause harm.

  • An adverse event is an injury resulting from medical care rather than from the underlying disease or patient condition. Examples include graft-versus-host disease, myelosuppression from chemotherapy, and anaphylaxis from penicillin. An adverse event is an undesired outcome of care but does not imply error, negligence, or poor quality of care.

  • A preventable adverse event occurs when the harm is the result of an error or system design flaw. Anaphylaxis from penicillin that was ordered and administered despite a known allergy is a preventable adverse event.

  • A potential adverse event is an error with the potential to cause harm. Some potential adverse events are intercepted before they reach the patient, such as when a pharmacist intercepts an order for penicillin in a patient with a known allergy. Others reach the patient but do not cause harm. Potential adverse events are also called near misses.

The overall frequency of adverse events in pediatrics is not known, but medication errors are common. In the first large-scale study of pediatric medication errors and events, medication errors occurred in 5.7% of all orders, a rate of 55 per 100 admissions. Potential adverse events occurred in 1.1% of orders, a rate of 10 per 100 admissions. Preventable adverse events occurred in only 0.05% of orders, a rate of 0.5 per 100 admissions [15].

Error Causation

Human Factors

We know that errors are common, and we know the conditions that contribute to errors. Human factors engineering, the study of the interactions among individuals, between individuals and machines, and between individuals and their environment, helps us to understand the human conditions that contribute to errors. In any process, errors can occur at the “sharp end” when a human being involved in the process makes an “active” mistake. In general, these errors are not deliberate. Errors can be classified using Rasmussen’s three levels of performance [27]. Skill-based errors (slips and lapses) involve stored patterns of preprogrammed instructions that are applied incorrectly, so that the final action does not match what was intended. Examples include a skilled driver stepping on the brake instead of the clutch or a clinician forgetting to sign an order after writing it. Rule-based mistakes occur in the setting of familiar problems that the person usually addresses by applying stored rules. The action does not match the intention because the wrong rule is applied and thus does not achieve the desired result. An example is applying the rule “order prophylactic antiemetics for chemotherapy” when ordering vincristine, which is not emetogenic. Knowledge-based mistakes usually occur in novel situations, when actions must be planned but fail because of knowledge deficits. An example is an intern who orders fresh frozen plasma (FFP) to treat a prolonged PTT in a patient with severe hemophilia who is admitted for observation after minor surgery. In this case, the appropriate treatment, factor replacement, was given prior to surgery, and the prolonged PTT is a manifestation of the underlying disease. There is no reason to treat the prolonged PTT, and, if the patient were bleeding, FFP would be the wrong treatment. A violation, as described further below, occurs when there is a deliberate deviation from an accepted protocol or standard of care.

Contributing factors to human error include conditions such as fatigue, illness, distractions, and stress (Table 3.1). When we investigate adverse events and design countermeasures to minimize the opportunity for recurrence, we must consider human factors and develop processes that address the human condition to minimize the opportunity for errors. The patient safety movement has embraced the need to understand and incorporate human factors as a key part of improvement. Some organizations are now hiring engineers, former pilots, and other system designers to join improvement teams. In addition, we recommend that organizations, at the very least, help staff learn about the conditions and contributing factors and use that understanding to develop countermeasures.

Table 3.1 Factors contributing to human error

As technology continues to advance and more and more electronic tools become available, it is necessary to understand how humans interact with the technology. Many errors occur when the interface between humans and machines is poorly designed. An example of well-designed technology is the smartphone: the developers purposely designed the phone to be easy to use right out of the box, with easily recognized icons and intuitive features. In The Design of Everyday Things (Basic Books, 2013), Donald Norman offers an easy way to understand the impact of poor design on human actions. He describes affordances: aspects of the design of a device or an environment that help a user perceive how to perform an action. An example is the design of door handles: a flat bar implies that a push is needed to open the door, while a handle implies that a pull is needed. Proper design of processes and equipment must be taken into account when making improvements. The Food and Drug Administration (FDA) offers advice on how to incorporate human factors into the design of equipment [28].

System Design

A system is a set of processes or steps that interact with each other to achieve a desired outcome. James Reason uses this definition to describe the difference between active and latent errors. Latent errors are those that result from poor system design [29]. The traditional approach to managing errors was to train and educate individuals and/or to punish them, driven by the expectation that individuals will perform flawlessly. What we have learned, however, is that errors are common: even the best-trained individual will find himself or herself in a position to make an error. Disciplining or removing the individual who made an error does not prevent someone else from making the same error if the contributing factors are part of the system. Reason referred to these as latent errors: errors just waiting to happen. The causes of latent errors include poor design, situations where staff are constantly distracted, complex protocols, policies that do not support evidence-based practices, and pressures from management and others that cause individuals to take shortcuts.

Reason Swiss Cheese Model

Many times there is a series of steps in a process intended to block an error from reaching the patient. Reason likened these barriers to slices of Swiss cheese (Fig. 3.1). The holes represent flaws in the system that may go undetected until an event occurs. The more layers and the smaller the holes in each layer, the higher the chance of blocking an error. However, there are times when all of the holes line up and the error reaches a patient. Efforts at error reduction should focus on strengthening the design and the defenses of the system so that the opportunity for error is minimized, as is the opportunity for any error that does occur to reach a patient.

Fig. 3.1 Reason Swiss Cheese Model for Error

Normalization of Deviance (Amalberti)

Amalberti and colleagues introduced the concepts of violations and migration and provide a framework to understand and manage them [30]. Violations are deliberate deviations from standard protocols, and they can end badly or well: badly when a patient is harmed, and well when the protocol is violated because of its complexity yet the outcome for the patient is good. The problem is that unless someone is harmed, these violations are seldom acknowledged or tracked; in fact, they are sometimes encouraged and accepted until they become the norm. Managers build systems and processes in which they expect clinicians and staff to work, assuming that operations stay within a safe space. Because of a myriad of external pressures or the complexity of the procedures, individuals migrate away from the safe space, to the point where they may no longer be following the protocols, just to complete tasks as expected. Amalberti calls this phenomenon migration to an illegal normal space. That is the area where many in healthcare function every day: the systems and processes put undue pressure on clinicians, resulting in work-arounds and violations. The further someone drifts from the safe space, the greater the chance of serious harm. Managers are usually not aware that staff are performing in this space until something bad happens and there is an investigation, and staff are not likely to inform managers that they are performing in the illegal normal space because they fear being punished. It is the responsibility of managers to understand staff performance and the pressures that may be forcing individuals into this space. Corrections must be made to the processes in the safe space so that people can use processes as designed.

How Often Do Errors Occur?

How often errors occur remains unknown, and different error measurement strategies lead to very different results [12]. Mandated reporting to federal and state agencies, as well as to nongovernmental groups such as the Joint Commission, may be useful to identify a subset of serious adverse events, particularly so-called “never” events such as wrong-site surgery. Other approaches, such as adjusted hospital mortality rates, also measure safety only at the crude level of extreme events; such measures are even less useful in pediatrics, where overall mortality is lower and variation between hospitals is hard to measure [31]. Other sources of error detection range from regional or national malpractice claim data to morbidity and mortality conferences within a specific program. Many healthcare organizations use internal safety event reporting systems to measure safety within their own systems; even in an organization with a very strong safety culture, such reporting will miss many events. At the other end of the spectrum, direct observation finds a higher rate of error than chart review, but both are extremely expensive and impractical outside of a research setting [32]. Automated review of discharge codes to detect adverse events has been shown to have relevance for pediatrics [33, 34]. The Institute for Healthcare Improvement (IHI) Global Trigger Tool detects adverse events at nearly ten times the rate of the AHRQ Patient Safety Indicators [12], and a modified pediatric version has also been developed [35]. We have very limited tools to measure harm in ambulatory care and in the patient’s home, or to measure preventable harm and potential adverse events in any setting.

How to Make Healthcare Safer

An Institutional Response to Patient Safety

In March of 1995, the leaders of the Dana-Farber Cancer Institute (DFCI) and many others around the country woke to this headline:

Big Doses of Chemotherapy Drug Killed Patient, Hurt 2d.

The two patients, one a reporter for the Boston Globe, had received a fourfold overdose of chemotherapy that damaged both patients’ hearts, killing one and permanently injuring the other. The normal reaction at the time was to find out who was involved and discipline or dismiss them from employment so that they could not hurt someone else at the institution. During the investigation by a number of agencies, including The Joint Commission, the Boards of Registration in Medicine, Nursing, and Pharmacy, and the Department of Public Health, it became evident that the clinical team involved included very capable and experienced individuals. The investigation also identified numerous deficiencies, including protocol violations, ineffective drug error reporting, and inadequate oversight of quality assurance by hospital leaders.

The response from DFCI leadership included the following:

  • New rules were adopted mandating close supervision of physicians in fellowship training.

  • Nurses were required to double-check high-dose chemotherapy orders and to complete specialized training in new treatment protocols.

  • Interdisciplinary clinical teams reviewed new protocols and reported adverse events and drug toxicities.

  • A trustee-level quality committee was reorganized and strengthened.

  • Discussions were begun regarding the transfer of inpatient beds to nearby Brigham and Women’s Hospital.

However, as important as these changes were in decreasing the opportunity for error, the organization’s leaders, under Chief Operating Officer James Conway, learned that other, more profound changes contributed to improving safety.

First was the adoption of a systems approach and design to prevent errors. Understanding the contribution of human factors to the error, DFCI worked to design systems to prevent errors, including the development of protocols and templates for chemotherapy ordering, as well as the implementation of technology to assist in the process. The application of the principles of standardization and simplification was critical to this change.

  • Safety was no longer to be viewed as someone else’s problem. All clinical staff and leaders, up to the Board, had a responsibility and accountability to ensure safe practices.

  • DFCI developed a learning system through which staff and others collected and analyzed information from reporting systems, pharmacy interventions, and safety rounds. This analysis helped to identify opportunities for improvement.

  • DFCI began the process of engaging patients in advisory councils that provided the patients’ view of the system and of what improvements would help them be safer.

  • The staff at DFCI adopted the view that cancer care is very risky because of the condition of the patients and the medications used. As a result, clinicians and leaders committed to a relentless pursuit of improvement. They recognized that mistakes will happen even in the best-designed systems and that it is the responsibility of all staff to identify these errors, mitigate their impact, disclose them to patients, and provide support to the clinicians involved.

Learning from Other Industries

We often hear that aviation and healthcare have much in common. However, there are differences. In the aviation industry, the teams involved consist of a smaller group of individuals; the norms and processes for operating a plane have been standardized, with customization based on well-evaluated and practiced activities; and the equipment has been tested and will not react differently because of individual variation. Healthcare, on the other hand, involves teams with many players; best practices exist but may have to be individualized to the patient; there is more than one way to achieve the same result; and individual autonomy has been allowed. So why the comparisons [36]? John Nance, in Why Hospitals Should Fly, describes how a fictitious hospital can take the lessons learned in the aviation industry to achieve the same kind of reliability found in aviation [37, 38].

The comparisons between healthcare and aviation serve to help understand what should be in place to ensure that we provide the safest care possible for patients. Although there are many routines in healthcare that have been standardized, healthcare providers also encounter highly unpredictable situations which require rapid responses on a daily basis. Emergencies and departures from routine practices are unusual and to be avoided in other high-risk industries. In healthcare it is not uncommon to encounter a patient with an unknown diagnosis, where the disease may be masked or may be complicated by comorbidities.

High-risk industries have developed a culture in which individuals share a common vision and work together as teams, communicate clearly and frequently, have flattened the hierarchy, see any defect as an opportunity to improve, and have developed a learning system so that any improvements are shared with all who need to know. In healthcare, we identify these characteristics in a safety culture in which there is little tolerance for poor practice and staff are uniformly conscientious and careful [38].

Framework for Preventing Error/Maximizing Safety

In order to achieve long-lasting improvements in safety, two things are necessary. The first is to change the paradigm from improving safety as a project to improving safety as part of the organization’s work in all ways, at all times. The second is to use a framework that provides the skeleton upon which all of the work can be built. There are two overarching components, a learning system and a culture of safety, under which a set of interdependent elements must be in place [39].

Common to each is the role of leadership. It is the responsibility of leaders at all levels of the organization to develop an environment of teamwork, psychological safety, and respect. The elements include the following:

  • Psychological safety: an environment in which people feel free to speak up, are respected, and are accepted [40].

  • Accountability: ensuring that individuals know their roles, are held to a standard of acting safely, receive the appropriate training to act that way, and are judged fairly.

  • Teamwork and communication: key building blocks of safe care. Healthcare providers develop a shared understanding, anticipate needs and problems, and agree on methods to manage these as well as conflict situations. Empirical evidence from high-risk industries demonstrates that such teamwork produces high-quality results [41]. Negotiation skills, needed to gain genuine agreement on matters of importance to team members, patients, and families, are critical components of safety.

  • Continuous learning: the organization’s commitment to collect and learn from defects and to reflect on what changes are necessary to improve [42].

  • Improvement and measurement: to improve the processes we work in, organizations must adopt an improvement method that applies the appropriate techniques to the issues to be addressed. Measurement is a critical part of testing and implementing changes; measures tell a team whether the changes they are making actually lead to improvement.

  • Reliability: the design of processes to ensure continued failure-free operation over time, in which patients receive evidence-based care.

  • Transparency: respectfully sharing data and information with staff, patients, and families.

The patient safety movement urged us to move away from a culture of blame to a blame-free culture, and the pendulum swung too far from one extreme to the other. Over time, we came to realize that we must strike a balance between blame and blame-free, a balance between safety and accountability [43]. The biggest challenge in adopting this culture is implementation across the entire organization. Several guides are available: James Reason’s decision tree for determining the culpability of unsafe acts [29], David Marx’s Just Culture [44, 45], and the Fair Evaluation and Response Chart [46].

The Manchester Patient Safety Framework (MaPSaF) is a tool to help National Health Service (NHS) organizations and healthcare teams in the UK assess their progress in developing a safety culture [47]. The framework can be applied in the acute care, ambulatory, mental health, and ambulance settings. The Agency for Healthcare Research and Quality (AHRQ) sponsored the development of patient safety culture assessment tools for hospitals, nursing homes, ambulatory outpatient medical offices, community pharmacies, and ambulatory surgery centers [48]. Similar to the Manchester tool, organizations can assess the present state of their culture, identify where there are differences, identify strengths and opportunities for improvement, and conduct internal and external comparisons.

In order to change a culture, it is necessary to match strategy and culture. Ingrained attitudes and practices may be such that any new strategy will be at odds with the prevailing culture. To build a different culture, one must act in the new way that is desired; by matching actions with beliefs, attitudes will change over time, and the culture along with them.

Deming offered advice on improvement in his 14-point philosophy [49]. He included items such as making the vision clear: slogans may be memorable, but they may not clearly indicate the direction and what is expected of staff. He also advised that organizations continuously improve their processes and systems. This is the kind of change that will impact the culture of an organization. The phrase “Act your way into believing” comes to mind.

Governance

Healthcare board members, senior executives, and physician leaders play key roles in patient safety. Patient safety depends on effective governance with highly engaged executive leadership teams working with highly engaged boards [50, 51]. Ensuring safe and harm-free care is a board responsibility, not one that is delegated to the executive leadership team. Table 3.2 illustrates Conway’s six key steps for boards [52]. Empirical studies have shown that boards demonstrating effective patient safety leadership have positive impacts on their organization’s safety performance and that boards that review and track their organization’s performance have better quality outcomes [53]. Although ensuring high-quality, safe care was already clearly within the fiduciary responsibility of hospital boards, the Affordable Care Act of 2010 emphasized that responsibility still further.

Table 3.2 Six key steps for boards

Teamwork and Communication

Teamwork and communication are critical to healthcare delivery, which depends on multiple individuals and systems. Communication failure is a major contributing factor in 70% of sentinel events [54]. Multiple reviews have shown that various aspects of team function contribute to team performance [55,56,57]. Effective teamwork has been shown to be a critical ingredient in multiple aspects of patient safety, including reducing safety events, strengthening safety culture, improving communication, improving staff satisfaction, and decreasing staff turnover [58]. There is increasing recognition that patients and families can and should be core members of healthcare teams in addition to staff. Bedside multidisciplinary rounds and bedside report include patients and families in the care team. Inclusion of patients and families in other teams, such as process improvement or safety teams, is necessary to ensure that patient-centered care is designed with patients and families, not for them.

High-functioning teams have a common purpose, a shared mental model of the situation and the goals, effective communication, a common understanding of how each team member can contribute to the outcome, mutual trust with good cohesion and respect among team members, effective leadership, good situational awareness, and the ability to resolve conflicts. All members of the team participate in the work, and all feel comfortable speaking up regardless of rank or role. Leadership within a team is clear but flexible, and the same individual does not always serve in the leadership role. Conflicts can be raised and resolved. Teams emphasize “we” and “us,” not “I” and “me.”

Effective strategies to improve teamwork focus on the cognitive and interpersonal skills needed to manage a process within a system rather than specific technical knowledge and skills. Team training focuses on facilitating human interaction and provides opportunities to practice and develop the necessary skills [57]. The principles of team training began with crew resource management (CRM) in the aviation industry and were first applied in healthcare in the 1990s [59]. TeamSTEPPS™ is a team training program developed by AHRQ specifically for use in healthcare [60].

Specific communication strategies facilitate team function. Structured briefings are opportunities to increase situational awareness, set a common goal, share information, and improve teamwork. Debriefings after an event, a simulation, or routine patient care provide an opportunity for teams to assess their own performance and identify opportunities for improvement. Planned and unplanned huddles help reestablish situational awareness, review existing plans, and assess the need to adjust the plan.

Communication is critical to team function but can be impeded by perceptions of hierarchy, gender, culture, and many other factors. Key elements of effective communication include clarifying the problem and gathering relevant data, concisely describing the problem, actively listening to the response, and asserting concerns if needed [61]. Specific communication strategies used in healthcare, such as SBAR, Call-Out, Check-Back, the two-challenge rule, DESC, and CUS, are designed to minimize conflict and the impact of hierarchy and to maximize effective information transfer. Specific strategies to support handoffs, such as I-PASS, maximize the transfer of complex information, including synthesis by the receiver [13].

A clinical example of SBAR communication from an experienced nurse:

Situation: Dr. Smith, I am calling about Mary Jones, who has a fever and a new oxygen requirement.

Background: She is a 12-year-old girl with sickle cell disease, admitted for a vaso-occlusive pain crisis. Her pain control is poor despite PCA. Her oxygen saturation is usually 96%.

Assessment: Her oxygen saturation is now 90% despite 1 L by nasal cannula, and her temperature is 39 °C. I am concerned that she is developing acute chest syndrome.

Recommendation: I think you need to order a CXR and blood culture and come see her right away.

Addressing Human Error

You cannot change the human condition, but you can change the conditions under which people work.

—Dr. James Reason [29].

Since errors result from a combination of system design and the fallible human beings who work within those systems, the key to error prevention is proactively addressing those issues. Human factors is the “scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theories, principles, data and methods to design in order to optimize human well–being and overall system performance” [62]. If the system is designed to make it easier for people to do their jobs well, while accounting for their fallibilities, the overall system performance will improve. When analyzed thoughtfully, however, few systems in healthcare are actually designed to achieve the desired results. This frequently leads to work-arounds, consistent bypassing of policies or procedures by frontline workers, which then creates additional opportunities for error. Adding to this complexity is the tendency of many healthcare organizations to react to an event by adding a new step to an existing process rather than asking how that process should be changed or simplified.

Table 3.3 illustrates error-prevention strategies in order of effectiveness for creating lasting change. The most powerful strategies focus on the systems in which individuals operate; they are usually the hardest to implement. The next most effective strategies still target the systems but also depend on human vigilance and memory. The least effective strategies are usually the easiest to implement but rely entirely on human vigilance. Human factors engineering is critical to designing effective error-prevention strategies that account for our underlying human fallibilities. Usability testing involves testing systems and equipment under real-world conditions to identify potential problems and unintended consequences of system design. The goal is to build a system that is “mistake proof”: one that facilitates correct actions, prevents simple errors, and mitigates the negative impact of errors that do occur.

Table 3.3 Error-prevention strategies

Fail-safes or forcing functions and constraints are among the most powerful and effective error-prevention strategies. True fail-safes, such as a microwave that will not start with the door open, are relatively rare in healthcare. Constraints that make it more difficult to do the wrong thing are more common. Preparation of vinca alkaloids in mini-bags makes administration via a spinal needle almost impossible. Many healthcare organizations have policies limiting chemotherapy prescribing to designated physicians; a computerized order entry system that allows only those designated physicians to order chemotherapy ensures that the policy is actual practice. Reminders and checklists have gained widespread use in healthcare to prompt specific steps to be followed in a specific order. Implementation of a surgical safety checklist has been shown to reduce surgical deaths, but it remains unclear whether it is the checklist itself or the culture changes induced by use of the checklist that improved outcomes [63]. Checklists, however, are helpful only if they are used in a meaningful way, not as a rote performance.
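As a concrete illustration of a constraint, the Python sketch below mimics how an order entry system might refuse chemotherapy orders from prescribers who are not on a designated list. The names and the data model are hypothetical and are not a description of any real CPOE product.

# Hypothetical sketch of a CPOE-style constraint; not a real system's API.
DESIGNATED_CHEMO_PRESCRIBERS = {"dr_lee", "dr_patel"}  # list maintained per policy

class OrderRejected(Exception):
    pass

def place_order(prescriber_id: str, drug: str, is_chemotherapy: bool) -> str:
    # The constraint: the system refuses the order outright rather than
    # relying on downstream vigilance to catch a policy violation.
    if is_chemotherapy and prescriber_id not in DESIGNATED_CHEMO_PRESCRIBERS:
        raise OrderRejected(
            f"{prescriber_id} is not designated to order chemotherapy"
        )
    return f"order accepted: {drug} by {prescriber_id}"

print(place_order("dr_lee", "vincristine", is_chemotherapy=True))
try:
    place_order("dr_intern", "vincristine", is_chemotherapy=True)
except OrderRejected as err:
    print(f"blocked: {err}")

Because the check sits in the system itself, the policy holds even on a busy night shift; no one has to remember to enforce it.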

Patient and Family Engagement

Patients are at the center of healthcare. In a 5-day retreat at a Salzburg Seminar, a group of 64 individuals from 29 countries adopted the guiding principle of “nothing about me without me.” The intent was to switch how clinicians thought about care from biomedicine (it is all about the care we deliver) to infomedicine (patients and healthcare workers are informed, and there is shared decision-making and governance) [64]. Thus began a movement to engage patients in decisions about their care, developing “quality contracts” that served as building blocks for quality measurement that could be aggregated while recognizing the individuality of the patient. As Susan Edgman-Levitan notes: “Typically, the most important “experts”—ordinary people managing their health—are left out of the discussion and treated as objects of care, rather than partners in care” [65].

There are three ways to engage patients. The first is in their own care. When planning treatment, it is important to understand the patient’s goals and desires. Opportunities to incorporate patient and family preferences into pediatric hematology/oncology include such decisions as when to start prophylactic factor in severe hemophilia, choosing between surgery and radiation for local control in Ewing’s sarcoma, and many decisions in palliative care.

The second is to engage patients and/or family members in improvement teams. Any effort to improve systems should include those who will be most affected by the improvement. Although there are few empirical data showing that this approach results in more significant improvement, patient satisfaction has increased, and systems are designed with more consideration for the patient’s condition and needs. In “Engaging Patients in Team-Based Redesign,” Davis et al. describe different approaches used by improvement teams to engage patients; the results were positive changes in staff attitudes toward partnering with patients and higher patient satisfaction scores than for nonparticipating teams [65].

The third is to establish patient and family advisory councils and include patients on governance committees. In this model, patients and families partner with healthcare providers to provide guidance on how to improve the patient and family experience. AHRQ and others offer getting-started toolkits [66].

Responding to an Event

As noted, there are many errors in healthcare, and most do not cause harm. However, when an error contributes to patient harm, the impact of that event is felt by patients and families. That impact may be physical, such as damage to an organ; psychological, such as fear of continuing treatment; and emotional for family members. There may be financial loss for the patient and family as well.

Responding to an adverse event requires that clinicians first ensure that harm to the patient is limited and do what is necessary to mitigate the harm. The organization should then begin an investigation into the factors that contributed to the error that resulted in harm. The most common method of investigation is root cause analysis (RCA) [67]. Adapted from other industries, the RCA involves examining the event in depth and identifying the root causes; the emphasis is on causes, plural, because there is always more than one. While the investigation is ongoing, there should be continuing communication with the patient and/or family to provide support and share as much as possible. There is a moral and, in some cases, legal requirement for full disclosure to patients and families, as well as an apology and appropriate compensation if warranted. Research at the University of Michigan reports a decrease in claims when there is disclosure to patients and families [68]. The organization must also provide psychological support for clinicians [69]. As the contributing factors are identified, the organization must use this information to improve and strengthen systems and processes to minimize the opportunity for such an error to recur. In the spirit of improving care for all patients, sharing lessons learned with the healthcare community helps other organizations work to prevent similar errors.

Supporting Involved Clinicians: The “Second Victim(s)”

Clinicians are impacted as well. Dr. Albert Wu coined the term “second victim” for clinicians involved in a serious event [70]. These individuals can suffer both physical and cognitive/emotional symptoms. Physical symptoms can include fatigue, insomnia, backache, and nausea. Emotional responses range from anger, fear, stress, isolation, anxiety, rumination over the event, and loss of interest in work to burnout and depression. At its most severe, there is post-traumatic stress, self-medication with alcohol and other drugs, and suicidal ideation [70,71,72]. It is important to note that this is not limited to the clinicians “responsible” for the error itself; all involved are at risk.

Institutions have the responsibility to put in place support systems for all involved clinicians. Successful programs have included both individual peer-to-peer support and support for teams [73]. Consider the difference between these two quotes from affected individuals, the first of whom received no such institutional support while the second benefitted from a peer-to-peer program: (1) “Twenty years later I still find myself angry at the lack of institutional support. There has to be more than getting a handout on PTSD.” (2) “Words cannot express how effective and outstanding this program has been. I truly do not believe I could have dealt (and continue to deal) with this tragedy without knowing that caring people/physicians do exist and do understand and do not judge. The most important aspect to me has been the understanding part which is very difficult to find. I could go on and on about the positives of this system.” Plews-Ogan et al. have shown that such support can help clinicians not only avoid the array of negative outcomes described above but also find an element of positivity in the experience: they become experts in prevention methodology, they improve teamwork, and they find themselves able to teach about the issue [72].

Leading Edge of Patient Safety

In a thought-provoking exercise, eight thought leaders imagined patient safety in 2025 [74]. Their perspectives cover wide-ranging topics: the true embedding of safety culture throughout all of healthcare, the design of the healthcare system, the physical design of healthcare environments, technology that supports both personal health records and a multitude of smart devices, truly patient-centered care with fully activated and engaged patients and families, comprehensive strategies to use simulation to maximize patient safety, and the elimination of risk associated with transitions. All shared the view, however, that no one change alone could truly improve safety. Commenting on this exercise, Dixon-Woods and Pronovost observed, “While these visions include new approaches and definitions for the concept of transitions in care (for example, admission and discharge), they fail to provide a specific vision for patient safety across the entire continuum of care from the patient’s home to the clinic to the hospital. Eliminating harm from health care cannot be achieved by any single health care organization but requires the multiple groups acting in concert across the entire spectrum of health care including payors, regulators, manufacturers” [75].