Introduction

Patient safety has received increasing attention since the Institute of Medicine (IOM) published To Err Is Human, which suggested that 3–4% of hospitalized patients will experience an adverse event. Looking more closely at the etiology of these events, it is clear that, as surgeons, we can play a major role in improving patient safety. Over half of all medical adverse events are surgical in nature, and 75% of these occur in the operating room (OR). It therefore seems that the greatest improvements in patient safety will be achieved by targeting the OR for safety research and intervention.

The predominance of operative adverse events is not surprising. Not only is the OR the site of the most invasive type of medical care, it is also one of the most complex work environments in which people perform. Yet, despite a large body of literature addressing safety and coordination in other complex work environments, research specific to the OR remains limited.

Limitations of the Term “Error”

The IOM defines “error” as the failure of a planned action to be completed as intended (error of execution) or the use of the wrong plan to achieve an aim (error of planning). This definition fails to capture the full range of adverse medical events because it implies that an error is a discrete action committed by a single agent, which all clinicians know is an oversimplification in most cases.

To illustrate this, we need to turn to the literature on industrial safety. James Reason describes two types of error: active and latent. Active errors produce an immediate, measurable change in a given patient’s status. For this reason, they are easily recognized and studied. By comparison, latent errors are features of the patient care environment, decisions, or plans that do not produce an immediate change in patient status but set up the conditions for adverse events to occur. We need to focus our attention on latent errors if we want to improve patient safety. The problem is our natural tendency to associate the term “error” with “active error.” This only serves to perpetuate a culture of blame and implies a discrete, identifiable fault, discouraging a deeper understanding of the hidden faults and system features that contribute to adverse events.

A Framework for Studying Safety in the OR

To truly understand how adverse events occur, we need to think of the OR as a system: a complex assembly of people, information, resources, and equipment working toward a common goal, the safe and effective performance of an operation. System vulnerability reflects exposure to events and factors that can make the system less safe or more prone to adverse events. Vulnerability increases, and safety decreases, as events and factors cause a deviation from the expected safe course of care. If this deviation is allowed to progress, a threshold is crossed at which patient harm occurs (an adverse event). If compensation occurs, the system can return to the expected course of care during the operation, either before patient harm (a near miss) or after it (an adverse event). Once compensation occurs, these events are difficult to detect and study. Because surgical providers are accustomed to compensating in a high-risk system, unsafe practices often go unrecognized when the outcome is good. Yet it is the process of care and the environment in which care is delivered that most accurately reflect the overall safety of a system, regardless of outcome, and these need to be studied in order to better understand and prevent adverse events.

Methods to Identify and Study Adverse Events and Near Misses

Malpractice Claims

The landmark studies describing adverse events and near misses utilized data from closed malpractice claims. The Harvard Medical Practice Study and the study of adverse events in Colorado and Utah provided much of the data for the IOM reports on quality and safety. While these original malpractice claims analyses were able to describe where things went wrong, they did not shed much light on how. The Malpractice Insurers’ Medical Error Prevention Study (MIMEPS) was a large analysis of claims data from the Harvard School of Public Health.1 MIMEPS confirmed that 75% of events occur in the OR but, perhaps more importantly, began to identify some of the factors that increased system vulnerability and contributed to adverse events. The two most common contributing factors were lack of technical competence and communication breakdown.

Self-reporting

Another approach to identifying adverse events and near misses relies on self-reporting. Most institutions have an online reporting system, and several are commercially available. The difficulty with these systems is that they rely on the frontline provider to recognize that safety was compromised, remember what occurred once the operation is completed, and be willing and motivated to report the event. Because of this, self-reports tend to identify serious events with bad outcomes, yet, as we know, there is much to be learned from events that are recovered before they cause harm.

One self-reporting system that has been quite successful is the Pennsylvania Patient Safety Reporting System (PA-PSRS), a statewide database maintained by the Pennsylvania Patient Safety Authority, an independent agency created by the state to reduce harm from medical errors. Reporting is mandatory and anonymous, and reports contain no identifying information. All information is confidential, nondiscoverable, and not admissible as evidence. These features of the PA-PSRS mirror successful reporting systems in other high-risk work domains. The PA-PSRS collects over 200,000 reports per year, 97% of which are “near-miss” events.

Another approach to increasing the value of frontline provider reports is to proactively collect data at the time of the operation rather than passively relying on self-reporting. Using this approach, Wong and colleagues reported a mean of 3.5 safety-compromising events per case in cardiac surgery; 90% of these were recovered, making them difficult to identify by other means.2 Oken and colleagues compared the sensitivity of this type of proactive, open-ended questioning to that of online self-reporting.3 Safety-compromising events were identified in 30% of cases with prospective questioning, compared to 1.9% with self-reporting.

Prospective Field Observations

Direct observation at the point of care has the greatest potential to identify events where safety is threatened. Additionally, these studies allow for an in-depth analysis of the system factors that contribute to these events and those that help providers compensate when things start to go wrong. This type of field work is well accepted in other high-risk work environments and is beginning to be adapted to the OR. Data can be collected either by trained observers in the field or through automated data capture. The majority of this work has focused on cardiac surgery, and on pediatric cardiac surgery in particular, likely because of the inherent complexity and risk of these procedures. However, prospective field studies have also investigated safety in intensive care units, orthopedics, and general surgery. Most of this work builds on the theories of human factors engineering, a discipline dedicated to the design of systems and environments for safer, more effective, and more efficient use.

A common finding in all of these studies is that safety-compromising events occur much more frequently than previously realized. Safety is compromised multiple times per operative procedure, making field observations an incredibly rich data source. The landmark paper in this area was published by de Leval and colleagues from the UK in 2000.4 They showed that 2.8 major and 6.2 minor events compromising safety occurred per case and that both major and minor events increased the odds of patient harm. We prospectively observed ten complex general surgery cases to identify factors that influenced safety in the OR.5 We identified communication breakdown and workload–competing tasks as the two most important factors. By studying the patterns of events surrounding these two areas, we were able to target areas for improvement and design more rigorous studies to investigate further. For example, building on what we learned in the observational study, we performed a more in-depth analysis of the MIMEPS claims data to better understand communication breakdown and to develop standards that are currently being implemented to improve communication. Furthermore, we identified the count protocol as a particularly vulnerable part of an operation and therefore performed a randomized, controlled trial to evaluate whether bar coding sponges could improve this process.

Conclusion

The OR is the most common site of adverse events and near misses in medicine. It is therefore a high-impact area that should be targeted to improve patient safety. The focus of this work needs to be on understanding system vulnerability and improving resilience. Traditional approaches to research in this area have been outcome-based, often failing to detect recovered events and offering limited information about system factors. Prospective data collection allows for a more accurate estimate of incidence, the identification of contributing factors that can be targeted for intervention, and the opportunity to learn about system resilience and provider adaptation. This type of research will improve our ability to learn from adverse events and near misses in the operating room.