1 Introduction

Human factors engineering (HFE) is the science and practice of understanding and improving the relationship between people and the systems, tools, and environments they work with. It should generally be considered synonymous with ergonomics, though there may be subtle differences in how the terms are used. HFE is based on the premise of designing work to fit human abilities, in contrast to the more traditional concept of adapting humans (via training) to the requirements of the work. In a complex system, both may be required. The rationale is that training alone is expensive, time consuming, and unreliable, and cannot overcome many barriers to performance; instead, we can leverage knowledge of how humans naturalistically understand and respond to the world to enhance their ability to reach goals. Training in conjunction with the design of tasks, technologies, and environments to support human abilities is therefore more likely to be successful than training alone.

The discipline has its origins in the scientific management principles of Gilbreth [1] and Taylor [2], combined with understanding from human psychology, physiology, anthropometry, and biomechanics, among a range of other disciplines that emerged in the twentieth century. HFE became a discipline in its own right in the 1940s, at a time when aircraft were becoming dramatically more complicated, and a series of studies demonstrated a range of mismatches between human perceptual and cognitive abilities and what people were being required to do. It emerged that human errors were predisposed by designs that required human operation and intervention but did not account for human limitations. For example, on some aircraft the gear and flap levers were located close to each other and felt the same in the pilot’s hand, which made it easy to confuse them [3, 4]. The time and visual demands of the phases of flight in which they were used (takeoff and landing) meant that pilots relied on touch to activate them, and a mistake was recognizable only after the aircraft had entered a risky state. The solution was to change the shape and feel of the levers so they could not easily be confused. These concepts were extended over the following decades to the understanding of accidents such as Three Mile Island, and to the growing mismatches between what humans were required to do in increasingly complex technological systems and their abilities to do so [5]. It was recognized that accidents were happening not because people were fallible and technologies were not, but because failures occurred where technological weaknesses amplified human weaknesses, and vice versa [6, 7].

The acknowledgment that systems of work are a combination of humans, technologies, processes, policies, management, and training became known as socio-technical systems theory. In particular, the implication is that when things go wrong, to look only at human failures is to ignore the complexity of those accidents, and thus to ignore a range of potential areas for improvement. One core principle of HFE is to understand and reduce the mismatch between human and system and, through this socio-technical understanding, to deliver better overall system performance.

A more modern example of how an understanding of human cognitive processes can shape designs that reduce errors and the need for training, while almost invisibly enhancing performance, ability, and satisfaction, is found in the windows, icons, menus, and pointer (WIMP) interfaces upon which our interactions with personal computers are now based. These “direct manipulation” concepts were first developed at Xerox PARC in Palo Alto in the early 1970s and were leveraged by Apple for its first Macintosh computers a decade later, as a response to the existing DOS-based command-line interfaces, which were opaque, required expert knowledge of computer functions, and did not build on natural human interaction mechanisms or users’ conceptual understanding. Instead, the idea of “desktops,” “files,” “worksheets,” and “trashcans” was developed to mimic office concepts that novice users would immediately recognize and could interact with directly, without needing to understand precisely how the computer worked. This opened the use of personal computing, which had previously been the preserve of enthusiasts and engineers, to the general population. The more recent extension of this has been touch-screen interfaces on mobile and tablet devices, which add familiar gestures (pointing, pinching, swiping) to allow more naturalistic interactions immediately, with little effort, and without needing to use or understand menu or icon selections. Once again, by moving from an unnatural method of interaction to a more natural one, Apple (and, to a lesser extent, Nintendo with its Wii games console) reduced the need for a conceptual understanding of an interface, thus reducing the need for training, while increasing ease and pleasure of use, even with products that were otherwise technically inferior. The difference was that anyone could use them.

These examples demonstrate some of the principles that HFE science and practice seek to promote. All systems require people, and in every system there will be fallible users prone to errors, whose performance is shaped by things beyond their control (and often beyond their awareness or conception). Yet it is people who create safety in complex systems, by accounting for variations that system designers cannot anticipate [8]. It is thus technological systems that are fundamentally fallible, and humans the “elastic glue” that holds the system together (or the “vehicle suspension” that smooths over the unpredictable and uneven “road” surface) [9]. The more complex our systems of work become, the more opportunities there are for mismatches between human abilities and work demands, and the more important HFE becomes. Healthcare systems are no different. In the following sections we explore some of the most popular and influential HFE concepts in more detail.

2 Humans and Automation

There is no question that the increasing complexity and sophistication of machines can enhance human abilities and system performance. Machines can do repetitive tasks faster, more reliably, and with more force and precision, day in and day out, than humans generally can. More recently, they can process more information, in more complex ways, using more sophisticated algorithms than humans are capable of. Yet at some point these technological systems need attention and management by humans. They can break down, are inflexible, work reliably only within the parameters for which they have been designed, and can deviate hugely from acceptable performance when their data inputs become unreliable or corrupted. Conversely, humans have evolved to work in highly varying circumstances, can still make effective decisions despite uncertainty or lack of data, and can trade speed for accuracy (or vice versa) at a moment’s notice. Indeed, designers seeking to mimic human activities, such as those developing machine vision, have quickly recognized how complex the adaptations and judgments that humans make about their environment must be, given the complexity of the world around them. The way humans interact with the naturally unpredictable and chaotic world around them is deceptively complex, and it is a strength that humans are not purely information processors [9]. These different strengths of humans and machines, and how we can best design ways for them to work together, have been of interest in HFE for 50 years.

The initial approach to human-machine integration was to automate the tasks that machines could do and let the humans do the rest (“take up the slack”). The approach, pioneered famously by Fitts [10], was to produce lists of functions (“Fitts lists”) that machines should do and functions that humans should do. However, this had a number of disadvantages. In particular, systems designed around these principles relegated previously skilled humans to “passive monitors,” supervising the machines and waiting for things to go wrong [11]. When the machines inevitably did go wrong, control was quickly passed back to the human, who was by then conceptually and practically distant from the situation and not necessarily at full awareness (since passive monitoring is not a task that humans are naturally good at). They were suddenly confronted with a cascade of complex events and system breakdowns beyond their comprehension, with important information either hidden or difficult to discern among a huge number of displays, alarms, warnings, and other environmental cues, and without a mental model of what was happening [9]. This set the human up to make bad decisions, and accidents resulted. The same pattern can still be seen in accidents today, such as that of Air France flight 447.

On 1 June 2009, Air France flight 447, an Airbus A330 flying from Rio de Janeiro to Paris, experienced a high-altitude stall and crashed into the South Atlantic. The event was triggered when a pitot tube (which measures airspeed) froze over and malfunctioned. This caused the autopilot to disconnect, though the cause of the disconnection (conflicting airspeed readings) was not displayed prominently. The pilot in control pulled back on the stick to raise the nose, presumably, in the absence of visual cues at night over the sea, in adherence to the pilots’ heuristic of “staying high and fast.” However, this caused the aircraft to stall, which triggered a stall warning. As the aircraft slowed, the stall warning stopped automatically, as it was programmed to do when airspeed dropped below a minimum. This created confusion, because the warning would then sound again when the pilot pushed the stick forward (which will usually take an aircraft out of a stall). In the absence of reliable speed information, the confusion deepened. The pilots became uncertain about which instruments to trust, and appeared to rely on the flight director (one of the main guidance displays) even though it was reading incorrectly. The problem of freezing pitot tubes was known, with nine incidents in the previous year, and the aircraft in question was due to have them replaced on return to Paris; however, the pilots may not have been aware of this potential threat [12]. The confusion was never resolved, and the aircraft hit the sea, killing all on board.

The idea of “replacing” the human, who was seen as weak and fallible and only there to support the technology, has given way to a different philosophy, which recognizes that humans are essential, and indeed create safety, in complex systems. This creates the opportunity for a different approach: to support humans with automation (and not the other way around). Humans should stay in control, actively monitor the systems of work, and be directly involved in delivery, selecting or deselecting automated systems according to their experience and knowledge of the complex components of the task for which machines are not engineered. This allows the humans in the system to use their skills and experience better, and to create flexibility and resilience, while also taking advantage of a range of reliable automated assistive functions. This is seen on most modern aircraft (for example, where an autopilot can make fuel use more efficient), in software (such as spelling and grammar checking), and more recently in many driving aids and automated driving solutions. The mixed success of these approaches means that there is still much work to do to understand how best to help humans and machines work together. The surprising and perhaps counterintuitive effects of socio-technical systems [13] have generated a number of themes, collectively referred to as the “ironies of automation,” such as the following [14]:

  • Automation does not simply “replace” humans—instead it transforms work, and creates new roles for people.

  • Automation does not always free up mental resources and attention—instead it can create new mental demands, especially in busy, critical, or time-pressured moments—and usually requires the operator to monitor the technology in addition to the task.

  • Instead of requiring less knowledge, automation requires different knowledge and a new set of skills, often in addition to the existing skills, which need to be actively maintained to avoid skill fade.

  • Instead of providing flexibility, automation creates a wealth of new modes and functions that need to be understood, and that create new opportunities for omissions, failures, errors, and misunderstandings.

  • Rather than necessarily increasing safety, new technology is usually expected to pay for itself by doing things faster and more cheaply than before, which can place new throughput and economic demands on other, equally weak, parts of the system.

Many of these issues have been uncovered in infusion pumps [15], electronic health records [16], laparoscopic surgery [17], surgical robots [18], and a range of other clinical and nonclinical contexts. In essence, we have learned that discussions which focus on replacing the human with technology usually underestimate the extent and value of human contributions to performance and safety, and will likely create a range of new problems. However, if we approach automation design from the point of view of helping the human to achieve their goals, by supporting adaptive human sensemaking and decision making within a complex system, we stand a greater chance of avoiding catastrophes and creating success.

3 Human Factors in Device Design

A resident attending a crash call was the first to arrive at the bedside. Treatment was started, and the resident, working closely with a nurse, decided that IV access was needed. Knowing that the crash cart contained an intraosseous injection device, the resident asked the nurse for it. This technique for rapidly obtaining a route for IV drugs is based on a spring-loaded needle that is fired into the bone from a tube about 2″ wide and 6″ long. To activate the device, it is placed onto the skin and the tube is pressed forward with the thumb or palm of the hand. The tube is symmetrical, with an arrow directing the user towards the needle end of the device. The nurse unwrapped the device and handed it to the resident. However, as the patient was a below-knee amputee, the resident needed to take more care to locate the appropriate site for the injection. He put down the device, found the right location, picked it up again, and fired it. Unfortunately, in the time pressure, uncertainty, and novelty of the situation, he had unknowingly reversed the device, which was now pointing in the wrong direction. The needle went into his hand.

Designs can predispose to errors, or can guide users towards the right methods and modes of operation [19]. The wrong buttons in the wrong place, displays that are unclear, labels that are ambiguous, or devices that allow unsafe configurations can all contribute to an error. In the above example, if the device had been asymmetric or had felt different in the resident’s hand, the error could have been prevented. For example, similar bone injection devices have a pistol grip, where the direction is immediately apparent to the user, who may not have time to look or may be too distracted to do so. Similar to the flap and gear levers on 1940s aircraft, this resident was set up to fail by design. Fortunately, he was not seriously hurt, but he could no longer lead the crash call, which delayed treatment initially, though without an obvious effect on the eventual outcome. In healthcare, which is much more complex than aviation, where incidents are far more numerous, and where reliable, objective accident analysis metrics are lacking, these error-inducing designs frequently go unnoticed.

When we think about technology, we usually think in terms of what it can do (its functionality), rather than what people need to do to make it work (its usability). However, the functionality of a device is only as good as its usability. A good rule of thumb is that the more functionality a device has, the less usable it tends to become, though even a device with limited functionality can be undermined by poor usability. In effect, usability is always important, but its importance increases dramatically as a device becomes more complex. This interplay between functionality and usability also helps us consider acceptability: the likelihood that a device will be adopted and used. The device must also be used appropriately, be reliable, fit into normal working practices, be accessible and understandable, inform decision making, and lead to demonstrably better performance. In 2016 the FDA released new guidance for the consideration of HFE in the design and testing of medical devices [21], which requires the human to be considered, and users to be tested, from early concept stages to final evaluation. However, HFE is rarely considered in local procurement practices, and the FDA guidance cannot account wholly for the complexity of work. The technology acceptance model (TAM) [20, 22] illustrates the relationship between ease of use and perceptions of utility (see Fig. 4.1).

Fig. 4.1 The technology acceptance model [20]
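For readers who want a more formal statement, TAM is commonly estimated as a set of linear structural relations between perceived ease of use (PEOU), perceived usefulness (PU), behavioral intention to use (BI), and actual use. The sketch below is one common regression-style form; the coefficients (β) are fitted from user survey data, the error terms (ε) capture unexplained variance, and the exact paths and external variables (EXT) vary between published TAM studies.

```latex
% One common regression-style sketch of TAM (illustrative; the path structure
% and external variables vary between studies, and the coefficients are
% estimated from user survey data rather than fixed):
\begin{align}
  \mathrm{PU}  &= \beta_{1}\,\mathrm{PEOU} + \beta_{2}\,\mathrm{EXT} + \varepsilon_{1}\\
  \mathrm{BI}  &= \beta_{3}\,\mathrm{PU} + \beta_{4}\,\mathrm{PEOU} + \varepsilon_{2}\\
  \mathrm{Use} &= \beta_{5}\,\mathrm{BI} + \varepsilon_{3}
\end{align}
```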

The key themes in human-centered design are the following:

  • Design for the user population: The device should be designed for a carefully identified group of users (not just “experts” or “opinion leaders”). Those users should be involved at every stage of the design process (including conception), with testing conducted throughout on a representative sample of the anticipated users. For example, roughly 1 in 12 men are color-blind, and older users may not have the manual dexterity of younger users.

  • Designs should be adapted to users, not users to designs. Relying on training, memory, warnings, or instructions as a solution to a design problem is weak, expensive, and error inducing.

  • Affordances: The design should suggest its intended use. For example, a handle on a door invites you to pull, while a push-plate invites you to push.

  • Consistency: The way users interact with devices should, as far as possible, not vary across similar functions. For example, switching between numeric keypads laid out in “telephone” style and “calculator” style predisposes to keying errors.

  • Redundancy: There should be multiple failure avoidance mechanisms built in. For example, to make a clear distinction on an important dimension, the color, look, and feel should all be different.

  • Control and display compatibility: The way you change something on a device should reflect how it changes in the real world. For example, moving a control up or turning it clockwise should increase the displayed value.

  • Functional grouping: Functions, displays, and switches that are regularly used together should be located together. Some anesthetic machines place the machine’s main power switch closer to the suction container than the suction control itself, which predisposes to errors.

  • Understand contexts of use: Where and how the device will be used needs to be considered in the design. The environment, the physical space, and interactions with other devices, people, and tasks all affect usability. For example, if an item is to be used while wearing gloves, tactile cues may be reduced.

  • Procurement: The people who purchase devices for an organization should include the people who will use them. For many high-cost purchases, user trials would be highly beneficial and cost effective.

4 Cognition in Context

Humans make decisions within a broad systems context, and problems with decision making are more common than errors in technical skill [23]. Cognition within work contexts, and how it leads to decisions, has been of extensive interest in HFE and applied psychology research. Traditional clinical decision making tends to focus on which of several options is best, often based on comparative evidence-based studies. In contrast, HFE focuses on the mental processes by which an understanding is reached and a decision is made. It is often focused on process decisions: how we set goals and reach them, or how we navigate a patient through the complex sequence of steps required to deliver appropriate care. In this section we consider three dominant paradigms of relevance: situational awareness, naturalistic decision making, and distributed cognition.

Of the three paradigms, situational awareness (SA) [24, 25] is perhaps the simplest to understand. As with much HFE work, SA research stems from aviation, where situational awareness was considered a deciding factor in air combat success. Subsequent studies arrived at three levels of perceptual and cognitive processing that can be considered in most dynamic, rapidly changing, high-technology tasks. The three levels are the following:

  • Level 1 SA: Noticing (“What?”): This is the basic perceptual level of SA, where important elements in the environment become salient to the observer/operator via the basic senses. They might register a change in blood pressure, a distinctive smell, a vibration or a touch, or the presence or absence of a sound. Without awareness of these stimuli, the next level of SA cannot be reached.

  • Level 2 SA: Understanding (“So what?”): This is the interpretative stage, where the operator applies meaning to the data they became aware of at Level 1. It is one thing to recognize a change in the environment, and another to know what it means for the task at hand. Technical training is often focused on this stage. In air combat, knowing your current speed, combined with the optimal turning speed for your aircraft, helps you understand how close your aircraft is to an optimal turning state. In healthcare, for example, this would be understanding the hemodynamic implications of different arterial pressure measurement sites and values.

  • Level 3 SA: Projecting (“Now what?”): The highest form of SA is being able to predict future states of the system you are working in. Noticing and understanding what is happening, and applying previous expertise to predict what will happen next, enable the human to respond in the most appropriate way to move closer to the desired goal. In the original air combat scenario, thinking ahead allowed the pilot to avoid low-energy states that an enemy could take advantage of, and instead to maneuver into a firing position. In cardiac surgery, understanding the trajectory of a patient’s vital signs, and responding early if the predicted outcome is undesirable, yields safer, more responsive care. Projecting is the most challenging level of SA.

The more expertise you have, the better able you are to rise up through the levels of SA; conversely, the higher your workload, the more distractions there are, or the more unpredictable or complex the situation is, the more cognition will remain at the lower levels. The less able we are to project into the future, the more likely we are to arrive at a point that is undesirable, unsafe, or even more error inducing. This is why experienced pilots may tell you that they always anticipate where their aircraft will be next and never aim to fly reactively; this means that they can plan more effectively and will stay out of serious trouble. When they can no longer do this, they know they are in a risky situation.

A simple example of how the three levels of SA interact can be found in driving. Imagine you are driving along a highway and slower moving traffic is merging from an on ramp. You see a car on the on ramp moving more slowly than you (Noticing/Level 1 SA). You understand that this means there is a risk of collision and that you may need to alter your course (Understanding/Level 2 SA). You recognize that your car and the merging car will arrive at about the same time at the point where the ramp joins the highway (Projecting/Level 3 SA). This means that you need to decide whether to speed up, slow down, or change lanes. You look in your mirrors and check your blind spot, seeing that there are no other cars nearby (Level 1 SA). You realize that this means you can move into the middle lane (Level 2 SA) and that the move can be completed well before your paths cross (Level 3 SA). You therefore decide to move into the middle lane. The more cars there are on the road with differing speeds and positions, the more variable your speed or that of the merging car, or the worse the visibility or the shorter the timescale, the more difficult this decision will be, and thus the more risk will be experienced. The decision is also affected by driver fatigue, experience, distractions, alcohol, automation (which often reduces awareness), and even the driver’s familiarity with the vehicle and the road on which they are travelling.
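To make the Level 3 projection in this example concrete, the minimal sketch below compares the times at which the two vehicles would reach the merge point and flags a projected conflict. It is purely illustrative: the distances, speeds, and threshold are invented, and a real driver makes this judgment perceptually rather than by explicit calculation.

```python
# Illustrative sketch of Level 3 SA (projection) in the merging example.
# All numbers are hypothetical.

def time_to_merge_point(distance_m: float, speed_mps: float) -> float:
    """Seconds until a vehicle reaches the merge point at constant speed."""
    return distance_m / speed_mps

own_eta = time_to_merge_point(distance_m=120, speed_mps=30)     # your car: 4.0 s
merging_eta = time_to_merge_point(distance_m=80, speed_mps=20)  # merging car: 4.0 s

# Projection: if the arrival times nearly coincide, a conflict is likely,
# so decide early whether to speed up, slow down, or change lanes.
SAFETY_GAP_S = 1.5  # arbitrary illustrative threshold
if abs(own_eta - merging_eta) < SAFETY_GAP_S:
    print("Projected conflict at the merge point: adjust speed or change lanes")
else:
    print("Paths will not cross closely: maintain course and keep monitoring")
```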

Thus, the concept of situational awareness helps us understand how information is used to make accurate decisions, and how the clarity of that information, the environment, the training and expertise of the human, and their active involvement in the task over time shape the ability to make safe and appropriate decisions within complex, unpredictable, changing situations [26]. The best decisions are made when key information is presented clearly and understood by someone with enough expertise, who has been involved in the task long enough to predict what is going to happen next and account for it.

In situations where the goals, and the ways to achieve them, are less straightforward, the naturalistic decision-making paradigm [27] can be useful. It helps us understand how human decision making is mediated by technological, organizational, and environmental contexts in situations of greater uncertainty that are less dynamic or fluid. It has been extremely influential in the science of applied cognition, especially in military operations [28], although it has not been widely applied in healthcare. Decisions are not necessarily logical, linear, and evidence based. Instead, they are based on a wider view of multiple patients, expertise, system complexity, behavioral intention, individual beliefs, and current understanding of the system. This research has led to a number of conclusions that often run counter to how clinical decision making is usually considered, such as the following [29]:

  • Experienced decision makers can draw on patterns to handle time pressure and never even compare options.

  • Expertise in decision making does not depend upon learning rules and procedures but on tacit knowledge.

  • Problems are not always solved by a clear description of goals at the outset, since many projects involve wicked problems and ill-defined goals.

  • Humans do not make sense of the world as “information processors” by fusing multiple data streams into eventual understanding—instead, experience and understanding define the important data streams, and most data is ignored.

  • Uncertainty is not necessarily reduced through more information—too much data reduces performance, while uncertainty can stem from an absence of contextual cues that accompany data.

  • Decision making is not necessarily improved by understanding our assumptions, since we may be unaware of our most flawed ones.

Moving towards more complex, team-based tasks, studies of human-system relationships in socio-technical environments have also led us to recognize that cognition and decision making are not purely properties of what occurs in the head of one individual. In fact, cognitive processes are often shared: between individuals working together, through communication and shared culture; across material environments, which aid recall and action through cognitive artifacts such as computer displays or hand-written notes; and across time, as strategies, approaches, protocols, cultures, and artifacts accumulate. This is known as distributed cognition. The classic text by Hutchins (“How a cockpit remembers its speeds”) [30] considers the aircraft cockpit as the cognitive unit, with the people, displays, and procedures all components of how cognition is successfully distributed to achieve an understanding of the world that would be impossible for any one component alone. More recently, this approach has been used in anesthesia and other healthcare settings [31], considering the following:

  • How information flows in tasks and between people.

  • How tools and representations of work (such as protocols or checklists) are structured and how they affect the work.

  • How the physical layout of a room or environment affects the distribution of information.

  • How the social structure—roles, relationships, knowledge, and goals—affects the “cognition” of the whole.

  • How the whole changes over time.

This alternative to the reductionism found in more traditional science and engineering approaches has yet to be well recognized within healthcare, but it would seem extremely apt for understanding the complex, highly distributed tasks found in cardiac surgery. In particular, perfusion management requires the complex coordination of people, equipment, information, and tasks in order to be performed appropriately. No one person has full knowledge of every aspect of this task. Thus, perhaps we should consider “how an operating room manages cardiopulmonary bypass.”

5 Performance-Shaping Factors

In this final section, we explore how environmental factors, often outside the control of the human, can affect human performance. These “performance-shaping factors” include fatigue, noise and vibration, lighting, temperature and humidity, and the physical constraints of the workspace. A huge number of experimental studies have explored the effects of these different stressors on a variety of tasks. They can also be considered in terms of staff safety, since they pose environmental risks to staff as well, and there is growing interest in the role they play in patient outcomes. Though there are many models, the general concept is that these factors reduce cognitive capacity, increasing errors; this creates further opportunities for failure that further reduce human capacity, leading to a spiral of increasing risk. Fatigue, for example, compromises both perceptual abilities (increasing the chance of errors) and decision making (reducing the likelihood of appropriate responses). Noise can mask important communication, and can either reduce or exacerbate fatigue, depending on the type of noise and the individuals experiencing it. Interruptions and distractions divert attention from the primary task, which can reduce hand-eye coordination, fragment the task (increasing the chances of forgetting or omitting steps), and introduce delays while the human switches away from, and then back to, the primary task. High temperature and humidity increase physiological stress, can lead to dehydration and fatigue, and can also create interruptions, for example while the human wipes their brow or clears fogging of a lens or goggles (Fig. 4.2).

Fig. 4.2 A human factors engineering model of threat and error in surgical care [32, 33]

In surgery, there has been considerable interest in exploring how task deviations occur through these performance-shaping factors, and how they contribute to patient outcomes. The seminal study by Carthey and de Leval in congenital heart surgery found that an accumulation of small problems that were not appropriately compensated for contributed to increased length of stay and risk of death in arterial switch operations [34]. Subsequent studies video recorded and analyzed the sequences of events in detail to explore how those minor process deviations occurred and what caused them [35, 36]. This work produced a model in which system threats—from the organization, environment, task, technology, and patient—could generate performance-reducing problems. They could also generate human errors, either technical (clinical skills or expertise) or nontechnical (teamwork, decision making, awareness), which would also create performance-reducing problems [37, 38]. In some situations, these problems could be resolved with no further effects. In others, they could combine, especially with communication failures, absences of staff, equipment failures, or awareness failures, to create more serious situations, setting up a cascade of events leading to a far more risky and potentially adverse situation [35]. At the same time, similar studies conducted in the USA showed similar effects [39]. Later studies [40] have explored these work environments further, expanding our understanding of where the intraoperative risks to our patients might lie. This is summarized in an excellent review published in Circulation [41], which encompasses over 400 papers and covers safety culture, the physical environment, and communication and teamwork.
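As a purely illustrative piece of arithmetic, and not a model from any of the studies cited, the sketch below shows why an accumulation of individually minor, uncompensated deviations matters: even if each deviation escalates only rarely, the chance that at least one escalates grows quickly with the number of deviations, under the simplifying and hypothetical assumption that deviations occur independently with the same probability.

```python
# Illustrative only: probability that at least one of n independent minor
# deviations escalates, if each escalates with probability p.
# The figures are invented and do not come from the surgical studies cited.

def p_at_least_one_escalation(n: int, p: float) -> float:
    return 1 - (1 - p) ** n

for n in (1, 5, 10, 20):
    print(n, round(p_at_least_one_escalation(n, p=0.05), 2))
# 1 -> 0.05, 5 -> 0.23, 10 -> 0.4, 20 -> 0.64 (approximately)
```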

Beyond cardiac surgery, this work has been replicated and expanded in a range of other intraoperative settings, including laparoscopic [42], vascular [43], orthopedic [36, 44, 45], trauma [46–49], robotic [18, 50–52], and neuro- and maxillofacial surgery [53]. The early emphasis on teamwork and checklists is slowly giving way to a richer and more complex understanding of how socio-technical system configurations contribute to success or failure in surgery. While this complexity may take time to elucidate and understand [54], it offers many new ways to think about how improvements in the efficiency, safety, and quality of surgical care might be delivered.

6 Summary

The primary purpose of this chapter has been to describe in some detail several selected theories and concepts from human factors engineering and research that could be applied to surgery. While some examples have been provided, there is a huge range of applications for this type of approach. Many devices in the OR are poorly designed and predispose to error. Little consideration is given to how OR teams make decisions, or to the importance of situational awareness and distributed cognition. Automation is often assumed to perform better than humans, but this is not always the case, and adding technology always increases complexity and creates unexpected effects. Direct observations of processes and performance-shaping factors in cardiac operating rooms have allowed us to begin to explore how the human factors lens can help us understand why we do what we do, why things go right and why things go wrong, and what we might do—aside from trying harder—to achieve more of the former and less of the latter.