
1 Introduction

There is currently an exceptional rate of research and development on automated vehicles, with many prototype systems at all levels of automation being tested on public roads and some automated functions already on the market (such as adaptive cruise control with lane centering, or automated parking). Human-related research topics such as the role of the human during automation (e.g., driver in or out of the loop), transfer of control, and the need to design HMIs that support system transparency (e.g., communicating system limitations and capabilities) have come into focus in the automated vehicle community because of a large body of research pointing out potential problems with automation related to human factors and driver behavior.

At the 2014 TRB workshop on “Road Vehicle Automation”, the Human Factors (HF) breakout group discussed research questions framed around the transfer of control from a higher to a lower level of automation or to full manual control, and around the potential for misuse and abuse of automated vehicle technologies. Related to transfer of control, discussions centered on the need to design HMIs that support driver situation awareness and mental model development, and to promote the use of systems capable of “failing gracefully”. It was proposed that improved feedback on system behavior, either through an HMI or as part of driver training, would address behaviors attributed to unintentional misuse, and that constraining system functionality through forcing functions would address behaviors owing to intentional misuse or abuse.

Recognizing the accelerated pace of research and development of automated vehicles (AVs) and the need for clear direction on issues related to the human, particularly how operators will interact with higher levels of automation, the 2015 Human Factors breakout session set out to identify a set of updated research needs statements. A set of 60 research questions was distilled down to five prioritized ones by over 100 HF professionals in attendance from industry, government, and academia, using a modified Delphi method. These five research questions—listed in Table 1—were then reviewed in detail in small groups. Each group produced a draft research needs statement.

Table 1 Top human factors research questions in the development and deployment of automated vehicles (July 2015)

Interestingly, four out of the five key human factors research questions in Table 1 have to do with how to provide feedback or information to the driver, and one research question is related to monitoring the driver status. The research questions in Table 1 all speak to the primary underlying concern of how to design AVs to provide a better understanding of the automation and the situation, or conversely to prevent confusion and misuse. Implicitly, these research priorities are particularly relevant for Levels 2 and 3 automation [1, 2] as it was assumed in the discussions that development of automation in the near-term would focus on these levels.

1.1 Definitions of Mid-Level Automation—Automation Levels 2–4

Although other taxonomies of levels of automation exist, the current discussion refers to the NHTSA [1] and SAE [2] definitions of levels of automation, as these were the definitions primarily used at the HF workshop.

In Level 2 automation, the human is still required to participate in the dynamic driving task by monitoring the driving environment and by providing fallback performance of the dynamic driving task [2]. According to SAE [2], the dynamic driving task “includes the operational (steering, braking, accelerating, monitoring the vehicle and roadway) and tactical (responding to events, determining when to change lanes, turn, use signals, etc.) aspects of the driving task, but not the strategic (determining destinations and waypoints) aspect of the driving task”. In a similar fashion, NHTSA defines Level 2 as “integration of braking, throttle, and steering control designed to enable hands free/foot off operation” [1].

In Level 3, the human is not required to monitor the driving environment but is expected to respond appropriately to a request to intervene, serving as the fallback for performing the dynamic driving task according to SAE [2]; similarly, in NHTSA’s [1] definition the human is expected to be available for occasional control despite giving up full monitoring and control authority.

In contrast, in a Level 4 system, responsibility for safe operation lies solely with the vehicle: the system performs the dynamic driving task even if a human does not respond appropriately to a request to intervene [1, 2]. In Level 4, the system is not designed to rely on the driver as a fallback, and responsibility for driving lies with the vehicle, not the driver.

Thus, what differentiates these levels is (a) whether the system is designed so that the driver is expected to provide fallback performance of the dynamic driving task (as in Levels 2 and 3, but not Level 4) and (b) whether the driver is expected to monitor the driving environment (as in Level 2, but not Levels 3 and 4).

1.2 System Limitations

Limitations exist on automation system performance, depending on the level of sophistication of the technology. The core of the matter is that the driver needs to detect these limitations and provide fallback driving performance in situations of limited system performance. For example, early technologies may be sensitive to lane marking visibility (e.g., only working when lane markings are visible) and road design (e.g., only working on straight or slightly curved roads), may not detect all collision objects (e.g., pedestrians, animals, or debris), and may be restricted in the amount of force that can be applied in actuation (steering, braking, and acceleration). These limitations are encountered with differing frequency; for example, loss of lane markings may be encountered relatively often, whereas an undetected animal may be encountered only rarely. The question becomes: what is the best way to deal with system limitations?
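To make the notion of system limitations and driver fallback more concrete, the following sketch shows, in highly simplified form, how an automated driving function might continuously check whether its operating conditions are satisfied and hand control back to the driver when they are not. The condition names and thresholds are illustrative assumptions, not values from any specific production system.

```python
from dataclasses import dataclass

@dataclass
class DrivingConditions:
    """Illustrative snapshot of sensed conditions (hypothetical fields)."""
    lane_marking_confidence: float   # 0.0 (not visible) .. 1.0 (clearly visible)
    road_curvature: float            # 1/m; larger means a sharper curve
    required_steering_torque: float  # Nm requested by the lateral controller

# Hypothetical thresholds standing in for a real system's validated limits.
MIN_LANE_CONFIDENCE = 0.6
MAX_CURVATURE = 0.01        # 1/m, i.e., curve radii below 100 m are out of scope
MAX_ACTUATOR_TORQUE = 3.0   # Nm the lateral controller may apply

def within_system_limits(c: DrivingConditions) -> bool:
    """Return True while all monitored operating conditions are satisfied."""
    return (c.lane_marking_confidence >= MIN_LANE_CONFIDENCE
            and c.road_curvature <= MAX_CURVATURE
            and c.required_steering_torque <= MAX_ACTUATOR_TORQUE)

def automation_step(c: DrivingConditions) -> str:
    """Decide whether to keep automating or hand control back to the driver."""
    if within_system_limits(c):
        return "AUTOMATION_ACTIVE"
    # A limitation was encountered: the driver is the fallback.
    return "REQUEST_DRIVER_TAKEOVER"

print(automation_step(DrivingConditions(0.9, 0.002, 1.5)))  # AUTOMATION_ACTIVE
print(automation_step(DrivingConditions(0.3, 0.002, 1.5)))  # REQUEST_DRIVER_TAKEOVER
```

The sketch only flags the boundary; whether the driver actually notices and responds to that flag is precisely the human factors question addressed in the remainder of this chapter.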

1.3 Aim

There is an expectation that the driver’s role is to provide fallback driving performance and monitor the driving environment (Level 2), or to provide fallback driving performance when requested to intervene, without being required to monitor the driving environment (Level 3). But how do we support drivers to most effectively and safely take back active control of the vehicle (in both planned and unplanned transfers)? How do we support drivers in monitoring the driving environment? To answer these questions, it is reasonable to examine the Human Factors research on the challenges of using the driver as a fallback and the challenges drivers encounter when monitoring the driving environment. This chapter aims to identify these Human Factors challenges in more detail and to provide potential solutions for overcoming them.

Further, this chapter is also intended to reflect the sentiment of the AVS2015 human factors practitioners, to provide a more detailed summary and discussion of the main human factors lessons of automation, and to provide the perspective of a human factors professional who is actively involved in the design of automated vehicles, leading Volvo Cars’ safety research on AVs.

The first part of this chapter identifies and provides more detail on key HF lessons of automation from other domains that have deployed automated technology and from existing research in the vehicle domain. This provides a starting point for discussion on the expected benefits and costs of road vehicle automation. This section builds on the human factors research issues identified in AVS2014 and AVS2015. The second part of the chapter discusses potential solutions for the HF challenges.

2 Key Human Factors Challenges

This section identifies and elaborates on key human factors challenges in automation, focusing on Levels 2 and 3.

2.1 Automation Is a Cost-Benefit Trade-off Where Reduced Human Performance Is a Cost

In other domains it has long been known that automation can impose both costs and benefits on human performance [3–5]. Automation offers benefits over manual operation through increased efficiency, accuracy, and improved control for routine tasks. Safety and comfort are improved when we automate to relieve humans of tasks that are difficult or that induce boredom, stress, or fatigue. In aviation, the introduction of cockpit automation has improved safety, reduced flight times, and increased fuel efficiency. In driving, vehicle control automation promises to improve safety, traffic flow, and energy efficiency through eased congestion, greater throughput, and less variability in traffic dynamics.

From a safety perspective, automated technologies, through their advanced sensing, algorithms, and crash avoidance systems, have the potential to significantly reduce crashes and save lives. Automation has certainly played a role in improving aviation safety, where the odds of dying in an airplane crash are 1 in 96,566, compared to 1 in 112 for a motor vehicle crash [6]. A comparable improvement is expected on the road largely because automated technologies are expected to perform better than the human driver, given that 94 % of crashes are attributed to driver-related critical reasons such as recognition errors, decision errors, and performance errors [7].

Further, the efficiency benefit alone has the potential to provide significant time and cost savings to commuters. A recent report on urban mobility—the Texas A&M Transportation Institute’s 2015 Urban Mobility Scorecard—found that U.S. drivers lose nearly 7 billion hours each year to traffic congestion (an average of 42 h per commuter) and waste 3 billion gallons of fuel due to these delays, with congestion costs estimated at $160 billion [8]. These trends are only expected to worsen: by 2020, average delays are projected to reach 47 h per commuter, with total delay climbing to 8.3 billion hours. This potential to improve routine travel goes hand in hand with the accelerated push to deploy automation technologies.

Benefits of introduced automation, however, are often derived from those aspects of system operation that do not necessarily consider the interaction with humans, focusing instead on improved task efficiencies. It is in the interaction with the human that many of the costs of automation arise. Although automation may offload some physical burdens, when systems are imperfectly reliable, operators must monitor the automated system, its performance, and the action it controls, which creates cognitive burdens [9, 10]. Potential benefits can be diminished by the loss of information that results when previously manual tasks are automated and system feedback fundamentally changes. Such feedback changes can lead to operator confusion and reduced awareness of the state and behavior of the automated system [4, 9, 11, 12]. For example, loss of the haptic, auditory, and visual cues present in manual operation can leave operators with difficulty tracking the automation’s status and behavior, and with a failure to understand when and how to intervene to avoid undesirable actions by the automation or to achieve required performance. Automation can also fundamentally change the feedback operators receive by integrating or processing data in a way that requires interpretation on the part of the operator. This trade-off of benefits and costs is particularly prominent for imperfectly reliable automated systems—those that occasionally require operator intervention due to hardware or software failures, or when operators use them outside their designed functional limits. Partial automation—in which only part of an operator’s task is automated—induces the same cost-benefit trade-off.

The same general pattern of cost-benefit lessons appears to transfer from other domains to vehicle control automation. There are clear performance benefits for routine tasks with the use of vehicle control automation, and there are also costs—indications of reduced awareness and reduced capability to recover as vehicle automation increases. Analyses of Adaptive Cruise Control (classified as Level 1 automation) illustrate the cost-benefit relationship. In a recent field operational trial (FOT) of Adaptive Cruise Control (ACC) combined with forward collision warning (FCW) under normal driving conditions, a positive safety effect was observed: fewer harsh braking events, fewer critical time headways (THWs < 0.5 s), and fewer incident events (as defined by video and kinematic triggers) compared to periods of manual control [13]. This net positive effect, attributed to increased safety margins (time headway), was present despite a general increase in secondary task involvement [13] and an increase in eyes-off-path time [14]. Thus, there was a net positive safety benefit. This indicates that, when assessing the overall impact of automation, a holistic approach should be taken. Safety effects should be considered at the level of the joint driver-vehicle control system, whereby positive effects from automation (such as increased time headway and increased lateral protection) can offset potential negative effects such as increased secondary task engagement. This implies that the smallest unit of analysis is the driver and vehicle in a given context.
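The critical-THW measure used in such FOT analyses is straightforward to compute from logged range and speed data. The sketch below shows one way this could be done; the data format and the sample values are illustrative assumptions rather than details of the cited study.

```python
def time_headway(range_m: float, speed_mps: float) -> float:
    """Time headway (s) to a lead vehicle: gap divided by own speed."""
    if speed_mps <= 0:
        return float("inf")
    return range_m / speed_mps

def count_critical_thw(samples, threshold_s=0.5):
    """Count samples whose THW falls below the critical threshold.

    `samples` is an iterable of (range_m, speed_mps) pairs, e.g. logged at 10 Hz.
    """
    return sum(1 for r, v in samples if time_headway(r, v) < threshold_s)

# Illustrative log: (gap to lead vehicle in m, own speed in m/s)
log = [(30.0, 25.0), (10.0, 25.0), (8.0, 30.0), (40.0, 20.0)]
print(count_critical_thw(log))  # 2 samples fall below the 0.5 s threshold
```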

2.2 Different Transfer of Control Concerns for Different Levels of Automation

Transfer of control to the driver refers to those situations in which the driver must resume control of the dynamic driving task (i.e., its operational and tactical aspects, as defined by SAE [2] and quoted in Sect. 1.1). The transfer of control is either system-initiated or driver-initiated. A system-initiated transfer occurs when the driver receives a request to take back control due to limitations of the automated system in managing a particular driving situation or environmental condition. A system-initiated transfer can be planned (e.g., exiting an automation-suitable road section) or unplanned (e.g., a malfunction). The driver may self-initiate a transfer in anticipation of the system’s approach to its functional or design limits (e.g., approaching sharply curved sections of road), due to some level of discomfort or discontent with the dynamics of the system’s response to the driving environment (e.g., inappropriate positioning in relation to merging vehicles), to fulfill tactical goals (e.g., lane changes to avoid slower moving traffic), or to fulfill navigation goals (e.g., exiting a motorway with infrastructure support to reach a destination in an unsupported urban or rural environment [15]).
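Restated compactly, this taxonomy distinguishes system-initiated transfers (planned or unplanned) from driver-initiated transfers. The toy sketch below encodes that distinction; the classification rule and example labels are assumptions made purely for illustration.

```python
from enum import Enum, auto

class TransferType(Enum):
    SYSTEM_PLANNED = auto()    # e.g., exiting an automation-suitable road section
    SYSTEM_UNPLANNED = auto()  # e.g., a malfunction or sudden loss of sensing
    DRIVER_INITIATED = auto()  # e.g., anticipating system limits, discomfort with
                               # system dynamics, tactical goals, or navigation goals

def classify_transfer(initiated_by_driver: bool, forewarning_available: bool) -> TransferType:
    """Toy classifier for the transfer-of-control taxonomy described above."""
    if initiated_by_driver:
        return TransferType.DRIVER_INITIATED
    return TransferType.SYSTEM_PLANNED if forewarning_available else TransferType.SYSTEM_UNPLANNED

print(classify_transfer(False, True))   # TransferType.SYSTEM_PLANNED
print(classify_transfer(False, False))  # TransferType.SYSTEM_UNPLANNED
```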

In the discussion of transfer of control back to the human operator, there is an important distinction between Level 2 and Level 3 systems. For NHTSA Level 2 systems (combined function automation), while activated, the automated driving system performs lateral and longitudinal control functions (ACC + lane keeping assistance), but the driver is required to monitor the driving environment and respond to unexpected objects or events. The driver is expected to be “available for control at all times and on short notice” and “to be ready to control the vehicle safely”, as “the system can relinquish control with no advance warning” [7]. Automated systems at Level 2 are designed and intended to be support systems that help the driver manage vehicle control. Technologies such as ACC in combination with lane centering, while alleviating the driver from physically operating the vehicle, still require intermittent-to-frequent input from the driver; for example, drivers must provide added steering torque in curves when using lane centering systems and are prompted to periodically return their hands to the steering wheel on straight sections of roadway, because lane markings can be lost due to unexpected roadway conditions or poor sensing quality. The primary HF concern at Level 2 is the need to calibrate driver expectation to system capability.

Because Level 2 systems require the driver to resume control on a moment’s notice, a concern for this level of automation is whether drivers will be able to maintain sufficient situation awareness, without continuous active engagement in (i.e., moment-to-moment) vehicle control, to safely and effectively perform an infrequent hazard detection task. Key HF design issues are the provision of sufficient feedback to ensure appropriate reliance on system control, to minimize secondary task involvement, and to prevent mode confusion in which the driver assumes the automation is more capable than it actually is.

Concerns over a driver’s ability to resume control are more pronounced for Level 3 systems. For NHTSA Level 3 systems (limited self-driving), while activated, the automated driving system performs the complete dynamic driving task, including lateral and longitudinal control functions as well as object and event detection and response. While the vehicle is “designed to ensure safe operation during automated driving mode”, the driver is not expected to monitor the driving environment but is expected “to be available for occasional control” and to respond “appropriately”, provided a “sufficiently comfortable transition time”, in the event of a hand-off of the dynamic driving task from the automated driving system to his/her manual control [7]. Given this requirement to re-engage in the vehicle control loop (within a sufficiently comfortable transition time), a key concern for these systems is whether this is compatible with human performance, and whether the human driver should in fact be required to monitor the driving environment to some extent, for example through a requirement to monitor the vehicle’s response to the driving environment. Level 3 also raises HF concerns regarding drivers’ limitations in performing vigilance tasks (i.e., monitoring tasks with infrequent control activity) and in timely resumption of control, given expected increases in drivers’ secondary task engagement during periods of automated control. A key HF design issue at this level is how to design transfer of control requests, both in terms of their timing relative to required manual control periods and in terms of their presentation modality and frequency.

2.3 The Driver May Not Provide Suitable Fallback Performance of the Dynamic Driving Task

Planned transfers of control, such as those in Level 3 automation, should occur on a timescale that provides drivers a “sufficiently comfortable transition time” [1]. Recent research from Gold and colleagues [16] suggests that a minimum of 5–7 s in advance of the required period of manual driving is needed to engage the attention of the driver, presuming the driver is out of the loop (i.e., not actively monitoring the driving environment). Note that such a 5–7 s take-over request time window is not feasible in a Level 2 system. This lead time is a minimum requirement to engage drivers: with a request issued 5 s prior to reaching the system boundary, drivers avoided collisions but enacted suboptimal control maneuvers. In a high-fidelity driving simulator study, drivers were alerted to the need to take back control from a highly automated vehicle (capable of performing lateral and longitudinal control as well as lane changes and overtaking maneuvers up to a maximum speed) either 5 or 7 s in advance of an accident in their lane of travel, which required either evasive steering or a brake response to avoid [11]. They were instructed either to put their hands back on the steering wheel or to press the brake pedal to take back vehicle control. Compared to the 7-s lead time, with the shorter 5-s lead time gazes to mirrors and shoulder checks decreased, acceleration potentials increased, and the brake was overused at the expense of lateral maneuvering. Compared to a baseline group, who drove manually and received no advance alert, drivers in the automated conditions generated acceleration potentials close to three times higher and performed more sudden and intense braking maneuvers. These results call into question whether even seven seconds provides sufficient time for drivers to enact a “safe” take-over. Beyond the take-over request time window itself, planned transfers of control are surrounded by disruptive effects on vehicle and driver control. As shown in Fig. 1, the chronological sequence of a take-over process (the TOR sequence [17]) in a transition from highly automated to manual driving is preceded by an upstream disruption of system control and followed by a downstream disruption of driver control.

Fig. 1 Longer envelope of disrupted control surrounding take-over requests in a Level 3 automation system (adapted from [17])
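To put such lead times in perspective, a back-of-envelope calculation (an illustration only, not drawn from the cited studies) shows the distance a vehicle covers during a 5–7 s take-over request window at motorway speeds, and hence how far ahead of its own operating boundary the system must be able to anticipate trouble.

```python
def alert_distance_m(speed_kph: float, lead_time_s: float) -> float:
    """Distance covered during a take-over-request lead time, ignoring deceleration."""
    return speed_kph / 3.6 * lead_time_s

# Even the 5-7 s minimum lead time corresponds to a substantial anticipation range.
for speed in (90, 120):
    for lead in (5, 7):
        print(f"{speed} km/h, {lead} s lead time -> "
              f"{alert_distance_m(speed, lead):.0f} m ahead of the system boundary")
# 90 km/h:  125 m (5 s) and 175 m (7 s)
# 120 km/h: 167 m (5 s) and 233 m (7 s)
```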

Prior to the system’s request for the driver to resume control in Level 3 automation, events or conditions necessarily push the automation toward its operating boundaries. Following the take-over, drivers show a period of degraded control in which they require 10 s (if the transfer is predictable) to 35 s (if unpredictable) to stabilize their lateral control of the vehicle [15]. Such disruption argues for HF methods or design considerations that help ensure drivers develop appropriate expectations of the vehicle’s capabilities and response behavior, for both planned and unplanned transfers of control. When this longer envelope of disrupted control is considered, the effects of transfers of control may extend to minutes. Humans are challenged to remain vigilant enough to anticipate transfers of control, which would presumably enable them to reduce these disruptive effects. And when alerted to the need to take over control, there are notable impracticalities in expecting drivers to initiate a timely and safe intervention response. Here too it is worth noting that varying levels of driver response capability and task engagement at the time of the request will likely further hamper a seamless take-over response.

2.4 The Better the Automation, the Less Attention Drivers Will Pay to Traffic and the System, and the Less Capable They Will Be to Resume Control

Imperfectly reliable automation and partially automated tasks are similar in that they require operator involvement and consequently require communication and coordination between human and machine. In a recent investigation of the relationship between amount of automation and reduction in human performance (such as complacency, skill degradation, and loss of situation awareness), Onnasch and colleagues [18] found that while an increased amount of automation support results in improved performance for routine tasks, operators have a reduced awareness of the situation or operating environment, and show difficulty troubleshooting and recovering if something goes wrong with the automation or if something unexpected happens. This finding—of reduced awareness and capability to recover as automation increases—is based on a meta-analysis of 18 studies from process control, supervisory control, and aviation, and is robust across domains. Such automation-induced performance consequences are largely attributed to operators’ tendency to reduce their monitoring of highly reliable automation because of its ability to function properly for an extended period of time [19, 20]. In mid- or medium levels of automation, in which the operator may be required to resume manual control, the prevailing take-away lesson is to keep operators ‘in the loop’, either through their involvement to some extent in decision and action selection tasks as well as action implementation [21] or through intuitive, “ecological” displays on the state of the automated processes [22–24].

2.5 The Driver May Be “Out-of-the-Loop”—May Not Monitor the Driving Environment or Be Aware of the Status of Automation

Introducing vehicle automation does not simply relieve drivers of tasks and replace them with a more reliable vehicle control system. Instead, it introduces new tasks that the driver must perform, such as configuring, engaging, and monitoring the automation [3, 9]. Human-automation issues are likely to arise if the human does not understand the automation (in terms of its capabilities, boundaries of operation, current functionality, goals, and level of automation [25]). Spanning Levels 2 and 3 of automated driving is the dilemma of how to keep drivers sufficiently ‘in the loop’ when interacting with automated systems so that they can intervene if necessary, while still providing the full benefits and conveniences promised by these mid-levels of automation. The inherent attentional constraints of vigilance tasks, and the distraction tendencies that arise when moment-to-moment control is not required, actively work against a driver’s ability to seamlessly resume control.

Implicit in a driver’s task of monitoring the driving environment is the ability to scan for intermittent, unpredictable, and infrequent or rare events that may require a control response—essentially, to perform a vigilance task. Arguably, many of the situations that cause crashes are unexpected and rare events, and humans are not known to be particularly effective in this role. In other domains, example vigilance tasks include monitoring for infrequent contacts (radar monitor), examining x-rayed carry-on luggage (airport security inspector), and observing a stream of products to detect and remove defective or flawed items (quality control inspector). Analyses of operators’ performance on these tasks generally conclude that (1) operators frequently show lower vigilance levels (defined as the steady-state level of vigilance performance) than desired or required to adequately detect events or signals, and (2) the vigilance level commonly declines steeply during the first half hour or so of a watch [26]—an effect prominent enough to have a term associated with it, the “vigilance decrement”. More than five decades of research studying humans in vigilance tasks [27–30] point to the need to supplement operators’ ability to remain attentive, whether through training paradigms [31], periodic manual control [32], or display techniques [33, 34].

In terms of distraction tendencies, drivers seem quick to take advantage of the reduced demand afforded when automated systems assume moment-to-moment vehicle control, and they are inclined to direct their attention away from the forward roadway to other locations. Recent driving simulation studies examining how highly automated technologies (ACC, LKA, and ACC + LKA) affect driver attention to the road center have shown that drivers’ attention decreases as the level of automation increases, and that the type of automation support provided (lateral versus longitudinal) results in different levels of driver engagement and performance [35–37]; automating only lateral control produces scan behavior similar to that observed when both lateral and longitudinal control are automated, with significantly less attention paid to the forward road compared to conditions with automated longitudinal control or manual control. Reduced scanning of the forward roadway (likely a consequence of an increased uptake of secondary task activity) results in reduced awareness of the surrounding traffic and roadway conditions. This shift in attention can be detrimental to driving safety, particularly when there is an obligation to resume control of driving, e.g., to change lanes due to an incident on the roadway [38]. As a direct performance consequence, drivers can be slower to respond to a warning of critical events [38, 39] when both lateral and longitudinal tasks are under automated control.
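Findings like these typically rest on eye-glance metrics such as the share of gaze samples falling on the forward road center. The sketch below shows how such a metric could be computed; the gaze-data format, the 8° region radius, and the example values are illustrative assumptions rather than details of the cited studies.

```python
import math

def percent_road_centre(gaze_angles_deg, radius_deg=8.0):
    """Share of gaze samples falling within a circular 'road centre' region.

    `gaze_angles_deg` is an iterable of (horizontal, vertical) gaze angles in
    degrees relative to the forward road centre point, sampled at a fixed rate.
    """
    samples = list(gaze_angles_deg)
    if not samples:
        return 0.0
    on_centre = sum(1 for h, v in samples if math.hypot(h, v) <= radius_deg)
    return 100.0 * on_centre / len(samples)

# Illustrative comparison: manual driving vs. combined lateral + longitudinal automation.
manual_gaze = [(1.0, 0.5), (2.0, -1.0), (6.0, 0.0), (15.0, -5.0)]
automated_gaze = [(20.0, -10.0), (1.0, 0.0), (25.0, -12.0), (18.0, -8.0)]
print(percent_road_centre(manual_gaze))     # 75.0
print(percent_road_centre(automated_gaze))  # 25.0
```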

Arguably, concerns over insufficient driver vigilance and distraction tendencies are constrained to instances in which the automated system, or other on-board vehicle safety systems, fails to alert the driver of the need to resume control—i.e., unplanned transfers. However, even with planned transfers, in which drivers receive forewarning of the need to resume control, there are, as discussed earlier, concerning practical realities regarding the length of take-over time window required to support timely and effective resumption of driver control, as well as often-overlooked periods of degraded control that extend this time window and degrade combined driver-vehicle performance. Situations in which silent failures cause the system to act dangerously may be the most concerning issue with an automation solution that does not require the driver to monitor the driving environment but expects the driver to be available for prompted, planned transfers of control (Level 3 automation). Clearly, silent failures are incompatible with Level 3 system solutions.

If a failure does occur, operators may require a significant period of time to reorient themselves to the current system state and to develop an adequate understanding of that state, potentially reducing the effectiveness of the actions taken or preventing operators from carrying out required actions. For example, drivers relieved of active steering and speed control by active steering (AS) and adaptive cruise control (ACC) systems, respectively, have been shown to fail to intervene effectively in response to failures, largely as a result of missing haptic feedback from the steering wheel and of missing raw data on the automation’s low-level processing in control of speed [12]. In essence, drivers are at risk of out-of-the-loop unfamiliarity when they are removed from the vehicle control loop and unsupported in their monitoring role. Driver out-of-the-loop unfamiliarity (OOTLUF) is defined as a decreased ability to detect system errors and to intervene and perform the task(s) manually in response to failures, compared to those who perform the same task or set of tasks manually [5, 10, 25]. Out-of-the-loop unfamiliarity stems from disrupted feedback that diminishes the ability to form correct expectations, detect anomalies, and control the system manually. It is the qualitative shift from active behavior to passive observation, and the associated loss or change in the type of feedback received on the state of the system, that can lead drivers to develop inaccurate mental models, over- or under-trust the automation relative to its capabilities, display complacent behavior, and adapt their behavior in safety-degrading ways [9, 11, 40, 41]. Indicators of OOTLUF, more broadly, include:

  • Delayed response time (RT) to system failures (either a lack of response—a missed control input—or a mis-calibrated or inaccurate response).

  • Complacent reliance—a driver’s self-satisfaction that may result in a lack of vigilance based on an unjustified assumption of satisfactory system state. Overreliance on a driving automation system is sometimes termed complacency when it results from trusting the system more than is warranted [42].

  • Inaccurate mental models (as measured subjectively by testing knowledge of actions and limits of the system, i.e., its boundary conditions). Note: Operators with substantial previous experience and well-developed mental models detect disturbances more rapidly than operators without this experience [43].

  • Reduced manual control performance.

  • Inaccurate or incomplete expectations of system response and behavior, i.e., inability to anticipate situations that lie beyond the capabilities of the automation.

  • Passive monitoring (glance behavior—e.g., failure to sample safety-critical areas such as crosswalks at intersections, or to make glances to rear-view mirror and/or side mirrors or over-the-shoulder prior to lane changes [16]).

  • Increased uptake of secondary tasks.

  • Unnoticed mode transitions [44].

  • Low situation awareness scores or loss of awareness of the state and processes of the system [25].

  • Lower reported self-confidence scores to make decisions (or control vehicle manually) after system failure.

  • High trust scores indicative of over-trust relative to the system’s actual response capability—also evident in delayed or absent responses to warnings; e.g., for ACC, delayed and less effective braking responses in situations in which the ACC is not able to respond to a braking lead vehicle [12].

The prevailing HF concern is how to reduce or prevent these emotional and behavioral consequences for drivers, which can result when the moment-to-moment control tasks are automated yet drivers are still required to respond appropriately to requests to intervene, as in NHTSA and SAE Levels 2 and 3.
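Several of the indicators listed above are, in principle, observable from vehicle and driver-monitoring data, which suggests one route toward the driver-status monitoring called for in Table 1. The sketch below combines a few such indicators into a simple flag; the data fields, thresholds, and combination rule are illustrative assumptions only, not a validated driver-state algorithm.

```python
from dataclasses import dataclass

@dataclass
class DriverStateWindow:
    """Hypothetical summary of logged driver behavior over a recent time window."""
    percent_road_centre: float   # share of gaze samples on the forward road (%)
    secondary_task_share: float  # share of time engaged in secondary tasks (%)
    last_takeover_rt_s: float    # response time to the most recent intervention request (s)

# Purely illustrative thresholds; a production system would need validated criteria.
MIN_PRC = 40.0
MAX_SECONDARY_SHARE = 50.0
MAX_TAKEOVER_RT = 3.0

def out_of_the_loop_risk(w: DriverStateWindow) -> bool:
    """Flag elevated out-of-the-loop risk when several indicators degrade together."""
    flags = [
        w.percent_road_centre < MIN_PRC,
        w.secondary_task_share > MAX_SECONDARY_SHARE,
        w.last_takeover_rt_s > MAX_TAKEOVER_RT,
    ]
    return sum(flags) >= 2

print(out_of_the_loop_risk(DriverStateWindow(70.0, 20.0, 1.5)))  # False
print(out_of_the_loop_risk(DriverStateWindow(25.0, 65.0, 2.0)))  # True
```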

3 Potential Solutions for the Human Factors Challenges in Mid-Level Automation

Taking a step back and looking at the research results presented in the previous section, it would appear that Human Factors research advises against higher levels of automation, in particular Level 3 automation, which assumes the driver can serve as a fallback despite not monitoring the driving environment. Results indicate that the better the automation becomes, the more the driver falls out of the loop and the less capable he or she is of recovering. On the other hand, imperfect automation must be compared with the alternative of manual driving by a driver who is far from perfect, with 94 % of crashes attributed to driver-related critical reasons such as recognition errors, decision errors, and performance errors [7]. It seems to be a classic dilemma: if we do not automate, we are stuck with the human contribution to crashes; but if we do automate, human performance will get worse as the automation gets better. This dilemma has been recognized by some Human Factors researchers, such as Norman [45], but clearly there has to be a plan to resolve it.

A number of alternatives are presented below as possible approaches to mitigate the human factors challenges with level 2 and level 3 automation as outlined in Sect. 2 above.

3.1 Alternative 1: Work Within Given Constraints—Design the Best We Can, Given the Definitions of Level 2 and 3

One approach to solving or mitigating the HF challenges is to make the best of the situation and work within the current constraints and definitions of Levels 2 and 3 of automation (see the level definitions in Sect. 1 above). This means accepting, for example, that the automation is designed with the driver as a fallback even though the driver is not required to monitor the driving environment.

It does appear that the top five HF research questions selected at AVS2015 match up well as research proposals for making the best of the situation, that is, for improving or mitigating most of the human factors challenges identified in Sect. 2 within the given system constraints, primarily by providing better feedback and attention-orienting assistance. Three of the five questions have to do with improving feedback about the vehicle automation status (i.e., improved feedback in general, providing decision-making confidence, and informing about approaching system operating boundaries), one question addresses how to provide driving environment information to reorient the driver’s attention to the roadway, and one question relates to monitoring the driver’s status. Thus, the top research questions in Table 1 all address the primary concern of how to provide a better and more timely understanding of the automation itself and of the situation. Much research is currently ongoing on the development of best practices and design guidelines for the design of road vehicle automation [46, 47].

Shared control approaches have been advocated [48, 49], in which it is suggested that the human should always remain in control but should be able to experience or initiate smooth shifts between levels of authority [48]. This approach seems well suited to cases in which the driver is expected to monitor the traffic environment and partake in the control of the dynamic driving task with the support of the automation (as in a Level 2 system); however, it seems completely incompatible with Level 3 automation, which allows the driver to withdraw from monitoring the driving environment. That is, requiring the driver to monitor the driving environment effectively changes a Level 3 system into a Level 2 system.

It must be stated that imperfect, partial automation may still be safer than today’s imperfect drivers, despite the many unsolved HF challenges [45, 50]. It could be that Level 3 automation is a necessary step on the path to higher levels of automation, but then again maybe not.

3.2 Alternative 2: Advise Against Level 3—Advocate for Two Levels of Automation (Shared and Delegated Driving)

Considering the fairly conclusive results on human monitoring and response deficiencies outlined above, an alternative solution is to advise against the Level 3 solution, which is designed with the human as a fallback for performing the dynamic driving task – a human who is not required to monitor the driving environment but is expected to respond appropriately to a request to intervene.

Instead, one could conclude from the review that we need to polarize the “levels of automation” into two types of automation—shared driving (supervised Level 1 or 2 automation) and delegated driving (unsupervised Level 4 or 5 automation). In the current context, this means intentionally staying at Level 2 automation until Level 4 automation can be achieved. In shared driving (Level 2), the role of the driver should be clear: the driver is responsible for driving and receives support from the automation in this role. At Level 2, the role of the automation is to support the driver’s actions, much like a manager-employee relationship in which the driver is the manager and the automation is the employee. Thus, we should emphasize the driver’s role as being in command while having support to perform better.

In delegated driving (unsupervised automation, Levels 4 and 5), by contrast, the automated system is not designed to rely on the human as a fallback, because the research makes clear that the driver does not function well in that role. When automation is designed so that it cannot use the driver as a fallback, other precautions must be taken, such as system redundancy and fulfillment of higher levels of ISO 26262 functional safety.

4 Conclusions

This chapter started by outlining a number of key human factors design challenges:

  • that automation is a cost-benefit trade-off where reduced human performance is a cost;

  • that there are different transfer of control concerns for different levels of automation;

  • that the driver may not provide suitable fallback performance of the dynamic driving task;

  • that the better the automation, the less attention drivers will pay to traffic and the system, and the less capable they will be to resume control; and

  • that the driver may be “out-of-the-loop”—may not monitor the driving environment or be aware of the status of automation.

When we take drivers out of the vehicle control loop by automating primary vehicle control tasks, we place them in the role of supervising the automation and its performance—fundamentally a vigilance task, a task known to undercut timely and effective human response. The review of research on drivers’ ability to act in this role and to serve as fallback performers points to inherent concerns with Level 3 systems in particular—those systems in which the driver is taken out of the control loop yet is still expected to serve as the fallback when requested to intervene. When designed well, despite removing drivers from the primary vehicle control tasks, Level 2 systems can offer a performance benefit and improve overall driving safety, because they still engage the driver to be responsible for driving and to monitor the driving environment (e.g., for system limitations such as missing lane markings). To be successful, however, the key will be not just the development of the automated technology (i.e., reliable sensing capability) but also supporting the driver in staying in the loop and informing the driver of the system’s state, limitations, and capabilities through its HMI.

Addressing how to design these systems to achieve these aims is the key to navigating through these mid-levels of automation until Level 4 systems are realizable. In the interim, as we navigate through partial automation, given what we know about the human factor in response to automated control, it is imperative that we avoid the pitfall of expecting humans to compensate for poorly designed or insufficiently capable automation.

Two suggestions for addressing the human factors issues are proposed: (1) to work within the given constraints and design the best we can according to the given definitions of Levels 2 and 3, or (2) to advise against developing Level 3 automation and instead advocate two levels of automation: shared driving, wherein the driver understands his/her role to be responsible for and in control of driving, and delegated driving, in which there is no expectation that the driver will be a fallback for performing the dynamic driving task.