
1 Introduction

On March 18, 2018, in Tempe, Arizona, an Uber developmental automated vehicle (AV) struck and fatally injured a pedestrian who was walking a bicycle across N. Mill Avenue. According to the accident report from the National Transportation Safety Board (NTSB) [1], the automated driving system (ADS) first detected the pedestrian 5.6 s before the crash but classified her first as a vehicle, then as an unknown object, and then as a cyclist. By the time the system determined that the crash was inevitable and imminent, “the situation exceeded the response specifications of the ADS braking system” (p. v) [1]. This was the first AV crash in the world to involve the death of a pedestrian. It shows that automated systems are not as safe as we tend to assume; in other words, automation can make mistakes and cause harm on the road.

It is reported that the majority of vehicle crashes (e.g., more than 90% in the USA [2] and China [3, 4]) are caused by human factors. The advent of driving automation may help address this issue. The Society of Automotive Engineers (SAE) [5] classifies driving automation into six levels: No Driving Automation (Level 0), Driver Assistance (Level 1), Partial Automation (Level 2), Conditional Automation (Level 3), High Automation (Level 4), and Full Automation (Level 5). Driving automation technology in Level 2 vehicles is referred to as an advanced driver assistance system (ADAS), which requires the driver and the system to perform the dynamic driving task together: the driver must monitor the system’s behavior and respond appropriately to guarantee safety. Vehicles equipped with ADAS are commercially available. AVs refer to vehicles at Level 3 or above, which are equipped with an automated driving system (ADS); they have not yet been adopted on a large scale. In the future, an ADS is expected to operate the vehicle on its own.
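
As a concrete illustration of this taxonomy, the sketch below encodes the six SAE levels and the supervision requirement described above; the enumeration and helper function are illustrative conveniences, not part of the SAE J3016 standard or any vehicle API.

```python
from enum import IntEnum


class SAELevel(IntEnum):
    """The six SAE driving automation levels summarized above."""
    NO_DRIVING_AUTOMATION = 0   # Level 0
    DRIVER_ASSISTANCE = 1       # Level 1
    PARTIAL_AUTOMATION = 2      # Level 2 (ADAS)
    CONDITIONAL_AUTOMATION = 3  # Level 3 (ADS)
    HIGH_AUTOMATION = 4         # Level 4 (ADS)
    FULL_AUTOMATION = 5         # Level 5 (ADS)


def requires_driver_supervision(level: SAELevel) -> bool:
    """At Level 2 and below, the human driver must monitor the system's
    behavior and the road environment at all times."""
    return level <= SAELevel.PARTIAL_AUTOMATION


if __name__ == "__main__":
    for level in SAELevel:
        print(f"Level {level.value} ({level.name}): "
              f"driver supervision required = {requires_driver_supervision(level)}")
```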

However, as the 2018 Uber AV crash indicates, perfect automation is impossible to guarantee and difficult to achieve [6]; it is unavoidable that automation will sometimes go wrong. Because its performance is limited and at times incorrect, current automation cannot fully ensure driving safety. On one hand, automation with limited capabilities cannot perform all tasks: it is difficult for designers and engineers to anticipate every possible dangerous situation and to build the corresponding functions into the automation in advance. On the other hand, automation is unreliable because of its infrequent but potentially fatal incorrect responses (e.g., misclassifying the pedestrian in the 2018 Uber AV crash). Thus, the driver and the automation currently need to share vehicle control, an arrangement also known as human-machine cooperative driving or human-machine co-driving [7].

Human-machine cooperative driving compensates for the potential shortcomings of driving automation technology, since the driver or operator serves as a backup. However, safety issues arise when the human and the driving automation system operate the vehicle together. First, drivers’ behavior may change relative to traditional driving (i.e., behavioral adaptation) under the influence of the added driving automation system; for example, drivers may unwittingly increase speed and reduce headway [8]. Such behavioral changes work against traffic safety and make it difficult for automation to deliver its intended benefits. In addition, automation functions effectively only in the conditions for which it was programmed and requires manual intervention in other circumstances [9]. The driver or operator is therefore left with tasks humans are not good at, namely monitoring the system’s operating status and the road environment for long periods, even though humans by nature are poor at sustaining vigilance over long durations [10, 11]. Human-machine cooperative driving brings further problems. For example, using automation may lead drivers to gradually lose their manual control skills because they have fewer opportunities to drive manually [12]. Unjustified use of automation, related to overtrust, overreliance, and complacency, can also occur [12].

Driving automation systems reduce the probability and severity of traffic crashes by minimizing manual operation [13], yet road safety is still threatened by inadequate automation systems and inappropriate interaction between the system (including ADAS and ADS) and the driver (or operator). This study analyzed and summarized the causal factors of such crashes from the perspective of human error. We discuss the probable causes in light of the current literature, potential gaps between research and practice in human-machine cooperative driving, the controversial treatment of complacency as a causal factor, and, finally, potential countermeasures.

2 Driving Automation Crashes

Table 1 shows the scene and summarizes the probable cause of each of six crash cases. These cases were selected because the NTSB provided relatively complete investigation reports with detailed information, which supports the subsequent analysis. Brief descriptions of the crash events follow.

In Case A [14], on May 7, 2016, near Williston, Florida, a 2015 Tesla Model S (Level 2) was traveling east while a tractor-semitrailer truck traveling west was making a left turn. The Tesla struck the right side of the semitrailer, sustaining extensive roof damage, and then struck and broke a utility pole before coming to a halt. The Tesla driver was killed in the crash; the driver of the semitrailer was not injured. It was the first reported fatal traffic crash involving Level 2 automation in the world.

In Case B [15], on January 22, 2018, in Culver City, California, a 2014 Tesla Model S (Level 2) was traveling on an interstate where a fire truck was parked in the high-occupancy vehicle lane with its emergency lights on. When another vehicle ahead of the Tesla changed lanes to the right, the Tesla did not change lanes; instead, it accelerated and struck the back of the fire truck. No one was inside the fire truck at the time of the crash, and no injuries were reported.

In Case C [16], on March 23, 2018, in Mountain View, Santa Clara County, California, a 2017 Tesla Model X (Level 2) struck a previously damaged, non-functional crash attenuator (a protective device intended to slow a vehicle and reduce the impact of a crash) on the highway, causing the front and rear structures of the vehicle to separate. The Tesla then collided with two other vehicles, a Mazda 3 and an Audi A4. The Tesla driver died.

In Case D [17], on March 1, 2019, in Delray Beach, Palm Beach County, Florida, a truck traveling east attempted to cross the southbound lanes and then turn left into the northbound lanes. A 2018 Tesla Model 3 (Level 2) traveling south at 69 mph did not slow down or take any other action to avoid the truck before the crash. The Tesla driver died.

In Case E [1], on March 18, 2018, in Tempe, Arizona, a vehicle equipped with an Uber developmental automated driving system (Level 3) struck a pedestrian walking a bicycle across N. Mill Avenue in the northbound lanes. The crash killed the pedestrian; the Uber test operator was not injured. It was the first pedestrian fatality caused by an AV.

In Case F [18], on November 8, 2017, in downtown Las Vegas, Clark County, Nevada, an automated shuttle (2017 Navya Arma autonomous shuttle; Level 5) was operating on a designated test route. When the shuttle detected a truck reversing ahead of it, it began to decelerate and came to an almost complete stop; however, the truck continued to reverse, resulting in a minor crash. No one was injured.

Table 1. Scenes and probable cause of each crash case

3 Results

According to the NTSB accident reports, inappropriate use of vehicle automation, driver distraction or disengagement, and complacency (overreliance) are believed to be the probable causes in most cases.

First, in Cases A, B, C, and D, the driver used vehicle automation in ways inconsistent with the guidance and warnings from the manufacturer. In these four crashes, the driver’s hands were off the steering wheel for most of the time that Autopilot was active, departing from the system’s usage specifications. For example, in Case A, the “Autopilot hands on state” parameter remained at “Hands required not detected” for the greater part of the trip. During the trip, the system issued seven visual warnings, six of which escalated further to auditory warnings (i.e., a chime), each followed by a brief detection of manual operation lasting one to three seconds. During the 37 min that the ADAS was in operation, the system detected manual operation for only 25 s. The driver also used the system on State Road 24 (a non-preferred roadway for the use of Autopilot) even though the system was not designed for this type of road. Although the accident reports for Cases C and D did not explicitly list “the driver used vehicle automation in ways inconsistent with guidance and warnings from the manufacturer” under “Probable Cause,” a lack of steering wheel torque (force applied to the steering wheel to rotate it about the steering column) was still recorded: “No driver-applied steering wheel torque was detected by Autosteer” (p. 6) [16] in Case C, and “no driver-applied steering wheel torque was detected for 7.7 s before impact, indicating driver disengagement” (p. 14) [17] in Case D. It is conceivable that hands were simply resting on the steering wheel without applying any torque; however, “a lack of steering wheel torque indicates to the vehicle system that the driver’s hands are not on the steering wheel” (p. 6) [16].
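
To illustrate the kind of torque-based hands-on-wheel monitoring and warning escalation described in these reports, the following sketch escalates from a visual warning to an auditory chime when no driver-applied torque is detected for a sustained period. The thresholds, sampling rate, and function names are assumptions for illustration only and do not reproduce any manufacturer’s actual logic.

```python
# Illustrative hands-on-wheel warning escalation based on steering wheel torque.
# All thresholds are assumed values, not the manufacturer's actual parameters.
TORQUE_THRESHOLD_NM = 0.5        # assumed minimum torque counted as "hands detected"
VISUAL_WARNING_AFTER_S = 30.0    # assumed hands-off time before a visual warning
AUDITORY_WARNING_AFTER_S = 45.0  # assumed hands-off time before the chime


def monitor_hands_on_wheel(torque_samples, dt=0.1):
    """torque_samples: driver-applied steering wheel torque readings (Nm)
    taken every dt seconds. Yields (time_s, warning_level) on each change."""
    hands_off_time = 0.0
    level = "none"
    for i, torque in enumerate(torque_samples):
        t = i * dt
        if abs(torque) >= TORQUE_THRESHOLD_NM:
            hands_off_time = 0.0             # torque detected: reset the hands-off timer
        else:
            hands_off_time += dt
        if hands_off_time >= AUDITORY_WARNING_AFTER_S:
            new_level = "auditory"           # visual warning escalates to a chime
        elif hands_off_time >= VISUAL_WARNING_AFTER_S:
            new_level = "visual"
        else:
            new_level = "none"
        if new_level != level:
            level = new_level
            yield (round(t, 1), level)


if __name__ == "__main__":
    # 60 s with hands on the wheel, then 120 s with no detectable torque.
    samples = [1.0] * 600 + [0.0] * 1200
    for event in monitor_hands_on_wheel(samples):
        print(event)  # a visual warning around t = 90 s, a chime around t = 105 s
```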

Next, in Cases A, B, and D, the driver was disengaged from the driving task for prolonged periods; in Cases C and E, the driver or operator was reported to be distracted by a cell phone, leading to the crash. Inappropriate operational design permitted driver disengagement (e.g., “Despite the system’s known limitations, Tesla does not restrict where Autopilot can be used.” (p. x) [16]; the system did not prevent drivers from using it improperly), so drivers did not recognize the approaching danger in time (e.g., the tractor-semitrailer truck in Case A, the stationary fire truck in Case B, and the truck attempting to turn left in Case D). Another safety issue is driver distraction. In Case C, the driver “was likely distracted by a gaming application on his cell phone before the crash” (p. x) [16] and therefore did not realize that the system had steered the vehicle into a gore area of the highway not intended for vehicle travel. In Case E, the human operator was “glancing away from the roadway for extended periods throughout the trip” (p. 1) and, according to phone records, may have been watching a television show [1].

The accident reports stated that this irrational use or inattention resulted from complacency or overreliance. The NTSB concluded that the behavior of drivers who did not follow the owner’s manual strongly indicated their overreliance on vehicle automation (e.g., [14]). For example, the driver in Case A used the system on roadways that did not satisfy the condition of “highways and limited-access roads with a fully attentive driver” (p. 74 in the Tesla Model S Owner’s Manual [19]; cited in [14]), and driver disengagement violated the requirement to “keep hands on the steering wheel at all times” (p. 74 in the Tesla Model S Owner’s Manual [19]; cited in [14]). As for Case E, the NTSB asserted that the human operator’s prolonged visual distraction was a typical effect of automation complacency and prevented her from spotting the pedestrian in time to avoid the crash.

In addition, improper actions by other road users contributed to the crashes: the truck drivers failed to yield the right of way to the Tesla (Cases A and D) or misjudged the shuttle’s stopping distance (Case F), and the pedestrian crossed N. Mill Avenue outside a crosswalk (Case E). Vehicle system operational design, manufacturers, the environment, and regulators also bore some responsibility for the crashes (see Table 1 for details).

4 Discussion

In this study, we explored and summarized causal factors of road crashes related to driving automation. Through analysis of NTSB accident reports, we found that the common causes center on human errors, including inappropriate use of driving automation systems, driver distraction or disengagement, and complacency (overreliance) regarding vehicle automation. These crashes indicate that human factor risks are present in human-automation interaction on today’s roads, and that improper interaction results in an ineffective combination of human and machine strengths.

The current operational design of driving automation systems allows the human to drop out of the loop of the dynamic driving task, which causes problems when the human has to regain control [12]. In the cases analyzed, although the system uses the detection of steering wheel torque to judge whether the driver is holding the steering wheel, this does not ensure that the driver has maintained effective attention and is ready to perform the dynamic driving task. Additionally, in several cases (e.g., Case A), the system did not intervene (for example, by forcibly deactivating the ADAS) even when the driver’s hands repeatedly left the steering wheel. This “human out of the loop” operational design does not help the driver regain control of the vehicle in time and can lead to a crash (e.g., in Case C the driver had no time to brake or steer to avoid the collision). Thus, driving automation systems should put more emphasis on “human in the loop” design, enabling drivers to notice automation problems when systems (suddenly) reach the boundaries of their capabilities [20, 21].
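
One possible way to operationalize the forced-deactivation intervention mentioned above, sketched here under assumed thresholds rather than any manufacturer’s actual policy, is a strike counter that locks the ADAS out for the remainder of the trip after repeated hands-off warnings.

```python
class AdasLockout:
    """Illustrative forced-deactivation policy: after a set number of
    hands-off warning strikes, the ADAS refuses to re-engage until the
    next trip. The strike limit is an assumed value."""

    def __init__(self, max_strikes: int = 3):
        self.max_strikes = max_strikes
        self.strikes = 0
        self.locked_out = False

    def record_hands_off_warning(self) -> None:
        """Call whenever a hands-off warning escalates without a driver response."""
        self.strikes += 1
        if self.strikes >= self.max_strikes:
            self.locked_out = True  # force deactivation for the rest of the trip

    def may_engage(self) -> bool:
        return not self.locked_out

    def reset_for_new_trip(self) -> None:
        self.strikes = 0
        self.locked_out = False


if __name__ == "__main__":
    lockout = AdasLockout(max_strikes=3)
    for _ in range(3):
        lockout.record_hands_off_warning()
    print(lockout.may_engage())  # False: the driver must complete the trip manually
```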

Another causal factor is that drivers misuse the driving automation system, and such behavior is detrimental to driving safety. Misuse refers to the user’s incorrect use of automation [12]. For example, drivers used the systems in ways inconsistent with the requirements in the vehicle owner’s manual, including activating them on road types for which they were not designed and keeping their hands off the steering wheel while the system was engaged.

Misuse is frequently discussed in conjunction with complacency or overreliance [12]. In our study, the NTSB concluded that complacency or overreliance, which led drivers or operators to use driving automation systems inappropriately, was a probable cause of the crashes. The concept of complacency was first introduced in aviation crash investigations; it is described by the NASA Aviation Safety Reporting System as “self-satisfaction, which may result in non-vigilance based on an unjustified assumption of satisfactory system state” [22]. However, this construct has received less attention in theoretical research than other constructs considered important for human-automation interaction and traffic safety in automated driving. For instance, Heikoop et al. [8] measured how frequently important psychological constructs, or pairs of constructs, were discussed in research on automated driving in order to develop a psychological model of automated driving. A total of 15 constructs were extracted from 43 articles: mental workload, attention, feedback, stress, situation awareness, task demands, fatigue, trust, mental model, arousal, complacency, vigilance, locus of control, acceptance, and satisfaction. They were ranked by the number of times a link between the construct and another construct in the model was proposed in the extracted articles, and complacency was left out of the model because its count fell below the cut-off point of 10 [8]. This may suggest that the importance of complacency is not fully recognized in current academic research. In practice, however, it has been identified in all of the accident reports examined here that involved a human driver or operator (see Table 1). Thus, there may be a gap between research and practice regarding the role of complacency in the domain of driving automation.
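
The selection procedure described by Heikoop et al. [8] can be paraphrased as a simple count-and-cutoff step, as in the sketch below; the link counts used here are invented placeholders, not the values reported in the original review.

```python
from collections import Counter

# Hypothetical link counts per construct (placeholder numbers, NOT the data from [8]).
link_counts = Counter({
    "mental workload": 23, "attention": 19, "situation awareness": 17,
    "trust": 15, "fatigue": 12, "stress": 11,
    "complacency": 7, "locus of control": 4, "satisfaction": 3,
})

CUT_OFF = 10  # constructs with fewer proposed links than this are left out of the model

included = [c for c, n in link_counts.most_common() if n >= CUT_OFF]
excluded = [c for c, n in link_counts.most_common() if n < CUT_OFF]

print("Included in the model:", included)
print("Excluded (below cut-off):", excluded)  # 'complacency' falls below the cut-off here
```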

In addition, there is strong debate in the human factors and ergonomics literature over the concept of complacency (overreliance) and its explanatory power. First, there is no clear definition of complacency. Some researchers consider it a “mental state” [22] or a “psychological state” [23]; others regard it as insufficient monitoring and verification behavior, or as the negative consequences of such behavior [24, 25]. The failure to reach a consensus on the definition makes the use of complacency in crash analysis potentially confusing [26]. Second, it is hard to collect evidence of complacency and to measure it objectively [27]. For instance, research suggests treating unreasonable automation-use behaviors (or being distracted and missing automation failures) as indicators of complacency. The NTSB [14] likewise concluded that the driver’s disregard of the owner’s manual clearly indicated complacency or overreliance on driving automation. In all five crashes involving a human driver or operator, complacency or overreliance was considered a probable cause attributable to the driver. This is not a coincidence: if human drivers are believed to have caused a crash while using automation, they will ultimately be blamed for complacency toward the automation. However, researchers [28, 29] have argued that the current evidence of so-called “complacency” (e.g., distraction or improper automation-use behaviors) might be due to other factors; when drivers or operators are accused of being complacent or over-reliant, system design flaws often lie behind the accusation [30].

To prevent crashes related to driving automation and to assign responsibility more appropriately, future work could target both theoretical research and practical technology enhancement specific to driving automation. First, the results of crash analysis should inform theoretical analysis of traffic crash factors; for example, the concept of automation complacency and its effect on traffic safety need further theoretical exploration. The driving automation domain also has its own specificities: drivers are ordinary consumers who may have difficulty interacting with driving automation systems, including understanding the terminology in the owner’s manual and staying focused on the system’s operating conditions for long periods. It is therefore important to develop driver training programs. Merriman et al. [31] stated that crashes related to driving automation technology are associated with driver attitudes, mental models, and trust in automation, and that automation may impair the driver’s ability to recognize and avoid hazards. Training programs can thus be developed from the perspectives of workload, trust, and situation awareness, helping drivers form a reasonable understanding and use of driving automation systems.

From the perspective of technology, vehicles could be equipped with a multimodal driver monitoring system (e.g., based on steering wheel torque, eye movements, and facial expression information) to improve monitoring accuracy while ensuring privacy and security; a minimal illustrative sketch of such multimodal fusion follows below. Engineers could also upgrade the technology and improve system reliability to prevent drivers from bypassing the usage restrictions of driving automation systems through deceptive behaviors, such as placing heavy objects on the steering wheel to simulate manual operation. Importantly, manufacturers need to account for potential emergency hazards on the road and establish adequate safety risk assessment procedures. Relevant government departments should develop appropriate safeguards to ensure that the use of driving automation systems, and the testing of AVs, strictly conforms to system design requirements, thereby reducing abuse and misuse of the systems.
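
As mentioned above, a multimodal driver monitoring system could fuse several signals into a single engagement estimate. The sketch below illustrates one simple weighted-fusion approach; the signal names, weights, and threshold are assumptions for illustration and would need calibration and validation in a real system.

```python
from dataclasses import dataclass


@dataclass
class DriverState:
    steering_torque_nm: float  # driver-applied steering wheel torque (Nm)
    eyes_on_road_ratio: float  # fraction of recent time gaze was on the road (0-1)
    face_alertness: float      # camera-based facial alertness score (0-1)


# Assumed fusion weights and threshold; a production system would calibrate these.
WEIGHTS = {"torque": 0.3, "gaze": 0.5, "face": 0.2}
ENGAGEMENT_THRESHOLD = 0.6


def engagement_score(state: DriverState) -> float:
    """Weighted combination of the three monitoring signals, each scaled to 0-1."""
    torque_signal = min(abs(state.steering_torque_nm) / 0.5, 1.0)  # saturate at 0.5 Nm
    return (WEIGHTS["torque"] * torque_signal
            + WEIGHTS["gaze"] * state.eyes_on_road_ratio
            + WEIGHTS["face"] * state.face_alertness)


def driver_disengaged(state: DriverState) -> bool:
    """True if the fused score suggests the driver is not attending to the road."""
    return engagement_score(state) < ENGAGEMENT_THRESHOLD


if __name__ == "__main__":
    distracted = DriverState(steering_torque_nm=0.0,
                             eyes_on_road_ratio=0.2,
                             face_alertness=0.4)
    print(driver_disengaged(distracted))  # True: warn the driver or restrict the system
```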