Abstract
Driving automation systems are capable of continuously performing part or all of the dynamic driving task. Driving automation is intended to reduce the probability and severity of traffic crashes by minimizing manual operation. However, inadequate automation systems (including advanced driver assistance systems in Level 2 vehicles and automated driving systems in Level 3 or higher vehicles) and inappropriate human-automation interaction threaten road safety. This study analyzed the causal factors of crashes related to driving automation from the perspective of human factor risks. We summarized the crashes and categorized the probable causes cited in six accident reports from the National Transportation Safety Board. We extracted common causal factors related to human drivers, including inappropriate ways of using driving automation, driver distraction or disengagement, and complacency (overreliance) regarding vehicle automation. Finally, we discussed the relationship between these common causes and previous insights in the driving automation domain, such as the rationality of treating complacency as a causal factor, and proposed potential countermeasures.
1 Introduction
On March 18, 2018, in Arizona, an Uber developmental automated vehicle (AV) struck and fatally injured a pedestrian pushing a bicycle across N. Mill Avenue. According to the accident report from the National Transportation Safety Board (NTSB) [1], the automated driving system (ADS) first detected the pedestrian 5.6 s before the crash, but classified her first as a vehicle, then as an unknown object, and then as a cyclist. By the time the system determined that a crash was imminent and unavoidable, “the situation exceeded the response specifications of the ADS braking system” (p. v) [1]. It was the world’s first AV crash involving the death of a pedestrian. This crash shows that automated systems are not as safe as we might assume; in other words, automation can make mistakes and cause harm on the road.
It is reported that the majority of vehicle crashes (e.g., more than 90% in the USA [2] and China [3, 4]) are caused by human factors. The advent of driving automation may address this issue. The Society of Automotive Engineers (SAE) [5] defines six levels of driving automation: No Driving Automation (Level 0), Driver Assistance (Level 1), Partial Automation (Level 2), Conditional Automation (Level 3), High Automation (Level 4), and Full Automation (Level 5). The driving automation technology in Level 2 vehicles is called an advanced driver assistance system (ADAS), which requires the driver and the system to perform the dynamic driving task together: the driver must monitor the system’s behavior and respond appropriately to guarantee safety. Vehicles equipped with ADAS are commercially available. AVs refer to vehicles at Level 3 or higher equipped with an automated driving system (ADS); they have not yet been adopted on a large scale. In the future, an ADS is expected to operate a vehicle entirely by itself.
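The SAE taxonomy above can be summarized as a small lookup table. This is an illustrative sketch (the level names follow SAE J3016 as cited in the text; the one-line responsibility summaries and the helper function are our own simplification, not the standard's wording):

```python
# SAE J3016 driving automation levels and a simplified view of who performs
# the dynamic driving task (DDT). Summaries are paraphrased for illustration.
SAE_LEVELS = {
    0: ("No Driving Automation", "driver performs the entire DDT"),
    1: ("Driver Assistance", "system assists with steering OR speed; driver does the rest"),
    2: ("Partial Automation", "ADAS steers and controls speed; driver must monitor continuously"),
    3: ("Conditional Automation", "ADS performs the DDT; driver must take over on request"),
    4: ("High Automation", "ADS performs the DDT within its design domain"),
    5: ("Full Automation", "ADS performs the DDT under all conditions"),
}

def requires_driver_monitoring(level: int) -> bool:
    """At Level 2 and below, the human must continuously supervise the system."""
    return level <= 2
```

The `requires_driver_monitoring` boundary between Levels 2 and 3 is exactly the ADAS/ADS distinction the paper draws: below it, crashes can involve a failure of the human monitoring role.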
However, as the 2018 Uber AV crash indicates, perfect automation is impossible to guarantee and difficult to achieve [6]; it is unavoidable that automation will sometimes fail. Because of its limited, and occasionally incorrect, performance, current automation cannot fully ensure driving safety. On the one hand, automation with limited capabilities cannot perform all tasks: it is difficult for designers and engineers to anticipate every possible dangerous situation and build the corresponding functions into the automation in advance. On the other hand, automation is unreliable because of its infrequent but potentially fatal incorrect responses (e.g., misclassifying the pedestrian in the 2018 Uber AV crash). Thus, at present the driver and automation need to share vehicle control, a setup also known as human-machine cooperative driving or human-machine co-driving [7].
Human-machine cooperative driving compensates for the possible problems of driving automation technology, since the driver or operator plays a backup role. However, safety issues arise when the human and the driving automation system operate the vehicle together. First, drivers’ behavior may change relative to traditional driving (i.e., behavioral adaptation) under the influence of driving automation systems; for example, drivers unwittingly increase speed and reduce headway [8]. Such behavioral changes, which are not conducive to traffic safety, make it difficult for automation to achieve its intended benefits. In addition, automation functions effectively only in the conditions for which it was programmed and needs manual intervention in other circumstances [9]. The driver or operator must therefore complete tasks humans are not good at, paying attention to the system’s operating status and the road environment over long stretches; yet humans are by nature poor at remaining vigilant for extended periods [10, 11]. Human-machine cooperative driving brings further problems. For example, using automation may lead drivers to gradually lose their manual control skills because they have fewer opportunities to drive manually [12]. Unjustified use of automation related to overtrust, overreliance, and complacency can also occur [12].
Driving automation systems reduce the probability and severity of traffic crashes by minimizing manual operation [13], yet road safety is still threatened by inadequate automation systems and inappropriate interaction between the system (whether ADAS or ADS) and the driver (or operator). This study analyzed and summarized the causal factors of crashes from the perspective of human error. We discussed the probable causes in light of insights from the current literature, potential gaps between research and practice in human-machine cooperative driving, and the controversial treatment of complacency as a causal factor, and finally offered potential countermeasures.
2 Driving Automation Crashes
We describe the scenes and summarize the probable causes of six crash cases in Table 1. These cases were selected because the NTSB provided relatively complete analysis reports with detailed information, which aids the subsequent analysis. Brief descriptions of the crash events follow.
In Case A [14], on May 7, 2016, near Williston, Florida, a 2015 Tesla Model S (Level 2) was traveling east while a tractor-semitrailer truck traveling west made a left turn. The Tesla struck the right side of the semitrailer, sustaining extensive roof damage. After striking the semitrailer, the Tesla hit and broke a utility pole before coming to a halt. The Tesla driver and a passenger in the vehicle were killed; the driver of the semitrailer was not injured. It is the first reported fatal traffic crash involving Level 2 automation in the world.
In Case B [15], on January 22, 2018, in Culver City, California, a 2014 Tesla Model S (Level 2) was driving on an interstate where a fire truck was parked in the high-occupancy vehicle lane with its emergency lights on. When another vehicle in front of the Tesla changed lanes to the right, the Tesla did not change lanes; instead, it accelerated and struck the back of the fire truck. No one was inside the fire truck at the time of the crash, and no injuries were reported.
In Case C [16], on March 23, 2018, in Mountain View, Santa Clara County, California, a 2017 Tesla Model X (Level 2) struck a previously damaged and non-functional crash attenuator (a protective device that slows a vehicle to reduce the impact of a crash) on the highway, causing the front and rear structures to separate. The Tesla then collided with two other vehicles, a Mazda 3 and an Audi A4. The Tesla driver died.
In Case D [17], on March 1, 2019, in Delray Beach, Palm Beach County, Florida, a truck traveling east attempted to cross the southbound lane, then turned left into the northbound lane. A 2018 Tesla Model 3 vehicle (Level 2) was traveling south at 69 mph and did not slow down or take any other action to avoid the truck before the crash. As a result, the Tesla driver died.
In Case E [1], on March 18, 2018, in Tempe, Arizona, a vehicle equipped with an Uber developmental automated driving system (Level 3) struck a pedestrian pushing a bicycle across N. Mill Avenue in the northbound lanes. The crash resulted in the death of the pedestrian and the Uber test operator was not injured. It was the first pedestrian fatality caused by an AV.
In Case F [18], on November 8, 2017, in downtown Las Vegas, Clark County, Nevada, an automated shuttle (a 2017 Navya Arma autonomous shuttle; Level 5) was operating on a designated test route when a truck ahead of it began reversing. The shuttle detected the truck and decelerated to an almost complete stop, but the truck continued to reverse, causing a minor collision. No one was injured in the crash.
3 Results
According to the NTSB accident reports, inappropriate ways of using vehicle automation, driver distraction or disengagement, and complacency (overreliance) are believed to be probable causes in most cases.
First, in Cases A, B, C, and D, the driver used vehicle automation in ways inconsistent with guidance and warnings from the manufacturer. In these four crashes, the driver’s hands were largely off the steering wheel while Autopilot was active, departing from the system usage specifications. For example, in Case A, the “Autopilot hands on state” parameter remained at “Hands required not detected” for the greater portion of the trip. During the trip, the system issued seven visual warnings, six of which escalated to auditory warnings (i.e., a chime), each followed by a brief detection of manual operation lasting one to three seconds. During the 37 min that the ADAS was in operation, the system detected manual operation for only 25 s. The driver also used the system on State Road 24 (a non-preferred roadway for the use of Autopilot) even though the system was not designed for this type of road. Although the accident reports for Cases C and D did not explicitly state in the “Probable Cause” section that “the driver used vehicle automation in ways inconsistent with guidance and warnings from the manufacturer,” a lack of steering wheel torque (force applied to the steering wheel to rotate it about the steering column) was still detected: “no driver-applied steering wheel torque was detected by Autosteer” (p. 6) [16] in Case C, and “no driver-applied steering wheel torque was detected for 7.7 s before impact, indicating driver disengagement” (p. 14) [17] in Case D. It is conceivable for hands to simply rest on the steering wheel without applying any torque; however, “a lack of steering wheel torque indicates to the vehicle system that the driver’s hands are not on the steering wheel” (p. 6) [16].
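The monitoring logic described in these reports — no applied torque is read as "hands off", and warnings escalate from visual to auditory — can be sketched as a simple classifier. This is a hedged illustration only: the threshold and timing values are invented for the example and are not the manufacturer's actual parameters.

```python
# Sketch of torque-based hands-on detection with escalating warnings, as
# described in the NTSB reports. All numeric values are assumed for
# illustration, not actual Tesla Autopilot parameters.
VISUAL_WARNING_AFTER_S = 30.0    # assumed delay before a visual warning
AUDITORY_WARNING_AFTER_S = 45.0  # assumed delay before the chime

def warning_state(torque_nm: float, hands_off_duration_s: float,
                  torque_threshold_nm: float = 0.5) -> str:
    """Classify the monitoring state from steering torque and hands-off time."""
    if abs(torque_nm) >= torque_threshold_nm:
        return "hands_detected"    # any applied torque resets the escalation
    if hands_off_duration_s >= AUDITORY_WARNING_AFTER_S:
        return "auditory_warning"  # the chime reported in Case A
    if hands_off_duration_s >= VISUAL_WARNING_AFTER_S:
        return "visual_warning"
    return "monitoring"
```

The sketch also makes the reports' caveat concrete: hands resting on the wheel with `torque_nm = 0.0` are indistinguishable from hands off, which is why torque alone is a weak proxy for driver engagement.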
Next, in Cases A, B, and D, the driver was disengaged from the driving task for prolonged periods; in Cases C and E, the driver or operator was reported to be distracted by a cell phone, leading to the crash. Inappropriate operational design (e.g., “Despite the system’s known limitations, Tesla does not restrict where Autopilot can be used” (p. x) [16]; the system did not prevent drivers from using it improperly) permitted driver disengagement, so drivers did not notice approaching danger (e.g., the tractor-semitrailer in Case A, the stationary fire truck in Case B, and the truck turning left in Case D) in time. Another safety issue is distraction. In Case C, the driver “was likely distracted by a gaming application on his cell phone before the crash” (p. x) [16] and did not realize the system had steered the vehicle into a gore area of the highway not intended for vehicle travel. Likewise, in Case E, the human operator was “glancing away from the roadway for extended periods throughout the trip” (p. 1) and, according to phone records, may have been watching a television show [1].
The accident reports stated that irrational use or inattention resulted from complacency or overreliance. The NTSB concluded that drivers’ failure to follow the owner’s manual strongly indicated overreliance on vehicle automation (e.g., [14]). For example, the driver in Case A used the system on roadways that did not satisfy the condition of “highways and limited-access roads with a fully attentive driver” (p. 74 in the Tesla Model S Owner’s Manual [19]; cited in [14]), and driver disengagement violated the requirement to “keep hands on the steering wheel at all times” (p. 74 in the Tesla Model S Owner’s Manual [19]; cited in [14]). As for Case E, the NTSB asserted that the human operator’s prolonged visual distraction was a typical effect of automation complacency and prevented her from spotting the pedestrian in time to avoid the crash.
In addition, other road users’ improper actions contributed to the crashes: the truck drivers failed to yield the right of way (Cases A and D) or misjudged the shuttle’s stopping distance (Case F), and the pedestrian crossed N. Mill Avenue outside a crosswalk (Case E). Vehicle system operational design, manufacturers, environments, and regulators also bore some responsibility for the crashes (refer to Table 1 for details).
4 Discussion
In this study, we explored and summarized the causal factors of road crashes related to driving automation. Through a series of analyses of NTSB accident reports, we found that the common causes center on human error, including inappropriate ways of using driving automation systems, driver distraction or disengagement, and complacency (overreliance) regarding vehicle automation. The crashes examined indicate that human factor risks are present in human-automation interaction on today’s roads, and that improper interaction prevents an effective combination of human and machine strengths.
The current operational design of driving automation systems allows the person to drop out of the loop of the dynamic driving task, which causes problems when the human has to regain control [12]. In our analysis, although the system uses steering wheel torque to detect whether the driver is holding the steering wheel, this measure cannot ensure that the driver has maintained effective attention and is ready to perform the dynamic driving task. Additionally, in several cases (e.g., Case A), the system did not intervene (e.g., by forcibly deactivating the ADAS) even when the driver’s hands repeatedly left the steering wheel. This “human out of the loop” operational design does not help the driver regain control of the vehicle in time, which can lead to a crash (e.g., the driver braking or steering too late to avoid the crash in Case C). Thus, driving automation systems should place more emphasis on “human in the loop” design, enabling drivers to notice automation problems when systems (suddenly) reach the boundaries of their capabilities [20, 21].
Another causal factor is drivers’ misuse of the driving automation system, behavior that is detrimental to driving safety. Misuse refers to the user’s incorrect use of automation [12]. For example, drivers used systems in ways inconsistent with the requirements in the vehicle owner’s manual, including activating the system on road types it was not designed for and keeping their hands off the steering wheel while the system was enabled.
Misuse is frequently discussed in conjunction with complacency or overreliance [12]. In our study, the NTSB concluded that complacency or overreliance, which led drivers or operators to use driving automation systems inappropriately, was a probable cause of the crashes. The concept of complacency was first introduced in aviation crash investigations; the NASA Aviation Safety Reporting System describes it as “self-satisfaction, which may result in non-vigilance based on an unjustified assumption of satisfactory system state” [22]. However, this construct has not received as much attention in theoretical research as other constructs important for human-automation interaction and traffic safety in automated driving. For instance, Heikoop et al. [8] measured how frequently important psychological constructs, or pairs of constructs, were discussed in the automated driving literature in order to develop a psychological model of automated driving. A total of 15 constructs were extracted from 43 articles: mental workload, attention, feedback, stress, situation awareness, task demands, fatigue, trust, mental model, arousal, complacency, vigilance, locus of control, acceptance, and satisfaction. They were ranked by the number of times a link between the construct and another construct in the model was proposed in the extracted articles. Complacency was left out of the model because it fell below the cut-off of 10 counts [8]. This might suggest that the importance of complacency is not fully recognized in current academic research. In practice, however, it has been identified in all of the accident reports examined here that involve a human driver or operator (see Table 1). Thus, there may be a gap between research and practice regarding the role of complacency in the driving automation domain.
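The selection procedure attributed to Heikoop et al. [8] — count proposed links between constructs and keep only constructs reaching a cut-off — amounts to simple frequency counting. A minimal sketch follows; the link data and function names are invented for illustration, and the actual coding procedure in [8] is more involved:

```python
# Sketch of a link-count cut-off for construct selection, in the spirit of
# Heikoop et al. [8]. Example data are fabricated for illustration.
from collections import Counter

def select_constructs(proposed_links, cutoff=10):
    """Keep constructs whose total number of proposed links meets the cutoff."""
    counts = Counter()
    for a, b in proposed_links:  # each proposed link counts for both constructs
        counts[a] += 1
        counts[b] += 1
    return {construct for construct, n in counts.items() if n >= cutoff}
```

Under such a rule, a construct like complacency is excluded not because it was never discussed, but because links to it were proposed too rarely in the sampled literature — which is exactly the research-practice gap the text identifies.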
In addition, there are strong debates in the human factors and ergonomics literature about the concept of complacency (overreliance) and its explanatory power. First, there is no agreed definition of complacency. Some researchers consider it a “mental state” [22] or a “psychological state” [23]; others regard it as insufficient monitoring and verification behavior, or the negative consequences of such behavior [24, 25]. The failure to reach consensus on a definition makes its use in crash analysis potentially confusing [26]. Second, it is hard to collect evidence of complacency and to measure it objectively [27]. For instance, research suggests treating unreasonable automation usage behaviors (or being distracted and missing automation failures) as indicators of complacency, and the NTSB [14] likewise concluded that a driver’s disregard of the owner’s manual is clearly indicative of complacency or overreliance on driving automation. In all five crashes involving a human driver or operator, complacency or overreliance was considered a probable driver-related cause; this is no coincidence. Human drivers who are believed to cause crashes while using automation will ultimately be blamed for their complacency with the automation. However, researchers [28, 29] have argued that the current evidence of so-called “complacency” (e.g., distraction or improper automation usage) might be due to other factors, and that when drivers or operators are accused of being complacent or over-reliant, system design flaws essentially lie behind the behavior [30].
To prevent crashes related to driving automation and to assign responsibility more appropriately, future work could be directed at both theoretical research and practical technology enhancement in the driving automation domain. First, the results of crash analysis should guide theoretical analysis of traffic crash factors; for example, the concept of automation complacency and its effect on traffic safety need further theoretical exploration. The driving automation domain also has its own specificities: drivers are ordinary consumers who may have difficulty interacting with driving automation systems, including understanding the terminology in the owner’s manual and staying focused on the system’s operating conditions for long periods. It is therefore important to develop driver training programs. Merriman et al. [31] stated that crashes related to driving automation technology are associated with driver attitudes, mental models, and trust in automation, and that automation may impair the driver’s ability to recognize and avoid hazards. Training programs can thus be developed from the perspectives of workload, trust, and situation awareness, helping drivers form a reasonable understanding and use of driving automation systems. From a technological perspective, vehicles could be equipped with multimodal driver monitoring systems (e.g., based on steering wheel torque, eye movements, and facial expression information) to improve monitoring accuracy while ensuring privacy and security. Engineers could also upgrade the technology and improve system reliability to prevent drivers from bypassing the usage restrictions of driving automation systems through deceptive behaviors, such as placing heavy objects on the steering wheel to simulate manual operation.
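The multimodal monitoring idea above can be sketched as a weighted fusion of the three signal sources mentioned (torque, gaze, facial cues). This is a hypothetical illustration: the weights, thresholds, and feature names are assumptions for the sketch, and a real system would need calibrated, validated models rather than fixed weights.

```python
# Hedged sketch of multimodal driver-monitoring fusion. All weights and the
# intervention threshold are invented for illustration.
def attention_score(torque_detected: bool, eyes_on_road_ratio: float,
                    drowsiness: float) -> float:
    """Combine three modalities into a 0..1 attention estimate."""
    score = (0.2 * (1.0 if torque_detected else 0.0)  # torque alone is weak evidence
             + 0.5 * eyes_on_road_ratio               # gaze treated as the strongest cue
             + 0.3 * (1.0 - drowsiness))              # facial-expression drowsiness cue
    return max(0.0, min(1.0, score))                  # clamp to [0, 1]

def should_intervene(score: float, threshold: float = 0.4) -> bool:
    """Escalate (warn, then forcibly deactivate) when attention is too low."""
    return score < threshold
```

Unlike torque-only detection, a fused score would not be fooled by a weight resting on the steering wheel, since the gaze and drowsiness terms dominate the estimate.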
Importantly, manufacturers need to take into account the potential for hazardous emergency situations on the road and establish adequate safety risk assessment procedures. Relevant government departments should develop appropriate safeguards to ensure that the use of driving automation systems, and the testing of AVs, strictly conforms to system design requirements, in order to reduce abuse and misuse of the systems.
References
National Transportation Safety Board: Collision between vehicle controlled by developmental automated driving system and pedestrian, Tempe, Arizona, 18 March 2018. Highway Accident Report NTSB/HAR-19/03. Washington, DC: NTSB (2019)
Dingus, T.A., et al.: Driver crash risk factors and prevalence evaluation using naturalistic driving data. Proc. Nat. Acad. Sci. 113(10), 2636–2641 (2016). https://doi.org/10.1073/pnas.1513271113
Huang, H., Chang, F., Schwebel, D.C., Ning, P., Cheng, P., Hu, G.: Improve traffic death statistics in China. Science 362(6415), 650 (2018). https://doi.org/10.1126/science.aav5117
Wang, X., Liu, Q., Guo, F., Fang, S., Xu, X., Chen, X.: Causation analysis of crashes and near crashes using naturalistic driving data. Accid. Anal. Prev. 177(10), 106821 (2022). https://doi.org/10.1016/j.aap.2022.106821
SAE International: Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE International, USA (2021)
Parasuraman, R., Riley, V.: Humans and automation: use, misuse, disuse, abuse. Hum Factors 39(2), 230–253 (1997). https://doi.org/10.1518/001872097778543886
Marcano, M., Díaz, S., Pérez, J., Irigoyen, E.: A review of shared control for automated vehicles: theory and applications. IEEE Trans. Hum. Mach. Syst. 50(6), 475–491 (2020). https://doi.org/10.1109/THMS.2020.3017748
Heikoop, D.D., de Winter, J.C.F., van Arem, B., Stanton, N.A.: Psychological constructs in driving automation: A consensus model and critical comment on construct proliferation. Theor. Issues Ergon. Sci. 17(3), 284–303 (2016). https://doi.org/10.1080/1463922X.2015.1101507
Cummings, M.L., Clare, A., Hart, C.: The role of human-automation consensus in multiple unmanned vehicle scheduling. Hum. Factors 52(1), 17–27 (2010). https://doi.org/10.1177/0018720810368674
Mackworth, N.H.: The breakdown of vigilance during prolonged visual search. Q. J. Exp. Psychol. 1(1), 6–21 (1948). https://doi.org/10.1080/17470214808416738
Molloy, R., Parasuraman, R.: Monitoring an automated system for a single failure: vigilance and task complexity effects. Hum. Factors 38(2), 311–322 (1996). https://doi.org/10.1177/001872089606380211
de Winter, J.C.F., Petermeijer, S.M., Abbink, D.A.: Shared control versus traded control in driving: a debate around automation pitfalls. Ergonomics. in press (2022). https://doi.org/10.1080/00140139.2022.2153175
Wang, J., Zhang, L., Huang, Y., Zhao, J.: Safety of autonomous vehicles. J. Adv. Transp. 2020, 8867757 (2020). https://doi.org/10.1155/2020/8867757
National Transportation Safety Board: Collision between a car operating with automated vehicle control systems and a tractor-semitrailer truck near Williston, Florida, 7 May 2016. Highway Accident Report NTSB/HAR-17/02. Washington, DC: NTSB (2017)
National Transportation Safety Board: Rear-end collision between a car operating with advanced driver assistance systems and a stationary fire truck, Culver City, California, 22 January 2018. Highway Accident Report NTSB/HAB-19/07. Washington, DC: NTSB (2019)
National Transportation Safety Board: Collision between a sport utility vehicle operating with partial driving automation and a crash attenuator, Mountain View, California, 23 March 2018. Highway Accident Report NTSB/HAR-20/01. Washington, DC: NTSB (2020)
National Transportation Safety Board: Collision between car operating with partial driving automation and truck-tractor semitrailer Delray Beach, Florida, 1 March 2019. Highway Accident Report NTSB/HAR-20/01. Washington, DC: NTSB (2020)
National Transportation Safety Board: Low-speed collision between truck-tractor and autonomous shuttle, Las Vegas, Nevada, 8 November 2017. Highway Accident Report NTSB/HAR-19/06. Washington, DC: NTSB (2019)
Tesla Inc.: Tesla Model S Owner’s Manual (2016)
Abbink, D. A., et al.: A topology of shared control systems—finding common ground in diversity. IEEE Trans. Hum. Mach. Syst. 48(5), 509–525 (2018). https://doi.org/10.1109/THMS.2018.2791570
Abbink, D.A., Mulder, M.: Exploring the dimensions of haptic feedback support in manual control. J. Comput. Inf. Sci. Eng. 9(1), 011006 (2009). https://doi.org/10.1115/1.3072902
Billings, C. E., Lauber, J. K., Funkhouser, H., Lyman, G., Huff, E. W.: NASA aviation safety reporting system. NASA-TM-X-3445 (1976). https://ntrs.nasa.gov/citations/19760026757
Weiner, B.: Social motivation, justice, and the moral emotions: an attributional approach. Lawrence Erlbaum Associates, Mahwah, NJ (2006). https://doi.org/10.4324/9781410615749
Mouloua, M., Ferraro, J. C., Kaplan, A. D., Mangos, P., Hancock, P. A.: Human factors issues regarding automation trust in UAS operation, selection, and training. In Mouloua, M., Hancock, P. A. (eds.), Human Performance in Automated and Autonomous Systems: Current Theory and Methods, pp. 169–190. CRC Press, London (2019). https://doi.org/10.1201/9780429458330-9
Parasuraman, R., Molloy, R., Singh, I.L.: Performance consequences of automation-induced “complacency.” Int. J. Aerosp. Psychol. 3, 1–23 (1993). https://doi.org/10.1207/s15327108ijap0301_1
Liu, P.: Automation complacency as causal to traffic crashes: Fact or fallacy? Accident Analysis and Prevention
Drnec, K., Marathe, A.R., Lukos, J.R., Metcalfe, J.S.: From trust in automation to decision neuroscience: Applying cognitive neuroscience methods to understand and improve interaction decisions involved in human automation interaction. Front. Hum. Neurosci. 10, 290 (2016). https://doi.org/10.3389/fnhum.2016.00290
Boos, A., Feldhütter, A., Schwiebacher, J., Bengler, K.: Mode errors and intentional violations in visual monitoring of Level 2 driving automation. In: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), pp. 1–7 (2020). https://doi.org/10.1109/ITSC45102.2020.9294690
Feldhütter, A., Härtwig, N., Kurpiers, C., Hernandez, J.M., Bengler, K.: Effect on mode awareness when changing from conditionally to partially automated driving. In: Bagnara, S., Tartaglia, R., Albolino, S., Alexander, T., Fujita, Y. (eds.) Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018). Advances in Intelligent Systems and Computing, vol. 823, pp. 314–324. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-96074-6_34
Miranda, A.T.: Misconceptions of human factors concepts. Theor. Issues Ergon. Sci. 20(1), 73–83 (2019). https://doi.org/10.1080/1463922X.2018.1497727
Merriman, S.E., Plant, K.L., Revell, K.M.A., Stanton, N.A.: What can we learn from automated vehicle collisions? A deductive thematic analysis of five automated vehicle collisions. Saf. Sci. 141, 105320 (2021)
Acknowledgments
This research was supported by the National Natural Science Foundation of China (Grant No. 72071143 and T2192933).
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Chu, Y., Liu, P. (2023). Human Factor Risks in Driving Automation Crashes. In: Krömker, H. (eds) HCI in Mobility, Transport, and Automotive Systems. HCII 2023. Lecture Notes in Computer Science, vol 14048. Springer, Cham. https://doi.org/10.1007/978-3-031-35678-0_1