1 Introduction

The introduction of autonomous vehicles into the road transportation network is most frequently justified in terms of enhanced safety, enhanced productivity from multitasking, and enhanced efficiency from reduced travel times (Gkartzonikas and Gkritza 2019). However, the development of automated functions appears to have been piecemeal. Rather than being guided by the need to support those three values, it appears to have been driven largely by a design philosophy of replacing selected human driving functions with automated functions wherever technologically feasible. Nowhere in the corpus of literature we reviewed for our analysis as reported in this paper could we find an explicit reference to a values-based rationale for the development of automated vehicles. In this paper, we examine the functional structure of autonomous driving systems to assess whether current designs support realization of the three values identified by Gkartzonikas and Gkritza (2019).

1.1 Work domain analysis

We use Work Domain Analysis to map the functional structure of autonomous driving and the immediate driving environment onto an Abstraction–Decomposition Space, where the term functional refers to an activity-independent capability to accomplish a specific outcome (Lintern 2013; Naikar 2013). This functional structure has both intentional and physical properties, where intentional refers to purposeful properties and physical refers to material objects. Work Domain Analysis identifies the purposes to be achieved with the available physical resources, consistent with the dominant values. It maps technical functions afforded by physical resources through domain functions to system values that constrain the manner in which system purpose is realized (Naikar 2013).

An Abstraction–Decomposition Space, as the analytic product of Work Domain Analysis, depicts that mapping. Work Domain Analysis is one stage of Cognitive Work Analysis (Vicente 1999). Work Domain Analysis models the functional structure of the workspace in which cognitive work is undertaken. A poorly designed workspace will interfere with the proper execution of cognitive work. Work Domain Analysis can be used to explore how well a workspace is designed.

Figure 1 shows a minimalist but illustrative Abstraction–Decomposition Space for the driving domain. The vertical dimension is defined by a functional abstraction whereby means-ends relationships between levels of abstraction show how resources or constraints at one level support the resources or constraints available at the next level up. A valid Abstraction-Decomposition Space is internally coherent in the sense that all Physical Resources are connected to System Purpose via means-ends links through the intervening levels. It should be possible to read the sequence from bottom to top as in the following example: a map (Physical Resource) shows route information (Technical Function) that supports route planning (Driving Function), which enhances operational efficiency (Driving Value), which contributes to personal mobility (System Purpose). It should also be possible to reverse this reading from System Purpose at the top level to Physical Resource at the bottom level.
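
To make this structure concrete, the following minimal sketch, which is our own illustration rather than part of the published analysis, represents an Abstraction–Decomposition Space as a set of means-ends links and enumerates the bottom-to-top chains from the map example; the node names are illustrative.

```python
# Minimal sketch of an Abstraction-Decomposition Space as means-ends links.
# Node names and links are illustrative only, following the map example in the text.

# means_ends[lower_node] -> nodes at the next level up that it supports
means_ends = {
    "map":                    ["route information"],
    "route information":      ["route planning", "vehicle control"],
    "route planning":         ["operational efficiency"],
    "vehicle control":        ["safety"],
    "operational efficiency": ["personal mobility"],
    "safety":                 ["personal mobility"],
}

def chains_from(node, links):
    """Enumerate every means-ends chain from a node up to nodes with no outgoing links."""
    ups = links.get(node, [])
    if not ups:
        return [[node]]
    return [[node] + rest for up in ups for rest in chains_from(up, links)]

for chain in chains_from("map", means_ends):
    print(" -> ".join(chain))
# map -> route information -> route planning -> operational efficiency -> personal mobility
# map -> route information -> vehicle control -> safety -> personal mobility
```

Reading each printed chain in reverse reproduces the top-down reading from System Purpose to Physical Resource described above.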

Fig. 1 A preliminary abstraction-decomposition space for manual driving (for illustrative purposes)

The Abstraction-Decomposition Space maps interdependencies explicitly in a manner that is unique among analytic techniques for human systems. For example, referencing Fig. 1, although route information primarily supports route planning, it also supports maneuvering by encouraging the driver to transition to the appropriate lane in anticipation of an exit from a multi-lane carriageway. Figure 1 shows a means-ends link from route information (Technical Function) to vehicle control (Driving Function) to reflect that.

Furthermore, an Abstraction-Decomposition Space offers a comprehensive systems view of the functional structure of a work domain (in this case, the driving domain). Domain values that are not supported adequately by suitable Physical Resources (via appropriate Technical Functions and Driving Functions) cannot be satisfied. Empirical testing can be brought to bear on specific issues (e.g., does adaptive cruise control enhance safety?) but it is never possible to test all possible situational variations and nuances. An Abstraction-Decomposition Space supports a comprehensive assessment of whether the system is designed in a way that, in principle at least, could support the desired human values.

1.2 Work domain modelling of automated systems

In this paper, we report an analysis of in-vehicle automation as described by Society of Automotive Engineers (SAE) Levels (Table 1). Following a strategy pioneered by Li and Burns (2017), we develop a base Abstraction-Decomposition Space for an SAE Level 0 Manual Driving vehicle followed by Abstraction-Decomposition Spaces for an SAE Level 2 Partial Automation vehicle and an SAE Level 4 High Automation vehicle. Our analytic sequence largely tracks the dominant engineering design philosophy of replacing selected human driving functions with automated functions.

Table 1 Description of SAE levels, adapted from SAE (2021)

Our analysis centers on how the functions of automated driving interact with a human driver within urban traffic. While rural travel could benefit enormously from automated driver-assist functions, current automated-driving research and development is largely ignoring the challenges posed by the types of driving conditions (e.g., unmade roads and limited signage) frequently found in rural areas (Peiris et al. 2020). Consequently, we excluded consideration of these conditions from our analytic framework.

1.3 Scenario mapping

In this paper, we mapped accident scenarios onto relevant Abstraction-Decomposition Spaces. Naikar (2013) has argued that scenario mapping offers a strategy for validating a work domain model. By mapping scenarios onto the model, it is possible to establish whether the model is consistent with documented activities or issues in use of the modelled system. Scenario mapping can thereby identify the functions or systems responsible for any problematic behaviours. Following development of the Abstraction-Decomposition Spaces for SAE Level 2 Partial Automation and SAE Level 4 High Automation vehicles, we mapped accidents documented in National Transportation Safety Board reports for both levels of automation onto the relevant Abstraction-Decomposition Spaces to validate our models and to clarify issues that could guide redesign.

2 Method

2.1 Analysis scope and procedure

The analysis was undertaken with reference to a single on-road vehicle driven in an urban, real-world traffic setting. The frame of reference for the analysis included nonprofessional driving such as commuting, shopping, and personal business, but excluded professional activities such as passenger and goods transportation.

We first developed an Abstraction–Decomposition Space for a system in which the driver performs all driving tasks. We then developed Abstraction–Decomposition Spaces for SAE Level 2 Partial Automation and SAE Level 4 High Automation vehicles.

  • For SAE Level 2 Partial Automation, prominent examples are market-available vehicles such as the Tesla Model 3 (Tesla 2023) or Cadillac CT6 (Cadillac 2020), which implement Adaptive Cruise Control with Auto-Lane Following and Auto-Lane Change within limited driving constraints (e.g., speed range, road types, lane curvatures, environmental conditions). With a clear road ahead, Adaptive Cruise Control maintains a speed set by the driver. If the vehicle closes on a slower vehicle ahead, it slows to maintain a set distance behind that vehicle (a simplified sketch of this speed logic follows this list). Auto-Lane Following maintains the vehicle between two lane markings or, if only one lane marking is present, it maintains a set distance from that marking. With Auto-Lane Following active, Auto-Lane Change moves the vehicle into the adjacent lane when the driver activates the turn signal. The driver retains responsibility for ensuring that the lane is clear before activating Auto-Lane Change.

  • SAE Level 4 vehicles automate all aspects of driving under specific roadway and environmental conditions (e.g., road type, geographical range, speed). The system may request intervention, but the driver-occupant need not supervise or respond. If an intervention request remains unanswered, the system will enter a minimal risk condition (e.g., move to the side of the road and park in a safe area). SAE Level 4 vehicles are still under test and are not available for on-road use in urban, real-world traffic settings (Footnote 1).
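
As flagged in the first bullet above, the following is a simplified, hypothetical sketch of Adaptive Cruise Control speed selection; it is not any manufacturer's algorithm, and the time gap and gain are illustrative assumptions.

```python
# Hypothetical Adaptive Cruise Control speed selection (illustrative only).
# With a clear road: hold the driver-set speed. With a slower vehicle ahead:
# regulate speed to keep roughly a set time gap behind it.

def acc_target_speed(set_speed, lead_distance=None, lead_speed=None,
                     time_gap=2.0, gap_gain=0.5):
    """Return a commanded speed (m/s) for a single control step."""
    if lead_distance is None:                       # clear road ahead
        return set_speed
    desired_gap = max(lead_speed, 0.0) * time_gap   # distance needed for the set time gap
    gap_error = lead_distance - desired_gap         # positive if there is more room than needed
    # Track the lead vehicle speed, adjusted toward the desired gap, never above set speed.
    commanded = lead_speed + gap_gain * gap_error / time_gap
    return max(0.0, min(set_speed, commanded))

# Example: set speed 30 m/s, lead vehicle 40 m ahead travelling at 20 m/s.
print(acc_target_speed(30.0, lead_distance=40.0, lead_speed=20.0))  # 20.0
```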

The analysis excluded consideration of vehicle parts, such as engines, tires, and transmission, that do not directly shape on-road maneuvering or the tactics and strategies of vehicle control.

2.2 Analytic strategy

Primary source documents for this analysis were engineering, sales, and driver licensing documents (Appendix – source documents for Work Domain Analysis). As both authors of this paper are experienced urban drivers, we used our own knowledge to fill gaps in the documented information.

Development of the Abstraction–Decomposition Space for a non-automated vehicle was guided by our understanding that driving is predominantly a control task involving both action constraints (laws of motion related to force, inertia, resistance) as described by Jagacinski and Flach (2003) and information requirements (observability, feedback) as described by Flach and Voorhorst (2016) and Mole et al. (2019). This task is overlaid with elements relating to tactics and strategy (Michon 1985; SAE International 2021).

Our development was also guided by contemporary views of situation awareness as it applies to extraction and processing of information for driving (Banks et al. 2018; Shinar 2017). Endsley (1995) has defined situation awareness as the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status into the near future. Based on this definition, to maintain situation awareness and take appropriate action, a driver would have to detect things (e.g., roadway geometry, other vehicles), their relationship to own vehicle and the dynamics of all vehicles (own and other), comprehend the significance of all relationships to own vehicle in light of own goals, and anticipate the future implications of those relationships. Merat et al. (2019) suggest that the situation awareness model can be used to characterize the monitoring involved in driving tasks as described by SAE (2021).
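
As a concrete, hypothetical illustration of how these three levels might play out in a car-following situation, the sketch below treats the detected speeds and gap as perception, the closing rate as comprehension, and a constant-speed time-to-collision estimate as projection; the numbers and the constant-speed assumption are ours.

```python
# Hypothetical illustration of Endsley's three SA levels in a car-following scenario.
# Perception: detected states; comprehension: gap and closing rate;
# projection: constant-speed time-to-collision estimate.

def project_time_to_collision(own_speed, lead_speed, gap):
    """Return seconds until contact if both speeds stay constant, or None if not closing."""
    closing_rate = own_speed - lead_speed   # comprehension: how fast the gap is shrinking
    if closing_rate <= 0:
        return None                         # projection: gap is not shrinking, no conflict
    return gap / closing_rate               # projection: near-future status of the relationship

# Perception (level 1): own vehicle at 30 m/s, lead vehicle at 22 m/s, 48 m ahead.
ttc = project_time_to_collision(own_speed=30.0, lead_speed=22.0, gap=48.0)
print(f"time to collision = {ttc:.1f} s")   # 6.0 s; prompts a tactical response
```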

Vehicle resources, instruction manuals, and point-of-sale briefings provide technical functionalities in support of the driving functions associated with manual control, tactical control, and strategic management. Externalities such as roadway infrastructure, weather, other traffic, and physical objects modify some of the technical functionalities. In a non-automated vehicle, the driver contributes important technical functionality via sensors, actuators, and more generally, via their own physical and cognitive systems. Within automated vehicles, automated sub-systems substitute for resources provided by the driver, depending on the level of automation. An automated vehicle requires an automation management system not found in a manual vehicle.

2.3 Content of the abstraction–decomposition space

Labels for levels of abstraction are as shown in Fig. 1. Type of content for each of the abstraction levels was referenced against descriptions provided by Naikar (2013, p 182). Based on our frame of reference as described above, we specified System Purpose as mobility in an urban environment (specific to use of an automobile).

A review of more than 40 published surveys on autonomous vehicles (Gkartzonikas and Gkritza 2019) identified safety, productivity, and efficiency as the three dominant benefits of driving automation. We used these as the driving values for our analysis. We drew the Driving Functions of operational (manual) control, tactical (maneuvering) control, and strategic (planning) management from Michon (1985; also see Mole et al. 2019 and SAE International 2021) after assessing that these functions were decomposable and that, as a set, they covered the driving domain comprehensively. In addition, these functions parse the driving domain in conceptually distinctive ways; they are associated with different time scales (Michon 1985) in that operational activity is responsible for moment-to-moment control of the vehicle, tactical activity is responsible for decisions that lead to adjustments in operational activity in response to situational opportunities and hazards, and strategic activity is responsible for trip planning and scheduling (SAE International 2021).

Physical Resources and Technical Functions were identified largely from the document analysis, although the relevant documents rarely distinguished resources from technical functions. Because an essential requirement for a valid Abstraction-Decomposition Space is an unbroken chain of means-ends relationships from Physical Resources through Technical Functions to System Purpose, we had to infer content at both of the lower levels. In addition, the documents rarely contained any explicit discussion of anything that could be interpreted as a means-ends relationship. We therefore inferred these relationships at the two lower levels from our knowledge of how technical systems work and at the three higher levels from our knowledge of how socio-technical systems work.

Vehicle automation functionality was drawn from owner’s manuals provided by manufacturers (see Appendix). We developed an Abstraction–Decomposition Space for an SAE Level 2 vehicle by substituting automation elements for some driver sensors and actuators, supplemented with essential automation management resources. We developed an Abstraction–Decomposition Space for an SAE Level 4 vehicle by complementing SAE Level 2 automation elements with further automation elements that satisfied the need for fully autonomous travel. For the SAE Level 4 Abstraction–Decomposition Space, we deleted all means-ends links from the driver and most means-ends links from the controls and displays used for manual driving.

2.4 Representation strategy

An Abstraction-Decomposition Space for an operational system can become cluttered and crowded. Following Lintern (2013), we employed two representational strategies to reduce clutter and crowding to the extent possible. As illustrated in Fig. 1, we made more efficient use of space by nesting decompositions within aggregated functions as an alternative to the standard form of depicting decompositions in columns. In addition, where all decompositions of an aggregate function have means-ends relations with a function at another level, the link is shown from the aggregate function rather than individually from the decompositions (e.g., in Fig. 1, see links from physical resources of externalities and driver to the technical function of vehicle operation).

2.5 Scenario mapping

We mapped issues identified in US National Transportation Safety Board reports (NTSB 2017, 2019c, 2020a, 2020b) onto the Abstraction-Decomposition Space for SAE Level 2 vehicles and issues identified in NTSB (2019a, 2019b) onto the Abstraction-Decomposition Space for SAE Level 4 vehicles.

3 Results

Here we report the results of work domain analyses for manual, partial automation, and high automation driving systems as those terms are defined by SAE International (2021) and described in Table 1 and the Method section of this paper. We also report the results of the scenario mapping exercises for partial automation and high automation driving systems.

3.1 SAE level 0 manual driving

Figure 2 depicts the Abstraction–Decomposition Space for manual driving. The essential technical functions are vehicle operation, detection of in-vehicle constraints, and detection of constraints on vehicle passage. In addition, navigation and planning information is important and the driver will often find a need to signal their intent to other road users. These technical functions are decomposed to a more fine-grained level of analysis. For example, the technical function of detection of in-vehicle constraints is decomposed into detection of vehicle state, vehicle dynamics, power state, and driver state of alertness and competence.

Fig. 2 Abstraction-decomposition space for an SAE level 0 manual driving system

Technical functions are enabled by the physical resources of documents and briefings, vehicle, driver, externalities, and navigation and planning resources. Vehicle resources support vehicle operation. Externalities constrain vehicle passage, while documents and briefings guide vehicle operation. These physical resources are also decomposed to a more fine-grained level of analysis. In some cases, not all subsystems of a physical resource enable all subsystems of a technical function, in which case, means-ends links connect the appropriate subsystems. However, to reduce clutter in the figure, means-ends links are connected to a whole physical or technical resource wherever possible.

In this analysis, where the ultimate goal is to replace the driver with automated systems, it is imperative to identify what the driver does in manual operation. Here we depict the driver as an information processor (including decision and judgment) with actuators and sensors. As shown by means-ends links, those capabilities provide critical support for all technical functions. There can be no vehicle operation and no external communication without driver involvement. In addition, the driver must detect the constraints imposed by vehicle systems and externalities and must process the information provided by navigation and planning resources.

Following Michon (1985), driving functions are classified as operational (manual) control, tactical (maneuvering) control, and strategic (planning) management (also see Mole et al. 2019). These functions are decomposed into constituent sub-functions. For example, operational (manual) control is decomposed into speed management, lane following, and operational maneuvering. Many of the means-ends links from technical functions connect to more than one driving function.

Notably, and as consistent with the arguments of Mole et al. (2019), Fig. 2 shows that effective operational (manual) control demands attunement of vehicle operation to in-vehicle constraints and constraints on vehicle passage. At this level, drivers are deciding between courses of action and developing situation awareness (Endsley 1995; Merat et al. 2019). As depicted in Fig. 2, drivers must be aware of obstructions, other traffic, and own state. They must also be attuned to the control dynamics of the vehicle, which will change with variations in vehicle speed and road surfaces (Mole et al. 2019).

Following Gkartzonikas and Gkritza (2019), Fig. 2 shows the values as safety, productivity (multitasking), and operational efficiency. None of the driving functions of a manual vehicle support productivity (multitasking). In this analysis, we take productive multi-tasking to include high-attentional demand activities such as cell phone conversations, but not low-attentional demand activities related to listening passively to music or news. Safety is supported by operational (manual) control and tactical (maneuvering) control while operational efficiency is supported by strategic (planning) management. These two values reflect the general desires we infer for most urban drivers to arrive at and return from destinations in a timely manner while avoiding accidents. The question marks used to annotate the means-ends links reflect the belief implied in the Gkartzonikas and Gkritza (2019) analysis that there is a need for better support for these values.

3.2 SAE level 2 partial automation

Figure 3 shows the Abstraction–Decomposition Space for an SAE Level 2 Partial Automation vehicle while its driver-assist automation is active. The activation function from manual operation to driver-assist automation is also shown. With the automated systems inactive, the appropriate Abstraction–Decomposition Space conforms to that of an SAE Level 0 vehicle (Fig. 2).

Fig. 3 Abstraction-decomposition space for an SAE level 2 partial automation (adaptive cruise control, auto lane following, auto lane change) driving system

The substantive addition to Physical Resources in comparison to manual driving is an automation module (with sensors, actuators, and a processor) that assumes some driver roles. Automation management is undertaken via the mode selector resource. The Level 2 automation takes over operational control, although the driver must retain a supervisory role to ensure they are ready for tactical or strategic demands. The alerting system reminds any driver who does not maintain active supervision to reengage with the driving task. A mode annunciator shows when the driver-assist resources are active.
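
As a concrete illustration of the alerting concept just described, the following is a hypothetical escalation rule; the thresholds and alert stages are our own assumptions and do not correspond to any specific manufacturer's implementation.

```python
# Hypothetical driver-supervision alert escalation for an SAE Level 2 system.
# A sketch of the alerting concept described in the text, not a manufacturer's logic;
# thresholds are illustrative.

def supervision_alert(seconds_without_engagement):
    """Map time without detected driver engagement to an escalating alert state."""
    if seconds_without_engagement < 10:
        return "none"                    # driver assumed to be supervising
    if seconds_without_engagement < 20:
        return "visual reminder"         # mode annunciator / dashboard prompt
    if seconds_without_engagement < 30:
        return "audible warning"         # insistent chime
    return "disengage to driver"         # automation hands control back or slows the vehicle

for t in (5, 15, 25, 45):
    print(t, "s:", supervision_alert(t))
```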

Those nodes in the Abstraction–Decomposition Space that are managed by automation are shown in dark fill, with those managed by either driver or automation shown in dark-light fill. In-vehicle control resources such as the brake pedal, accelerator pedal, and steering wheel are no longer used continuously or to their full extent. Instead, drivers may use these controls to adjust or deactivate the automation as needed. Notably, the design of automation controls differs between manufacturers. For instance, Tesla (2023) users can deactivate the automation by moving the drive stalk once, while Cadillac (2020) users achieve the same effect by pressing a button on the steering wheel. Also, contemporary interactive design allows users to adjust the automation settings through a touchscreen (Tesla 2023).

The automation registers constraints on vehicle passage to manage operational control while the driver detects those constraints for tactical control. Consequently, the driver no longer has to engage with externalities on a moment-to-moment basis, although a new means-ends link from documents and briefings reflects instructions that specify the driver is to maintain continuous engagement with those externalities. Notably, with the traditional pedals and steering wheel no longer in continuous use under SAE Level 2 automation, the active feedback loop of vehicle dynamics that enables the driver to respond quickly to an ever-changing environment during manual driving is disrupted (Mole et al. 2019). As a result, the driver may not be able to perceive changes in vehicle dynamics, such as those generated by changes in road surface.

Lane Following automation relies crucially on detection of lane and roadway constraints as revealed by well-articulated lane markings. Those markings can be concealed in adverse conditions such as heavy rain or snow cover. Most troubling, they will generally be concealed from the driver in conditions that also conceal them from the automation sensors (Endsley 2017).

Historically, new automobile features have been introduced incrementally without posing any substantial challenge for workability. Drivers have typically become familiar with how things work by observing other drivers or by a brief introduction. Vehicle automation, in contrast, poses a more substantive challenge (Casner and Hutchins 2019). Workability of many automation features is not self-evident from inspection or manipulation of the interface or by observation of other drivers using them (Banks et al. 2018; Endsley 2017). Instruction manuals and sales outlet briefings become more important in the development of operational knowledge and are therefore shown in Fig. 3 with a dark fill.

Support for driving values changes only marginally relative to a fully manual vehicle. SAE Level 2 Partial Automation does not provide any support for productivity (multitasking) or any additional support for operational efficiency. Adaptive Cruise Control, Auto Lane Following, and Auto Lane Change can control speed, prevent lane departure, and reduce blind-spot risk, and may thereby mitigate speeding and reckless driving, which Stewart (2022) has identified as major causes of traffic incidents. Also, as observed by Endsley (2017), SAE Level 2 automation can allow the driver to develop better situation awareness relating to events external to the vehicle. However, the use of SAE Level 2 automation is associated with an elevated risk of drivers engaging in non-driving related tasks, which could potentially compromise safety by impairing driver awareness of critical roadway information and traffic (Dunn et al. 2019). As indicated by the question marks used to annotate the means-ends links, it remains uncertain whether these systems enhance or degrade safety.

3.3 SAE level 4 high automation

Figure 4 shows the Abstraction–Decomposition Space for an SAE Level 4 High Automation vehicle while its automation is active. An SAE Level 4 High Automation system performs all aspects of the driving task with the exception that the driver remains responsible for selecting the destination and any waypoints. The driver is not required to supervise a High Automation system during the trip and may even sleep while the trip is in progress (SAE International 2021). The driver is able to intervene or take full control, and the system may even request intervention in the face of an unexpected situation. However, the driver is not required to respond to an intervention request, in which case the system will adapt in a manner that ensures continued safety.
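
To make the fallback behaviour concrete, the sketch below is a hypothetical decision rule for the sequence just described (intervention request, optional driver response, minimal risk condition); the grace period and the wording of the actions are our illustrative assumptions, not SAE specifications.

```python
# Hypothetical fallback logic for an SAE Level 4 system, sketching the behaviour
# described in the text: the system may request intervention, but if the
# driver-occupant does not respond it enters a minimal risk condition on its own.

def fallback_action(intervention_requested, driver_responded, grace_period_s, elapsed_s):
    """Decide the system response after an intervention request (illustrative only)."""
    if not intervention_requested:
        return "continue automated driving"
    if driver_responded:
        return "hand over control to the driver"
    if elapsed_s < grace_period_s:
        return "continue automated driving while awaiting a response"
    return "minimal risk condition: pull over and park in a safe area"

print(fallback_action(True, False, grace_period_s=10.0, elapsed_s=25.0))
```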

Fig. 4 The abstraction-decomposition space for an SAE level 4 high automation driving system

In developing the Abstraction-Decomposition Space shown in Fig. 4, we imagined a scenario in which the driver enters a destination into the system and then activates it so that the vehicle proceeds without further driver intervention. During the trip, the driver may decide to stop at an unplanned location, thereby establishing a new waypoint. The driver may or may not attend to events external to the vehicle, but if they do so, they may engage in adaptive replanning if a need is detected.

Within specific conditions, SAE Level 4 High Automation supports all Technical Functions required for driving (SAE International 2021). In comparison to Fig. 3, the only nodal addition to the Automation in Fig. 4 is the Navigation System, although the automation has far more functional capability than a Partial Automation system. Except for the external lights and horn, all non-automated vehicle resources are redundant. Externalities are now registered by the automation with no detection responsibility left to the driver. The now-redundant Physical Resources are shown as faded in Fig. 4 and their means-ends links to Technical Functions have been deleted. Notably, the Technical Function of driver state detection (shown as faded) is no longer needed.

Driver sensors and actuators as components of the driver cognitive system play no operational or tactical roles in a High Automation system. The Driver remains responsible for selection of destination and waypoints but plays no active role during the driving event, absent any need to adjust waypoints or destination. For support of in-travel waypoint or destination adjustment, the driver cognitive system retains an active means-ends link to destination and waypoint options, with that means-ends chain continuing to destination and waypoint selection.

High Automation is designed to enhance productivity by relieving the driver of all operational and tactical control responsibilities under specific conditions. In that the driver now does not have to attend to any driving duties during the trip (and might thereby be better designated as the occupant), there is no need for multitasking. The driver-occupant can devote their full attention to any non-driving task or interest. Provided that High Automation operates under the intended conditions, the system seemingly provides adequate support for the other two values of safety and efficiency.

3.4 Scenario mapping; partial automation

We mapped several accidents with Partial Automation vehicles onto the Abstraction-Decomposition Space for SAE Level 2 Partial Automation (Fig. 5). All involved Tesla vehicles with Autopilot engaged (National Transportation Safety Board 2017, 2019c, 2020a, b). With such a small number of accidents, it is not possible to make any strong generalizations, but these accidents do raise some troubling issues that mesh with implications of our Work Domain Analysis.

Fig. 5 Accident scenarios mapped onto the abstraction-decomposition space for SAE level 2 partial automation

National Transportation Safety Board (2017, 2020b) report accidents in which Tesla vehicles struck semitrailer trucks, one turning and the other crossing in front of the Tesla path of travel. On impact, each of the Tesla vehicles was traveling at the cruise speed set by their driver and neither the Tesla automation nor either of the drivers executed any evasive maneuver in advance of the accidents. National Transportation Safety Board (2019c, 2020a) report accidents in which Tesla vehicles had slowed from set cruise speed to follow another vehicle but had then increased speed, even with a collision imminent, when that vehicle had changed lanes so that it was no longer leading the Tesla. Apparently, the lane following logic failed to register an object ahead once the vehicle it had registered moved out of the lane.
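
The failure pattern suggested by the last two reports can be illustrated with a minimal, hypothetical sketch of lane-based target selection; this is our reconstruction of the described behaviour, not Tesla's actual control logic, and the speeds are illustrative.

```python
# Hypothetical sketch of the target-selection failure pattern described in these reports:
# if speed control tracks only the registered in-lane lead vehicle, then losing that
# target (e.g., it changes lanes) causes a return to set speed even though another
# object remains ahead. Illustrative only.

def commanded_speed(set_speed, lead_in_lane):
    """lead_in_lane: dict with 'speed' for the registered lead vehicle, or None if no target."""
    if lead_in_lane is None:
        return set_speed                       # no registered target: resume cruise speed
    return min(set_speed, lead_in_lane["speed"])

# Lead vehicle registered at 15 m/s, then it changes lanes and drops off the target list,
# while a slower or stopped vehicle further ahead was never registered as the new target.
print(commanded_speed(30.0, {"speed": 15.0}))   # 15.0: following the registered lead
print(commanded_speed(30.0, None))              # 30.0: accelerates back toward set speed
```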

In all four of these accidents, the automation sensors had failed to detect the collision potential of objects ahead and the driver had not maintained their attention on the driving environment as required by documents and briefings. Detection of objects ahead had possibly been compromised in two of these accidents (National Transportation Safety Board 2019c, 2020a) by the lane-following logic when an automobile ahead had moved out of the lane occupied by the Tesla vehicle. Furthermore, the inability of human drivers to regain control when automation fails highlights the conflicts between human operators and driving automation (Vanderhaegen 2021). Specifically, the automated vehicle failed to react to the object ahead, while the human driver remained unaware of the impending hazard.

In reference to Fig. 5, Tesla designed Autopilot to relieve the driver of some operational control responsibilities and thereby enhance driver comfort but did not design it to support multitasking. The system requires an attentive driver who remains responsible for safety (Tesla 2023). Possibly, some drivers do not appreciate the need, which might represent an inadequacy with documents and briefings. Alternatively, it might be a consequence of the limited interface between automation and driver, which for Tesla vehicles currently consists of an in-dash mode symbol and an aural signal, limited actuator functionality, and an alarm that will sound if the driver does not maintain active contact with the steering wheel. Some drivers apparently choose to ignore the constraint that they engage with the driving task even under Autopilot control and seek to defeat the alarm function by moving the steering wheel with sufficient frequency while they timeshare with a nondriving activity.

In the reported accidents, the functionality of the automation sensors was limited to the extent that they did not fully compensate for driver inattention. These sensors proved to be unreliable for detection of some complex or unusual shapes. There were two types of sensors (camera and radar), but the detection algorithm required agreement between the two. Independence would possibly result in more reliable and faster detection of critical objects, although at the likely cost of a higher false alarm rate. Although documents pointed to sensor limitations (National Transportation Safety Board 2017, 2019c), it is not clear that buyers of new Level 2 vehicles become fully aware of those limitations.
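
The trade-off between requiring sensor agreement and allowing independent triggering can be made concrete with a minimal sketch; the detection and false alarm probabilities below are illustrative assumptions of ours, not values drawn from the reports, and the sensors are assumed to err independently.

```python
# Minimal sketch of the fusion trade-off: requiring agreement between two sensors (AND)
# versus accepting either detection (OR). Probabilities are illustrative assumptions.

p_detect_camera, p_detect_radar = 0.90, 0.85   # assumed hit rates on a genuine obstacle
p_false_camera,  p_false_radar  = 0.02, 0.03   # assumed false alarm rates on a clear road

# Agreement required (AND): both sensors must report the obstacle.
p_detect_and = p_detect_camera * p_detect_radar                   # 0.765
p_false_and  = p_false_camera  * p_false_radar                    # 0.0006

# Independent triggering (OR): either sensor alone is enough.
p_detect_or = 1 - (1 - p_detect_camera) * (1 - p_detect_radar)    # 0.985
p_false_or  = 1 - (1 - p_false_camera)  * (1 - p_false_radar)     # ~0.049

print(f"AND: detect {p_detect_and:.3f}, false alarm {p_false_and:.4f}")
print(f"OR : detect {p_detect_or:.3f}, false alarm {p_false_or:.4f}")
```

Under these assumed numbers, independent triggering detects more genuine obstacles but raises the false alarm rate, which is the trade-off noted above.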

Given the issues raised by these accidents, the status of the means-ends supports for the value of safety remains questionable in Fig. 5.

3.5 Scenario mapping; high automation

We mapped two accidents related to High Automation test vehicles onto the Abstraction-Decomposition Space for SAE Level 4 High Automation (Fig. 6). Again, it is not possible to make strong generalizations from this small number of accidents, but the implications deserve consideration.

Fig. 6 Accident scenarios mapped onto the abstraction-decomposition space for SAE level 4 high automation

In one accident (National Transportation Safety Board 2019a), the autonomous vehicle struck a pedestrian who was crossing the driving lane. In the other (National Transportation Safety Board 2019b), the autonomous vehicle came to a safe stop behind a stationary truck, but the truck then reversed slowly into the autonomous vehicle. In that these vehicles were being used to test high automation in normal traffic, both had on-board attendants with the responsibility to intervene if the automation was unable to handle a situation safely. Although in both cases, the on-board attendants failed to avoid the accidents, our analysis focuses on the failure of the autonomous system to maintain safety.

In reference to Fig. 6, High Automation should relieve the driver-occupant of all operational and tactical control responsibilities. As with Partial Automation systems, automation sensors proved to be unreliable for detection of complex or unusual shapes. An additional problem, not found with Partial Automation, became apparent in one accident. The operational control functionality of the system did not cope with the unexpected maneuver of another vehicle. This raises a general concern about how well High Automation systems will maintain safety within traffic that contains a mix of manual, partial automation, and high automation vehicles. Given this concern, the status of the means-ends support for the value of safety is again shown as questionable in Fig. 6.

4 Discussion

4.1 Safety

Although safety is possibly the most important of the values identified by Gkartzonikas and Gkritza (2019), there is no clear statistical evidence bearing on the safety of automated versus manual vehicles (Kalra and Paddock 2016) even as serious accidents involving automated vehicles are accumulating (Blumenthal and Fraade-Blanar 2020). Such evidence would have to take account of the fact that automated vehicles are currently operated only under almost ideal conditions (McCarthy 2022) and as yet have not accumulated the driven distances required to provide statistically sound evidence of autonomous vehicle safety relative to manual vehicle safety (Kalra and Paddock 2016).

Conceivably, the sensor systems of automated vehicles could enhance safety by eliminating accidents that result when human drivers fail to perceive seemingly obvious critical events. For example, rear-end accidents are generally viewed as resulting from driver inattention (National Transportation Safety Board 2015). Another common pattern of collision occurs when an automobile turns across the path of an oncoming motorcycle (Caird and Hancock 1994; Pammer et al. 2018; Wulf et al. 1989). Post-accident, the automobile driver often states that they did not see the seemingly visible motorcycle until too late to avoid collision. Although we could find no recorded cases of such accident patterns with Level 2 or Level 4 vehicles, there were cases in which automation sensor packages failed to detect other types of critical objects as collision became imminent (see National Transportation Safety Board 2017, 2019a, c, 2020a, b). At this stage, it is not possible to establish that sensor systems are a safer alternative to human perceptual systems, but it does seem that the critical objects missed by sensor systems differ from those missed by human perceptual systems.

It is tempting to see the solution to the safety problem in a function allocation method that plays to the strengths of automation versus human. That type of substitution-based function allocation method has, however, already revealed its limitations in design of a variety of other types of systems in which automation has been used to replace human functionality (Dekker and Woods 2002). As revealed by the failure to detect oncoming motorcycles (Pammer et al. 2018) and the failure to respond to a reversing truck (National Transportation Safety Board 2019b), this is not just a sensory registration problem. Rather, it is a failure of situation awareness as described by Endsley (1995); a failure to become aware of the object that is impinging on the human or the technological sensor, a failure to appreciate the current state of that object, and a failure to anticipate its near-future state. Consequently, instead of optimizing sensor capabilities by distributing tasks between human and technological agents according to some differential performance standard, we need to develop systems that enhance situation awareness.

4.1.1 Safety: level 2 vehicle

Figure 3 shows that lane following under automated control with Partial Automation is supported by detection of lanes and roadway. There are, however, many situations in which automated lane following becomes unusable, sometimes with little warning (e.g., where road construction temporarily obliterates or diverts lanes). In addition, Endsley (2017) has noted that Partial Automation does not perform satisfactorily where lanes split or merge (also see National Transportation Safety Board 2020a). Sometimes lane following becomes unusable when drivers most need assistance (e.g., lanes concealed by snow or heavy rain). This presents as a problem of clumsy automation (assistance from automation is unavailable when it is most needed) of a type widely acknowledged within aviation (Sarter et al. 1997; Wiener 1989).

An additional concern is that to assume full control after assisted driving, the driver must be attuned to the vehicle dynamics. In manual systems, detection of vehicle dynamics is supported concurrently by driver actuators and driver sensors in dynamic engagement with the vehicle dynamics. Under automated control with Partial Automation, only driver actuators are employed and then only to deactivate Lane Following or Adaptive Cruise Control. There is no continuous detection of vehicle dynamics as there is in manual driving. Deskilling as caused by a long-term absence from active control is a recognized issue in aviation (Casner et al. 2014). However, even a short-term absence can create control problems if vehicle dynamics change (National Transportation Safety Board 1994). In a road vehicle, the same snow cover that would conceal lane markings might also ice the roadway surface, thereby challenging the driver of a partially automated vehicle with a sudden change in vehicle dynamics just as an important source of information for automated control became inaccessible.

Figure 3 reveals other contingent complications in the way vehicle resources are used under automated control with a Partial Automation system, where many resources crucial for manual driving play a diminished or ambiguous role. For example, operational control is largely automated whereas tactical control is not. Technical Functions such as detection of obstructions and events, lanes and roadway, and traffic can be supported exclusively by automation until a requirement for tactical control emerges. At that point, driver sensors plus a number of in-vehicle resources become important. Of concern is that the driver can ignore such resources on a moment-to-moment basis but must be aware of them when there is a need for tactical maneuvering. In executing a lane change, for example, a driver who does not anticipate the need may find it challenging to detect traffic proximity at the moment when that detection becomes critical.

The contingent pattern of work sharing in a Partial Automation system also sometimes involves a more crucial emphasis on resources that play a minor role in manual driving. Documents and briefings offer an example. Those provided for SAE Level 2 systems have generally been modeled on the documents and briefings that have been provided over decades for manual vehicles. Casner and Hutchins (2019) have noted their inadequacies, while Endsley (2017) has remarked on the casual familiarization offered to her on delivery of a new SAE Level 2 automobile. Whether or not driving manuals in general conform to sound usability principles is questionable, but Casner and Hutchins (2019) have argued that automated vehicles need something better. They suggest that standards developed over many years in aviation offer an important guide for redesigning automobile manuals for an automation age (Casner and Hutchins 2019) while Vanderhaegen (2021) inverts this problem by suggesting that autonomous driving systems could be designed to develop sensitivity to habits of human drivers.

Interface issues create other types of challenges. Endsley (2017) observed that she, as the driver of a Tesla with Autopilot, was not always clear about which automated mode was active or how an active mode would behave. The in-dash activation information generally provided in Level 2 automobiles (see Fig. 3, the means-ends link between driver physical resources and the technical function of detection of automation state) is apparently not fully adequate. In one driving incident, Endsley (2017) had thought the Adaptive Cruise Control had disengaged when she had taken over steering control. She was subsequently surprised when she entered a curve at an uncomfortably high speed and had to brake to disengage the Adaptive Cruise Control. Although disengagement of Cruise Control can generally be accomplished with relatively light pressure, a startled braking reaction at uncomfortably high speed in a tight curve could conceivably exacerbate an already precarious situation.

Furthermore, the strategies implemented by Tesla for control of Autosteer can generate uncertainty. Lane Following disengages if a driver moves the steering wheel. That action could be inadvertent or could be seamless with other activity as in the case, for example, where a driver takes over momentarily to avoid a small object on a highway surface. A chime sounds and the Autosteer icon changes from blue to gray when Autosteer disengages (Tesla 2023), but the change is not one that a driver will always notice.

The Cadillac Super Cruise system, in contrast, does not disengage Lane Following if the driver takes over momentarily. It remains unclear whether or not the Cadillac Super Cruise strategy is preferable, but the fact that two leading manufacturers have different strategies raises the potential for negative transfer where a driver may establish a habit with one of these vehicles but then, on transition to the other, find that the established habit continues to interfere. The potential for negative transfer, widely appreciated within the manual control literature (Lintern and Gopher 1978), constitutes one dimension of behavioral adaptation as discussed by Smiley and Rudin-Brown (2020), by Blanco et al. (2015), and by Skottke et al. (2014). In fact, there are likely to be many negative transfer traps confronting drivers who switch between different types of automated vehicles or switch from automated to manual vehicles.

More generally, the human interfaces for Level 2 automated driving do not appear to have been guided by an appeal to well-known human-interface design principles or subjected to rigorous testing to ensure that they properly serve the needs for operational and tactical control. Most critically, there is an obvious need to develop better alarm and monitoring systems for guiding the attention of the human driver (Vanderhaegen 2021).

4.1.2 Safety: level 4 vehicle

One documented accident with a high automation vehicle resulted from a lack of capability for the automation to respond tactically to an unanticipated maneuver by the driver of another vehicle (National Transportation Safety Board 2019b). In any traffic mix foreseeable in the near term, high automation vehicles will need to respond adaptively to a diverse array of spontaneous and unusual maneuvers by drivers of manual and partially automated vehicles.

4.2 Productivity and multi-tasking

The Gkartzonikas and Gkritza (2019) review identified enhanced productivity, often with reference to multi-tasking, as the second most frequently cited benefit of automated vehicles. It is questionable, however, whether multi-tasking, in the sense that we attend to two or more demanding activities at the same time, is a real phenomenon (Hadlington 2017). Rather, what is generally taken to be multi-tasking is most likely a switching whereby activities are interleaved by alternating attentional focus between them. This results in frequent suspension and resumption of activities, with momentary focus on one at the expense of the other. There are cognitive costs associated with resuming a suspended task, which can be cumulative (Chen et al. 2024). If any one of these tasks is safety-critical, so-called multitasking could increase the risk of accident (Hadlington 2017). Some think they are immune to the danger, but research suggests that those who think they are good at multitasking are less able to do it than those who choose to avoid it (Hadlington 2017).

Our analyses reveal that productivity is enhanced in a high automation system without the need for multi-tasking (see Fig. 4). Our analyses also reveal that multi-tasking is not supported in a partial automation system, although some drivers appear to believe it is (see Fig. 5). There is anecdotal evidence from accident reports that safety is compromised when drivers with more discretion in attentional focus under Partial Automation divert their attention from critical elements of the driving task (National Transportation Safety Board 2017, 2019c, 2020a, b). Furthermore, some drivers do not appear to appreciate the risks associated with diverting attention from driving under partial automation or the risks associated with actively defeating the alerting system (National Transportation Safety Board 2017, 2019c, 2020a, b). Blanco et al. (2015) designate this as a Primary Task Reversal effect that emerges when the appearance of the system suggests that the automation can cope fully with the driving task so that the driver can now give primary task status to non-driving tasks. Blanco et al. (2015) do not offer any solution to this problem but most generally it seems that the current messaging regarding capabilities of partially automated vehicles must be revised.

4.3 Efficiency and navigation

The Gkartzonikas and Gkritza (2019) review identified enhanced efficiency from reduced travel times as a primary benefit of automated vehicles. The reason that automated vehicles might reduce travel times was not specified, and it is not clear that they do, but for this discussion, we assume that better navigation to unfamiliar destinations would be a major benefit. Our own driving experience suggests that current driving navigation systems are not error-free, but for this paper, we envisioned a near future in which current issues related to inefficient routing and updating of temporary and permanent changes to traffic flows and traffic infrastructure are largely resolved.

We should recognize, however, that automated planning can limit development of skills and knowledge for dynamic replanning (Cook et al. 1996). Even high automation vehicles do not react adaptively or creatively to unanticipated, occasional events like traffic jams. In such cases, the human driver needs to engage in dynamic replanning, which requires an updated appreciation of potential options and conflicts. Cook et al. (1996) have argued that automated planning can limit the development of the essential situation awareness for replanning. Conceivably, the driver-occupant who relies exclusively on automated route planning may never develop a competent appreciation of the layout and routing options even within a frequently traversed neighborhood. It remains uncertain whether general degradation in this currently common competence is a matter of concern.

4.4 Model validation

Model validation refers to the process of confirming that the model satisfies the goals set for it. A model can be assessed in terms of construct validity (does the model faithfully represent the essential modeling formalisms?) and content validity (does the model capture the essential properties of the system being modelled?).

For a representational model such as the Abstraction-Decomposition Space, construct validity refers to good form and internal consistency or, as noted in our Introduction, internal coherence. Node entries and the labels attached to them need to be appropriate to the definition of the level of description. As noted in the method section, we extracted content for our Work Domain Analysis from domain reviews, domain standards, and other source documents (see appendix), and then assigned the content against the descriptions for the five levels as provided by Naikar (2013). In addition, the chains of means-ends links must be continuous from the lowest to the highest level of abstraction. All nodes, except for those at the highest level, must be linked to (must support) something in the adjacent level above. All nodes, except for those in the lowest level, must be linked to (must be supported by) something in the adjacent level below.
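
These coherence requirements can be checked mechanically. The following minimal sketch, with placeholder node names of our own rather than the content of our actual models, flags any node that fails to support something above it or to be supported by something below it.

```python
# Minimal sketch of a construct-validity check for an Abstraction-Decomposition Space:
# every node except those at the top level must support something above it, and every
# node except those at the bottom level must be supported by something below it.
# Node names are placeholders, not the content of the models reported here.

levels = {                       # level 0 = Physical Resources ... level 4 = System Purpose
    "map": 0, "driver": 0,
    "route information": 1, "vehicle operation": 1,
    "route planning": 2, "vehicle control": 2,
    "operational efficiency": 3, "safety": 3,
    "personal mobility": 4,
}
links = [                        # (lower node, upper node) means-ends pairs
    ("map", "route information"), ("driver", "vehicle operation"),
    ("route information", "route planning"), ("vehicle operation", "vehicle control"),
    ("route planning", "operational efficiency"), ("vehicle control", "safety"),
    ("operational efficiency", "personal mobility"), ("safety", "personal mobility"),
]

top = max(levels.values())
supports = {lower for lower, _ in links}      # nodes that support something above
supported = {upper for _, upper in links}     # nodes that are supported from below

not_supporting = [n for n, lv in levels.items() if lv < top and n not in supports]
not_supported  = [n for n, lv in levels.items() if lv > 0 and n not in supported]

print("nodes not supporting anything above:", not_supporting or "none")
print("nodes not supported from below:     ", not_supported or "none")
# Deleting a link (e.g., the one into "personal mobility") would flag the broken chain.
```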

In our analysis, we leveraged departures from this formal requirement to highlight system anomalies. In Fig. 3, the Domain Value of productivity (multi-tasking) is shown as unsupported because there are no functions within a partial automation system that have been designed to support it. Furthermore, we used coding of means-ends links to identify support we judged to be weak or questionable. For example, in Fig. 5 and in Fig. 6 we show the value of safety as having questionable support from the technical functions of operational and tactical control.

Content validity refers to whether purpose, values, domain functions, technical functions and physical resources and their means-ends links (as entered into the Abstraction-Decomposition Space) accurately and comprehensively represent the target system. Naikar (2013) proposed model review with experienced operators and scenario mapping as suitable content validation methods. As experienced drivers, we validated the model content by reference to our own driving experience. We then further validated model content by mapping descriptive scenarios from accident reports onto the models and then assessing whether the model can be used to reason about the responses of agents (i.e., human and automation) in those scenarios. This scenario mapping strategy recognizes that an Abstraction-Decomposition Space is event independent and accordingly, that it should be possible to use such a model to reason about the behavior of the agents in any situation, including novel or unanticipated ones (Burns et al. 2001; Naikar 2013). A scenario-mapping exercise of this type can verify the content of the model or can otherwise reveal problems with the model, such as missing or inaccurate information.

5 Conclusion

Prior to our analysis, we could not identify any systems-based rationale for the development of automated vehicles within the research literature or within manufacturers’ documents. Somewhat troubling is that documents such as SAE International (2021) imply that the overall aim is to develop an autonomous vehicle. Automation is thereby posed as an over-arching design requirement and also as the ultimate design solution. This approach violates a fundamental principle of Systems Engineering: a requirement should not be specified in terms of a design solution. Also troubling is that the documents we consulted in building our Abstraction-Decomposition Spaces treated values such as safety and productivity as independent entities without discussion of potential conflicts or compromises.

Work Domain Analysis, leading to development of an Abstraction-Decomposition Space, offers a different way of viewing the problem. Given the constraint that personal mobility in an urban environment will continue to rely heavily on personal automobiles, the upper three levels of the Abstraction-Decomposition Space can be viewed, in Systems Engineering terms, as the problem space, with the lower two levels representing the solution space. From this perspective, the design challenge becomes one of establishing how driving functions can be enabled to promote satisfaction of driving values while ensuring that efforts directed at satisfying one value do not compromise satisfaction of other values.

In undertaking our analysis, we accepted standard claims relating to benefits of automation with regard to improved safety and enhanced productivity via multi-tasking. Our analyses did not, however, offer unequivocal support for those claims. Tentatively, we suggest that automated systems reduce the risk of some types of accidents but increase the risk of other types. Furthermore, automated systems are typically unusable in some of the more challenging driving conditions that compromise safety. In addition, we suggest that the general beliefs around multi-tasking as it relates to productivity are misguided. Rather than being a timesharing activity where drivers are engaged in two cognitive activities simultaneously, multi-tasking involves task switching where the switching adds a cognitive cost. Such task switching compromises safety in partially automated vehicles and is unnecessary in fully automated vehicles.

Most generally, the common automated vehicle developmental strategy of direct substitution not only fails to resolve many of the safety and performance challenges inherent in manual driving, but it also leaves the human driver less prepared to deal with them than would be the case under full manual control. One major contribution of Work Domain Analysis leading to development of an Abstraction-Decomposition Space is that it can help identify these issues in advance and can stimulate the development of appropriate design solutions.