
1 Introduction

Living longer poses many challenges, including maintaining independence and mobility. Changes in the physical and cognitive abilities of adults as they age present a safety challenge [1]. Health deterioration is a key contributory factor to driving cessation among older adults. Mobility in the form of driving is very important for older adults: mobility constraints and/or loss of mobility impact older adults’ independence and opportunities for social participation [2]. Nondrivers rely on their family, friends and/or transport services to travel to social and recreational activities [3].

Driverless cars are being introduced by several car manufacturers. Automated cars draw upon diverse sensors (e.g. cameras and radar), artificial intelligence (AI) and machine learning technology to travel between locations without requiring a human driver. The available automation technologies follow the guidance of the National Highway Traffic Safety Administration in terms of ‘six levels of automation’ [4].

Events, obstacles and potential collisions external to the vehicle are, by the very nature of the environment, omnidirectional. The potential to exploit spatial audio within the vehicle as part of an advanced driver assistance system (ADAS) offers scope to present localized and detailed information to the driver about the type, direction and proximity of a hazard, using a less invasive modality that reduces driver distraction. In addition, there is a considerable body of evidence in the literature on the fatiguing effects of vibratory stimuli on the human musculoskeletal complex. Novel research has advanced the understanding of how vibrations, and mechanical stimuli in general, are processed by the human body, how muscles react to these stimuli and how such stimulation can trigger muscle activation [5]. This could help the driver to maintain optimum driving performance for longer periods of time.

Driver assistance solutions need to be carefully thought out in relation to promoting successful ageing, driver persistence and self-efficacy for older adults. Further, issues pertaining to ethics and user acceptability must be considered. This paper reports on research pertaining to the development of a novel and ethically responsible assisted driving solution for older adults. Specifically, it provides an overview of the methodologies adopted and the emerging driving assistance concept.

2 Objective and Method

The purpose of this research is to identify the framework and preliminary needs for a novel assisted driving system which would enable safe driving for older adults with different cognitive and physical abilities.

This research has been structured in two phases. The first phase involved the specification of a preliminary concept using a combination of analyses. This research has been mostly theoretical. A multidisciplinary analysis of relevant literature was undertaken, pertaining to (1) older adults and models of successful ageing, (2) the driving task, (3) the classification of older adult drivers, (4) medical and age-related conditions that affect safe driving, (5) the detection and analysis of driver physical, cognitive and emotional state using sensors and machine learning procedures, (6) novel driver communication methods and (7) new driver monitoring, task support and feedback systems. A secondary analysis of data from The Irish Longitudinal Study on Ageing (TILDA) was also undertaken [6]. User profiles were decomposed into specific personae and scenarios. The combined outputs were further analyzed using the ‘Human Factors and Ethics Canvas’ [7]. This resulted in the specification of an overall system concept and underlying ethical principles.

The driver interface solution was further specified using persona-based design [8] and scenario-based design [9] approaches. Nine driver profiles were identified, based on a segmentation of older adult drivers from the perspective of driver persistence, health and physical and cognitive ability [10].

These nine proposed profiles reflect ‘ideal’ classes or types and are derived from the project goals – to promote driver persistence, while also supporting road safety and ensuring an enjoyable driving experience. A series of personae corresponding to the nine user profiles was then advanced. Each persona defined the older adult’s goals, health, cognitive and physical ability, medications, driving habits and behaviors, and specific challenges [10].

In parallel, several scenarios were defined. Scenario definition involved a mix of top-down and bottom-up approaches. That is, it reflected both the project goals and older adult driver behaviors and issues, as defined in the literature review [10]. These are defined in Table 1 below.

Table 1. Scenarios

The different scenarios were classified in terms of six specific interpretation challenges (ICs): activation/flow, distraction, fatigue and drowsiness, intoxication, medical event (e.g. heart attack/stroke) and task support. Lastly, the scenarios were further elaborated in terms of a text narrative which provided information about the events as they unfolded over time and the allied behavior of the car sensors and driver communication system [10].
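As a rough illustration of how such a scenario catalogue might be encoded in a prototype, the sketch below pairs each scenario with one of the six ICs. The IC names follow the classification above; the class and field names are hypothetical and not part of the published design.

```python
# Hypothetical encoding of the six interpretation challenges (ICs) and a scenario record.
from dataclasses import dataclass
from enum import Enum, auto


class InterpretationChallenge(Enum):
    ACTIVATION_FLOW = auto()
    DISTRACTION = auto()
    FATIGUE_DROWSINESS = auto()
    INTOXICATION = auto()
    MEDICAL_EVENT = auto()   # e.g. heart attack or stroke
    TASK_SUPPORT = auto()


@dataclass
class Scenario:
    name: str
    challenge: InterpretationChallenge
    narrative: str           # time-ordered account of events, sensor behavior and HMI response


example = Scenario(
    name="Failure to respond",
    challenge=InterpretationChallenge.MEDICAL_EVENT,
    narrative="Driver becomes unresponsive; the co-pilot escalates alerts and then intervenes.",
)
```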

To date, a preliminary workflow and driver communications concept has been defined in terms of the above scenarios. Subsequent phases of research will further validate the multimodal solution. This will involve mixed methods including participatory design/evaluation and testing in a driving simulator.

3 Results

The design problem is conceptualized from the perspective of positive ageing and promoting older adult ability and enablement. The system logic is underpinned by concepts of driver capability and adaptation, not full automation. A collaborative system underpinned by a partnership concept and personalization is recommended. To achieve this, the system provides different levels of assistance, as appropriate to the driver’s (1) health, (2) physical and cognitive ability, (3) sensory function and (4) real-time psychological/emotional state [10]. The three levels of assistance are: (1) no response, (2) task support and (3) safety critical intervention (i.e. entailing automation of either some or all of the driving task). The ‘co-pilot’ is conceptualized as a supportive and vigilant friend, who works with the driver to ensure a safe drive. The co-pilot constantly monitors the driver state, the driver’s behavior, the car state and the driving environment/road context. The proposed sensing system enables the co-pilot to gather evidence about the above. Using data fusion, the available evidence is integrated to form a coherent and predictive picture of the combined team state. The co-pilot then assesses the overall situation using artificial intelligence (AI) and deep learning techniques. This overview is structured in relation to the six interpretation challenges. Having interpreted the situation, the co-pilot determines the best course of action. In line with the proposed partnership approach, the co-pilot provides feedback to the driver about this decision (and the associated driving assistance and automation levels), using the multi-modal HMI.
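To make the assistance logic concrete, the sketch below shows one simplified way in which a fused estimate of driver, vehicle and environment state could be mapped onto the three assistance levels. The state fields, thresholds and decision rule are illustrative assumptions, not the project’s implementation.

```python
# Illustrative only: a simplified rule mapping the fused team state to the
# three assistance levels. Field names and thresholds are assumptions.
from dataclasses import dataclass
from enum import Enum


class AssistanceLevel(Enum):
    NO_RESPONSE = 0
    TASK_SUPPORT = 1
    SAFETY_CRITICAL_INTERVENTION = 2


@dataclass
class FusedState:
    driver_impairment: float   # 0.0 (fully capable) .. 1.0 (incapacitated)
    collision_risk: float      # 0.0 .. 1.0, from external sensing
    driver_responsive: bool    # e.g. inferred from eye tracking and steering input


def select_assistance(state: FusedState) -> AssistanceLevel:
    """Choose an assistance level from the fused driver/vehicle/environment state (sketch)."""
    if not state.driver_responsive or state.driver_impairment > 0.8:
        return AssistanceLevel.SAFETY_CRITICAL_INTERVENTION
    if state.collision_risk > 0.3 or state.driver_impairment > 0.4:
        return AssistanceLevel.TASK_SUPPORT
    return AssistanceLevel.NO_RESPONSE
```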

The older adult driver is in charge and chooses the appropriate assistance. Further, the system proposes different levels of assistance based on the driver’s ability and real-time situation. Although the driver is in charge, system authority moves to the co-pilot in certain predefined situations. These include situations where the system detects that the driver is in an impaired state (i.e. due to alcohol or medications), that there is potential for a safety event, or that the driver is incapacitated.

Data-gathering and interpretation is, of course, only part of the co-pilot’s task. The other part is to enable driver interaction with the system (i.e. multimodal driver input and output), considering the co-pilot’s interpretation of the current driver factors, driver task, vehicle state and driving environment/context. The co-pilot is typically silent if (1) the driver state is normal, (2) driver behavior is normal and (3) the car state is normal. As such, the driver is making decisions and not receiving alerts/warnings from the co-pilot via the multi-modal HMI. Depending on (1), (2), (3) and the associated risk rating (major/minor), the co-pilot provides task support/feedback via the multi-modal HMI in a manner that is tailored to the driver’s profile and ability. As such, this feedback considers the driver’s functional ability (i.e. reach, strength), sensory ability (sight, hearing), cognitive ability, emotional/psychological health (i.e. whether anxious or stressed), and associated preferences. Critically, the co-pilot is continuously assessing the state of the joint system.
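The sketch below illustrates one possible reading of this logic: the co-pilot stays silent while all three states are normal, and otherwise selects feedback channels suited to the driver’s sensory profile and the risk rating. The profile fields and modality choices are hypothetical examples rather than the specified design.

```python
# Sketch of feedback planning: silent while everything is normal, otherwise
# choose feedback channels tailored to the driver's profile and the risk rating.
from dataclasses import dataclass


@dataclass
class DriverProfile:
    hearing_impaired: bool
    vision_impaired: bool
    prefers_haptics: bool


def plan_feedback(driver_ok: bool, behavior_ok: bool, car_ok: bool,
                  risk_major: bool, profile: DriverProfile) -> list[str]:
    """Return the feedback channels to use for the current situation (sketch)."""
    if driver_ok and behavior_ok and car_ok:
        return []                                             # co-pilot remains silent
    channels = []
    if not profile.vision_impaired:
        channels.append("visual display")
    if not profile.hearing_impaired:
        channels.append("spatial audio / earcon")
    if profile.prefers_haptics or risk_major or not channels:
        channels.append("seat / steering wheel vibration")    # redundant channel for major risk
    return channels
```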

In relation to HMI communication methods, innovation is defined in relation to the use of haptic interaction (steering wheel and seat vibrations), earcons and advanced spatial audio, and combining different modalities.

New haptic technologies (for example, seat vibrations and steering wheel vibrations) can be deployed to enhance co-pilot and driver communications, particularly in situations where the driving task is already using visual and auditory resources.

New auditory technologies provide significant opportunities in relation to optimizing communication between the co-pilot and the driver (for example, earcons that are spatially located). An advanced feedback system could use spatially located earcons to convey the location of external objects, with rising pitch intervals communicating proximity, and haptic feedback (potentially from the seat) indicating the magnitude/type of the object (e.g. car, cyclist, pedestrian, bollard), which in turn conveys urgency.
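As an illustration, the sketch below shows one way such an alert could be parameterized: the earcon is rendered at the hazard’s bearing, its pitch interval rises as the hazard approaches, and a seat-vibration pattern encodes the object type. The specific mapping and values are assumptions introduced to make the concept concrete.

```python
# Illustrative parameter mapping for a spatial-audio/haptic hazard alert.
from dataclasses import dataclass


@dataclass
class HazardAlert:
    azimuth_deg: float      # direction at which the earcon is rendered relative to the driver
    pitch_interval: int     # semitones above the base tone; rises as the hazard gets closer
    haptic_pattern: str     # seat-vibration pattern encoding the object type


def encode_hazard(bearing_deg: float, distance_m: float, object_type: str) -> HazardAlert:
    """Map hazard bearing, distance and type to earcon and haptic parameters (sketch)."""
    interval = max(0, min(12, int(12 - distance_m / 5)))   # closer hazard -> wider interval
    patterns = {"car": "long-pulse", "cyclist": "double-pulse",
                "pedestrian": "triple-pulse", "bollard": "short-pulse"}
    return HazardAlert(
        azimuth_deg=bearing_deg,
        pitch_interval=interval,
        haptic_pattern=patterns.get(object_type, "short-pulse"),
    )
```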

There are many opportunities in relation to advancing new auditory and visual technology in the context of older drivers, where the driver communication system (i.e. the design of the visual and auditory information) is optimized according to the person’s sensory ability. The integration of different feedback modalities, when done correctly, will deliver a positive and safe driving experience. Combining advanced audio alerts (including spatial information) with other feedback modalities, such as visual cues (visual display/augmented reality) and haptics (i.e. vibrations in the car seat and/or steering wheel), will provide a less distracting and more context-rich feedback system for the driver. Also, making use of the latest advances in touch, gesture and voice interaction (i.e. driver input) will yield a more enjoyable driving experience.

The proposed multimodal logic has been worked out in relation to addressing the six ICs, while considering concepts of older driver efficacy, assistance and adaptive automation. Several high-level user profiles and associated personae have been advanced and associated with the given ICs and scenarios. Figure 1 provides an example of the multimodal solution in relation to a specific IC, persona and scenario. Further research will address the detailed design of the driver communication system.

Fig. 1. Example scenario: IC5, heart attack/stroke (failure to respond)

4 Discussion

In general, drivers are very diverse in terms of size, strength and driving competency [10]. Older drivers present even greater diversity where sensory, physical and cognitive abilities are concerned. The focus on the older driver is important, as older drivers are a growing market segment internationally and face a wider range of medical and psychological challenges to driving. In addition, if a system is designed to address the range of challenges facing older drivers, it should also encompass those faced by other drivers.

The driving assistance system reconciles the ostensible conflict between (1) ensuring road safety and (2) promoting driver persistence. The purpose of the driving assistance system is to interpret the implications of the driver’s state in terms of their ability to drive the car safely. Most of the relevant medical events (i.e. heart attack, stroke, etc.) can be detected by a combination of just six symptoms (for example, dizziness, drowsiness, syncope/cardiovascular disturbances, attention deficit/distraction, reaction time and muscle strength). By sensing these six symptoms, a system should be able to determine whether the driver is conscious and capable of driving, and how the vehicle should respond.
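A minimal sketch of how these six indicators might be combined into a coarse capability judgement is shown below; the thresholds, field names and decision rule are assumptions for illustration, not the paper’s detection method.

```python
# Sketch only: combining the six example symptom indicators into a coarse
# judgement of whether the driver is conscious and capable of driving.
from dataclasses import dataclass


@dataclass
class SymptomReadings:
    dizziness: bool
    drowsiness: bool
    syncope_or_cv_disturbance: bool
    attention_deficit: bool
    reaction_time_s: float
    muscle_strength_ratio: float   # relative to the driver's own baseline


def driver_capable(s: SymptomReadings) -> bool:
    """Return True if the driver appears conscious and capable of driving (sketch)."""
    if s.syncope_or_cv_disturbance:
        return False                       # possible medical event -> safety-critical intervention
    flags = sum([s.dizziness, s.drowsiness, s.attention_deficit,
                 s.reaction_time_s > 1.5, s.muscle_strength_ratio < 0.6])
    return flags < 2                       # two or more concurrent symptoms -> not capable
```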

An intelligent co-pilot affords possibilities of personalization not previously feasible since it can recognize the driver and configure the vehicle for the stored driver profile. In tailoring the assistance level and multimodal communications to the driver’s ability and state, personalization enables a more positive driver experience.
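As a hypothetical illustration of this kind of personalization, the sketch below shows a stored driver profile being loaded once the driver is recognized and used to configure assistance and HMI settings; the profile fields and defaults are assumptions.

```python
# Hypothetical sketch of profile-based personalization: the co-pilot recognizes
# the driver and configures the vehicle from a stored profile.
from dataclasses import dataclass, field


@dataclass
class StoredDriverProfile:
    driver_id: str
    preferred_assistance: str = "task_support"
    hmi_settings: dict = field(default_factory=lambda: {
        "audio_volume": 0.7, "haptic_intensity": 0.5, "font_scale": 1.2})


def configure_vehicle(profiles: dict[str, StoredDriverProfile], driver_id: str) -> StoredDriverProfile:
    """Load the recognized driver's profile, falling back to sensible defaults (sketch)."""
    return profiles.get(driver_id, StoredDriverProfile(driver_id=driver_id))
```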

Several crucial human factors issues need to be effectively addressed in the design of the multi-modal driver/system interface. Transitions are a critical safety issue with semi-autonomous systems. Both the driver and the intelligent co-pilot need to be clear about which functions are being handed over and when. Transitions are part of the broader issue of situational awareness – the interface needs to support the driver in maintaining a clear picture not only of who is in control of which functions, but also of the external and internal driving context, other road users and the vehicle state. Workload is also a critical issue. The HMI needs to provide information in a format that aids the driving task without overloading the driver with information that interferes with it. Societal acceptability of this novel driving assistance system depends upon how it treats issues pertaining to driver rights – including driver autonomy and the protection of personal information [10].

5 Conclusions

Supporting driver persistence is a societal issue, and not just an issue for older adults. It is important that older adults continue to drive safely as they age and that this remains an enjoyable experience. The use of novel car-based sensors, supported by AI and machine learning processes, along with new driver communication systems, fosters driver persistence and safety, while also promoting positive ageing. New multi-modal driver input and feedback technologies can enable successful communication between the driver and the co-pilot system while at the same time enriching the driver experience. In relation to HMI communication methods, innovation is defined in relation to the use of (1) haptic interaction (steering wheel and seat vibrations), (2) earcons and advanced spatial audio, and (3) combining different modalities. The driving assistance technology, logic and communications system will be further validated using co-design techniques and testing in a driving simulator.