
1 Introduction

With the rapid development of human-centered Internet of Things (IoT) applications and the widespread deployment of various kinds of sensor nodes, represented by mobile nodes, IoT applications based on mobile sensing are becoming increasingly popular, especially in healthcare.

According to the World Health Organization (WHO), approximately 1.3 million people die each year on the world's roads, and between 20 and 50 million sustain non-fatal injuries [1]. One of the main causes is driver fatigue and negative mood, which lower the driver's vigilance level and account for about 600 road deaths every day [2]. Thus, it is of great importance to detect the mood and fatigue of drivers in real time while driving and to take actions according to the vigilance level.

For safe driving, noninvasive techniques are currently employed to assess a driver's alertness level through visual observation of his/her physical condition using a camera and related computer vision technologies [3]. In addition, corresponding incentive schemes can be applied to make drivers happier and less fatigued; for example, previous research pointed out that listening to suitable music does not impair driving performance but rather enhances it [2]. To assess a driver's mood-fatigue state, two key techniques are of great importance.

The first is mobile sensing, which collects the necessary data about the current drive, including the driver's health data, the car's sensor data, and road information. Health data such as the heart rate can be collected by a heart rate sensor, while car data (e.g., speed, temperature, fuel consumption) can be read by advanced on-board diagnostic port scanners. Environmental data can be collected from roadside and in-car sensors, news summaries, and updates from other drivers. All the input data are fed into a real-time state analysis module to infer the mood of the driver and the current driving condition [4]. The second is facial expression recognition based mood detection: by capturing and processing real-time photos of the driver's face, the emotion can be classified into different categories.

In the rest of this paper, Sects. 2 and 3 introduce the current solutions and techniques of mood-fatigue detection and some common supporting solutions, respectively. Then, Sect. 4 describes current platforms for safe driving. Section 5 discusses some technical challenges and future research directions. Section 6 concludes this paper.

2 Mood-Fatigue Detection of Drivers

Mental fatigue is a frequent phenomenon in our daily life and is defined as a state of cortical deactivation, which reduces mental performance and decreases alertness [5]. A fatigue detection system is critical for preventing car accidents by sounding an alarm to the driver. Besides, the underlying techniques can also be applied to other similar applications, such as Computer Vision Syndrome [6], human-computer interaction, etc. On the other hand, facial expressions always play a significant role in human communication. Consequently, the technology of mood detection, which detects people's emotions with sensors and computers, is essential in human-computer interaction (HCI). As a result, there has been considerable work on the recognition of emotional expressions. This research can benefit HCI and be applied in various fields such as digital cameras, curative effect observation in hospitals, etc.

2.1 Fatigue Detection

In earlier years, researchers took direct measures of fatigue involving self-reports of internal states; however, there are a number of problems in using any self-report measure due to demand effects or motivational influences [7]. Therefore, in recent years, many researchers have focused on developing monitoring systems based on physiological signals such as brain waves, heart rate, pulse rate and respiration. Among these signals, the electroencephalogram (EEG) signal is regarded as the best way to detect a person's fatigue level [8]. Unfortunately, these techniques are intrusive: they require attaching electrodes to the test subject [9, 10], which is not a pleasant experience. Consequently, another class of techniques, based on physiological reactions including head pose, mouth shape and eye state, has increasingly attracted researchers' attention. In addition, there are other fatigue detection methods based on driving behavior, with a focus on the vehicle's behavior including position and headway.

2.1.1 Techniques Based on Physiological Signal

Studies show that the physiological signals of a fatigued person are abnormal [11]. For instance, as fatigue increases, θ and δ brain waves increase while α and β brain waves decrease [12, 13]. Yeo et al. [14] built an automatic EEG-based drowsiness detection system by classifying brain waves with an SVM. Zhao et al. [15] used an electroencephalograph to collect EEG signals in a driving simulation, calculated the frequency distribution from the power spectrum and built a detection system with a neural network. Besides, the electrocardiogram signal, including HR (Heart Rate) and HRV (Heart Rate Variability), is another important indicator for fatigue detection: Patel et al. [16] used HRV as the main parameter to detect fatigue with a BP neural network and obtained 90 % accuracy. In research at Tokyo University, scholars detected the fatigue of test subjects from the content of alcohol, ammonia and lactate in their sweat.
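As a concrete illustration of the EEG trend cited above, the following is a minimal sketch (not any cited author's method) that computes δ, θ, α and β band powers with Welch's method and forms a simple fatigue ratio that grows as θ/δ power rises and α/β power falls; the band limits and the index itself are illustrative assumptions.

```python
# Illustrative sketch: EEG band powers via Welch's method and a simple
# fatigue ratio that rises as theta/delta power grows and alpha/beta shrinks.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    """Return absolute power in each EEG band for a 1-D signal sampled at fs Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 4 * fs))
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])
    return powers

def fatigue_index(eeg, fs):
    """(theta + delta) / (alpha + beta): larger values suggest higher fatigue."""
    p = band_powers(eeg, fs)
    return (p["theta"] + p["delta"]) / (p["alpha"] + p["beta"] + 1e-12)

# Example: 30 s of synthetic EEG sampled at 256 Hz
fs = 256
t = np.arange(0, 30, 1 / fs)
eeg = np.sin(2 * np.pi * 6 * t) + 0.3 * np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
print(f"fatigue index: {fatigue_index(eeg, fs):.2f}")
```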

2.1.2 Techniques Based on Physiological Reaction

Sharma et al. [17] used the number of pixels in the eye image to determine the eye state: face images are converted to the YCbCr color space, the average and standard deviation of the pixel counts are calculated, and fuzzy rules [18] are then used to identify the eye state. Liu et al. [19] and Tabrizi et al. [20] proposed methods to detect the upper and lower eyelids based on the edge map; the distance between the upper and lower eyelids is used to analyze the eye state. Besides, the head position sensor system MINDS (Micro-Nod Detection System) proposed by ASCI is conceptually designed to detect micro-sleep events occurring in association with head nodding by assessing the x, y, and z coordinates of the head through conductivity measurements [21].
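To make the pixel-counting idea concrete, below is a minimal sketch in the same spirit (not the cited authors' exact algorithm): it converts an eye-region crop to YCbCr, counts dark iris/pupil pixels in the luminance channel, and labels the eye closed when that count drops well below an open-eye baseline. The threshold values and the calibration step are assumptions.

```python
# Illustrative sketch: estimate eye openness from the number of dark pixels
# in an eye-region crop after converting it to the YCbCr color space.
import cv2
import numpy as np

def eye_openness(eye_bgr, dark_thresh=60):
    """Fraction of dark (iris/pupil) pixels in the luminance channel of an eye crop."""
    ycrcb = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2YCrCb)
    y = ycrcb[:, :, 0]                        # luminance channel
    dark = np.count_nonzero(y < dark_thresh)  # dark pixels belong mostly to iris/pupil
    return dark / y.size

def eye_state(eye_bgr, open_baseline, closed_factor=0.4):
    """Label the eye closed when far fewer dark pixels are visible than when fully open."""
    ratio = eye_openness(eye_bgr)
    return "closed" if ratio < closed_factor * open_baseline else "open"

# Usage sketch: 'open_baseline' would be calibrated from frames where the
# driver's eyes are known to be open; 'eye_crop' is a BGR eye-region image.
# state = eye_state(eye_crop, open_baseline=0.25)
```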

2.1.3 Techniques Based on Driving Behavior

Berglund et al. [22] established a fatigue detection model that collects data including the steering wheel angle, lane offset and vehicle lateral sway angle, extracted 17 features in a simulated driving experiment and obtained 87 % accuracy. Friedrichs et al. [23] extracted 11 fatigue-estimation features from the steering wheel angle and the variation of the vehicle's lateral position in their fatigue driving experiments, and built several detection systems using Fisher discriminant analysis, k-nearest neighbors, the Bayes algorithm and neural networks.

2.2 Mood Detection

The automated recognition of facial expressions is still a challenge despite its rapid development in recent years. A mood detection system is usually built on the basic framework of pattern recognition, which has three main steps: (1) detection and location of the face, (2) feature extraction and facial representation, and (3) classification of emotions. Therefore, deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. There are two approaches to extracting facial features: geometric feature-based methods and appearance-based methods [24].
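The three-step framework can be illustrated with a minimal appearance-based sketch, assuming a Haar-cascade face detector, a resized gray patch as the facial representation, and an SVM trained elsewhere on a labeled expression dataset; this is an illustration of the pipeline, not any of the cited methods.

```python
# Minimal sketch of the three-step pipeline: (1) face detection with a Haar
# cascade, (2) appearance-based feature extraction (resized gray patch),
# (3) emotion classification with an SVM trained elsewhere on labeled faces.
import cv2
import numpy as np
from sklearn.svm import SVC

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face_features(frame_bgr, size=(48, 48)):
    """Return a flattened, normalized gray patch of the largest detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest face
    patch = cv2.resize(gray[y:y + h, x:x + w], size)
    return (patch.astype(np.float32) / 255.0).ravel()

# 'classifier' would be fit on features from a labeled expression dataset
# (e.g., happy / neutral / angry) before use: classifier.fit(X_train, y_train)
classifier = SVC(kernel="rbf", probability=True)

def detect_mood(frame_bgr):
    feats = extract_face_features(frame_bgr)
    return None if feats is None else classifier.predict([feats])[0]
```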

The authors in [25] proposed a geometric framework for analyzing 20 facial expressions with the particular objectives of comparing, matching, and averaging their shapes. Local feature based algorithms [26] exploit the curvelet transform in two ways, e.g., as a key point detector to extract salient points on the face region. A sparse representation based Facial Expression Recognition (FER) strategy [27] reduces the intra-class variation while accentuating the facial expression in a query face image. The authors in [28] propose a deformable 3-D facial expression model with D-Isomap based classification, and an enhancement of this approach was proposed by the authors in [29], in which a novel multi-view facial expression recognition technique is presented.

2.3 Mood-Fatigue Detection for Safe Driving

Since mobile sensing can now exploit the growing range of sensor hardware to capture valid data for mood-fatigue detection, an increasing number of mood and fatigue detection solutions for safe driving have appeared. Several in-vehicle technological approaches for safe driving already exist, some of which explicitly address mood and fatigue.

The Driver Fatigue Monitor [30] is a real-time on-board video-based detection system that measures the driver's degree of drowsiness. It uses a camera to capture slow eyelid closures, reports the driver's drowsiness level using the PERCLOS estimate, and has a three-state warning mechanism. However, it only works well at night, because the detection relies on the pupil under IR illumination, which is easily disturbed by bright spots from reflected light.
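For reference, PERCLOS is simply the fraction of time the eyes are closed within a sliding window; the following sketch computes it from per-frame eye-closure flags and maps it to a three-state warning. The window length and warning thresholds are illustrative assumptions, not the device's actual parameters.

```python
# Sketch of PERCLOS-style drowsiness scoring: the fraction of frames in a
# sliding window during which the eyes are judged closed, mapped to a
# three-state warning (thresholds are illustrative).
from collections import deque

class PerclosMonitor:
    def __init__(self, window_frames=900):        # e.g., 30 s at 30 fps
        self.closed_flags = deque(maxlen=window_frames)

    def update(self, eye_closed: bool) -> float:
        """Add one frame's eye state and return the current PERCLOS value."""
        self.closed_flags.append(1 if eye_closed else 0)
        return sum(self.closed_flags) / len(self.closed_flags)

    def warning_level(self) -> str:
        perclos = sum(self.closed_flags) / max(len(self.closed_flags), 1)
        if perclos >= 0.30:
            return "alarm"
        if perclos >= 0.15:
            return "caution"
        return "normal"

# monitor = PerclosMonitor()
# for each video frame: monitor.update(eye_is_closed(frame))
```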

Thus, the Driver State Monitor [31] replaces the PERCLOS estimate with AVECLOS, which is less complex. faceLAB [32] extends the detection range from the eyelid to the driver's whole behavior, including head pose, gaze direction, and eyelid closure. The Drowsy Driver Detection System [33] collects data under all kinds of conditions and also considers the heartbeat and pulse rate in addition to the eyelid; its detection is more integrated but still does not go beyond the original detection techniques. As for algorithmic breakthroughs on the parameters, the RPI system [30] is the first to combine the facial parameters using a Bayesian network model. In contrast, the Artificial Neural Network approach [34] collects and analyzes vehicle data only; without manual interference and the uncontrollability of personal behavior, its detection accuracy can reach 90 %.

Also, faceLAB [33] relies on the eyelid position rather than the bright pupil, but it still does not use vehicle data and is therefore only feasible in a simulation environment. Smart Eye [35] uses 3D technology to build a model of the driver's head, so the 3D location and the driver's face can be captured more accurately with real-time tracking.

As for the nighttime-only limitation, SMI InSight [36] determines a tracking area on the face for head location, which enables 24-hour operation by greatly reducing the influence of reflections during head and face detection for mood and fatigue judgment. It also investigates the differences between simulator and real-road environments. The ASL ETS-PC II [37] prefers bright-pupil images and aims to work under all driving conditions.

Figure 1 shows the structure of data from the camera, sensors and OBD (On-Board Diagnostic) for analyzing mood and fatigue.

Fig. 1. The structure of data for analyzing mood and fatigue

3 Common Solutions

To support mood-fatigue detection of drivers, several key techniques are necessary. Here we discuss mobile sensing and mobile cloud computing.

3.1 Mobile Sensing

3.1.1 Sensors

Mobile sensing collects data from useful sensors for mood-fatigue detection. These sensors include acceleration sensors, temperature sensors, heart rate sensors and so on. Then the data will be transmitted to upper layer for further processing.

By measuring the static acceleration caused by gravity, an acceleration sensor can determine the angle of inclination relative to the horizontal plane; by analyzing the dynamic acceleration, it can infer how the device is moving. Acceleration sensors are now standard in mobile phones. For example, step-counting software like Nike+ uses the acceleration sensor to estimate the distance traveled while the phone's carrier is walking or running.
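As a small worked example of the inclination calculation mentioned above, the sketch below derives pitch and roll angles from a 3-axis accelerometer reading expressed in g units; the formulas are the standard gravity-vector decomposition, not a vendor-specific API.

```python
# Sketch: inclination (pitch/roll) from a 3-axis accelerometer reading,
# using only the static component caused by gravity.
import math

def tilt_angles(ax, ay, az):
    """Return (pitch, roll) in degrees from raw acceleration in g units."""
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    return pitch, roll

# A device lying flat reads roughly (0, 0, 1) g -> pitch = roll = 0 degrees.
print(tilt_angles(0.0, 0.0, 1.0))
```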

The heart rate is closely connected to one's mood. Heart rate sensors can monitor the heart rate to track exercise intensity or different motion and training patterns, and estimate health data such as the sleep cycle from the statistics. There are two kinds of heart rate sensors: the photoelectric heart rate sensor, which makes use of the reflection of light, and the electrode heart rate sensor, which measures the electric potential at different parts of the human body. Although the former has relatively low accuracy, it is now widely used in mobile terminals due to its small volume. For example, the Apple Watch embeds four photoelectric heart rate sensors to monitor the wearer's heart rate more accurately.
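A photoelectric sensor of this kind produces a photoplethysmography (PPG) waveform, from which the heart rate can be estimated by detecting pulse peaks. The sketch below shows one such estimate under assumed sampling-rate and peak-spacing parameters; it is an illustration, not the processing used by any particular wearable.

```python
# Sketch: estimate heart rate from a photoelectric (PPG) sensor trace by
# detecting pulse peaks (sampling rate and spacing are assumptions).
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ppg, fs):
    """Estimate beats per minute from a 1-D PPG signal sampled at fs Hz."""
    # Require peaks at least 0.4 s apart (below 150 bpm) and above the mean level.
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), height=np.mean(ppg))
    if len(peaks) < 2:
        return None
    avg_interval_s = np.mean(np.diff(peaks)) / fs
    return 60.0 / avg_interval_s

# Example: synthetic 75-bpm pulse sampled at 50 Hz
fs = 50
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.25 * t) + 0.05 * np.random.randn(len(t))
print(f"~{heart_rate_bpm(ppg, fs):.0f} bpm")
```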

As its name implies, a temperature sensor measures temperature and converts it into an output signal. In a mood-fatigue detection system, it can help detect the degree of fatigue of drivers.

The OBD port can also be viewed as a sensor that collects data on RPM, speed and other states of the car, which also affect the mood-fatigue state of drivers.
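To show how such car data can be read in practice, here is a minimal sketch that queries an ELM327-style serial OBD adapter. The serial port name is an assumption; the mode-01 PIDs (0C for RPM, 0D for speed) and their decoding formulas follow the OBD-II standard, but the parsing here is deliberately simplified.

```python
# Sketch: reading engine RPM and vehicle speed from the OBD-II port through
# an ELM327-style serial adapter (port name is an assumption).
import serial  # pyserial

HEX = set("0123456789ABCDEFabcdef")

def obd_query(ser, pid):
    """Send a mode-01 PID request and return the data bytes of the reply, if any."""
    ser.write(f"01{pid}\r".encode())
    reply = ser.read_until(b">").decode(errors="ignore")
    tokens = [t for t in reply.split() if len(t) == 2 and set(t) <= HEX]
    if "41" in tokens:                                   # positive response to mode 01
        i = tokens.index("41")
        return [int(b, 16) for b in tokens[i + 2:]]      # bytes after the echoed PID
    return None

# ser = serial.Serial("/dev/ttyUSB0", 38400, timeout=1)  # adapter port: assumption
# rpm_bytes = obd_query(ser, "0C")
# rpm = ((rpm_bytes[0] * 256) + rpm_bytes[1]) / 4 if rpm_bytes else None
# speed_bytes = obd_query(ser, "0D")
# speed_kmh = speed_bytes[0] if speed_bytes else None
```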

The battery-powered sensor nodes are scattered randomly in the sensor field and form a self-organizing network. The sink node has relatively strong processing, storage and communication abilities; it communicates with external networks such as the Internet as well as with the sensors [38].

The sensor nodes are usually scattered in a sensor field as shown in Fig. 2.

Fig. 2. Sensor nodes in a sensor field

3.1.2 Communication Technology

One of the most important steps of mobile sensing is to collect and transmit the sensing data from the sensors mentioned above. Wireless communication techniques play an important role, especially in IoT, because they eliminate wiring and are often the only choice for mobile nodes. Currently, common wireless communication techniques include ZigBee, Bluetooth, RFID, NFC, UWB, WiFi and so on.

Table 1 shows the comparisons among different wireless network standards.

Table 1. Comparisons among different wireless network standards

Especially, there are some communication protocols designed for vehicles, like Local Interconnect Network (LIN), Controller Area Network (CAN), FlexRay, Media Oriented System Transport (MOST), Low Voltage Differential Signaling (LVDS) and so on.

Table 2 shows current automotive physical layer technologies [39].

Table 2. Current automotive physical layer technologies

3.2 Mobile Cloud Computing for Big Data Analysis

The elastic and customizable cloud computing platform can help finish big data analysis efficiently. By analyzing the history of different drivers' mood and fatigue, we can provide customized mobility services to drivers based on their real-time mood-fatigue state, e.g., recommend preferable music, recommend the shortest routes or the most enjoyable routes (e.g., if drivers are not fatigued and enjoy traveling), or recommend suitable nearby places for taking a rest (e.g., accommodation, restaurants) that fit the drivers' preferences.
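A minimal sketch of how such customized services could be selected from the analyzed state is shown below; the thresholds, labels and service strings are illustrative assumptions, not a published decision policy.

```python
# Illustrative sketch: map the analyzed mood-fatigue state to one of the
# customized services mentioned above (all values are assumptions).
def recommend_service(fatigue_level: float, mood: str) -> str:
    """fatigue_level in [0, 1]; mood is a coarse label from the expression classifier."""
    if fatigue_level > 0.7:
        return "suggest nearby rest area or accommodation"
    if mood in ("sad", "angry"):
        return "play the driver's preferred calming music"
    if fatigue_level < 0.3 and mood == "happy":
        return "offer the most enjoyable (scenic) route"
    return "offer the shortest route"

print(recommend_service(0.8, "neutral"))   # -> rest area suggestion
```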

However, despite its wide range of applications, cloud computing is still in its infancy and many issues need to be explored, for example storage technologies and data management. Current data management systems cannot satisfy the needs of big data, and storage capacity is growing much more slowly than data volume, so a revolutionary re-construction of the information framework is urgently needed [40]. Also, current databases are not well suited to data from the real world, so it is important to re-organize the data for further use.

Besides, reducing energy consumption is another important issue in cloud computing. It has been estimated that powering and cooling account for 53 % of the total operational expenditure of data centers [41]. In 2006, data centers in the US consumed more than 1.5 % of the total energy generated that year, and this percentage is projected to grow by 18 % annually [42]. The goal is not only to cut energy costs in data centers, but also to meet government regulations and environmental standards [43].

4 Current Platforms for Safe Driving

Lee and Chung [44] proposed a method (SDSM) for monitoring driver safety levels using a data fusion approach based on several discrete data types: eye features, bio-signal variation, in-vehicle temperature, and vehicle speed. The safety monitoring process fuses attributes gathered from different sensors, including video, electrocardiography, photoplethysmography, temperature, and a three-axis accelerometer, which are mapped as input variables to an inference analysis framework. A Fuzzy Bayesian framework indicates the driver's capability level and is updated continuously in real time. The sensory data are transmitted to the smart phone via Bluetooth. Once the evaluation metric reaches 75 % (expressed as an FBN probability of 0.75), a fake call service is initiated along with a loud ringtone and maximum vibration strength to alert the driver to his/her dangerous driving state.
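The fusion-and-threshold idea can be sketched with a simplified naive-Bayes stand-in for the paper's Fuzzy Bayesian framework: each sensor cue contributes a likelihood ratio to the posterior probability of dangerous driving, and an alert fires once that probability reaches 0.75. The cue names and likelihood values are illustrative assumptions, not the cited model's parameters.

```python
# Simplified sketch of probabilistic evidence fusion (a naive-Bayes stand-in
# for the Fuzzy Bayesian framework): each observed cue multiplies the odds of
# dangerous driving, and the alert fires at a posterior of 0.75.
def fuse_danger_probability(prior, cues):
    """cues: dict of {name: (P(cue | dangerous), P(cue | safe))} for observed cues."""
    odds = prior / (1.0 - prior)
    for p_given_danger, p_given_safe in cues.values():
        odds *= p_given_danger / p_given_safe
    return odds / (1.0 + odds)

observed = {
    "eyes_closing_often": (0.80, 0.10),          # likelihoods are assumptions
    "high_heart_rate_variability": (0.60, 0.30),
    "cabin_too_warm": (0.50, 0.40),
}
p_danger = fuse_danger_probability(prior=0.10, cues=observed)
if p_danger >= 0.75:
    print(f"danger={p_danger:.2f}: trigger fake call + ringtone + vibration")
else:
    print(f"danger={p_danger:.2f}: keep monitoring")
```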

Suk and Prabhakaran [45] presented a mobile application for real-time facial expression recognition running on a smart phone with a camera (RFER). To handle the lower processing power of mobile devices (compared to their desktop counterparts), they proposed approaches based on a set of SVM classifiers that use ASM features to identify neutral and peak expression video frames. The accuracy of real-time mobile emotion recognition is about 72 % when the application runs on a Galaxy S3 at 2.4 fps.

Hu et al. [3] built a Mood-Fatigue Analyzer (MFA) that collects real-time sensing data on driver behavior, car status, and the outside environment through a mobile device tier, a cloud tier, and a network tier. It then analyzes the data to obtain the driver's mood and fatigue degree. Finally, it takes measures to improve the driver's emotional and fatigue status, such as playing suitable music or giving positive reminders.

Other in-vehicle solutions for safe driving include V-Cloud [46], CrashHelp [47], and the LCCA System [48]. Compared to solutions that only implement detection and alarming, MFA provides incentives to relieve fatigue and improve the driving experience when drivers are in a negative state.

Table 3 shows the comparison among these platforms.

Table 3. Comparisons among different platforms for safe driving

5 Technical Challenges and Future Research Directions

5.1 Energy Constraint of Mobile Sensing Network

Energy consumption is the main problem of sensor networks because the sensor nodes operate unattended for long periods. Thus, effective energy-saving strategies are necessary.

At present, the sleep scheme is the most popular strategy: nodes sleep when they are idle to save energy. However, when idle nodes return to the active state, they consume considerable energy, so it is important to switch states at the appropriate time.

Data fusion is another energy-saving technique. Communication consumes more energy than computation, and adjacent nodes often collect similar or identical information; sending this redundant information imposes an undue burden on the system. Through local computing and fusion, the raw data can be processed according to certain procedures so that only the necessary information is sent.
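A minimal sketch of this trade-off is shown below: a node collapses its own and its neighbors' near-duplicate readings into one compact summary before transmitting, spending cheap computation to avoid expensive radio transmissions. The tolerance value and packet format are illustrative assumptions.

```python
# Sketch of in-network data fusion: aggregate similar readings locally and
# forward only a compact summary instead of every raw sample.
import statistics

def fuse_readings(readings, tolerance=0.5):
    """Collapse near-duplicate neighbor readings into one summary record."""
    if not readings:
        return None
    mean = statistics.fmean(readings)
    # Only report the spread if neighbors actually disagree beyond the tolerance.
    spread = max(readings) - min(readings)
    return {"mean": round(mean, 2),
            "spread": round(spread, 2) if spread > tolerance else 0.0,
            "count": len(readings)}

# Ten raw temperature samples become one small packet to transmit.
packet = fuse_readings([24.1, 24.2, 24.0, 24.1, 24.3, 24.2, 24.1, 24.0, 24.2, 24.1])
print(packet)
```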

5.2 Facial Expression Recognition

Recognizing facial expressions with computers is a very complex problem, and precise facial expression recognition is still a difficult task. First, existing or user-built facial expression databases are tightly constrained: a single background, no interference from ornaments, little rotation of the face, no exaggerated expressions, etc. The six expressions recognized by cascade classifiers cannot describe humans' complicated and changeable expressions accurately, and finding a more accurate descriptive approach is a vital problem. Also, current facial expression databases are based on a single race; considering that race and culture have a considerable influence on the understanding of expressions, facial expression databases should vary across different regions.

Currently, there are two main classes of facial expression recognition techniques: feature based techniques and model based techniques [49]. For feature based techniques, the authors in [25] proposed a geometric framework for analyzing 20 facial expressions with the particular objectives of comparing, matching, and averaging their shapes. For model based techniques, the authors in [28] propose a deformable 3-D facial expression model with D-Isomap based classification. Different strategies, such as exploiting the intrinsic geometry of key fiducial points, are used. This state-of-the-art progress has made facial expression recognition more precise and thus improves the performance of mood-fatigue detection systems.

5.3 Security and Privacy

As healthcare-related data such as the history of drivers' mood and fatigue are sensitive, it is of great significance to ensure data security and privacy. Unlike traditional security methods, security in big data mainly concerns how to perform data mining without exposing users' sensitive information [40]. Besides, existing security solutions are mostly for static data, while in this case the data changes dynamically. To ensure security and privacy, it is critical to build trust mechanisms at every architectural layer of the cloud [45]. First, the hardware layer must be trusted using a hardware trusted platform module (TPM). Second, the virtualization platform must be trusted using secure virtual machine monitors [50]. VM (virtual machine) migration should only be allowed if both the source and destination servers are trusted. Recent work has been devoted to designing efficient protocols for trust establishment and management [50, 51].

6 Conclusions

The rapid development of IoT has brought revolutionary changes to human life, covering almost every aspect of daily life, and more and more IoT applications, systems and services are being deployed. The idea of mobile sensing based mood-fatigue detection of drivers takes advantage of IoT and potentially provides a seamless solution for safe driving. In this paper, we first introduce the current solutions for mood-fatigue detection. After that, we summarize common sensors and wireless communication techniques in mobile sensing networks and compare them. Furthermore, we specifically describe the way to detect the mood-fatigue status of drivers and introduce some up-to-date platforms.

In the future, many challenges need to be explored and addressed so as to make facial expression detection more precise, e.g., by adding the detection of other features such as the mouth [3]. Also, outside environmental conditions, user behaviors and other factors should be taken into consideration to infer the drivers' mood-fatigue status and vigilance level more comprehensively and accurately. Furthermore, incentive schemes could be explored to facilitate drivers' collaboration in promoting safe driving together. Moreover, social networks can be employed to further improve the driving experience [52]: drivers can share their driving experience or radio music with others who have similar music preferences or experience similar traffic conditions [4].