
1 Introduction

Communication based on eye movements plays a vital role in developing communication devices for patients with amyotrophic lateral sclerosis and other motor neuron degenerative conditions such as quadriplegia, Guillain-Barré syndrome, spinal cord injury, and hemiparesis. Such diseases attack all controllable movements, including speech, writing, and walking, while eye movement activity is usually spared. Affected individuals need assistance to move from one place to another. Statistical surveys show that motor neuron diseases are increasing day by day and have reached 15–18% of the population, and the number of affected people continues to grow. To address this condition, rehabilitation devices are needed to compensate for the lost biological channels in a natural way. Recently, many EOG-based HCI studies have been carried out to reestablish communication channels for people with severe motor disabilities in the absence of biological channels. Currently, common input devices for communication include the mouse, keyboard, touch screen, touchpad, and track ball. These devices require manual control and cannot be operated by people with such disabilities, so an alternative method of communication between man and machine is needed to let patients communicate with their caretakers. One such technology is the development of rehabilitation devices that help disabled persons control their environment and exchange thoughts more efficiently. These devices enable people to perform tasks normally with assistance from technology. The small potential that appears between the front and back of the eye is called the electrooculogram (EOG). The technique of establishing communication between man and machine is called human-computer interaction (HCI). Combining the two creates a new pathway for elderly and disabled people in the form of rehabilitative aids. Through this combination, EOG-based HCI plays a vital role in developing assistive devices for people with motor neuron disease [1,2,3,4,5].

Some of the EOG-based interfaces created by other researchers include an eye reading system [6], an EOG speller [7], a security system [8], a speech interaction system [9], a multimedia control system [10], a robotic wheelchair [11], indoor positioning [12], a cursor controller [13], a DC motor controller [14], eye gestures [15], a virtual keyboard [16], and a hospital alarm system [17]. In this experiment we compared the performance of right-handers with that of left-handers to analyze the feasibility of creating a nine-state HCI that lets disabled individuals convey their thoughts without help from others.

2 Previous Work Done

Many techniques are already available for implementing rehabilitation devices based on eye movements. Some of the relevant studies are summarized below. Tangsuksant et al. (2012) designed a virtual keyboard using a voltage threshold algorithm. Signals were collected during both horizontal and vertical eye movement tasks by placing six electrodes around the eyes, and the collected signals were classified with the voltage threshold algorithm. The proposed methodology showed an average accuracy of 95.2% with a speed of 25.94 s/letter [18]. Swami Nathan et al. (2012) designed a virtual keyboard for motor-disabled people using a signal-processing algorithm for both vertical and horizontal eye movements and achieved an improved accuracy of 95.2% with a typing speed of 15 letters/min [19]. Souza and Natarajan (2014) designed an EOG-based interface for the elderly disabled using a nonparametric method. Data collected from 40 subjects were processed with the nonparametric method and classified with an Elman network, giving a mean classification accuracy of 99.95% for both vertical and horizontal tasks [20]. Pratim Banik et al. (2015) created a virtual keyboard for physically disabled persons in which keys were selected using EOG signals collected from five subjects. The collected signals were sent to a microcontroller through serial communication, and a graphical user interface was designed to present the output. The system obtained an average classification accuracy of 95%, with average button selection times of 4.27 and 4.11 s for selecting ten buttons [21]. Rakshit et al. (2016) developed an assistive device for people whose speech was disabled due to brain stroke or spinal cord injury. Twelve subjects participated in the study. Power spectral density was applied to the collected signals to extract features, which were classified using a support vector machine with a multilayer perceptron kernel function, yielding an average accuracy of 90% [22]. Hossain et al. (2017) developed a cursor controller for the disabled using vertical and horizontal eye movement tasks, built around an AD620 instrumentation amplifier and an LM741 operational amplifier for four tasks; SVM and LDA classifiers were used to classify the online data [23]. Ramkumar et al. (2017) developed an EOG-based HCI for ALS patients using nine movements from six subjects. Features were extracted using the Euclidean norm and classified with a neural network; the system showed an average classification accuracy of 87.72% using a dynamic network [24]. The literature survey shows that parametric and nonparametric methods are well suited for extracting features that are classified with neural networks, and that neural classifiers outperform the other classifiers used in previous studies. We therefore planned to conduct our study using a parametric method with a neural network.

3 Methods

3.1 Experimental Protocol

The master data set was acquired from ten healthy participants, recording 2 s per task with a five-electrode system and an AD Instruments T26 bio amplifier. The signals were band-pass filtered from 0.1 to 16 Hz, divided into 2 Hz bands, and sampled at 100 Hz; a minimal filtering sketch is given after Fig. 15.1. The signal acquisition and preprocessing were described in detail by the same authors in earlier work [25]. The raw signal acquired from subject S4 is shown in Fig. 15.1.

Fig. 15.1
figure 1

Raw EOG signal acquired from subject S4 for 11 tasks
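The chapter does not list the preprocessing code, so the following is a minimal sketch of how a 2 s, two-channel EOG segment sampled at 100 Hz could be band-limited to 0.1–16 Hz. The Butterworth design, its order, and the zero-phase filtering are assumptions; only the pass band and the sampling rate come from Sect. 3.1.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 100            # sampling rate (Hz), Sect. 3.1
SEG_SECONDS = 2     # length of one acquired segment

def band_limit(channel, low=0.1, high=16.0, fs=FS, order=4):
    """Zero-phase band-pass of one EOG channel to 0.1-16 Hz.

    The Butterworth filter and its order are assumptions; the chapter
    only states the pass band and the sampling rate.
    """
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, channel)

# placeholder stand-in for one 2 s horizontal/vertical EOG recording
raw = np.random.randn(2, FS * SEG_SECONDS)
clean = np.vstack([band_limit(ch) for ch in raw])
```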

3.2 Feature Extraction

Features were extracted from the cleaned signal using the periodogram method. A periodogram is a mathematical tool for estimating how the power of a signal is distributed over frequency; it quantifies the contribution of different frequencies in time-series data and helps reveal any underlying periodic components. The feature extraction procedure consists of the following steps; a minimal code sketch is given after Fig. 15.2.

  • Step 1: S = sample data of the two-channel EOG signal for 2 s.

  • Step 2: S was partitioned into 0.1 s windows.

  • Step 3: Band-pass filters were applied to extract eight frequency bands from S.

  • Step 4: Apply the Fourier transform to each frequency band signal to extract the features.

  • Step 5: Take the absolute values and compute the sum of the power values.

  • Step 6: Take the average value for each frequency band.

  • Step 7: Repeat steps 1–6 for each trial of every task and for all ten subjects.

  • Step 8: Sixteen features were obtained for each task per trial, as shown in Fig. 15.2; this was repeated for ten trials of each of the 11 tasks.

  • Step 9: 110 data samples were obtained from each subject to train and test the neural network.

  • Step 10: Repeat steps 1–9 for all ten subjects to build the master dataset.

Fig. 15.2
figure 2

Feature extracted signal for 11 different eye movements for subject S4 using periodogram
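To make the procedure concrete, the following is a minimal Python sketch of steps 1–6 for a single 2 s trial. The exact 2 Hz band edges, the filter design, and the window handling are assumptions based on the descriptions in Sects. 3.1 and 3.2; the chapter reports only the steps, not the code.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 100                    # sampling rate (Hz), Sect. 3.1
WIN = int(0.1 * FS)         # 0.1 s windows -> 10 samples (step 2)
# eight contiguous 2 Hz bands spanning 0.1-16 Hz (exact edges are an assumption)
BANDS = [(0.1, 2.0)] + [(float(lo), float(lo) + 2.0) for lo in range(2, 16, 2)]

def trial_features(segment, fs=FS):
    """Sixteen periodogram features (8 bands x 2 channels) for one 2 s trial.

    segment : ndarray of shape (2, 200), horizontal and vertical EOG channels.
    """
    feats = []
    for ch in segment:                                    # step 1: two channels
        for lo, hi in BANDS:                              # step 3: eight bands
            sos = butter(2, [lo, hi], btype="band", fs=fs, output="sos")
            band = sosfiltfilt(sos, ch)
            windows = band[: len(band) // WIN * WIN].reshape(-1, WIN)   # step 2
            # steps 4-5: FFT of each window, absolute values, summed power
            power = (np.abs(np.fft.rfft(windows, axis=1)) ** 2).sum(axis=1)
            feats.append(power.mean())                    # step 6: band average
    return np.asarray(feats)
```

Stacking `trial_features` over ten trials of the 11 tasks would yield the 110 × 16 feature matrix per subject mentioned in steps 7–9.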

4 Classification Method

Prominent features obtained from the abovementioned steps were fed to a neural network to categorize the signals. In this study we focused on the probabilistic neural network (PNN). A PNN is based on statistical principles derived from the Bayes decision strategy and on Gaussian kernel-based estimators of the probability density function (PDF). Each input neuron corresponds to one element of the input vector and is fully connected to the n hidden-layer neurons; each hidden neuron is in turn fully connected to the output neurons. The input layer simply feeds the inputs into the classifier. The hidden layer computes the distance between the input vector and each training vector using the Gaussian kernel function and produces PDF values that indicate how close the input data points are to the training points. Finally, the output layer sums the weighted hidden-layer outputs for each class and applies the Bayes decision rule: it selects the class with the highest summed probability, assigning a one to that class and a zero to all other classes [26,27,28,29,30]. The network design used in this experiment is shown in Fig. 15.3; a minimal code sketch follows the figure.

Fig. 15.3
figure 3

Probabilistic neural network
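The following is a minimal NumPy sketch of the PNN just described: a Gaussian (Parzen) kernel hidden layer, a per-class summation layer, and a Bayes-decision output layer. The smoothing parameter sigma is an assumption; the chapter does not report the value used.

```python
import numpy as np

class PNN:
    """Minimal probabilistic neural network with a Gaussian (Parzen) kernel."""

    def __init__(self, sigma=0.1):
        self.sigma = sigma        # kernel spread (assumed value, not from the chapter)

    def fit(self, X, y):
        # the hidden layer stores one pattern unit per training vector
        self.X, self.y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes = np.unique(self.y)
        return self

    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, dtype=float))
        preds = []
        for x in X:
            # hidden layer: Gaussian kernel activation for every training pattern
            d2 = ((self.X - x) ** 2).sum(axis=1)
            k = np.exp(-d2 / (2.0 * self.sigma ** 2))
            # summation layer: average activation per class (PDF estimate)
            scores = [k[self.y == c].mean() for c in self.classes]
            # output layer: Bayes decision -- pick the most probable class
            preds.append(self.classes[int(np.argmax(scores))])
        return np.array(preds)
```

In use, `PNN(sigma=0.1).fit(X_train, y_train).predict(X_test)` would take the 16-dimensional periodogram feature vectors as X and the 11 task labels as y.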

5 Outcome of the Study

To evaluate the proposed method, we designed ten network models to categorize the signals acquired from five left-handers and five right-handers through the AD Instruments T26 bio amplifier, which was connected by cable to a computer running LabChart. Five electrodes were placed around the eyes to measure the horizontal and vertical movements. The subjects who participated in the study are listed in Table 15.1; their ages ranged from 20 to 36 years. We analyzed each subject's performance individually throughout the study; a sketch of this per-subject evaluation is given after Table 15.1.

Table 15.1 List of the subjects who participated in the study
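As an illustration of how the ten per-subject models could be trained and timed, the sketch below splits one subject's 110 × 16 feature matrix into training and testing sets, fits the PNN sketched in Sect. 4, and records accuracy together with training and testing times. The 70/30 split, the random seed, and sigma are assumptions; the chapter reports only the resulting figures.

```python
import time
import numpy as np
# uses the PNN class from the sketch in Sect. 4

def evaluate_subject(features, labels, train_frac=0.7, sigma=0.1, seed=0):
    """Train and test one PNN on one subject's 110 x 16 feature matrix.

    The split ratio, seed, and sigma are assumptions; the chapter reports
    only the resulting accuracies and training/testing times.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(labels))
    cut = int(train_frac * len(labels))
    tr, te = order[:cut], order[cut:]

    net = PNN(sigma=sigma)
    t0 = time.perf_counter()
    net.fit(features[tr], labels[tr])
    train_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    predictions = net.predict(features[te])
    test_time = time.perf_counter() - t0

    accuracy = 100.0 * (predictions == labels[te]).mean()
    return accuracy, train_time, test_time

# one model per subject -> ten models in total, as described above
# results = [evaluate_subject(X_s, y_s) for X_s, y_s in per_subject_data]
```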

Table 15.2 shows the average classification performance of the left-handed subjects using periodogram features with the PNN model; it reports the mean, minimum, and maximum accuracies along with the training and testing times. From Table 15.2 we see that subject S4 performed marginally better than the other left-handed subjects, with a maximum classification accuracy of 96.70%, a minimum of 89.24%, and an average of 94.67%, with training and testing times of 13.59 and 0.61 s for ten trials per task. The next best result among the left-handers was obtained by subject S1, with a maximum classification accuracy of 96.35%, a minimum of 88.33%, and an average of 93.70%, with training and testing times of 13.26 and 0.68 s for ten trials per task. The lowest accuracy among the left-handed subjects was obtained by subject S2, with a maximum classification accuracy of 95.83%, a minimum of 87.50%, and an average of 92.45%, with training and testing times of 13.42 and 0.64 s for ten trials per task. From the individual performances reported in Table 15.2, we found that subject S4 performed appreciably better than the other left-handers, as shown in Fig. 15.4.

Table 15.2 Average performance accuracy for left-handers using periodogram and PNN
Fig. 15.4
figure 4

Performance of left-hander using periodogram with PNN

Table 15.3 presents the average classification performance of the right-handed subjects using periodogram features with the PNN model; it reports the mean, minimum, and maximum accuracies along with the training and testing times. From Table 15.3 we see that subject S9 performed marginally better than the other right-handed subjects, with a maximum classification accuracy of 95.30%, a minimum of 90.00%, and an average of 92.10%, with training and testing times of 13.88 and 0.72 s for ten trials per task. The next best result among the right-handers was obtained by subject S8, with a maximum classification accuracy of 94.16%, a minimum of 86.80%, and an average of 91.78%, with training and testing times of 13.76 and 0.71 s for ten trials per task. The lowest accuracy among the right-handed subjects was obtained by subject S7, with a maximum classification accuracy of 91.64%, a minimum of 86.67%, and an average of 90.39%, with training and testing times of 13.92 and 0.70 s for ten trials per task. From the individual performances reported in Table 15.3, we found that subject S9 performed appreciably better than the other right-handers, as shown in Fig. 15.5.

Table 15.3 Average performance accuracy for right-handers using periodogram and PNN
Fig. 15.5
figure 5

Performance of right-hander using periodogram with PNN

From Tables 15.2 and 15.3, we conclude that subjects S4 and S9 performed best among the left-handers and right-handers, respectively, using periodogram features with the PNN model, as illustrated in Fig. 15.6. Comparing the two groups individually, the left-handed subjects performed marginally better than the right-handed subjects in this setup, and they also required less training time. The average results obtained in this experiment exceed those of our previous study [25] in terms of classification accuracy. We attribute this to dividing the ten subjects equally into five left-handers and five right-handers and to changing the feature extraction and classification techniques. This experiment suggests that building the HCI is feasible with left-handed subjects, while right-handed subjects may need some additional training to reach the same level.

Fig. 15.6
figure 6

Overall task classification for left-handers and right-handers using periodogram with PNN

6 Conclusion

The experiment was conducted with ten subjects (five left-handed, five right-handed) using a bio amplifier with five electrodes to identify the tasks performed by the different subjects using the periodogram with a probabilistic neural network. From the study we found that the average performance of the left-handed subjects was appreciably better than that of the right-handed subjects, with mean accuracies of 93.38% and 91.38%, respectively. Throughout the study, every left-handed subject's average performance was higher than that of the right-handed subjects. From this analysis we conclude that creating an HCI is possible using the abovementioned technique. In the future we plan to conduct this experiment in an online phase to check the feasibility of designing such an HCI.