Abstract
In earlier days, people with disabilities caused by neuromuscular disorders faced great difficulty in communication. Unable to share ideas and thoughts with others, they need assistance to overcome this condition. In this paper we discuss the feasibility of designing an electrooculogram (EOG)-based human–computer interface (HCI), evaluated with ten subjects using power spectral density techniques and a neural network. The study compares the performance of right-handers with that of left-handers. The outcomes show that left-handers performed marginally better than right-handers in terms of classification accuracy, with an average accuracy of 93.38% across all left-handed subjects and 91.38% across all right-handed subjects using a probabilistic neural network (PNN). We also observed that the left-handers participated with greater interest during training and were able to perform the 11 tasks more easily than the right-handers. From this study we conclude that creating an HCI is feasible with left-handed subjects, whereas right-handers need additional training to achieve the same result. Finally, the experiment outperforms our previous study in terms of performance by changing the subject pool from right-handers to left-handers.
1 Introduction
Communication based on eye movements plays a vital role in developing communication devices for patients with amyotrophic lateral sclerosis and other motor neuron degenerative conditions such as quadriplegia, Guillain-Barré syndrome, spinal cord injury, and hemiparesis. These diseases attack all controllable movements, including speech, writing, and walking, while typically sparing eye movement. Affected people need assistance even to move from one place to another. Statistical surveys show that motor neuron diseases are increasing day by day, reaching 15–18% of the population, and the number of affected people continues to grow. To address this condition, rehabilitation devices are needed that can replace the lost biological channel in a natural way. Recently, many EOG-based HCI studies have attempted to reestablish communication channels for people with severe motor disabilities in the absence of biological channels. Common input devices such as the mouse, keyboard, touch screen, touchpad, and trackball require manual control and cannot be operated by people with such disabilities, so an alternative method of communication between humans and machines is needed for interaction with caretakers. One such technology is a rehabilitation device that helps the disabled person control the surrounding environment and exchange thoughts more efficiently; such devices encourage users to perform tasks normally with assistance from technology. The small potential that appears between the front and back of the eye is called the electrooculogram, and the technique of establishing communication between human and machine is called HCI. Combining these two creates a new pathway for elderly and disabled people through rehabilitative aids; thus EOG-based HCI plays a vital role in developing assistive devices for people with motor neuron disease [1,2,3,4,5].
Some of the EOG-based interfaces created by various researchers include an eye reading system [6], an EOG speller [7], a security system [8], a speech interaction system [9], a multimedia control system [10], a robotic wheelchair [11], indoor positioning [12], a cursor controller [13], a DC motor controller [14], eye gestures [15], a virtual keyboard [16], and a hospital alarm system [17]. In this experiment we compared the performance of right-handers with that of left-handers to analyze the feasibility of creating a nine-state HCI that lets disabled individuals convey their thoughts without help from others.
2 Previous Work Done
Many techniques are already available for implementing rehabilitation devices driven by eye movements; the most relevant studies are summarized below. Tangsuksant et al. (2012) designed a virtual keyboard using a voltage threshold algorithm. Signals were collected during both horizontal and vertical eye movement tasks by placing six electrodes around the eyes, and the collected signals were classified with the voltage threshold algorithm. The proposed method achieved an average accuracy of 95.2% at a speed of 25.94 s/letter [18]. Swami Nathan et al. (2012) designed a virtual keyboard for motor-disabled people using a signal-processing algorithm for both vertical and horizontal eye movements and obtained an improved accuracy of 95.2% with a typing speed of 15 letters/min [19]. Souza and Natarajan (2014) designed an EOG-based interface for the elderly and disabled using a nonparametric method. Data collected from 40 subjects were processed with the nonparametric method and classified with an Elman network, achieving a mean classification accuracy of 99.95% for both vertical and horizontal tasks [20]. Pratim Banik et al. (2015) created a virtual keyboard for physically disabled persons in which keys were selected using EOG signals collected from five subjects. The signals were sent to a microcontroller through serial communication, and a graphical user interface was designed to collect the output. The system obtained an average classification accuracy of 95%, with average button selection times of 4.27 and 4.11 s for selecting ten buttons [21]. Rakshit et al. (2016) developed an assistive device for people with speech disabilities due to brain stroke or spinal cord injury; twelve subjects participated in the study. Power spectral density features were extracted from the collected signals and classified using a support vector machine with a multilayer perceptron kernel function, giving an average accuracy of 90% [22]. Hossain et al. (2017) developed a cursor controller for the disabled using vertical and horizontal eye movement tasks, built around an AD620 instrumentation amplifier and an LM741 operational amplifier for four tasks; SVM and LDA classifiers were used to classify the online data [23]. Ramkumar et al. (2017) developed an EOG-based HCI for ALS patients using nine movements from six subjects. Features were extracted using the Euclidean norm and classified with a neural network, yielding an average classification accuracy of 87.72% with a dynamic network [24]. The literature survey shows that parametric and nonparametric methods are well suited to feature extraction when paired with a neural network classifier, and that the neural classifier outperformed the other classifiers used in previous studies. We therefore planned our study around a parametric method with a neural network.
3 Methods
3.1 Experimental Protocol
The master dataset was acquired from ten healthy participants for 2 s using a five-electrode system and an ADT26 bioamplifier. The signals were band-limited from 0.1 to 16 Hz, divided into 2-Hz bands, and sampled at 100 Hz. Signal acquisition and preprocessing were described by the same authors in earlier work [25]. A raw signal acquired from subject S4 is shown in Fig. 15.1.
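The preprocessing described above can be sketched as follows. This is a minimal illustration, assuming SciPy is available and using a fourth-order Butterworth filter and synthetic data; the paper does not specify the exact filter type or order, so those choices are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100  # sampling rate in Hz, as stated in the protocol

def bandpass(signal, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter (filter choice assumed)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

# Two seconds of a simulated two-channel EOG recording (stand-in data).
rng = np.random.default_rng(0)
raw = rng.standard_normal((2, 2 * FS))

# Restrict each channel to the 0.1-16 Hz range used in the paper.
clean = np.vstack([bandpass(ch, 0.1, 16.0) for ch in raw])
```

The zero-phase `filtfilt` call avoids shifting the eye-movement waveform in time, which matters when short 0.1 s windows are cut from the trace later.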
3.2 Feature Extraction
Features were extracted from the cleaned signal using the periodogram method. A periodogram is a mathematical tool for estimating the contribution of different frequencies in time-series data, allowing any underlying periodic components to be recognized. The feature extraction method consists of the following steps.
- Step 1: S = sample data of the two-channel EOG signal for 2 s.
- Step 2: Partition S into 0.1 s windows.
- Step 3: Apply band-pass filters to extract eight frequency bands from S.
- Step 4: Apply the Fourier transform to each frequency band signal to extract the features.
- Step 5: Take the absolute values and sum the power values.
- Step 6: Take the average value from each frequency band.
- Step 7: Repeat steps 1–6 for each trial of all tasks.
- Step 8: Sixteen features are obtained per task per trial (shown in Fig. 15.2); repeat for ten such trials over the 11 tasks.
- Step 9: 110 data samples are obtained from each subject to train and test the neural network.
- Step 10: Repeat steps 1–9 for all ten subjects to collect the master dataset.
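The steps above can be sketched in code. This is an illustrative reconstruction, not the authors' implementation: the band edges, filter order, and the exact order of windowing versus filtering are assumptions filled in from the 0.1–16 Hz range and 2-Hz bands stated in the protocol.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100  # sampling rate (Hz)
# Eight 2-Hz bands spanning 0.1-16 Hz (band edges assumed).
BANDS = [(0.1, 2), (2, 4), (4, 6), (6, 8),
         (8, 10), (10, 12), (12, 14), (14, 16)]

def band_feature(channel, low, high, fs=FS):
    """Average band power of one channel in one frequency band."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, channel)
    # Cut into 0.1 s windows, sum FFT power per window, then average.
    windows = filtered.reshape(-1, int(0.1 * fs))
    powers = [np.sum(np.abs(np.fft.rfft(w)) ** 2) for w in windows]
    return float(np.mean(powers))

def extract_features(trial):
    """trial: (2, 200) array -> 16-element vector (8 bands x 2 channels)."""
    return np.array([band_feature(ch, lo, hi)
                     for ch in trial for lo, hi in BANDS])

# One simulated 2 s, two-channel trial yields a 16-feature vector.
rng = np.random.default_rng(1)
feats = extract_features(rng.standard_normal((2, 2 * FS)))
```

Running this per trial for ten trials of the 11 tasks gives the 110 samples per subject described in steps 8 and 9.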
4 Classification Method
The prominent features obtained from the above steps were applied to a neural network to categorize the signals. This study focuses on the probabilistic neural network (PNN). The PNN is based on statistical principles derived from the Bayes decision strategy and Gaussian kernel-based estimators of the probability density function (PDF). Each input neuron corresponds to one element of the input vector and is fully connected to the n hidden-layer neurons; each hidden neuron is in turn fully connected to the output neurons. The input layer simply feeds the inputs into the classifier. The hidden layer computes the distance between the input vector and each training vector using the Gaussian kernel function, producing PDF values whose elements indicate the closeness between the input point and the training points. Finally, the output layer sums the weighted outputs of the hidden layer and applies the Bayes decision rule: it selects the class with the highest summed probability, outputting a one for that class and a zero for all others [26,27,28,29,30]. The network design used in this experiment is shown in Fig. 15.3.
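The PNN decision rule described above can be condensed into a few lines. This is a generic sketch of the pattern/summation/output layers, not the paper's trained model; the smoothing parameter `sigma` and the toy data are assumptions for illustration.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Probabilistic neural network: one Gaussian kernel per training
    example (pattern layer), class-wise averaging of kernel responses
    (summation layer), argmax Bayes decision (output layer)."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        members = train_X[train_y == c]
        # Gaussian kernel PDF estimate for class c at point x.
        d2 = np.sum((members - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    return classes[int(np.argmax(scores))]

# Toy check with two well-separated clusters standing in for task classes.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
pred = pnn_classify(np.array([2.9, 3.1]), X, y)  # point near cluster 1
```

Because the pattern layer stores every training vector, a PNN trains in a single pass, which is consistent with the short training times reported in Tables 15.2 and 15.3.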
5 Outcome of the Study
To evaluate the proposed method, we designed ten network models to categorize the signals acquired from five left-handers and five right-handers through an AD Instruments T26 amplifier, with LabChart running on a connected computer. Five electrodes were placed around the eyes to measure horizontal and vertical movements. The subjects who participated in the study, aged between 20 and 36, are listed in Table 15.1. Each subject's performance was analyzed individually throughout the study.
Table 15.2 shows the classification performance of the left-handed subjects using periodogram features with the PNN model, reporting the mean, minimum, and maximum accuracies together with training and testing times. From Table 15.2 we find that subject S4 performed marginally better than the other left-handed subjects, with a maximum classification accuracy of 96.70%, a minimum of 89.24%, and an average of 94.67%, with training and testing times of 13.59 and 0.61 s for ten trials per task. The next-best result was obtained by subject S1, with a maximum of 96.35%, a minimum of 88.33%, and an average of 93.70%, with training and testing times of 13.26 and 0.68 s. The lowest accuracy among the left-handed subjects was obtained by subject S2, with a maximum of 95.83%, a minimum of 87.50%, and an average of 92.45%, with training and testing times of 13.42 and 0.64 s. From the individual performances in Table 15.2 we find that subject S4's performance was the most appreciable among the left-handers in this study, as shown in Fig. 15.4.
Table 15.3 shows the classification performance of the right-handed subjects using periodogram features with the PNN model, reporting the mean, minimum, and maximum accuracies together with training and testing times. From Table 15.3 we find that subject S9 performed marginally better than the other right-handers, with a maximum classification accuracy of 95.30%, a minimum of 90.00%, and an average of 92.10%, with training and testing times of 13.88 and 0.72 s for ten trials per task. The next-best result among the right-handers was obtained by subject S8, with a maximum of 94.16%, a minimum of 86.80%, and an average of 91.78%, with training and testing times of 13.76 and 0.71 s. The lowest accuracy among the right-handed subjects was obtained by subject S7, with a maximum of 91.64%, a minimum of 86.67%, and an average of 90.39%, with training and testing times of 13.92 and 0.70 s. From the individual performances in Table 15.3 we find that subject S9's performance was the most appreciable among the right-handers in this study, as shown in Fig. 15.5.
From Tables 15.2 and 15.3 we conclude that subjects S4 and S9 performed best among the left-handers and right-handers, respectively, using periodogram features with the PNN model, as illustrated in Fig. 15.6. Comparing the two groups, the left-handed subjects performed marginally better than the right-handed subjects, and they also required less time during the training period. The average results obtained in this experiment exceed those of our previous study [25] in terms of classification accuracy; we attribute this to splitting the ten subjects equally into five left-handers and five right-handers and to changing the feature extraction and classification techniques. Through this experiment we found that building the HCI is feasible with left-handed subjects, whereas right-handed subjects need additional training to achieve the same result.
6 Conclusion
The experiment was conducted with ten subjects (five left-handed, five right-handed) using a bioamplifier with five electrodes to identify the tasks performed by each subject, using the periodogram with a probabilistic neural network. The study shows that the average performance of the left-handed subjects was appreciably better than that of the right-handed subjects, with mean accuracies of 93.38% and 91.38%, respectively. Throughout the study, every left-handed subject's average performance exceeded that of the right-handed subjects who participated in this experiment. From this analysis we conclude that creating an HCI is possible using the above technique. In the future we plan to conduct this experiment in an online phase to check the feasibility of the designed HCI.
References
F. Fang, T. Shinozaki, Electrooculography-based continuous eyewriting recognition system for efficient assistive communication systems. PLoS One 13(2), 1–20 (2018)
C. Mondal, Md. Kawsar Azam, M. Ahmad, S.M. Kamrul Hasan, Md. Rabiul Islam, Design and implementation of a prototype electrooculography based data acquisition system, in International Conference on Electrical Engineering and Information & Communication Technology, 2015, pp. 1–6
A. Krolak, P. Strumiłło, Eye-blink controlled human-computer interface for the disabled. Hum. Comput. Syst. Interact. 60, 123–133 (2009)
R. Hajare, M. Gowda, S. Jain, P. Rudraraju, A. Bhat, Design and development of voice activated intelligent system for elderly and physically challenged, in International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques, 2016, pp. 372–346
Y.M. Nolan, Control and communication for physically disabled people, based on vestigial signals from the body. PhD thesis, Paper submitted to Natl. Univ. Ireland, Dublin, 2005, pp. 7–18
A. Banerjee, A. Rakshit, D.N. Tibarewala, Application of electrooculography to estimate word count while reading text, in International Conference on Systems in Medicine and Biology, 2016, pp. 174–177
N. Barbara, T.A. Camilleri, Interfacing with a speller using EOG glasses, in International Conference on Systems, Man, and Cybernetics (SMC), 2016, pp. 001069–001074
Md. Shazzad Hossain, K. Huda, S.M. Sadman Rahman, M. Ahmad, Implementation of an EOG based security system by analyzing eye movement patterns, in International Conference on Advances in Electrical Engineering (ICAEE), 2015, pp. 149–152
M. Katore, M.R. Bachute, Speech based human machine interaction system for home automation, in IEEE Bombay Section Symposium (IBSS), 2015, pp. 1–6
L. Li, X. Wu, Design and implementation of multimedia control system based on bluetooth and electrooculogram (EOG), in International Conference on Bioinformatics and Biomedical Engineering, 2011, pp. 1–4
A. Banerjee, S. Datta, P. Das, A. Konar, D.N. Tibarewala, R. Janarthanan, Electrooculogram based online control signal generation for wheelchair, in International Symposium on Electronic System Design, 2012, pp. 251–255
X. Li, D. Luo, F. Zhao, Y. Li, H. Luo, Sensor fusion-based infrastructure independent and agile real-time indoor positioning technology for disabled and elderly people, in International Symposium on Future Information and Communication Technologies for Ubiquitous HealthCare (Ubi-HealthTech), 2015, pp. 1–5
B. Akan, A.O. Argunsah, A human-computer interface (HCI) based on electrooculogram (EOG) for handicapped, in International Conference on Signal Processing and Communications Applications, 2007, pp. 1–3
Z.-H. Wang, Hendrick, Y.-F. Kung, C.-T. Chan, S.-H. Lin, G.-J. Jong, Controlling DC motor using eye blink signals based on LabVIEW, in International Conference on Electrical, Electronics and Information Engineering (ICEEIE), 2017, pp. 61–65
M. Lin, G. Mo, Eye gestures recognition technology in human-computer interaction, in International Conference on Biomedical Engineering and Informatics (BMEI), 2011, pp. 1316–1318
D.R. Lingegowda, K. Amrutesh, S. Ramanujam, Electrooculography based assistive technology for ALS patients, in IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), 2017, pp. 36–40
S. Venkataramanan, P. Prabhat, S.R. Choudhury, H.B. Nemade, J.S. Sahambi, Biomedical instrumentation based on electrooculogram (EOG) signal processing and application to a hospital alarm system, in Proceedings of the Second International Conference on Intelligent Sensing and Information Processing, 2005, pp. 535–540
W. Tangsuksant, C. Aekmunkhongpaisal, P. Cambua, T. Charoenpong, T. Chanwimalueang, Directional eye movement detection system for virtual keyboard controller, in International Conference on Biomedical Engineering, 2012, pp. 1–5
D. Swami Nathan, A.P. Vinod, K.P. Thomas, An electrooculogram based assistive communication system with improved speed and accuracy using multi-directional eye movements, in International Conference on Telecommunications and Signal Processing, 2012, pp. 18–21
S.D. Souza, S. Natarajan, Recognition of EOG based reading task using AR features, in International Conference on Circuits, Communication, Control and Computing, 2014, pp. 113–117
P.P. Banik, Md. Kawsar Azam, C. Mondal, Md. Asadur Rahman, Single channel electrooculography based human-computer interface for physically disabled persons, in International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), 2015, pp. 1–6
A. Rakshit, A. Banerjee, D.N. Tibarewala, Electro-oculogram based digit recognition to design assistive communication system for speech disabled patients, in International Conference on Microelectronics, Computing and Communications, 2016, pp. 1–5
Z. Hossain, Md. Maruf Hossain Shuvo, P. Sarker, Hardware and software implementation of real time electrooculogram (EOG) acquisition system to control computer cursor with eyeball movement, in International Conference on Advances in Electrical Engineering, 2017, pp. 32–37
S. Ramkumar, K. Sathesh Kumar, G. Emayavaramban, A feasibility study on eye movements using electrooculogram based HCI, in International Conference on Intelligent Sustainable Systems (ICISS), 2017, pp. 380–383
S. Ramkumar, K. Sathesh Kumar, G. Emayavaramban, EOG signal classification using neural network for human computer interaction. Int. J. Comput. Theory Appl. 9(24), 223–231 (2016)
T. Gandhi, B.K. Panigrahi, S. Anand, A comparative study of wavelet families for EEG signal classification. Neurocomputing 74, 3051–3057 (2011)
M. Hariharan, M.P. Paulraj, S. Yaccob, Time-domain features and probabilistic neural network for the detection of vocal fold pathology. Malaysian J. Comput. Sci. 23(1), 60–67 (2010)
M. Hariharan, M.P. Paulraj, S. Yaccob, Detection of vocal fold paralysis and oedema using time-domain features and probabilistic neural network. Int. J. Biomed. Eng. Technol. 6(1), 46–57 (2011)
D.F. Specht, Probabilistic neural networks. Neural Networks 3(1), 109–118 (1990)
T. Sitamahalakshmi, A. Vinay Babu, M. Lagadesh, K.V.V. Chandra Mouli, Performance of radial basis function networks and probabilistic neural networks for Telugu character recognition. Global J. Comput. Sci. Technol. 11, 9–16 (2011)
© 2020 Springer Nature Switzerland AG
Ramkumar, S., Emayavaramban, G., Sathesh Kumar, K., Macklin Abraham Navamani, J., Maheswari, K., Packia Amutha Priya, P. (2020). Task Identification System for Elderly Paralyzed Patients Using Electrooculography and Neural Networks. In: Haldorai, A., Ramu, A., Mohanram, S., Onn, C. (eds) EAI International Conference on Big Data Innovation for Sustainable Cognitive Computing. EAI/Springer Innovations in Communication and Computing. Springer, Cham. https://doi.org/10.1007/978-3-030-19562-5_15
Print ISBN: 978-3-030-19561-8
Online ISBN: 978-3-030-19562-5