Abstract
Diagnosis and treatment of facial paralysis should begin within the first seventy-two hours to avoid compromising the recovery of motor function in the facial muscles. The proposed solution is a software system composed of two modules for the analysis and rehabilitation of facial paralysis: (i) the analysis module detects and quantifies the level of asymmetry in the face by measuring the divergence between vital points located at the eyes, nose and mouth, with measurements refined by the gamma correction method; (ii) the rehabilitation module stimulates the facial nerves with physical exercises and monitors the patient’s progress during therapy, through a 3D virtual environment that tracks the patient’s facial gestures and uses them to direct a series of activities grounded in physical rehabilitation practice.
1 Introduction
Facial paralysis is a condition characterized by degeneration of the motor and sensory function of the facial nerve [1]. It can be triggered by facial nerve infections, head trauma or metabolic disorders [2, 3]. The disorder is mildly prevalent in children under 10 years of age and in the elderly, and highly prevalent in pregnant women, diabetics and people with a previous history of the disease; 67% of patients exhibit excessive tearing of the eyes, 52% have postauricular pain, 34% have taste disorders and 14% may develop phonophobia [4].
Prompt treatment of this facial pathology is essential to prevent the patient’s clinical picture from deteriorating; recovery varies from 15 to 45 days and can even extend to 4 years, depending on the degree to which the patient is affected. Facial paralysis has a great impact on the quality of life of the sufferer: it impairs the capacity for facial-emotional expression; interferes with activities such as eating, drinking liquids and speaking [2]; and causes loss of eyelid movement, dysfunctional tearing, and nasal dysfunction and obstruction [5].
Among the methods for clinical evaluation of facial function are: (i) the House-Brackmann scale, which establishes 6 degrees of facial dysfunction: Grade I, normal functioning; Grade II, mild dysfunction; Grade III, moderate dysfunction; Grade IV, moderately severe dysfunction; Grade V, severe dysfunction; and Grade VI, total paralysis [6, 7]; (ii) the Burres-Fisch system, which provides a linear measurement index on a continuous graduated scale; all such methods quantify the distances between reference points, both at rest and during voluntary movement [5, 8]; (iii) the scoring method, which tracks the movement of reference points extracted from the affected side and relates them to the healthy side.
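The grading described above can be sketched as a simple threshold lookup. The cut-off percentages below are purely hypothetical illustrations (neither the paper nor the House-Brackmann scale defines asymmetry percentages for each grade); only the six grade labels come from the scale itself.

```python
# Illustrative sketch: map an overall facial-asymmetry percentage (0-100)
# to a House-Brackmann grade label. The numeric thresholds are HYPOTHETICAL
# example values, not clinical cut-offs from the scale or from this paper.
HB_GRADES = [
    (5, "Grade I: Normal functioning"),
    (15, "Grade II: Mild dysfunction"),
    (30, "Grade III: Moderate dysfunction"),
    (50, "Grade IV: Moderately severe dysfunction"),
    (80, "Grade V: Severe dysfunction"),
]

def house_brackmann(asymmetry_pct: float) -> str:
    """Return the grade label for a 0-100 asymmetry percentage."""
    for threshold, label in HB_GRADES:
        if asymmetry_pct <= threshold:
            return label
    return "Grade VI: Total paralysis"
```

In a deployed system the thresholds would have to be calibrated against clinician-assigned grades rather than fixed by hand.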
In recent years, the widespread use of cameras equipped with stereo vision (Kinect) has increased the reliability of facial recognition systems, and stereo images have been applied in clinical settings with great success, aimed at patient rehabilitation. This research proposes a facial recognition system that uses passive stereoscopic vision to capture facial information; most face recognition techniques to date assume the use of active measurement to capture facial features [9]. The main problem reported when using stereo vision for facial detection is its low accuracy, so the images extracted from the sensor are processed with the correction methods studied here. Several investigations on the use of Kinect for virtual rehabilitation have been conducted in recent years: (i) Sin and Lee employ the Kinect device for rehabilitation in clinics, demonstrating significant improvements in the motor function of stroke patients [10, 11]; (ii) Chang et al. employ Kinect-based rehabilitation for people with motor disabilities, with results showing greater patient motivation towards treatment and correction of their movement [12]; (iii) González-Ortega et al. developed a Kinect-based 3D computer vision system for evaluation and cognitive rehabilitation, which has been successfully tested with a high monitoring success rate [13, 14].
A virtual environment rests on two important concepts: (i) interaction, where the user is directly involved with the virtual environment in real time, and (ii) immersion, where hardware devices give the patient the feeling of being physically inside the virtual environment; the key elements of virtual rehabilitation are repetition, feedback and patient motivation [7]. Therefore, a virtual environment was used that simulates a recreational context aimed directly at the patient and his rehabilitation.
It is imperative for facial nerve recovery that the patient follow a traditional exercise guide to overcome these limitations and regain sufficient control of their movements. However, physical rehabilitation is a complex, long-term process with repetitive instructions that can diminish the patient’s motivation and interest in moving forward with their treatment [15].
The proposed virtual rehabilitation system can be used under medical supervision or at home. Its advantages include the recreation of different treatment exercises in virtual form, the configuration of the exercise characteristics according to the patient data obtained, and a more familiar, comfortable and attractive environment with game-based tasks. With all these advantages, virtual clinical rehabilitation increases patient motivation, interest and adherence to treatment.
eHealth (medical informatics) involves the insertion of technology into medicine; it aims at preventive diagnosis and, above all, plays a prominent role in medical decision making. mHealth covers the safe and effective use of information technologies, supported by mobile devices, in tasks such as the evaluation and monitoring of the patient’s condition and the surveillance and control of health risks [16].
In this research work, we present a medical diagnosis and rehabilitation system dedicated to patients affected by facial paralysis, using the Microsoft Kinect sensor, a device that transmits facial information to the virtual environment through gesture recognition and face mapping; the extracted information undergoes image correction to reduce error rates. The first analysis quantifies the patient’s degree of paralysis, shown dynamically in the interface. The second presents a criterion for patient recovery, proposed in the form of a virtual game.
2 System Structure
The system, developed in Unity3D, aims to operate autonomously and as a support tool for medical specialists in the diagnosis of facial paralysis, by compiling the patient’s facial characteristics and calculating the symmetry index in the areas of the eyebrows, eyes and mouth. In addition, it contains an alternative module for the rehabilitation of the disease, based on the gestures the patient can perform during his clinical course. Figure 1 presents the general scheme of operation of the medical system, segmented into three parts: (i) the inputs, which manage the acquisition of facial data; the Kinect device is used to detect the patient’s facial coordinates for diagnosis and to recognize the gestures used in rehabilitation; (ii) the Unity interfaces, which contain the virtual environments of the medical system, operated by several scripts that manage the facial data, the calculation of the symmetry index and the control of the game; (iii) the outputs, displayed on screen, which present in real time the percentage of affection of the pathology and the score obtained in the virtual rehabilitation process.
The GY MEDIC system consists of two modules: (i) the analysis module, described in Fig. 2, and (ii) the rehabilitation module, described in Fig. 3.
Module one is subdivided into three sublayers: (a) Facial characteristics, which processes the Kinect detection to capture the data and maps the face to a model of points visible on screen for the end user; (b) Error, which filters the facial coordinates used to estimate the damage of the disease; the software collects data only when the user is in a frontal position with respect to the Kinect, with a margin of error of less than 2%; (c) Symmetry, responsible for quantifying the degree of facial paralysis, which applies correction methods to deliver accurate analysis results.
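The Error sublayer above can be sketched as a frame filter. The check below is a minimal illustration, not the paper's implementation: it assumes the frontal-pose test can be approximated by comparing the sensor depth (z, in meters) of two symmetric landmarks, here labeled as eye corners, against the stated 2% tolerance.

```python
# Minimal sketch of the "Error" sublayer: facial coordinates are accepted
# only when the head is roughly frontal to the sensor. Using the depth of
# two symmetric landmarks (and these field names) is an illustrative
# assumption, not the paper's actual frontal-pose criterion.
def is_frontal(left_eye_z: float, right_eye_z: float, tolerance: float = 0.02) -> bool:
    """Accept the frame when the relative depth difference is under 2%."""
    mean_z = (left_eye_z + right_eye_z) / 2.0
    if mean_z <= 0:
        return False
    return abs(left_eye_z - right_eye_z) / mean_z < tolerance

def filter_frames(frames):
    """Keep only frames captured in a frontal position."""
    return [f for f in frames if is_frontal(f["left_eye_z"], f["right_eye_z"])]
```

Frames that fail the check are simply dropped, so downstream symmetry estimates never see off-axis poses.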
Module two comprises the following structure: (a) Rehabilitation area: following the diagnostic process, and immersed in the rehabilitation environment, the user is assigned an exercise routine depending on the area affected by facial paralysis. (b) Start Game: the module begins with the registration of the patient in the GY MEDIC Firebase Realtime Database hosted in the cloud; this record keeps a clinical picture of the progress of the user’s treatment. Series of physiotherapeutic exercises are then presented, controlled by a timer and distributed in rounds of 30, 60 and 90 s depending on the degree found in the user, all focused on three areas: eyebrows, eyes and mouth. (c) Gesture detection: through the Kinect hardware, the gestures detected in the patient are recognized and validated. (d) Score Game: in charge of weighing the results achieved by the patient; the score increases or decreases according to the rehabilitation process.
3 Facial Features Detection
Facial recognition with 2D cameras involves a series of problems related to gestures, occlusion between extremities and lighting changes. The developed system uses 3D facial recognition; the methods applied in the software algorithm use color, depth and spatial images, which allow high performance in the presence of accessories and involuntary facial expressions, absence of light and noise. Software developed with this technology has a great impact on people with physical disabilities as well as on the diagnosis and rehabilitation of pathologies; its cost, performance and robustness make it very effective for medical support [13, 17]. The software solution incorporates an RGB panel, provided by the ColorFrameReader class, for the visualization of the patient’s face; a 3D model superimposed on the real image is generated from the 2D reference points mentioned [9].
Tracking the face in real time, obtaining coordinates and reducing errors when acquiring data involves the use of the following structures: (a) CameraSpacePoint, which represents a 3D point in camera space (in meters); (b) DepthSpacePoint, which represents pixel coordinates within a depth image [18, 19, 20]. Face tracking is a process that allows the visualization and tracking of facial points superimposed on the image of the patient, offering better interaction with the system in terms of movement, feature detection and diagnosis by a medical professional.
For the initial quantification of the pathology, sixteen reference points located in specific areas of the face, detailed in Fig. 4, were captured; the points are extracted using the Microsoft.Kinect.Face library through the HighDetailFacePoints enumeration, which provides up to 1300 facial points [3].
The process of estimating the symmetry index (IS) involves calculating the distances between the reference points of the face (see Table 1) and expressing their differences as percentages; these variations decrease or increase according to the patient’s level of affection [3].
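The index computation described above can be sketched as follows. This is an illustration under stated assumptions, not the paper's code: it takes as given one inter-landmark distance per half-face (the specific landmark pairs of Table 1 are not reproduced here) and reports their percentage difference, where 0% indicates perfect symmetry.

```python
import math

# Illustrative symmetry-index (IS) computation: the Euclidean distances
# between mirrored reference points on each half of the face are compared
# and their difference is expressed as a percentage. Which landmark pairs
# are measured is an assumption; the paper's Table 1 defines the real ones.
def distance(p, q):
    """Euclidean distance between two landmark coordinates."""
    return math.dist(p, q)

def symmetry_index(left_dist: float, right_dist: float) -> float:
    """Percentage difference between the two sides; 0% means perfect symmetry."""
    longest = max(left_dist, right_dist)
    if longest == 0:
        return 0.0
    return abs(left_dist - right_dist) / longest * 100.0
```

Run per zone (eyebrows, eyes, mouth), this yields the three asymmetry percentages the interface reports.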
(i) Gamma correction method
The gamma correction method is a non-linear mathematical operation that specifies the relationship between a pixel and its luminance; it is applied to the images of modern display systems to improve contrast [21]. The proposed system adopts this method to reduce quantification error and to magnify the measurements when the patient presents the disease [3].
The gamma function is given by \( V_{output} = K \, V_{input}^{\gamma} \) (1), where \( V_{output} \) represents the corrected output value, \( K \) is a constant, \( V_{input} \) is the value to be corrected and \( \gamma \) is the gamma constant. The symmetry indices are readjusted to refine the results obtained, with the correction focused on two areas: (a) Eyes: since they have an elliptical shape, their areas can be calculated from the major axis \( A \) and the minor axis \( B \) by applying (2), \( S = \pi \left( A/2 \right)\left( B/2 \right) \); (b) Mouth: in patients without the pathology, the vectors \( u \) (nose to chin) and \( v \) (left to right corners of the lips) are perpendicular; this measure, obtained with (3) from \( \cos \theta = \frac{u \cdot v}{\left\| u \right\| \left\| v \right\|} \), varies according to the condition [3].
These modifications constitute a correction of the initial values and give precision to the system, making it more sensitive to changes in the detected facial coordinates.
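A minimal sketch of the correction step, assuming the measurement has been normalized to [0, 1] before applying (1); the values K = 1 and γ = 0.5 are example parameters, not the ones used by GY MEDIC:

```python
# Sketch of gamma correction applied to a normalized symmetry measurement,
# following V_out = K * V_in ** gamma. With gamma < 1 small asymmetry values
# are magnified, matching the stated goal of greater sensitivity to mild
# paralysis. K = 1.0 and gamma = 0.5 are illustrative, assumed values.
def gamma_correct(v_input: float, k: float = 1.0, gamma: float = 0.5) -> float:
    """Apply V_out = K * V_in ** gamma to a value normalized to [0, 1]."""
    if not 0.0 <= v_input <= 1.0:
        raise ValueError("v_input must be normalized to [0, 1]")
    return k * v_input ** gamma
```

For example, with these parameters a mild 25% asymmetry (0.25) is mapped to 0.5, doubling its weight in the final quantification.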
4 Virtual Environment
Support systems for clinical decision making remain scarce in medical centers and rehabilitation programs; a system of this type must consider issues such as cost, size, operation and automation. Rehabilitation software reduces the cost of personal therapy, increases patient motivation, and evaluates patient progress with satisfactory results [13].
One of the challenges in the development of the application is the GUI, due to the patient-computer interaction. In particular, the usability concept in software development holds that the essential factor to consider is ease of use, in parallel with the actions performed by the user as described in Fig. 5; this affects the precision requirements of the system in its processing functions [22, 23].
The SketchUp software can design any 3D environment or virtual object, and even allows assigning properties such as textures, colors and animations that can be used in Unity. The process for developing the virtual environment in Unity follows these steps: (i) Import: Unity uses the 3D *.fbx models generated in SketchUp or other modeling software; (ii) Properties: the models can be scaled, their textures modified and new Unity features added; (iii) Control: the structure of the system depends on the configuration and programming of the inputs. The process of exporting a model to Unity is described in Fig. 6 [24, 25].
5 Experimental Results
5.1 Module I: Analysis
The following results show the performance observed when the user operates “GY MEDIC”. The analysis module uses the coordinates of the facial points provided by the Kinect device; these coordinates are used to calculate the symmetry of the face. The main interface displays a window that shows the user’s face during the examination; superimposed on it are sixteen green points that track the face in coordination with the real image of the user; finally, three text boxes with their respective indicators report the percentage of asymmetry in the analyzed zones, as described in Fig. 7.
The analysis process for the detection of facial paralysis begins with the preparation of the scenario, which involves a Kinect V2, a computer and the Kinect V2 PC adapter. Next, the patient is positioned at the suggested distance of 1.4 m from the Kinect, with the Kinect placed between 0.6 m and 1.8 m above the floor. The system is then switched on, validates the presence of the sensor and initiates face tracking. Finally, the analysis is carried out under medical assistance and the results of the examination are displayed on screen in real time, as described in Fig. 8.
To obtain experimental data in the analysis phase, samples from several participants were used; the percentages are expected to vary according to the positioning with respect to the sensor, the participants’ facial characteristics and whether they show any sign of facial paralysis. “GY MEDIC” uses a function to validate the input data: when the user is out of the recommended position, the values are discarded entirely to avoid erroneous results; this validation allows an error of less than 2%. The values shown in Table 2 have been classified on the House-Brackmann scale, which considers 6 degrees of dysfunction. In addition, the results showed some susceptibility of the system to changes in user gestures, mainly blinking.
5.2 Module II: Rehabilitation
The rehabilitation module implements a total of twelve physiotherapeutic exercises divided into three levels, that is, four games for each area: eyebrows, eyes and mouth. The game proceeds as follows: (i) Initial Interface: the users’ data are stored in the Firebase Realtime Database in the cloud, a record necessary for following the patient’s clinical picture (Figs. 9 and 10); (ii) Main Menu: through a menu listing the three facial areas, the user chooses the option with which to start the game, as described in Fig. 11; (iii) Indications Interface: over a period of three seconds, the user receives indications for the exercise to be performed; note that levels are assigned according to the degree of affection (Fig. 12); (iv) Game Interface: a dynamic interface with animations and sound effects in which, over a span of thirty seconds, the user must collect thirty stars, one for each second. The upper part of the game shows information about the muscular area being rehabilitated (Fig. 13). It should be noted that the user can only advance to the next level if the proposed exercise is performed correctly; accordingly, every interface of the virtual environment contains a camera that shows the Kinect sensor’s tracking in real time.
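The star-per-second mechanic above can be sketched as a simple counter. This is an illustration only: the Kinect gesture validation is abstracted into a per-second boolean, which is an assumption about how the real game ties detection to scoring.

```python
# Minimal sketch of the Game Interface timing: one star can be collected per
# elapsed second during a thirty-second round, so the maximum score is thirty.
# Representing Kinect gesture validation as one boolean per second is an
# illustrative assumption, not the paper's actual scoring pipeline.
def play_round(gesture_ok_per_second, duration: int = 30) -> int:
    """Count one star for each second in which the exercise gesture was held."""
    return sum(1 for ok in gesture_ok_per_second[:duration] if ok)
```

A patient who holds the target gesture for every second of the round collects the full thirty stars; intermittent control yields a proportionally lower score.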
The module for the rehabilitation of patients with facial paralysis through interaction with a semi-immersive virtual environment was validated with a group of users (FP) with Grade I and Grade II facial paralysis, and with physicians (FM) specialized in Physiotherapy, Traumatology and Homeopathy. To determine the simplicity of operation, a usability test was elaborated under several criteria, including learning, efficiency and user experience, through the bank of questions in Table 3; a validation factor of ten points represents a positive answer, while a validation factor of zero represents a negative one.
6 Conclusion
This research presents a medical system for the analysis and rehabilitation of facial paralysis, developed in Unity 3D, which can be used autonomously by the user or as software that assists a medical specialist. The analysis module processes facial depth coordinates in real time and optimizes the results in the face of problems such as involuntary movement of the patient and lack of light; the final quantification yields a percentage for the eyebrow, eye and mouth areas, with the highest values determining the presence of the pathology. The rehabilitation module presents a minimalist design in line with software usability concepts: an entertaining, intuitive and easy-to-use environment that holds the patient’s attention during treatment. The virtual rehabilitation exercises suggested by the system are based on current techniques used by physiotherapists, designed for each affected area, and the final result of the game increases in line with the recovery of movement of the patient’s facial nerve.
References
Pérez, E., et al.: Guía Clínica para rehabilitación del paciente con parálisis facial perférica. IMSS. 5, 425–436 (2004)
Meléndez, A., Torres, A.: Perfil clínico y epidemiológico de la parálisis facial en el Centro de Rehabilitación y Educación Especial de Durango, México. Hosp. Gen. México 69, 70–77 (2006)
Gaber, A., Member, S., Taher, M.F., Wahed, M.A.: Quantifying facial paralysis using the Kinect v2. In: Proceedings of Annual International Conference IEEE Engineering in Medicine and Biology Society EMBS, pp. 2497–2501 (2015)
Park, J.M., et al.: Effect of age and severity of facial palsy on taste thresholds in Bell’s Palsy patients. J. Audiol. Otol. 21, 16–21 (2017)
Cid Carro, R., Bonilla Huerta, E., Ramirez Cruz, F., Morales Caporal, R., Perez Corona, C.: Facial expression analysis with kinect for the diagnosis of paralysis using Nottingham grading system. IEEE Lat. Am. Trans. 14, 3418–3426 (2016)
Fonseca, K.M.D.O., Mourão, A.M., Motta, A.R., Vicente, L.C.C.: Scales of degree of facial paralysis: analysis of agreement. Braz. J. Otorhinolaryngol. 81, 288–293 (2015)
Holtmann, L.C., Eckstein, A., Stähr, K., Xing, M., Lang, S., Mattheis, S.: Outcome of a graduated minimally invasive facial reanimation in patients with facial paralysis. Eur. Arch. Oto-Rhino-Laryngol. 278, 3241–3249 (2017)
Brenner, M.J., Neely, J.G.: Approaches to grading facial nerve function. Semin. Plast. Surg. 18, 13–22 (2004)
Uchida, N., Shibahara, T., Aoki, T., Nakajima, H., Kobayashi, K.: 3D face recognition using passive stereo vision. In: IEEE International Conference on Image Processing 2005, ICIP 2005, vol. 2, p. II-950-3 (2005)
Sin, H., Lee, G.: Additional virtual reality training using Xbox Kinect in stroke survivors with hemiplegia. Am. J. Phys. Med. Rehabil. 92, 871–880 (2013)
Webster, D., Celik, O.: Systematic review of Kinect applications in elderly care and stroke rehabilitation. J. Neuroeng. Rehabil. 11, 1–24 (2014)
Chang, Y.J., Chen, S.F., Huang, J.D.: A Kinect-based system for physical rehabilitation: a pilot study for young adults with motor disabilities. Res. Dev. Disabil. 32, 2566–2570 (2011)
González-Ortega, D., Díaz-Pernas, F.J., Martínez-Zarzuela, M., Antón-Rodríguez, M.: A kinect-based system for cognitive rehabilitation exercises monitoring. Comput. Methods Programs Biomed. 113, 620–631 (2014)
Zhao, L., Lu, X., Tao, X., Chen, X.: A kinect-based virtual rehabilitation system through gesture recognition. In: 2016 International Conference on Virtual Reality and Visualization, pp. 380–384 (2016)
Young, J., Forster, A.: Rehabilitation after stroke. N. Engl. J. Med. 334, 86–90 (2007)
Silano, M.: La Salud 2.0 y la atención de la salud en la era digital. Rev. Médica Risaralda 19, 1–14 (2013)
Arenas, Á.A., Cotacio, B.J., Isaza, E.S., Garcia, J.V., Morales, J.A., Marín, J.I.: Sistema de Reconocimiento de Rostros en 3D usando Kinect. In: XVII Symposium of Image, Signal Processing Artificial Vision (2012)
Microsoft: CameraSpacePoint Structure. https://msdn.microsoft.com/en-us/library/windowspreview.kinect.cameraspacepoint.aspx
Microsoft: DepthSpacePoint Structure. https://msdn.microsoft.com/en-us/library/windowspreview.kinect.depthspacepoint.aspx
Microsoft: ColorFrameReader Class. https://msdn.microsoft.com/en-us/library/windowspreview.kinect.colorframereader.aspx
Cao, Y., Bermak, A.: An analog gamma correction method for high dynamic range applications, pp. 318–322 (2011)
Villaroman, N.H., Rowe, D.C.: Improving accuracy in face tracking user interfaces using consumer devices. In: Proceedings of the 1st Annual Conference on Research in Information Technology - RIIT 2012, p. 57 (2012)
Andaluz, V.H., et al.: Transparency of a bilateral tele-operation scheme of a mobile manipulator robot. In: De Paolis, L.T., Mongelli, A. (eds.) AVR 2016. LNCS, vol. 9768, pp. 228–245. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40621-3_18
Carvajal, C.P., Proaño, L., Pérez, J.A., Pérez, S., Ortiz, J.S., Andaluz, V.H.: Robotic applications in virtual environments for children with autism. In: Third International Conference, AVR 2016, Lecce, Italy, 15–18 June 2016, Proceedings, Part II, vol. 1, pp. 402–409 (2016)
Andaluz, V.H., et al.: Unity3D virtual animation of robots with coupled and uncoupled mechanism. In: De Paolis, L.T., Mongelli, A. (eds.) AVR 2016. LNCS, vol. 9768, pp. 89–101. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40621-3_6
Acknowledgements
In addition, the authors would like to thank the Corporación Ecuatoriana para el Desarrollo de la Investigación y Academia (CEDIA) for the financing given to research, development, and innovation through the CEPRA projects, especially the project CEPRA-XI-2017-06, Control Coordinado Multi-operador aplicado a un robot Manipulador Aéreo; and also the Universidad de las Fuerzas Armadas ESPE, Universidad Técnica de Ambato, Escuela Superior Politécnica de Chimborazo, Universidad Nacional de Chimborazo, and Grupo de Investigación ARSI, for their support in developing this paper.
© 2019 Springer Nature Switzerland AG
Guanoluisa, G.M., Pilatasig, J.A., Andaluz, V.H. (2019). GY MEDIC: Analysis and Rehabilitation System for Patients with Facial Paralysis. In: Seki, H., Nguyen, C., Huynh, VN., Inuiguchi, M. (eds) Integrated Uncertainty in Knowledge Modelling and Decision Making. IUKM 2019. Lecture Notes in Computer Science(), vol 11471. Springer, Cham. https://doi.org/10.1007/978-3-030-14815-7_6