Abstract
Biometric traits such as the iris and face can help in the elementary assessment of human diseases and in healthcare monitoring. They offer several advantages, such as increased safety for patients and staff (doctors and nurses), improved accuracy and quality of the healthcare system, and reduced healthcare fraud. In addition, they provide a secure way to detect an inhabitant's mood and ocular pathologies in order to treat them. This paper introduces a prototype for biometric-based healthcare monitoring. In the proposed prototype, a patient/user seeking healthcare assistance can send a request using his/her biometric traits. The biometric traits are processed by the cloud management layer. A caregiver with valid identification/verification can receive the request and analyze it for further treatment. This paper also introduces an efficient multibiometrics fusion framework based on the Aczél-Alsina triangular norm. The proposed approach utilizes 1D log-Gabor iris features together with two-directional two-dimensional modified Fisher principal component analysis (\((2\hbox {D})^{2}\)MFPCA) and Complex Gabor Jet Descriptor (CGJD) face features for healthcare monitoring. Results show that the multibiometrics fusion approach outperforms previous fusion approaches.
1 Introduction
A biometric system provides a powerful technique for the unique verification and identification of an individual's identity [1]. It has great potential in healthcare and patient monitoring [2,3,4,5,6,7], increasing the safety of patients [3, 7], nurses, and doctors in several ways, such as the detection of ocular pathologies [1] and the detection of patient emotion [2, 6]. In healthcare monitoring, biometric traits such as the iris and face [8] play a vital role in the assessment of patients. Indeed, recognizing biometric traits from the iris can be of great help in the elementary assessment of diseased eyes [1]. Moreover, an inhabitant's mood detected from the face can reveal negative emotion or pain [2] caused by disease, after which the system can play preferred music [4] or call for rescue in order to turn the negative emotion into a positive one [2].
Unimodal biometric traits have demonstrated their efficacy and efficiency in improving identification performance [7]. However, they suffer from various practical issues, such as lack of distinctiveness, non-universality, noise, unacceptable error rates, and spoof attacks [9,10,11,12]. Multibiometrics, which consolidates information from multiple biometric traits, has been developed to overcome these problems and to meet the requirements of healthcare monitoring applications [13]. Recently, the benefits of multibiometric traits have attracted significant attention from academia, research, and industry [14,15,16,17].
In multibiometrics, a successful fusion scheme is needed to combine features from several unimodal biometric systems. In recent years, several multibiometrics fusion approaches have been proposed; the core techniques fall into three categories: (1) transformation-based fusion [18,19,20,21,22,23], (2) classifier-based fusion [24,25,26], and (3) density-based fusion [27,28,29,30]. Previous transformation-based methods fail to adapt to the distributions of genuine and imposter scores and neglect the goals of a smaller intra-class distance and a larger inter-class distance. Classifier-based fusion methods require training data followed by classification; unbalanced training leads to unbalanced genuine and imposter scores and hence low recognition performance. Density-based methods, on the other hand, are able to achieve optimal performance at any operating point; however, the density function is difficult to estimate accurately and is usually unknown.
To overcome the weaknesses of the former works on multibiometrics fusion, this paper proposes an adaptive score level multibiometrics fusion scheme based on a triangular norm for healthcare monitoring. The proposed approach has four phases: feature extraction, matching score computation, normalization, and score level fusion based on the Aczél-Alsina triangular norm. Simulation and analysis on selected databases demonstrate that the proposed multibiometrics fusion approach performs better, with smaller distances among genuine scores and larger distances among imposter scores. The numerical tests confirm the higher recognition results of the proposed approach.
The remainder of this paper is organized as follows. Section 2 presents the proposed biometric-based healthcare monitoring. Section 3 provides the preliminary knowledge. The proposed multibiometrics fusion approach is introduced in Sect. 4. Section 5 is devoted to the simulation results, the analysis of the proposed approach, and comparisons with other approaches. Section 6 highlights the main findings of this paper, and the conclusion and future work are drawn in Sect. 7.
2 The biometric-based healthcare monitoring
Here, we propose a prototype for multibiometrics-enabled healthcare monitoring. Fig. 1 shows the proposed biometric-based healthcare monitoring framework. The biometric traits of the patient/user are captured via a biometric device and then enrolled into the template database for further processing. The biometric templates are sent to the cloud for management.
The traits are managed and processed by the cloud and data service. A caregiver or any physician with valid identification can receive and analyze the fused traits in order to evaluate and assess the patient/user and send convenient instructions to him/her via the cloud service. The patient/user receives the instructions via his/her smartphone or any wearable device. If the caregiver wants to consult a specialized physician, he/she sends the request to the cloud, and the cloud manager forwards it to that physician.
3 Preliminary knowledge
In this section, we review the T-norms utilized in the proposed approach. A T-norm is a mathematical function \(T_{norm}:[0,1]^{2}\rightarrow [0,1]\) that maps the unit square into the unit interval and satisfies the following rules:
(1) \(T_{norm} (T_{norm} (u,v),w)=T_{norm} (u,T_{norm} (v,w))\) (associativity);
(2) \(T_{norm} (u,v)=T_{norm} (v,u)\) (commutativity);
(3) if \(u\le \alpha \) and \(v\le \beta \), then \(T_{norm} (u,v)\le T_{norm} (\alpha ,\beta )\) (monotonicity);
(4) \(T_{norm} (0,0)=0\) and \(T_{norm} (u,1)=u\) (boundary conditions).
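These four axioms are easy to spot-check numerically. The following sketch (ours, not the paper's code) verifies them on a grid of points in [0, 1] for a candidate binary function, here illustrated with the algebraic product, which is a well-known T-norm:

```python
import itertools

def check_tnorm_axioms(T, grid=None):
    """Numerically spot-check the four T-norm axioms on a grid in [0, 1]."""
    grid = grid or [i / 10 for i in range(11)]
    for u, v, w in itertools.product(grid, repeat=3):
        assert abs(T(T(u, v), w) - T(u, T(v, w))) < 1e-9   # (1) associativity
        assert abs(T(u, v) - T(v, u)) < 1e-9               # (2) commutativity
    for u, v, a, b in itertools.product(grid, repeat=4):
        if u <= a and v <= b:
            assert T(u, v) <= T(a, b) + 1e-9               # (3) monotonicity
    assert T(0, 0) == 0                                    # (4) boundary
    for u in grid:
        assert abs(T(u, 1) - u) < 1e-9                     # (4) boundary

check_tnorm_axioms(lambda u, v: u * v)  # the algebraic product passes
```

The same checker can be pointed at any of the T-norms discussed below.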
In what follows, we enumerate some of the T-norms that are implemented in our work:

(1) Hamacher: \(\frac{uv}{u+v-uv}\);
(2) Einstein product: \(\frac{uv}{2-(u+v-uv)}\);
(3) Yager (\(\delta >0\)): \(\max (1-((1-u)^{\delta }+(1-v)^{\delta })^{1/\delta },0)\);
(4) Frank (\(\delta >0\), \(\delta \ne 1\)): \(\log _\delta (1+\frac{(\delta ^{u}-1)(\delta ^{v}-1)}{\delta -1})\).
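For concreteness, the four T-norms above translate directly into Python (a minimal sketch of ours; function and parameter names are our own):

```python
import math

def hamacher(u, v):
    # Hamacher product: uv / (u + v - uv); define T(0, 0) = 0 to avoid 0/0
    return 0.0 if u == v == 0 else (u * v) / (u + v - u * v)

def einstein(u, v):
    # Einstein product: uv / (2 - (u + v - uv))
    return (u * v) / (2 - (u + v - u * v))

def yager(u, v, d):
    # Yager (d > 0): max(1 - ((1-u)^d + (1-v)^d)^(1/d), 0)
    return max(1 - ((1 - u) ** d + (1 - v) ** d) ** (1 / d), 0.0)

def frank(u, v, d):
    # Frank (d > 0, d != 1): log_d(1 + (d^u - 1)(d^v - 1) / (d - 1))
    return math.log(1 + (d ** u - 1) * (d ** v - 1) / (d - 1), d)

# Boundary rule (4), T(u, 1) = u, holds for each of them:
assert abs(hamacher(0.7, 1.0) - 0.7) < 1e-9
assert abs(frank(0.7, 1.0, 0.4) - 0.7) < 1e-9
```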
Based on the formulation of these T-norms, we can conclude the following [31]:

- The Hamacher (\(\delta =0.5\)), Frank (\(\delta =0.4\)), and Yager (\(\delta =0.5\)) T-norms can all reduce the fused genuine scores compared to the Sum rule, but they fail to push the imposter scores farther apart.
- This leads to unacceptable error rates and makes good recognition performance difficult to achieve.
To overcome this issue, the Aczél-Alsina (AA) T-norm, \(T^{AA}\) [32], is applied to enhance the multibiometrics fusion efficiency. For \(0<\delta <+\infty \), it is defined by

$$T^{AA}_{\delta }(u,v)=\exp \left( -\left[ (-\ln u)^{\delta }+(-\ln v)^{\delta }\right] ^{1/\delta }\right) . \qquad (2)$$

Equation (2) satisfies the standard T-norm properties listed above.
The critical parameter \(r\in [ts-\lambda ,ts+\lambda ]\), where the threshold \(ts\) is derived from \(ts_1\) and \(ts_2\), the thresholds of the biometric approaches adapted to the fusion step, and \(\lambda \) is a bias obtained from the tests.
Figure 1 simulates Eqs. (3) and (4) and shows the performance of the Aczél-Alsina T-norm (AA T-norm). Good performance is achieved by choosing appropriate parameters \(r\) and \(\delta \) for the AA T-norm. We note that when the scores to be fused are small, the fusion result is smaller than that of the other T-norms, and when they are large, the result is larger. We discuss this observation in detail in Sect. 5.
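This behavior can be reproduced numerically from the standard Aczél-Alsina definition [32]. The sketch below (our illustration, not the paper's code) compares the AA T-norm against the Hamacher product on small and large score pairs:

```python
import math

def aa_tnorm(u, v, delta=2.0):
    # Aczel-Alsina T-norm: exp(-[(-ln u)^d + (-ln v)^d]^(1/d)), 0 < d < inf [32]
    if u == 0.0 or v == 0.0:
        return 0.0
    s = ((-math.log(u)) ** delta + (-math.log(v)) ** delta) ** (1.0 / delta)
    return math.exp(-s)

def hamacher(u, v):
    return 0.0 if u == v == 0 else (u * v) / (u + v - u * v)

# Small (imposter-like) scores fuse even smaller, and large (genuine-like)
# scores fuse larger, than under the Hamacher product:
assert aa_tnorm(0.2, 0.2) < hamacher(0.2, 0.2)
assert aa_tnorm(0.9, 0.9) > hamacher(0.9, 0.9)
```

This pushes the two score populations apart, which is exactly the separation the fusion step needs.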
4 The proposed approach
Here, we introduce a new approach for the fusion of multibiometric modalities based on the Aczél-Alsina triangular norm. The proposed framework is given in Fig. 2. It consists of four basic procedures: feature extraction, matching score computation, normalization, and fusion of the different modalities. In the feature extraction phase, the features of the dual irises (left and right) are extracted with the 1D log-Gabor technique [33], and the features of the thermal and visible faces are extracted with the \((2\hbox {D})^{2}\)MFPCA [34] or CGJD [35] approach. The matching score procedure is applied to the different biometric traits, as shown in Fig. 2, to score genuine and imposter features. Then, all the matched scores from the different modalities are normalized to the domain [0, 1]. Finally, the fusion of the four biometric traits is performed based on the AA T-norm.
For a meaningful combination, the scores of the different modalities \((MS_R ,MS_L ,MS_T ,MS_V )\) are transformed into the domain [0, 1]. The normalization criterion is given as:
where \(MS\) refers to the matching score, whether genuine or imposter.
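Since the paper's normalization equation is not reproduced here, the sketch below uses plain min-max scaling as an assumed stand-in; it maps any set of matching scores into [0, 1]:

```python
def min_max_normalize(scores):
    # Map raw matching scores into [0, 1]; min-max scaling is assumed here
    # as a stand-in for the paper's normalization criterion.
    lo, hi = min(scores), max(scores)
    if hi == lo:                      # degenerate case: all scores equal
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

# e.g. raw scores from one matcher
normalized = min_max_normalize([12.0, 30.0, 48.0])  # -> [0.0, 0.5, 1.0]
```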
The associativity and commutativity of the T-norm are exploited in the proposed approach: the fused output of the dual irises is amalgamated with the thermal modality and thereafter with the visible modality, until all modalities are processed. The associative-commutative rule of the proposed fusion approach is given by Eq. (7).
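Because the T-norm is associative and commutative, the four normalized scores can be folded pairwise in any order. A minimal sketch (function names are ours; the algebraic product stands in for the paper's AA T-norm):

```python
from functools import reduce

def fuse_scores(tnorm, scores):
    # Chain a binary T-norm across all modality scores; associativity and
    # commutativity make the fusion order irrelevant.
    return reduce(tnorm, scores)

# Stand-in T-norm (algebraic product) over (MS_R, MS_L, MS_T, MS_V):
product = lambda u, v: u * v
fused = fuse_scores(product, [0.9, 0.8, 0.85, 0.95])
```

Reordering the score list leaves `fused` unchanged, which is what allows the dual-iris output to be combined first.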
5 Simulation results and analysis
5.1 Databases and simulation setup
This study utilized two virtual multibiometrics databases: CASIA-Iris-Thousand [36] and the NVIE database [37]. The CASIA-Iris-Thousand database includes a large number of noisy iris images. The NVIE database provides thermal and visible face images with six diverse expressions [37]. For the proposed algorithm, we randomly choose ninety classes with ten samples each. Well-known biometric metrics, such as the false match rate (FMR), the false non-match rate (FNMR) [38], the receiver operating characteristic (ROC) curve [39], and the decidability index d′ [40], are applied to examine the effectiveness of the proposed fusion scheme.
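As a reference for how these metrics are computed, the following sketch (ours; it assumes similarity scores, where genuine comparisons score high) derives the FMR, the FNMR, and Daugman's d′ from raw score lists:

```python
import statistics

def fmr_fnmr(genuine, imposter, thr):
    # FMR: fraction of imposter scores falsely accepted (score >= thr);
    # FNMR: fraction of genuine scores falsely rejected (score < thr).
    fmr = sum(s >= thr for s in imposter) / len(imposter)
    fnmr = sum(s < thr for s in genuine) / len(genuine)
    return fmr, fnmr

def d_prime(genuine, imposter):
    # Daugman's decidability index d' [40]: separation of the two score
    # distributions in units of their pooled spread.
    mu_g, mu_i = statistics.mean(genuine), statistics.mean(imposter)
    var_g = statistics.pvariance(genuine)
    var_i = statistics.pvariance(imposter)
    return abs(mu_g - mu_i) / ((0.5 * (var_g + var_i)) ** 0.5)
```

Sweeping `thr` over the score range and plotting FMR against 1 − FNMR traces out the ROC curve used below.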
In this section, we perform several simulation tests to demonstrate the merits of the introduced multibiometrics fusion approach compared to other approaches. To facilitate the comparison, Table 1 gives the abbreviations and descriptions of the different approaches. In addition, I2D denotes the fusion framework based on the dual irises and \((2\hbox {D})^{2}\)MFPCA face features, and IGJD denotes the fusion framework based on the dual irises and CGJD face features.
5.2 Recognition performance based on SVM and GMM approaches
Here, we examine the recognition performance of the proposed fusion approach with two main classification approaches, SVM and GMM. Table 2 shows the recognition efficacy of the proposed method under SVM and GMM classification. From the results of Tables 2 and 3, we can summarize the following:

- LR2 has a lower FMR with an elevated FNMR. The lowest FNMR belongs to the multibiometrics VTLR4. For a biometric system application, a lower FMR is preferred [38]; thus, it is important to decrease the FNMR at specific FMR values instead of minimizing the overall error rate.
- The multibiometrics VTLR4 in IGJD is the best under SVM classification (Table 2). SVM classification is affected by the unbalanced numbers of genuine and imposter scores; in SVM, we select some of the scores for training and the rest for testing.
- Table 2 shows that performance improves as the number of biometric modalities increases within the I2D framework. Comparing I2D and IGJD shows that the choice of features is very important for performance.
- Score level fusion based on SVM or GMM requires training on the intra-class (genuine) and inter-class (imposter) matched scores. Because these two sets differ greatly in size, an imbalance problem arises.
- The GMM model assumes that the genuine and imposter scores follow Gaussian distributions; however, whether the data actually follow such distributions is difficult to establish theoretically.
Figure 3 shows the SVM classification of the iris score level fusion. Figure 4 displays the Norm2 classification of the feature vectors in the TLR3 strategy of I2D, while Fig. 5 shows the Norm2 classification of the feature vectors in the TLR3 strategy of IGJD. As observed from Figs. 3, 4 and 5, the recognition performance is better for the higher-dimensional method, and the genuine and imposter fusion results have a larger distance. In the figures, the "+" markers (red in the electronic version) denote imposter values, and the "*"/"." markers (blue) denote genuine values.
5.3 The EER performance
Table 3 gives the performance of the proposed multibiometrics fusion approach and several other approaches. From the results of Table 3, we can see that the recognition performance favors the proposed frameworks I2D and IGJD with the fusion of four biometric traits (VTLR4). Besides its recognition performance, the VTLR4 fusion scheme resists forgery attacks. Almost all the approaches display their best EER, 0.93, for LR2, owing to the high discriminability of the iris data.
It is noted that VT2 resists spoof attacks but cannot achieve a better EER than the iris. Comparing VT2, VTL3, VTLR4, LR2, and LTR3 shows that the overall recognition performance improves as the number of biometric modalities increases. As also observed from Table 3, the Sum, N2, and \(\hbox {AA}_\mathrm{pro}\) methods show superior performance, whereas the Max and Min schemes do not perform as well as they should.
5.4 ROC performance
The ROC curves of I2D and IGJD for VTLR4 are shown in Figs. 6, 7, 8 and 9. Figures 6 and 7 show the ROC curves of the lower- and higher-performance approaches, respectively, for the multibiometrics system VTLR4 in I2D, while Figs. 8 and 9 show the corresponding curves for IGJD. From these figures, we can see that the proposed method has higher recognition performance than the others.
To illustrate the performance of \(\hbox {AA}_\mathrm{pro}\), the genuine and imposter distributions of the I2D framework (Fig. 2) are shown in Figs. 10, 11, 12, 13 and 14, which correspond to Hm (\(\delta =0.5\)), Fk (\(\delta =0.4\)), Yg (\(\delta =0.5\)), Ep, and \(\hbox {AA}_\mathrm{pro}\), respectively. The x-axis denotes the normalized distance (the smaller, the better), and the y-axis denotes the percentage of genuine and imposter data. All the simulations are based on the multibiometrics database of 90 classes with ten samples per class (5 for training and 5 for testing), yielding 450 intra-class comparisons and 36,550 inter-class comparisons. Accordingly, the imposter distribution (on the right of each figure) contains far more samples than the genuine distribution (on the left).
The distance is computed using Eq. (8). In each figure, the distance for LR2 is less than that for LRT3 and VTLR4. From Figs. 10, 11, 12, 13 and 14, we can see that for every multibiometric system, the distance increase achieved by \(\hbox {AA}_\mathrm{pro}\) is evident. The highest distance for VTLR4, in Fig. 14, is 0.99992. Figure 15 shows the resulting d′ distance in the I2D framework. Thus, the proposed scheme yields superior performance to the state-of-the-art approaches.
6 Discussion
This paper introduces a framework for biometric-based healthcare monitoring. The proposed framework is able to manage and handle healthcare situations among the patient/user, the caregiver/physician, and cloud-based healthcare monitoring. The paper also introduced a new multibiometrics fusion method based on three modality types: iris (left and right), thermal face, and visible face. The 1D log-Gabor iris features and the \((2\hbox {D})^{2}\)MFPCA and CGJD face features are applied to extract each modality's feature vector. Then, we applied a score level fusion technique based on the Aczél-Alsina triangular norm to separate genuine from imposter scores.
Based on the above sections, we summarize the contributions of the proposed approach as follows:

- It explores the performance of multimodal fusion systems based on a triangular norm for healthcare monitoring.
- The proposed multibiometrics fusion approach uses four biometric traits: thermal face, visible face, left iris, and right iris.
- It enhances the performance of multibiometrics verification by narrowing the intra-class distance and enlarging the inter-class distance.
- It achieves better performance than the previous fusion approaches [29, 40,41,42].
7 Conclusion
A biometric-based healthcare monitoring framework was introduced. The proposed framework provides a secure and efficient way to monitor health using a score level multibiometrics fusion approach. A possible useful application of the proposed framework is the ability to detect malicious healthcare fraud as well as the inhabitant's mood and ocular pathologies in order to treat them. The performance of the Aczél-Alsina triangular norm-based score level fusion was examined. The biometric analysis and the simulation results show that the proposed multimodal fusion system achieves high performance, with smaller genuine and larger imposter distances. In comparisons with previous approaches, the higher recognition performance favors the proposed multibiometrics fusion approach. Our future work will be devoted to emotion detection from multibiometrics for e-healthcare systems.
References
Trokielewicz, M., Czajka, A., Maciejewicz, P.: Implications of ocular pathologies for iris recognition reliability. Imag. Vis. Comput. 58, 158–167 (2017)
Alhussein, M.: Automatic facial emotion recognition using weber local descriptor for e-Healthcare system. Clust. Comput. 19(1), 99–108 (2016)
Hossain, M.S.: Cloud-supported cyber-physical localization framework for patients monitoring. IEEE Syst. J. 11(1), 118–127 (2017)
Muhammad, G.: Automatic speech recognition using interlaced derivative pattern for cloud based healthcare system. Clust. Comput. 18(2), 795–802 (2015)
Hossain, M.S., Muhammad, G.: Cloud-assisted industrial internet of things (IIoT)-enabled framework for health monitoring. Comput. Netw. 101, 192–202 (2016)
Hossain, M.S., Muhammad, G., Alhamid, M.F., Song, B., Almutib, K.: Audio-visual emotion recognition using big data towards 5G. Mob. Netw. Appl. 21(5), 753–763 (2016)
Hu, Y., et al.: Simultaneously aided diagnosis model for outpatient departments via healthcare big data analytics, Springer, New York (June 2016)
Wang, Y., Tan, T., Jain, A.K.: Combining face and iris biometrics for identity verification. Lect. Notes Comput. Sci. 2688, 805–813 (2003)
Choi, J., Hu, S., Young, S.S., Davis, L.S.: Thermal to visible face recognition, In: Proceedings SPIE 8371, United States, pp. 1–11 (2002)
Choi, J., Dixon, K.R., Wick, D.V., Bagwell, B.E., Soehnel, G.H., Clark, B.: Iris imaging system with adaptive optical elements. J. Electron. Imaging 21(1), 013004 (2012)
Yang, G., Xi, X., Yin, Y.: Finger vein recognition based on a personalized best bit map. Sensors 12(2), 1738–1757 (2012)
Zhang, D., Lu, G.: 3D Palmprint Capturing System. In: 3D Biometrics, Springer, New York, pp. 85–104 (2013)
Hu, L., Qiu, M., Song, J., Shamim Hossain, M., Ghoneim, A.: Software defined healthcare networks. IEEE Wirel. Commun. 22(6), 67–75 (2015)
Chan, C.H., Goswami, B., Kittler, J., Christmas, W.: Local ordinal contrast pattern histograms for spatiotemporal, lip-based speaker authentication. IEEE Trans. Inf. Forensics Secur. 7(2), 602–612 (2012)
Sun, X., Wang, G., Wang, L., Sun, H., Wei, X.: 3D ear recognition using local salience and principal manifold. Graphical Models 76(5), 402–412 (2014)
Hossain, M.S., El Saddik, A.: A biologically inspired multimedia content repurposing system in heterogeneous environments. Multimed. Syst. J. 14(3), 135–143 (2008)
Hossain, M.S., Muhammad, G., Rahman, S.M.M., Abdul, W., Alelaiwi, A., Almari, A.: Toward end-to-end biometrics-based security for IoT infrastructure. IEEE Wirel. Commun. Mag. 23(5), 45–51 (2016)
Bian, W., Ding, S., Xue, Y.: Combining weighted linear project analysis with orientation diffusion for fingerprint orientation field reconstruction. Inf. Sci. 396, 55–71 (2017)
Galbally, J., McCool, C., Fierrez, J., Marcel, S., Ortega-Garcia, J.: On the vulnerability of face verification systems to hill-climbing attacks. Pattern Recognit. 43(3), 1027–1038 (2010)
Wang, G., Wu, H.: Research and realization on voice restoration technique for voice communication software, In: Proceedings of International Symposium on Information Engineering and Electronic Commerce, Ternopil, Ukraine, pp. 791–795 (2009)
Venugopalan, S., Savvides, M.: How to generate spoofed irises from an iris code template. IEEE Trans. Inf. Forensic Secur. 6(2), 385–395 (2011)
Peng, J., El-Latif, A.A., Li, Q., Niu, X.: Multimodal biometric authentication based on score level fusion of finger biometrics. Optik-Int. J. Light Electron Optics 125(23), 6891–6897 (2014)
Lumini, A., Nanni, L.: Overview of the combination of biometric matchers. Inf. Fus. 33, 71–85 (2017)
Kang, B.J., Park, K.R., Yoo, J.-H., Kim, J.N.: Multimodal biometric method that combines veins, prints, and shape of a finger. Opt. Eng. 50(1), 017201 (2011)
Chang, K.I., Bowyer, K.W., Flynn, P.J., Chen, X.: Multi-biometrics using facial appearance, shape and temperature, In: Proceedings of Sixth IEEE International Conference on Automatic Face and Gesture Recognition, Seoul, Republic of Korea, pp. 43–48 (2012)
Tchamova, A., Dezert, J., Smarandache, F.: A new class of fusion rules based on T conorm and T norm fuzzy operators. Inf. Secur. 20, 65–82 (2006)
Wu, H., Siegel, M., Stiefelhagen, R., Yang, J.: Sensor fusion using Dempster-Shafer theory, In: Proceedings of the 19th IEEE Conference on Instrumentation and Measurement Technology, vol. 1, pp. 7–11, Anchorage, AK, United States (2002)
Quost, B., Masson, M.-H., Denoeux, T.: Classifier fusion in the Dempster-Shafer framework using optimized t-norm based combination rules. Int. J. Approx. Reason. 52(3), 353–374 (2011)
Hanmandlu, M., Grover, J., Gureja, A., Gupta, H.: Score level fusion of multimodal biometrics using triangular norms. Pattern Recognit. Lett. 32(14), 1843–1850 (2011)
Srivastava, S., Bhardwaj, S., Bhargava, S.: Fusion of palm-phalanges print with palmprint and dorsal hand vein. Appl. Soft Comput. 47, 12–20 (2016)
Wang, N., Lu, L., Gao, G., Wang, F., Li, S.: Multibiometrics fusion using Aczél-Alsina triangular norm. KSII Trans. Internet Inf. Syst. 8(7), 2420–2433 (2014)
Aczél, J., Alsina, C.: Characterizations of some classes of quasilinear functions with applications to triangular norms and to synthesizing judgements. Aequationes Math. 25(1), 313–315 (1982)
Mayer, N., Herrmann, J.M., Geisel, T.: Signatures of natural image statistics in cortical simple cell receptive fields. Neurocomputing 38, 279–284 (2001)
Wang, N., Li, Q., El-Latif, A.A., Peng, J., Niu, X.: Two-directional two-dimensional modified Fisher principal component analysis: an efficient approach for thermal face verification. J. Electron. Imaging 22(2), 023013 (2013)
Wang, N., Li, Q., El-Latif, A.A., Peng, J., Niu, X.: An enhanced thermal face recognition method based on multiscale complex fusion for Gabor coefficients. Multimed. Tools Appl. 72(3), 2339–2358 (2014)
Li, H., Sun, Z., Tan, T.: Robust iris segmentation based on learned boundary detectors. In: Proceedings of the Fifth IAPR International Conference on Biometrics, New Delhi, India, pp. 317–322 (2012)
Wang, S., Liu, Z., Lv, S., Lv, Y., Wu, G., Peng, P., Chen, F., Wang, X.: A natural visible and infrared facial expression database for expression recognition and emotion inference. IEEE Trans. Multimed. 12(7), 682–691 (2010)
Information technology-biometric performance testing and reporting, part 1: principles and framework. In: ISO/IEC 19795-1 (2006)
Shen, W., Surette, M., Khanna, R.: Evaluation of automated biometrics-based identification and verification systems. Proc. IEEE 85(9), 1464–1478 (1997)
Daugman, J.: Biometric decision landscapes. No. UCAM-CL-TR-482. Cambridge University, Computer Laboratory, (2000)
He, M., Horng, S.-J., Fan, P., Run, R.-S., Chen, R.-J., Lai, J.-L., Khan, M.K., Sentosa, K.O.: Performance evaluation of score level fusion in multimodal biometric systems. Pattern Recognit. 43(5), 1789–1800 (2010)
Nandakumar, K., Chen, Y., Dass, S.C., Jain, A.K.: Likelihood ratio based biometric score fusion. IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 342–347 (2008)
Acknowledgements
The authors are grateful to the Deanship of Scientific Research at King Saud University for funding this paper through the Vice Deanship of Scientific Research Chairs.
El-Latif, A.A.A., Hossain, M.S. & Wang, N. Score level multibiometrics fusion approach for healthcare. Cluster Comput 22 (Suppl 1), 2425–2436 (2019). https://doi.org/10.1007/s10586-017-1287-4