Fig. 1

Cloud-based framework for multibiometrics-enabled healthcare monitoring

1 Introduction

Biometric systems provide a powerful technique for the unique verification and identification of an individual's identity [1]. They have great potential in healthcare and patient monitoring [2,3,4,5,6,7] to increase the safety of patients [3, 7], nurses, and doctors in several ways, such as the detection of ocular pathologies [1], the detection of patient emotion [2, 6], and so on. In healthcare monitoring, biometric traits such as the iris and face [8] play a vital role in the assessment of patients. Indeed, recognizing biometric traits from the iris can greatly help in the preliminary assessment of diseased eyes [1]. Moreover, an inhabitant's mood detected from the face can reveal negative emotion or pain [2] caused by disease, after which the system can play preferred music [4] or call for help in order to turn the negative emotion [2] into a positive one.

Unimodal biometric traits have demonstrated their efficacy and efficiency in improving identification performance [7]. However, they suffer from various practical issues such as lack of distinctiveness, non-universality, noisy data, unacceptable error rates, and spoof attacks [9,10,11,12]. Multibiometrics, which consolidates information from multiple biometric traits, has been developed to overcome the above problems and meet the requirements of healthcare monitoring applications [13]. Recently, the benefits of multibiometric traits have attracted significant attention from academia, research, and industry [14,15,16,17].

In multibiometrics, a successful fusion scheme is needed to combine information from several unimodal biometric systems. In recent years, several multibiometric fusion approaches have been proposed, and the core techniques fall into three categories: (1) transformation-based fusion [18,19,20,21,22,23], (2) classifier-based fusion [24,25,26], and (3) density-based fusion [27,28,29,30]. Previous transformation-based methods fail to adapt to the distributions of genuine and imposter scores and neglect the need for a smaller intra-class distance and a larger inter-class distance. Classifier-based fusion methods require training data followed by classification; unbalanced training leads to unbalanced genuine and imposter scores and hence low recognition performance. On the other hand, density-based methods are able to achieve optimal performance at any operating point; however, the density function is usually unknown and difficult to estimate accurately.

To overcome the weaknesses of previous work on multibiometric fusion, this paper proposes an adaptive score-level multibiometric fusion scheme based on the triangular norm for healthcare monitoring. The proposed approach has four phases: feature extraction, matching-score computation, normalization, and score-level fusion based on the Aczél-Alsina triangular norm. Simulations and analysis on selected databases demonstrate that the proposed multibiometric fusion approach achieves better performance, with a smaller distance for genuine comparisons and a larger distance for imposter comparisons. The numerical tests show that the higher recognition results are in favor of the proposed approach.

The remainder of this work is organized as follows. Section 2 presents the proposed biometric-based healthcare monitoring framework. Section 3 presents the preliminary knowledge. The proposed multibiometric fusion approach is introduced in Sect. 4. Section 5 is devoted to the simulation results and analysis of the proposed approach and its comparison with other approaches. Section 6 highlights the main findings of this paper. The conclusion and future work are drawn in Sect. 7.

2 The biometric-based healthcare monitoring

Here, we propose a prototype for multibiometrics-enabled healthcare monitoring. Figure 1 shows the proposed biometric-based healthcare monitoring framework. The biometric traits of the patient/user are captured via a biometric device and then enrolled into the template database for further processing. The biometric templates are sent to the cloud for management.

The traits are managed and processed in the cloud and data service. A caregiver or physician with a verified identity can receive and analyze the fused traits in order to evaluate and assess the patient/user and send appropriate instructions via the cloud service. The patient/user receives the instructions via a smartphone or any wearable device. If the caregiver wants to consult a specialized physician, the request is sent to the cloud, and the cloud manager then forwards it to that specialized physician.
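As a purely illustrative sketch of this data flow (not the authors' implementation; all class and function names are hypothetical), the enrollment and caregiver-request steps could be modeled as follows:

```python
# Illustrative sketch of the Fig. 1 workflow; access control and networking are omitted.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class BiometricTemplate:
    patient_id: str
    modality: str          # e.g. "left_iris", "right_iris", "thermal_face", "visible_face"
    features: List[float]  # extracted feature vector


@dataclass
class CloudService:
    """Stores enrolled templates and serves caregiver requests."""
    templates: Dict[str, List[BiometricTemplate]] = field(default_factory=dict)

    def enroll(self, template: BiometricTemplate) -> None:
        # Templates captured at the patient side are stored for later fusion.
        self.templates.setdefault(template.patient_id, []).append(template)

    def request_assessment(self, caregiver_id: str, patient_id: str) -> List[BiometricTemplate]:
        # A caregiver with a verified identity retrieves the patient's templates.
        return self.templates.get(patient_id, [])
```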

3 Preliminary knowledge

In this section, we review the T-norms utilized in the proposed approach. The T-norm is a mathematical function that maps the unit square \([0, 1]^{2}\) into [0, 1], defined as

$$\begin{aligned} T_{norm} (u,v):[0,1]^{2}\rightarrow [0,1], \quad \forall u,v,w,\alpha ,\beta \in [0,1]. \end{aligned}$$
(1)

The \(T_{norm}\) satisfies the following rules:

  1. \(T_{norm} (T_{norm} (u,v),w)=T_{norm} (u,T_{norm} (v,w))\);

  2. \(T_{norm} (u,v)=T_{norm} (v,u)\);

  3. if \(u\le \alpha \) and \(v\le \beta \), then \(T_{norm} (u,v)\le T_{norm} (\alpha ,\beta )\);

  4. \(T_{norm} (0,0)=0\) and \(T_{norm} (u,1)=u\).

In what follows, we enumerate some of the t-norms implemented in our work (a minimal code sketch of these t-norms follows the list):

  1. Hamacher: \(\frac{uv}{u+v-uv}\);

  2. Einstein product: \(\frac{uv}{2-(u+v-uv)}\);

  3. Yager (\(\delta >0\)): \(\max \left( 1-((1-u)^{\delta }+(1-v)^{\delta })^{1/\delta },0\right) \);

  4. Frank (\(\delta >0\)): \(\log _\delta \left( 1+\frac{(\delta ^{u}-1)(\delta ^{v}-1)}{\delta -1}\right) \).
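As a concrete illustration, the following minimal Python sketch implements the four t-norms listed above; it is our own sketch, and the parameter defaults are arbitrary choices, not values prescribed by the paper.

```python
import math


def hamacher(u: float, v: float) -> float:
    """Hamacher product t-norm: uv / (u + v - uv)."""
    if u == 0.0 and v == 0.0:   # avoid 0/0; the limit value is 0
        return 0.0
    return (u * v) / (u + v - u * v)


def einstein(u: float, v: float) -> float:
    """Einstein product t-norm: uv / (2 - (u + v - uv))."""
    return (u * v) / (2.0 - (u + v - u * v))


def yager(u: float, v: float, delta: float = 0.5) -> float:
    """Yager t-norm: max(1 - ((1-u)^d + (1-v)^d)^(1/d), 0), d > 0."""
    return max(1.0 - ((1.0 - u) ** delta + (1.0 - v) ** delta) ** (1.0 / delta), 0.0)


def frank(u: float, v: float, delta: float = 0.4) -> float:
    """Frank t-norm: log_d(1 + (d^u - 1)(d^v - 1) / (d - 1)), d > 0, d != 1."""
    return math.log(1.0 + (delta ** u - 1.0) * (delta ** v - 1.0) / (delta - 1.0), delta)


# Example: fusing two normalized matching scores with each t-norm.
print(hamacher(0.8, 0.9), einstein(0.8, 0.9), yager(0.8, 0.9), frank(0.8, 0.9))
```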

Based on the formulation of T-norms, we can conclude the following [31]:

  • The Hamacher T-norm with \(\delta =0.5\), the Frank T-norm with \(\delta =0.4\), and the Yager T-norm with \(\delta =0.5\) can all reduce the fused genuine scores compared with the Sum rule, but they fail to push the imposter scores farther away.

  • This leads to unacceptable error rates and makes it more difficult to achieve good recognition performance.

To overcome the above issue, the Aczél-Alsina (AA) T-norm \(T^{AA}\) [32] is applied to enhance the multibiometric fusion efficiency. It is defined for \(0<\delta <+\infty \) by

$$\begin{aligned} T_\delta ^{AA} (u,v)=e^{-(|\log u|^{\delta }+|\log v|^{\delta })^{1/\delta }} \end{aligned}$$
(2)

Equation (2) satisfies the following properties, where \(F\) denotes \(T_\delta ^{AA}\) and Sum, Ep, Fk, Hm, and Yg denote the Sum rule and the Einstein, Frank, Hamacher, and Yager T-norms, respectively:

$$\begin{aligned} F(u,v)>\max \left( Sum(u,v),Ep(u,v),Fk(u,v),Hm(u,v),Yg(u,v)\right) ,\quad \forall u,v>r, \end{aligned}$$
(3)

and

$$\begin{aligned} F(u,v)\le \min \left( Sum(u,v),Ep(u,v),Fk(u,v),Hm(u,v),Yg(u,v)\right) ,\quad \forall u,v\le r. \end{aligned}$$
(4)

The critical parameter \(r\in [ts-\lambda ,ts+\lambda ]\), where ts can be represented as:

$$\begin{aligned} ts=\frac{ts_1 +ts_2 }{2} \end{aligned}$$
(5)

where \(ts_1 \) and \(ts_2 \) are thresholds of the biometric approach adapted to the fusion step, and \(\lambda \) is a bias obtained from the tests.

Figure 1 simulates Eqs. (3) and (4) and shows the performance of the Aczél-Alsina T-norm (AA T-norm). Good performance is achieved by choosing appropriate parameters, r and \(\delta \), for the AA T-norm. We note that when the scores to be fused are small, the fusion result is smaller than that of the other T-norms, and when the scores are large, the result is larger than that of the other T-norms. We discuss this observation in detail later in this paper (see Sect. 5).
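For reference, a minimal Python sketch of the AA T-norm in Eq. (2) is given below; the choice \(\delta = 2\) is ours and purely illustrative, and how the result compares with the other T-norms depends on \(\delta \) and r as stated in Eqs. (3) and (4).

```python
import math


def aczel_alsina(u: float, v: float, delta: float = 2.0) -> float:
    """Aczel-Alsina t-norm, Eq. (2): exp(-(|log u|^delta + |log v|^delta)^(1/delta))."""
    if u == 0.0 or v == 0.0:        # boundary case: the limit value is 0
        return 0.0
    return math.exp(-((abs(math.log(u)) ** delta
                       + abs(math.log(v)) ** delta) ** (1.0 / delta)))


# Fuse a pair of small scores and a pair of large scores.
for u, v in [(0.2, 0.3), (0.9, 0.95)]:
    print(f"T_AA({u}, {v}) = {aczel_alsina(u, v):.4f}")
```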

4 The proposed approach

Here, we introduce a new approach for the fusion of multibiometric modalities based on the Aczél-Alsina triangular norm. The proposed framework is given in Fig. 2. The proposed approach consists of four basic procedures: feature extraction, matching score computation, normalization, and fusion of the different modalities. In the feature extraction phase, the features of the dual iris (left and right iris) are extracted with the 1D log-Gabor technique [33], and the features of the thermal and visible faces are extracted with the \((2\hbox {D})^{2}\) MFPCA [34] or CGJD [35] approach. The matching score procedure is applied to the different biometric traits, as shown in Fig. 2, to produce genuine and imposter scores. Then, all the matched scores from the different modalities are normalized to the domain [0, 1]. Finally, the fusion of the four biometric traits is performed with the AA T-norm.

Fig. 2

The framework of the proposed fusion approach

For a meaningful combination, the scores of the different modalities \((MS_R ,MS_L ,MS_T ,MS_V )\) are transformed into the domain [0, 1]. The normalization criterion is given as:

$$\begin{aligned} M{s}'=\frac{Ms-\min (Ms)}{\max (Ms)-\min (Ms)} \end{aligned}$$
(6)

where Ms refers to the matching score, whether imposter or genuine.
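A minimal NumPy sketch of the min-max normalization in Eq. (6) might look as follows; it is our own illustration, and the raw score values are hypothetical.

```python
import numpy as np


def min_max_normalize(scores: np.ndarray) -> np.ndarray:
    """Eq. (6): map raw matching scores (genuine or imposter) into [0, 1]."""
    lo, hi = scores.min(), scores.max()
    if hi == lo:                      # degenerate case: all scores identical
        return np.zeros_like(scores, dtype=float)
    return (scores - lo) / (hi - lo)


# Example with hypothetical raw iris matching scores.
raw = np.array([0.31, 0.58, 0.12, 0.77])
print(min_max_normalize(raw))
```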

The associativity and commutativity of the T-norm are exploited in the proposed approach: the fused output of the dual iris is combined with the thermal modality, and the result is then combined with the visible modality, until all modalities are processed (see the sketch after Eq. (7)). The associative fusion rule of the proposed approach is given by Eq. (7):

$$\begin{aligned} MS=T_p^{AA} \left( T_p^{AA} \left( T_p^{AA} \left( MS_{Right} ,MS_{Left} \right) ,MS_{Thermal} \right) ,MS_{Visible} \right) \end{aligned}$$
(7)
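The chained fusion of Eq. (7) can be sketched as follows; this is a minimal illustration in which the score values and the choice of \(\delta \) are hypothetical.

```python
import math
from functools import reduce


def aczel_alsina(u: float, v: float, delta: float = 2.0) -> float:
    """Aczel-Alsina t-norm, Eq. (2)."""
    if u == 0.0 or v == 0.0:
        return 0.0
    return math.exp(-((abs(math.log(u)) ** delta
                       + abs(math.log(v)) ** delta) ** (1.0 / delta)))


def fuse_modalities(ms_right: float, ms_left: float,
                    ms_thermal: float, ms_visible: float) -> float:
    """Eq. (7): fold the AA t-norm over the four normalized modality scores."""
    return reduce(aczel_alsina, [ms_right, ms_left, ms_thermal, ms_visible])


# Hypothetical normalized scores for one comparison (dual iris, thermal face, visible face).
print(fuse_modalities(0.92, 0.88, 0.81, 0.85))
```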

5 Simulation results and analysis

5.1 Databases and simulation setup

This study utilized two virtual multibiometric databases: the CASIA-Iris-Thousand database [36] and the NVIE database [37]. The CASIA-Iris-Thousand database includes hundreds of noisy iris images. The NVIE database contains suitable thermal and visible face images with six diverse expressions [37]. In the proposed algorithm, we randomly choose ninety classes with ten samples per class. Well-known biometric metrics such as the false match rate (FMR), the false non-match rate (FNMR) [38], the receiver operating characteristic (ROC) curve [39], and d' [40] are applied to examine the effectiveness of the proposed fusion scheme.
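For reference, FMR, FNMR, and the equal error rate (EER) can be computed from similarity scores as sketched below; the synthetic score distributions are our own, and the inequalities would be flipped for distance-type scores.

```python
import numpy as np


def fmr_fnmr(genuine: np.ndarray, imposter: np.ndarray, threshold: float):
    """FMR: accepted imposters; FNMR: rejected genuines (similarity scores, higher = better)."""
    fmr = float(np.mean(imposter >= threshold))
    fnmr = float(np.mean(genuine < threshold))
    return fmr, fnmr


def equal_error_rate(genuine: np.ndarray, imposter: np.ndarray, steps: int = 1001) -> float:
    """Scan thresholds in [0, 1]; the EER is where FMR and FNMR are (nearly) equal."""
    rates = [fmr_fnmr(genuine, imposter, t) for t in np.linspace(0.0, 1.0, steps)]
    fmr, fnmr = min(rates, key=lambda r: abs(r[0] - r[1]))
    return (fmr + fnmr) / 2.0


# Synthetic example sized like the experiments (450 genuine, 36,550 imposter comparisons).
rng = np.random.default_rng(0)
gen = np.clip(rng.normal(0.8, 0.10, 450), 0.0, 1.0)
imp = np.clip(rng.normal(0.4, 0.15, 36550), 0.0, 1.0)
print(f"EER = {equal_error_rate(gen, imp):.4f}")
```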

In this section, we perform several simulation tests to demonstrate the effectiveness of the proposed multibiometric fusion approach compared with other approaches. To facilitate the comparison, Table 1 gives the abbreviations and descriptions of the different approaches. In addition, I2D denotes the fusion framework based on the dual iris and the \((2\hbox {D})^{2}\) MFPCA face features, and IGJD denotes the fusion framework based on the dual iris and the CGJD face features.

Table 1 The abbreviations and the description of different approaches
Table 2 The performance recognition based on SVM and GMM approaches
Table 3 The performance of the proposed system based on equal error rate (EER)

5.2 Recognition performance based on SVM and GMM approaches

Here, we examine the recognition performance of the proposed fusion approach with two main classification approaches, SVM and GMM. Table 2 shows the recognition efficacy of the proposed method based on the SVM and GMM classification methods. From the results of Tables 2 and 3, we can summarize the following:

  • LR2 has a lower FMR with an elevated FNMR, while the lowest FNMR is achieved by the multibiometric VTLR4. For biometric system applications, a lower FMR is preferred [38]. Thus, it is important to decrease the FNMR at specific FMR values instead of minimizing the overall error rate.

  • The multibiometric VTLR4 in IGJD is the best one for SVM classification (Table 2). The SVM classification is affected by the unbalanced numbers of genuine and imposter scores. In SVM, some of the scores are selected for training and the rest are used for testing.

  • In Table 2, we can see that the performance of the I2D framework improves as the number of biometric modalities increases. By comparing I2D and ICGJD, we can see that the choice of features is very important for performance.

  • Score-level fusion based on SVM and GMM requires training on the genuine (intra-class) and imposter (inter-class) matched score data. Because of the very different numbers of genuine and imposter scores, this introduces a class-imbalance problem (see the sketch after this list).

  • The GMM model requires the genuine and imposter score data to follow some Gaussian mixture distribution. However, whether the data actually follow such a distribution is difficult to establish theoretically.
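As a rough illustration of classifier-based score-level fusion, the sketch below trains an SVM on stacked per-modality scores with scikit-learn; the data are synthetic, and balanced class weights are only one possible way to counter the genuine/imposter imbalance noted above.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic normalized scores for four modalities: right iris, left iris, thermal, visible.
genuine = np.clip(rng.normal(0.8, 0.10, size=(450, 4)), 0.0, 1.0)
imposter = np.clip(rng.normal(0.4, 0.15, size=(36550, 4)), 0.0, 1.0)
imposter = imposter[:5000]          # subsample imposters to keep the demo fast

X = np.vstack([genuine, imposter])
y = np.concatenate([np.ones(len(genuine)), np.zeros(len(imposter))])

# Balanced class weights partially compensate for the genuine/imposter imbalance.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X, y)

probe = np.array([[0.85, 0.90, 0.80, 0.78]])   # hypothetical probe score vector
print("genuine" if clf.predict(probe)[0] == 1 else "imposter")
```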

Figure 3 shows the SVM classification of the iris score-level fusion. Figure 4 displays the Norm2 classification of the feature vectors in TLR3 of I2D, while Fig. 5 shows the Norm2 classification of the feature vectors in fusion strategy TLR3 of ICGJD. As observed from Figs. 3, 4 and 5, the recognition performance is better for the higher-dimensional method, and the fused genuine and imposter results are separated by a larger distance. In the figures, "+" refers to the imposter values and "*" to the genuine values; in the electronic version, the red "+" marks the imposter values and the blue "." marks the genuine values.

Fig. 3

The SVM classification of dual (left and right) iris

Fig. 4

The Norm2 classification of three biometric traits in I2D

Fig. 5

The Norm2 classification of three biometric traits in ICGJD

Fig. 6

ROC curve of lower performance approaches for multibiometrics system VTLR4 in I2D

Fig. 7

ROC curve of higher performance approaches for multibiometrics system VTLR4 in I2D

Fig. 8

ROC curve of lower performance approaches for multibiometrics system VTLR4 in IGJD

Fig. 9

ROC curve of higher performance approaches for multibiometrics system VTLR4 in IGJD

Fig. 10

The intra-class and inter-class distances of the different fusion systems based on Hm (\(\delta =0.3\))

Fig. 11

The intra-class and inter-class distances of the different fusion systems based on Fk (\(\delta =0.4\))

Fig. 12

The intra-class and inter-class distances of the different fusion systems based on Yg (\(\delta =0.5\))

Fig. 13

The intra-class and inter-class distances of the different fusion systems based on Ep

Fig. 14

The intra-class and inter-class distances of the different fusion systems based on \(\hbox {AA}_\mathrm{pro}\)

5.3 The EER performance

Table 3 gives the performance of the proposed multibiometric fusion approach and several other approaches. From the results of Table 3, we can see that the recognition performance favors the proposed frameworks I2D and IGJD when the four biometric traits are fused (VTLR4). Besides its recognition performance, the VTLR4 fusion scheme also helps resist forgery attacks. Almost all the approaches display their best EER, 0.93, for LR2, which is due to the high discrimination of the iris data.

It is noted that VT2 resists spoof attacks but cannot give a better EER than the iris. By comparing VT2, VTL3, VTLR4, LR2, and LTR3, the overall recognition performance improves as the number of biometric modalities increases. As also observed from Table 3, the Sum, N2, and \(\hbox {AA}_\mathrm{pro}\) methods show superior performance, while the Max and Min schemes do not perform as well as expected.

5.4 ROC performance

The ROC curves of I2D and IGJD for VTLR4 are shown in Figs. 6, 7, 8 and 9. Figures 6 and 7 show the ROC curves of the lower- and higher-performance approaches, respectively, for the multibiometric system VTLR4 in I2D, while Figs. 8 and 9 show the corresponding curves for VTLR4 in IGJD. From these figures, we can see that the proposed method has higher recognition performance than the others.

Fig. 15

The intra-class and inter-class distance comparison of different score level fusion approaches based on I2D

To illustrate the performance of \(\hbox {AA}_\mathrm{pro}\), the genuine and imposter distributions for the I2D framework (Fig. 2) are shown in Figs. 10, 11, 12, 13 and 14, which correspond to Hm (\(\delta =0.5\)), Fk (\(\delta =0.4\)), Yg (\(\delta =0.5\)), Ep, and \(\hbox {AA}_\mathrm{pro}\), respectively. The x-axis represents the normalized distance (smaller is better), and the y-axis represents the percentage of genuine and imposter data. All the simulations are based on the multibiometric database, including 90 classes with ten samples per class (5 for training and 5 for testing), giving 450 intra-class (genuine) comparisons and 36,550 inter-class (imposter) comparisons. Therefore, the imposter data, which are much more numerous, appear on the right of the figures, and the smaller number of genuine data appear on the left.

The distance is computed using Eq. (8),

$$\begin{aligned} hd(k,l)=\sum _{i=1}^n {\frac{(k_i -l_i )^{2}}{2(k_i -l_i )}} \end{aligned}$$
(8)

In each figure, the distance of LR2 is less than that of LRT3 and VTLR4. From Figs. 10, 11, 12, 13 and 14, we can see that for every multibiometric system the increase in distance achieved by \(\hbox {AA}_\mathrm{pro}\) is obvious. The highest distance of VTLR4, shown in Fig. 14, is 0.99992. Figure 15 shows the result of the hd distance in the I2D framework. Thus, the proposed scheme achieves better performance than the state-of-the-art approaches.

6 Discussion

This paper introduces a framework for biometric-based healthcare monitoring. The proposed framework is able to manage and handle healthcare interactions between the patient/user, the caregiver/physician, and the cloud-based healthcare monitoring service. The paper also introduced a new multibiometric fusion method based on three modalities: the eyes (dual iris), the thermal face, and the visible face. The 1D log-Gabor iris features and the \((2\hbox {D})^{2}\) MFPCA and CGJD face features are applied to extract each modality's feature vector. Then, a score-level fusion technique based on the Aczél-Alsina triangular norm is applied to distinguish genuine from imposter comparisons.

Based on the above sections, we summarize the contributions of the proposed approach as follows:

  • It explores the performance of multi-modal fusion systems based on the triangular norm for healthcare monitoring.

  • The proposed multibiometric fusion approach uses four biometric traits: the thermal and visible faces and the left and right irises.

  • It enhances the performance of multibiometric verification by narrowing the intra-class distance and enlarging the inter-class distance.

  • It has better performance compared with previous fusion approaches [29, 40,41,42].

7 Conclusion

A biometric-based healthcare monitoring framework was introduced. The proposed framework provides a secure and efficient way to monitor health using a score-level multibiometric fusion approach. Possible applications of the proposed framework include the detection of malicious healthcare fraud as well as the detection of an inhabitant's mood and ocular pathologies in order to treat them. The performance of the Aczél-Alsina triangular norm-based score-level fusion was examined. The biometric analysis and the simulation results show that the proposed multi-modal fusion system achieves high performance, with a smaller genuine distance and a larger imposter distance. In comparison with previous approaches, the higher recognition performance is in favor of the proposed multibiometric fusion approach. Our future work will be devoted to emotion detection from multibiometrics for e-healthcare systems.