1 Introduction

Recent advancements in technology have opened the door to more threats against personal data and national security because of the large amounts of data now stored. Information transmitted online can be intercepted, allowing hackers to override the authorized user. There are many traditional methods, such as password-, watermarking-, and cryptography-based systems, to protect data from hackers, but these methods are not sufficient for new-generation applications [1,2,3,4].

Biometric authentication was introduced to resist brute-force attacks. Here, the authentication process relies on unique physical features of humans such as the fingerprint [5], iris, retina, and hand geometry, providing more secure systems than the traditional methods. Initially, mono-biometric [6] authentication systems were used to authenticate users and secure systems. The fingerprint verification system is one of the most reliable biometric authentication systems and is used extensively by forensic experts. Fingerprint applications include entrance control, door-lock applications, fingerprint identification mice, and fingerprint-enabled mobile phones, among others. Fingerprint biometrics allow authorized users access to multiple clinical, financial, and other systems, and also help prevent forgery of certificates, conveyance of false information, threats, and crime.

There are three stages in the fingerprint verification system: enhancement, feature extraction, and comparison. Image enhancement is the preprocessing stage, where the quality of the edges is improved and the contrast level is increased. Poor-quality images have low-contrast edges and ill-defined boundaries, which degrades the FAR and FRR to about 10% [7].

In biometrics, a huge number of images must be maintained irrespective of the features used, since the enrolled population is typically large. Hence, an effective compression technique is required to utilize the storage space efficiently. The disadvantage of compression, however, is loss of data, which leads to inaccurate matching. In this chapter, the Morlet wavelet algorithm is discussed for fingerprint enhancement and compression during the preprocessing stage of the fingerprint verification system [8].

Minutiae-based methods [9, 10] and image-based methods [11,12,13] are the two broad classes of fingerprint verification systems. Minutiae are the points of interest in a fingerprint. In minutiae-based methods, the minutiae serve as features, and the position, orientation, and type of each minutia are stored as sets. Their disadvantage is that they may not utilize the rich discriminatory information in the ridge pattern and may have high computational complexity, whereas image-based methods use the ridge pattern itself as the feature. Tico et al. [14] proposed a transform-based method using discrete wavelet transform (DWT) features, while Amornraksa et al. [15] proposed discrete cosine transform (DCT) features. These transform methods show high matching accuracy for inputs identical to those in their own databases; however, they do not consider invariance to affine transforms, which is needed to deal with varying input conditions.

To satisfy the variability condition, the integrated wavelet and Fourier-Mellin transform (WFMT) [16] using multiple WFMT features is used. However, this scheme is not suitable for all types of fingerprint images, since it chooses the core point as the reference point.

To overcome these limitations, a simple binarization method is introduced to extract the core reference point. Zernike and invariant moments, which are invariant to translation, rotation, and scaling, are calculated from the reference point. The feature is evaluated from the range of correlation between the moments, which reduces the number of features required for comparison during authentication. Here, authentication is performed by a single-biometric system [11], which yields high error rates when many similar features exist in the database.

In order to overcome these high error rates, multimodal biometric systems have been developed, in which more than one biometric [17] is used simultaneously to authenticate and validate the user and to maintain more information for security purposes. Because a multimodal biometric system carries more information, authentication takes more time and consumes more storage, resulting in high complexity, storage cost, and execution time. Fused biometric systems were introduced to address these constraints: the features of the multiple biometrics are combined into a single feature, and authentication is performed against a predefined threshold value.

Multimodal biometric fusion can itself lead to an increase in the authentication error rate when highly similar features are fused. There are many fusion methods, operating at the decision, score, or feature level, that are used in biometric authentication systems; they differ in what biometric information is fused and how the fusion is done. In decision-level fusion [18], the biometric image is divided into equal small squares, and the local binary patterns are fused into a single global feature pattern; these techniques reach about 95% accuracy. Score-level fusion [19] merges the PCA analyses of the face and fingerprint into a single identification system, and in this case the error rate exceeds 11%. Feature-level fusion [20] combines the feature points of the fingerprint and the face and provides 97% efficiency. None of these fusion techniques, however, achieves a zero error rate.

In this chapter, a new simple and robust fusion technique, the multimodal biometric invariant moment fusion authentication system, is introduced; it discriminates genuine users from imposters better across various test data sets. The fused algorithm produces a single identification decision (data set) using coefficients, which addresses the constraints of time and storage space [21]. This approach provides better results than score-, feature-, and decision-level fusion techniques.

2 Multimodal Biometric Invariant Moment Fusion Authentication System

In a multimodal biometric system, more than one biometric is used for authentication. Both mono- and multimodal systems perform two major operations: enrolment and authentication. During enrolment, the distinctive information of the biometric is stored in the database for later verification. After enrolment, authentication is performed by comparing the presented information with the stored information. Depending on the ratio of similar to non-similar data, the user is identified as genuine or imposter.

2.1 Invariant Moment Fusion System

The binarization method extracts the core reference point, from which the Zernike and invariant moments are calculated. These moments are invariant to translation, rotation, and scaling. The final features for authentication are evaluated from the range of correlation between the moments, which reduces the amount of storage required.

2.2 Fingerprint

2.2.1 Morlet Enhancement and Compression

Morlet fingerprint image enhancement and compression [8] consists of two processing stages: wavelet analysis and smoothing. In wavelet analysis, the Fourier transform is applied to the 2D Morlet wavelet and to the original image separately, and the transformed image is obtained from these transformed functions. The corrected two-dimensional continuous wavelet transform (2D CWT) is obtained by applying the inverse Fourier transform to the transformed image. During smoothing, the orientation and frequency images [22] of the 2D CWT image are estimated and fed to a Gabor filter in order to remove noise.

The steps involved in the algorithm are as follows:

  1. The image is decomposed using the Morlet wavelet.

  2. Ridge segmentation is done to identify the broken ridges.

  3. The ridge orientation is estimated.

  4. The frequency is estimated using the orientation image.

  5. The final image is reconstructed from the adjoining chosen filtered blocks.

2.2.2 Morlet Wavelet

2.2.2.1 2D Continuous Wavelet Transforms

2D CWT is performed by convolving a wavelet function with the image. For f(x, y) ∈ L²(ℝ²), the 2D CWT in the spatial domain is given as:

$$ cwt\left(s,a,b\right)=\frac{1}{\sqrt{s}}\int \int f\left(x,y\right)\psi \left(\frac{x-a}{s},\frac{y-b}{s}\right) dxdy $$
(12.1)

where s is the "dilation" parameter used to change the scale and a, b are the translation parameters used to slide in space. The factor 1/√s is a normalization factor that keeps the total energy of the scaled wavelet constant.

The 2D CWT in frequency domain is given as:

$$ cwt\left(s,{w}_1,{w}_2\right)=\sqrt{s}F\left({w}_1,{w}_2\right)\Phi \left({sw}_1,{sw}_2\right) $$
(12.2)

where w1 and w2 are the spatial frequencies of the image, F(w1, w2) is the spectrum of the image, and Φ(sw1, sw2) is the scaled wavelet spectrum, which shapes the spectrum of the deformed image. The Fourier transform of the Morlet wavelet is applied to the image at discrete points that depend on the scale, and the real part of the inverse Fourier transform is displayed.

$$ \psi \left({k}_x,{k}_y\right)=\sqrt{2\pi}\left({e}^{-\frac{1}{2}\left({\left(2\pi {k}_x-{k}_0\right)}^2+{\left(2\pi {k}_y\right)}^2\right)}-{e}^{-\frac{1}{2}{k}_0^2}\;{e}^{-\frac{1}{2}\left({\left(2\pi {k}_x\right)}^2+{\left(2\pi {k}_y\right)}^2\right)}\right) $$
(12.3)

The decomposition of the fingerprint image by the 2D Morlet wavelet is shown in Fig. 12.1a. The resulting transformed image has good contrast and enhanced ridges, with compression.

Fig. 12.1 The resultant phases of the enhancement: (a) Morlet image, (b) orientation image, (c) frequency image, and (d) enhanced image
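To make the transform concrete, here is a minimal numpy sketch of Eq. 12.2: the image spectrum is multiplied by a scaled Morlet spectrum of the form of Eq. 12.3 and inverted. The carrier frequency k0 = 5.0 and the direct use of angular frequency are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np

def morlet_2d_freq(w1, w2, k0=5.0):
    """Corrected 2D Morlet wavelet sampled in the frequency domain (cf. Eq. 12.3);
    k0 is an assumed carrier frequency."""
    return np.exp(-0.5 * ((w1 - k0) ** 2 + w2 ** 2)) \
        - np.exp(-0.5 * k0 ** 2) * np.exp(-0.5 * (w1 ** 2 + w2 ** 2))

def cwt2(img, scale):
    """Eq. 12.2: cwt(s, w1, w2) = sqrt(s) F(w1, w2) Psi(s w1, s w2),
    brought back to the spatial domain by the inverse FFT."""
    F = np.fft.fft2(img)
    w1 = 2 * np.pi * np.fft.fftfreq(img.shape[0])
    w2 = 2 * np.pi * np.fft.fftfreq(img.shape[1])
    W1, W2 = np.meshgrid(w1, w2, indexing="ij")
    Psi = morlet_2d_freq(scale * W1, scale * W2)
    return np.real(np.fft.ifft2(np.sqrt(scale) * F * Psi))
```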

2.2.3 Ridge Orientation

The orientation image represents an intrinsic property of the fingerprint image and defines invariant coordinates for ridges and furrows in a local neighborhood, as shown in Fig. 12.1b. A ridge center maps to a peak in the projection waveform, which facilitates the detection of ridge pixels. The ridges in the fingerprint image are identified with the help of eight different masks and are separated from the fingerprint image by the following equations:

$$ I\left(x,y\right)=I\left(x,y\right)-\mathrm{mean} $$
(12.4)
$$ S\left(x,y\right)=I\left(x,y\right)/\sigma $$
(12.5)

where σ is the standard deviation of the image and I(x, y) is the image intensity.
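As a minimal sketch, the two normalization equations translate directly to numpy (applied globally here; the mask-based ridge separation described above is not reproduced):

```python
import numpy as np

def normalize(img):
    """Eqs. 12.4-12.5: zero-mean, unit-variance normalization."""
    i = img.astype(float)
    i -= i.mean()       # Eq. 12.4: subtract the mean
    return i / i.std()  # Eq. 12.5: divide by the standard deviation
```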

By viewing the ridges as an oriented texture, a number of methods have been proposed to estimate the orientation field of fingerprint images [22]. Given a transformed image N, the main steps for calculating the dominant directions are as follows:

  1. Divide N into blocks of size w × w.

  2. Compute the gradients and apply the Gaussian filter Gxy. The gradient operators are simple Sobel operators, and the Gaussian filter is applied as follows:

$$ {G}_{xy}=\frac{1}{2{\pi \sigma}^2}{e}^{-\frac{x^2+{y}^2}{2{\sigma}^2}} $$
(12.6)
  3. Estimate the local orientation of each block centered at pixel (i, j):

$$ O\left(i,j\right)=\frac{\pi }{2}+\frac{1}{2}{\tan}^{-1}\left(\frac{2{G}_{xy}}{G_{xx}-{G}_{yy}}\right) $$
(12.7)

where Gxx, Gyy, and Gxy are the block sums of the squared and mixed Sobel gradients, and the degree of smoothing is governed by the variance σ2.
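A hedged sketch of this step using Sobel gradients and the least-squares orientation formula; the block size and the optional Gaussian smoothing of Eq. 12.6 (e.g., via scipy.ndimage.gaussian_filter) are illustrative choices:

```python
import numpy as np
from scipy import ndimage

def orientation_field(img, w=16):
    """Block-wise ridge orientation from Sobel gradients (Sect. 2.2.3)."""
    gx = ndimage.sobel(img.astype(float), axis=1)   # x-gradient
    gy = ndimage.sobel(img.astype(float), axis=0)   # y-gradient
    h, wd = img.shape
    theta = np.zeros((h // w, wd // w))
    for bi in range(h // w):
        for bj in range(wd // w):
            sx = gx[bi*w:(bi+1)*w, bj*w:(bj+1)*w]
            sy = gy[bi*w:(bi+1)*w, bj*w:(bj+1)*w]
            gxy, gxx, gyy = (sx*sy).sum(), (sx*sx).sum(), (sy*sy).sum()
            # Eq. 12.7: dominant direction, rotated to ridge orientation
            theta[bi, bj] = 0.5 * np.arctan2(2*gxy, gxx - gyy) + np.pi/2
    return theta
```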

2.2.4 Frequency Image

The frequency of the fingerprint image is estimated from the orientation image O(x, y) of Eq. 12.7, as shown in Fig. 12.1c. Each block is rotated and cropped based on its orientation, and median filtering is then applied for smoothing.

$$ F\left(x,y\right)=\frac{F\left(u,v\right)W\left(u,v\right)I\left(u,v\right)}{W\left(u,v\right)I\left(u,v\right)} $$
(12.8)

where \( W\left(u,v\right)=\frac{u}{\sqrt{2}},\frac{v-\frac{u}{\sqrt{2}}}{2} \)

F(u, v) is the wavelet-transformed image, and I(u, v) ensures that the valid ridge frequency is non-zero. A ridge wavelength of 3–25 pixels is considered the valid range.
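A sketch of the per-block frequency estimate described above, assuming the standard projection method of [22]: rotate the block so the ridges lie horizontal, project, and measure the peak spacing, keeping only the 3–25 pixel range named in the text.

```python
import numpy as np
from scipy import ndimage

def block_frequency(block, theta):
    """Ridge frequency of one block from the spacing of projection peaks."""
    rot = ndimage.rotate(block, np.degrees(theta), reshape=False, mode="nearest")
    proj = rot.sum(axis=0)                     # x-signature of the rotated block
    peaks = np.where((proj[1:-1] > proj[:-2]) & (proj[1:-1] > proj[2:]))[0]
    if len(peaks) < 2:
        return 0.0                             # no measurable ridge frequency
    wavelength = np.diff(peaks).mean()         # mean peak-to-peak distance
    return 1.0 / wavelength if 3 <= wavelength <= 25 else 0.0
```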

2.2.5 Enhanced Image

The Gabor filter optimally captures both local orientation and frequency information and is used to smooth the fingerprint image. By tuning a Gabor filter to a specific frequency and direction, the local frequency and orientation information can be extracted as texture information from the image; this smoothing forms part of the enhancement by removing noise, as shown in Fig. 12.1d.

$$ E\left(x,y\right)=\frac{1}{2\pi {\sigma}_x{\sigma}_y}\exp \left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2}+\frac{y^2}{\sigma_y^2}\right)\right]\cos \left(2\pi fx\right) $$
(12.9)

where σx and σy determine the shape of the filter envelope and f represents the frequency of the image.
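A minimal even-symmetric Gabor kernel in the form of Eq. 12.9; the sigma values and kernel size are illustrative, and in practice the kernel would be tuned per block to the local orientation and frequency estimated above.

```python
import numpy as np
from scipy import ndimage

def gabor_kernel(f, theta, sx=4.0, sy=4.0, size=11):
    """Even-symmetric Gabor kernel tuned to frequency f and direction theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate into ridge coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-0.5 * (xr**2 / sx**2 + yr**2 / sy**2)) * np.cos(2 * np.pi * f * xr)

# usage (illustrative): smoothed = ndimage.convolve(img, gabor_kernel(0.1, np.pi / 3))
```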

2.2.6 Determination of Reference Point and Regions of Interest (ROI)

A reference point is determined in order to evaluate the ROI of the fingerprint image, from which the Zϕ moments are extracted. This simplifies the extraction process by reducing its complexity.

The Otsu method is used to define the threshold for binarizing the image. The intra-class variance is defined as a weighted sum of the variances of the two classes:

$$ {\sigma}_{\omega}^2(t)={\omega}_1(t){\sigma}_1^2(t)+{\omega}_2(t){\sigma}_2^2(t) $$
(12.10)

where the weights ω1 and ω2 are the probabilities of the two classes separated by the threshold t, and σ1² and σ2² are the variances of these classes, respectively. Minimizing the intra-class variance is equivalent to maximizing the inter-class variance:

$$ {\sigma}_b^2(t)={\sigma}^2-{\sigma}_{\omega}^2(t)={\omega}_1(t){\omega}_2(t){\left[{\mu}_1(t)-{\mu}_2(t)\right]}^2 $$
(12.11)

which is expressed in terms of the class probabilities ωi and class means μi. The class probability ωi(t) is computed from the histogram:

$$ {\omega}_i(t)=\sum \limits_{i=0}^tp(i) $$
(12.12)

while the class mean μi(t) is:

$$ {\mu}_i(t)=\left[\sum \limits_{i=0}^tp(i)x(i)\right]/{\omega}_i $$
(12.13)

where x(i) is the value at the center of the ith histogram bin. Similarly, ω2(t) and μ2(t) can be computed on the right-hand side of the histogram.

The binarization algorithm that detects and crops the ROI of the fingerprint is as follows:

  1. Compute the histogram and the probability p(i) of each intensity level.

  2. Set up the initial ωi(0) and μi(0).

  3. For t = 1 to the maximum intensity, do:

    3.1 Update ωi and μi.

    3.2 Compute σb²(t) using Eq. 12.11.

    3.3 Record the thresholds where σb²(t) is greatest, σb1²(t), and equal or next greatest, σb2²(t), which define:

$$ \mathrm{Threshold}\ T=\frac{\sigma_{b1}^2(t)+{\sigma}_{b2}^2(t)}{2} $$
(12.14)
$$ \mathrm{Binarized}\ \mathrm{image}\kern0.48em {E}_b\left(x,y\right)=\left\{\begin{array}{l}1\kern0.5em \mathrm{if}\ E\left(x,y\right)>T\\ {}0\kern0.5em \mathrm{if}\ E\left(x,y\right)\le T\end{array}\right. $$
(12.15)
  4. The region labeled with four connected components is chosen; this determines the high-curvature region used to define the ROI.

  5. The median of the region is taken as the reference point, and the image is cropped to a size of 120 × 120, as shown in Fig. 12.2.
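A sketch of the binarization-and-crop procedure, assuming a bright-foreground image; the connected-component labeling of step 4 is omitted, and the foreground median stands in for the high-curvature reference point.

```python
import numpy as np

def otsu_threshold(img):
    """Pick the t that maximizes the between-class variance (Eq. 12.11)."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()                        # Eq. 12.12: class probabilities
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w1, w2 = p[:t].sum(), p[t:].sum()
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (np.arange(t) * p[:t]).sum() / w1          # Eq. 12.13
        mu2 = (np.arange(t, 256) * p[t:]).sum() / w2
        var = w1 * w2 * (mu1 - mu2) ** 2                 # Eq. 12.11
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def crop_roi(img, size=120):
    """Binarize (Eq. 12.15), take the foreground median as reference point,
    and crop a size x size ROI around it (step 5)."""
    t = otsu_threshold(img)
    ys, xs = np.nonzero(img > t)
    cy, cx = int(np.median(ys)), int(np.median(xs))
    half = size // 2
    return img[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
```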

Fig. 12.2 The resultant phases of the fingerprint enhancement with singular point detection

2.2.7 Invariant and Zernike Moment Analysis

The algebraic invariants and Zernike moments are calculated from the reference point of the fingerprint and are invariant to scale, position, and rotation. Algebraic invariants are applied to the moment-generating function under a rotation transformation. Nonlinear centralized moments and absolute orthogonal moment invariants are calculated with respect to the reference point. Fingerprint Zϕ invariants [18] are shown in Table 12.1.

Table 12.1 Fingerprint Zϕ invariants

Invariant Moments

Central moments of order 3 or less provide translational invariance. For a 2D continuous function f(x, y), the moment of order (p + q) is defined as:

$$ {m}_{0,0}=\sum \limits_{i,j=1}^nf,\kern1em \overline{x}=\frac{m_{1,0}}{m_{0,0}},\kern1em \overline{y}=\frac{m_{0,1}}{m_{0,0}},\kern1em {m}_{1,0}=\sum \limits_{i=1}^nx\cdot f,\kern1em {m}_{0,1}=\sum \limits_{j=1}^ny\cdot f,\kern1em {m}_{1,1}=\sum \limits_{i,j=1}^nx\cdot y\cdot f $$
$$ {m}_{2,0}=\sum \limits_{i=1}^n{x}^2\cdot f,\kern1em {m}_{0,2}=\sum \limits_{j=1}^n{y}^2\cdot f,\kern1em {m}_{1,2}=\sum \limits_{i,j=1}^nx\cdot {y}^2\cdot f,\kern1em {m}_{3,0}=\sum \limits_{i=1}^n{x}^3\cdot f,\kern1em {m}_{0,3}=\sum \limits_{j=1}^n{y}^3\cdot f,\kern1em {m}_{2,1}=\sum \limits_{i,j=1}^n{x}^2\cdot y\cdot f $$

The second- and third-order central moments, normalized for scale invariance, are:

$$ {\xi}_{1,1}=\frac{m_{1,1}-\overline{y}\cdot {m}_{1,0}}{m_{0,0}^2},\kern1em {\xi}_{2,0}=\frac{m_{2,0}-\overline{x}\cdot {m}_{1,0}}{m_{0,0}^2},\kern1em {\xi}_{0,2}=\frac{m_{0,2}-\overline{y}\cdot {m}_{0,1}}{m_{0,0}^2} $$
$$ {\xi}_{3,0}=\frac{m_{3,0}-3\overline{x}\cdot {m}_{2,0}+2{\overline{x}}^2\cdot {m}_{1,0}}{m_{0,0}^{2.5}},\kern1em {\xi}_{0,3}=\frac{m_{0,3}-3\overline{y}\cdot {m}_{0,2}+2{\overline{y}}^2\cdot {m}_{0,1}}{m_{0,0}^{2.5}} $$
$$ {\xi}_{2,1}=\frac{m_{2,1}-2\overline{x}\cdot {m}_{1,1}-\overline{y}\cdot {m}_{2,0}+2{\overline{x}}^2\cdot {m}_{0,1}}{m_{0,0}^{2.5}},\kern1em {\xi}_{1,2}=\frac{m_{1,2}-2\overline{y}\cdot {m}_{1,1}-\overline{x}\cdot {m}_{0,2}+2{\overline{y}}^2\cdot {m}_{1,0}}{m_{0,0}^{2.5}} $$

The set of seven invariant moments derived from the second- and third-order moments is the set of absolute orthogonal moment invariants proposed by Hu [23].

Rotational invariant moment: φ(1) = ξ2,0 + ξ0,2 is the moment of inertia, with pixel intensity playing the role of physical density. The remaining rotation invariants are:

$$ \varphi (2)={\left({\xi}_{2,0}-{\xi}_{0,2}\right)}^2+4{\xi}_{1,1}^2 $$
$$ \varphi (3)={\left({\xi}_{3,0}-3{\xi}_{1,2}\right)}^2+{\left(3{\xi}_{2,1}-{\xi}_{0,3}\right)}^2 $$
$$ \varphi (4)={\left({\xi}_{3,0}+{\xi}_{1,2}\right)}^2+{\left({\xi}_{2,1}+{\xi}_{0,3}\right)}^2 $$
$$ {\displaystyle \begin{array}{l}\varphi (5)=\left({\xi}_{3,0}-3{\xi}_{1,2}\right)\left({\xi}_{3,0}+{\xi}_{1,2}\right)\left({\left({\xi}_{3,0}+{\xi}_{1,2}\right)}^2-3{\left({\xi}_{2,1}+{\xi}_{0,3}\right)}^2\right)\\ {}\kern2em +\left(3{\xi}_{2,1}-{\xi}_{0,3}\right)\left({\xi}_{2,1}+{\xi}_{0,3}\right)\left(3{\left({\xi}_{3,0}+{\xi}_{1,2}\right)}^2-{\left({\xi}_{2,1}+{\xi}_{0,3}\right)}^2\right)\end{array}} $$
$$ \varphi (6)=\left({\xi}_{2,0}-{\xi}_{0,2}\right)\left({\left({\xi}_{3,0}+{\xi}_{1,2}\right)}^2-{\left({\xi}_{2,1}+{\xi}_{0,3}\right)}^2\right)+4{\xi}_{1,1}\left({\xi}_{3,0}+{\xi}_{1,2}\right)\left({\xi}_{2,1}+{\xi}_{0,3}\right) $$
$$ {\displaystyle \begin{array}{l}\varphi (7)=\left(3{\xi}_{2,1}-{\xi}_{0,3}\right)\left({\xi}_{3,0}+{\xi}_{1,2}\right)\left({\left({\xi}_{3,0}+{\xi}_{1,2}\right)}^2-3{\left({\xi}_{2,1}+{\xi}_{0,3}\right)}^2\right)\\ {}\kern2em +\left(3{\xi}_{1,2}-{\xi}_{3,0}\right)\left({\xi}_{2,1}+{\xi}_{0,3}\right)\left(3{\left({\xi}_{3,0}+{\xi}_{1,2}\right)}^2-{\left({\xi}_{2,1}+{\xi}_{0,3}\right)}^2\right)\end{array}} $$

Skew invariants, such as φ(7), distinguish mirror images from otherwise identical images.
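The seven invariants above can be computed directly from normalized central moments; a self-contained numpy sketch (the ξp,q values correspond to the xi(p, q) helper):

```python
import numpy as np

def hu_moments(img):
    """Seven Hu invariants phi(1)..phi(7) from normalized central moments."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    f = img.astype(float)
    m00 = f.sum()
    xb, yb = (x * f).sum() / m00, (y * f).sum() / m00

    def xi(p, q):  # scale-normalized central moment of order (p + q)
        mu = (((x - xb) ** p) * ((y - yb) ** q) * f).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = xi(2, 0), xi(0, 2), xi(1, 1)
    n30, n03, n21, n12 = xi(3, 0), xi(0, 3), xi(2, 1), xi(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
        + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        + (3 * n12 - n30) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])
```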

Zernike Moments

Zernike moments are built from a set of complex polynomials {Vnm(x, y)} that form a complete orthogonal set over the unit disk x2 + y2 ≤ 1 in polar coordinates, where n is a non-negative integer, n − |m| is even, |m| ≤ n, and θ = tan−1(y/x).

The radial polynomial:

$$ {R}_{nm}(r)=\sum \limits_{s=0}^{\left(n-\left|m\right|\right)/2}{\left(-1\right)}^s\frac{\left(n-s\right)!}{s!\left(\frac{n+\left|m\right|}{2}-s\right)!\left(\frac{n-\left|m\right|}{2}-s\right)!}{r}^{n-2s} $$
(12.16)

The Zernike moment is:

$$ {Z}_{\mathrm{nm}}\left(x,y\right)=\frac{n+1}{\pi}\sum \limits_{x=0}^N\sum \limits_{y=0}^Mf\left(x,y\right){V}_{n,-m}\left(x,y\right) $$
(12.17)
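A direct implementation of Eqs. 12.16 and 12.17, mapping the cropped ROI onto the unit disk (the pixel-to-disk mapping used here is one common convention, assumed for illustration):

```python
import numpy as np
from math import factorial

def radial_poly(n, m, r):
    """Eq. 12.16: R_nm(r), with n - |m| even and |m| <= n."""
    m = abs(m)
    return sum((-1) ** s * factorial(n - s)
               / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
               * r ** (n - 2 * s)
               for s in range((n - m) // 2 + 1))

def zernike_moment(img, n, m):
    """Eq. 12.17: Z_nm over the unit disk, with V_{n,-m} = R_nm e^{-jm theta}."""
    N, M = img.shape
    y, x = np.mgrid[:N, :M]
    xn, yn = (2 * x - M + 1) / M, (2 * y - N + 1) / N   # pixels -> [-1, 1]
    r, theta = np.hypot(xn, yn), np.arctan2(yn, xn)
    mask = r <= 1.0                                      # restrict to unit disk
    V = radial_poly(n, m, r) * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * (img * mask * V).sum()
```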

2.3 Face Fusion System

The architecture of the face fusion system is shown in Fig. 12.3. The eigenfaces are extracted from the face and used for authentication [17]. First, the mean of the training set and the difference of each image from it are computed using Eqs. 12.18 and 12.19. The centralized images T are merged using the mean to obtain the result A, which is used to compute the surrogate covariance matrix L by Eq. 12.20. The eigen decomposition of the covariance matrix yields the eigenfaces by Eq. 12.21. The eigen elements are sorted, and elements whose values are greater than 1 are eliminated. Finally, the six invariant features are extracted from the faces using Eq. 12.22.

Fig. 12.3 Block diagram of face fusion

Handling the high dimensionality of face images is central to a good face recognition algorithm. The fused face features of a sample test are shown in Table 12.2.

Table 12.2 Face Zϕ invariants
$$ \mathrm{mean}=\frac{1}{n}\sum \limits_{i=1}^n{X}_i $$
(12.18)
$$ {A}_i={T}_i-\mathrm{mean} $$
(12.19)
$$ L={A}^{\prime}\times A $$
(12.20)
$$ \left[V,D\right]=\mathrm{eig}(L) $$
(12.21)
$$ \mathrm{Variant}=L\times A $$
(12.22)

2.3.1 Fusion

The variants of the face and fingerprint described above are computed independently for each data set [18]. The variation distances of the moments are calculated using Eqs. 12.23 and 12.24 and are used for enrolment and for comparison during authentication.

$$ {d}_1=\mu \left({\varphi}_i\right),\mu \left(\sigma \left({\varphi}_i\right)\right),\mu \left({\sigma}^2\left({\varphi}_i\right)\right),\frac{\mu \left({\varphi}_i\right)}{\mu \left(\sigma \left({\varphi}_i\right)\right)} $$
(12.23)
$$ {d}_2=\mu \left({Z}_i{\varphi}_i\right),\mu \left(\sigma \left({Z}_i{\varphi}_i\right)\right),\mu \left({\sigma}^2\left({Z}_i{\varphi}_i\right)\right),\frac{\mu \left({Z}_i{\varphi}_i\right)}{\mu \left(\sigma \left({Z}_i{\varphi}_i\right)\right)} $$
(12.24)
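One plausible reading of these two equations as code: each modality's moment vectors are reduced to the four statistics of Eqs. 12.23 and 12.24 and concatenated into a single enrolled vector (the rows-as-samples convention is an assumption):

```python
import numpy as np

def moment_descriptor(phis):
    """Eq. 12.23/12.24: fuse moment vectors (rows = samples) into four stats."""
    mu = np.mean(phis)
    sd = np.mean(np.std(phis, axis=0))
    var = np.mean(np.var(phis, axis=0))
    return np.array([mu, sd, var, mu / sd])

def fuse(finger_phis, face_phis):
    """Concatenate d1 (fingerprint) and d2 (face) into one identification vector."""
    return np.concatenate([moment_descriptor(finger_phis),
                           moment_descriptor(face_phis)])
```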

2.3.2 Authentication

Multimodal biometric authentication is one of the new breed of authentication systems in which more than one biometric is used to validate and authenticate the user. The overall architecture of our authentication system is shown in Fig. 12.4. The invariant moments extracted from the trained set of inputs are fused and enrolled in the database. During authentication, the fingerprint and face images scanned by the user are fused and compared with the fused value in the database. Matching is performed by calculating the correlation r between the distances di of the enrolled moments α and the verification moments β using Eq. 12.25. The correlation computed by Eq. 12.25 and the variation computed by Eq. 12.26 determine whether the user is legitimate.

Fig. 12.4 Block diagram of the face and fingerprint fusion authentication

The resulting difference value is compared with a threshold to validate the user using Eq. 12.27. The threshold value depends on the sensitivity of the system. If the difference is low, the similarity is high and crosses the threshold, so the user is authenticated; otherwise, the user is rejected. This multimodal biometric authentication system performs well and provides more than 99% accuracy.

$$ r=\frac{2{C}_{rf}}{C_r+{C}_f}\;\mathrm{where}\;{C}_r=\sum \limits_{i=0}^N\alpha {(i)}^2 $$
$$ {C}_f=\sum \limits_{i=0}^N\beta {(i)}^2\kern0.5em \mathrm{and}\kern0.5em {C}_{rf}=\sum \limits_{i=0}^N\alpha {(i)}^2\beta {(i)}^2 $$
(12.25)
$$ D={\mathrm{Fused}}_{\mathrm{scanned}}-{\mathrm{Fused}}_{\mathrm{enrolled}} $$
(12.26)
$$ A=\left\{\begin{array}{l}\frac{100-D}{100}\times 100<\mathrm{Th}:\mathrm{Not}\ \mathrm{authenticated}\\ {}\frac{100-D}{100}\times 100>\mathrm{Th}:\mathrm{Authenticated}\end{array}\right. $$
(12.27)
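A minimal sketch of the matching step, combining the correlation of Eq. 12.25 with the difference-based decision of Eqs. 12.26–12.27; the threshold Th = 95 is illustrative only.

```python
import numpy as np

def correlation(alpha, beta):
    """Eq. 12.25: r = 2 C_rf / (C_r + C_f) between enrolled and scanned moments."""
    cr = np.sum(alpha ** 2)
    cf = np.sum(beta ** 2)
    crf = np.sum(alpha ** 2 * beta ** 2)
    return 2 * crf / (cr + cf)

def authenticate(fused_scanned, fused_enrolled, th=95.0):
    """Eqs. 12.26-12.27: accept only if the similarity crosses the threshold."""
    D = np.abs(fused_scanned - fused_enrolled).sum()   # Eq. 12.26
    similarity = (100.0 - D) / 100.0 * 100.0           # Eq. 12.27 score
    return similarity > th
```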

3 Experimental Results

The fingerprint image database used in this experiment is the FVC2002 database, which contains four distinct data sets: DB1, DB2, DB3, and DB4.

The performance is evaluated in terms of the false acceptance rate (FAR) and the false rejection rate (FRR).

$$ FAR=\frac{\mathrm{Number}\ \mathrm{of}\ \mathrm{accepted}\ \mathrm{imposter}}{\mathrm{Total}\ \mathrm{number}\ \mathrm{of}\ \mathrm{imposter}}\times 100 $$
(12.28)
$$ FRR=\frac{\mathrm{Number}\ \mathrm{of}\ \mathrm{rejected}\ \mathrm{genuine}}{\mathrm{Total}\ \mathrm{number}\ \mathrm{of}\ \mathrm{genuine}}\times 100 $$
(12.29)

FAR is the rate at which an imposter is accepted as a genuine user, and FRR is the rate at which a genuine user is rejected as an imposter. They are calculated using Eqs. 12.28 and 12.29, respectively.

The equal error rate (EER), the point where FRR and FAR are equal, is used as a performance indicator for evaluating recognition performance.

The receiver operating characteristic (ROC) curve is used as another performance indicator. It plots the genuine acceptance rate (GAR = 1 − FRR) against the FAR. The miss probability and false-alarm probability are also evaluated.
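These three indicators can be traced by sweeping a decision threshold over genuine and imposter similarity scores; a sketch (a 0–100 score range is assumed, matching Eq. 12.27):

```python
import numpy as np

def far_frr_eer(genuine_scores, imposter_scores):
    """Eqs. 12.28-12.29 across thresholds; EER where the two curves cross."""
    thresholds = np.linspace(0, 100, 1001)
    far = np.array([(imposter_scores >= t).mean() for t in thresholds]) * 100
    frr = np.array([(genuine_scores < t).mean() for t in thresholds]) * 100
    i = np.argmin(np.abs(far - frr))        # closest point to FAR == FRR
    return far, frr, (far[i] + frr[i]) / 2  # EER in percent
```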

Finally, the EER is evaluated; the results in Figs. 12.5, 12.6, and 12.7 show that the proposed system performs well in comparison with other image-based approaches.

Fig. 12.5 Performance evaluation of the proposed method: the ROC plots GAR against FAR

Fig. 12.6 Probability of missing and alarm

Fig. 12.7 Threshold and the equal error rate (EER): FAR vs. FRR

The proposed method is compared with the DCT coefficients used by Amornraksa et al. [15], the WFMT features of [16], the Gabor filter approach of Sha [13], and the invariants with BPNN of Ju [24]; the results in Table 12.3 show that the proposed method provides higher accuracy.

Table 12.3 The GAR % against FAR % of the proposed method compared with other methods

4 Conclusion

Morlet enhancement was combined with the fusion of Zernike and invariant moment features of the fingerprint and face, fused by evaluating the distance, mean, and correlation. This combination reduces both the storage of features and the error rate. The binarization approach using the high-curvature region accurately determines the reference point used to extract the moments and is invariant to affine transformations under various input conditions. Maintaining the combined features as a single identification data set reduces the amount of biometric features required for authentication. The analysis of multimodal biometrics using moment invariants improves the verification accuracy up to 97% compared with other approaches, while the maximum FAR and FRR were kept below 1%. The system demonstrates high reliability, robustness, and good performance for personnel authentication.