Abstract
Hand geometry, facial features, iris patterns, and fingerprints vary from person to person, providing unique traits that the biometrics field uses to secure various systems. Most mono-biometric authentication systems give high error rates because they rely on a single feature. Multimodal biometric systems were therefore introduced; they reduce the error rate at the cost of maintaining more feature data, and are generally considered more reliable and secure. Image-based approaches offer high computational efficiency with minimal preprocessing, and they remain effective because reliable feature extraction is possible even when image quality is low. However, they are weak against distortions in shape and variations in position or orientation angle. Hence, this chapter presents a multimodal biometric invariant fusion authentication system based on fusing the Zφ invariant moments of fingerprint and face features. It reduces both the number of features that must be stored for authentication and the error rate. The Morlet wavelet transform makes the system less sensitive to shape distortion by smoothing the image while preserving local edges. The Zφ moment combines Zernike and invariant moments, extracted from the fingerprint and the face, to produce affine-invariant features. Authentication succeeds if the similarity is at least 90% for the fingerprint and 70% for the face; the false acceptance rate (FAR) and false rejection rate (FRR) are optimal at these threshold values.
1 Introduction
Recent advancements in technology have opened more threats to personal data and national security due to the large amount of stored data. Information transmitted online can be hacked, allowing attackers to override the authorized user. Traditional methods such as password-, watermarking-, and cryptography-based systems protect data from hackers, but they are not sufficient for new-generation applications [1,2,3,4].
Biometric-based authentication was introduced to resist brute-force attacks. Here, authentication is performed using unique physical features of humans such as the fingerprint [5], iris, retina, and hand geometry. These systems are more secure than the traditional methods. Initially, mono-biometric [6] authentication systems were used to authenticate users and secure systems. Fingerprint verification is one of the most reliable biometric authentication methods and is used extensively by forensic experts. Fingerprint applications include entrance control, door locks, fingerprint-identification mice, and fingerprint-enabled mobile phones, among others. Fingerprint biometrics allow authorized users access to multiple clinical, financial, and other systems, and also help prevent forgery of certificates, conveying of false information, threats, and crimes.
A fingerprint verification system has three stages: enhancement, feature extraction, and comparison. Image enhancement is the preprocessing stage, where the quality of the edges is improved and the contrast level is increased. Poor-quality images have low-contrast edges and poorly defined boundaries, which pushes the FAR and FRR to about 10% [7].
In biometrics, a huge number of images must be maintained regardless of the features used, since the enrolled population is typically large. An effective compression technique is therefore required to use storage space efficiently. The disadvantage of compression, however, is loss of data, which leads to inaccurate matching. In this chapter, the Morlet wavelet algorithm is discussed for fingerprint enhancement and compression during the preprocessing stage of the fingerprint verification system [8].
Minutiae-based methods [9, 10] and image-based methods [11,12,13] are the two variations of fingerprint verification systems. Minutiae are the points of interest in a fingerprint. In minutiae-based methods they are used as features, with the position, orientation, and type of each minutia stored as a set. The disadvantage is that such methods may not exploit the rich discriminatory information in the ridge pattern and may have high computational complexity, whereas image-based methods use the ridge pattern itself as the feature. Tico et al. [14] proposed a transform-based method using discrete wavelet transform (DWT) features, while Amornraksa et al. [15] proposed using discrete cosine transform (DCT) features. These transform methods show high matching accuracy for inputs identical to those in their own databases. However, they do not consider invariance to affine transforms, which is needed to deal with varying input conditions.
To satisfy the variability condition, an integrated wavelet and Fourier-Mellin transform (WFMT) [16] using multiple WFMT features was proposed. However, this scheme is not suitable for all types of fingerprint images since it chooses the core point as the reference point.
To overcome these limitations, a simple binarization method is introduced to extract the core reference point. Zernike and invariant moments, which are invariant to translation, rotation, and scaling, are calculated from the reference point. The feature is evaluated through the correlation between the moments, which reduces the number of features required for comparison during authentication. Here, authentication is still performed by a single-biometric system [11], which yields high error rates when many similar features exist in the database.
To overcome the high error rates, multimodal biometric systems were developed, in which more than one biometric [17] is used simultaneously to authenticate and validate the user and to maintain more information for security purposes. Because a multimodal system handles more information per authentication, it takes more time and consumes more storage, resulting in high complexity, storage, and execution time. Fused biometric systems were introduced to address these constraints: the features of the multiple biometrics are combined into a single feature, and authentication is performed using a predefined threshold value.
A multimodal biometric fusion system can still suffer an increased error rate when many similar features are present. Fusion methods operate at the decision, score, or feature level; the techniques differ in what biometric information is fused and how the fusing is done. In decision-level fusion [18], the biometric image is divided into equal small squares, from which local binary patterns are fused into a single global feature pattern; these techniques reach about 95% accuracy. Score-level fusion [19] combines PCA analyses of the face and fingerprint into a single identification system, but the error rate exceeds 11%. Feature-level fusion [20] fuses the feature points of the fingerprint and the face and provides 97% efficiency. None of these fusion techniques achieves a zero error rate.
In this chapter, a new simple and robust fusion technique, the multimodal biometric invariant moment fusion authentication system, is introduced; it separates genuine users from imposters better across various test data sets. The fused algorithm produces a single identification decision from the moment coefficients, which addresses the time and storage constraints [21]. This approach provides better results than score-, feature-, and decision-level fusion techniques.
2 Multimodal Biometric Invariant Moment Fusion Authentication System
In a multimodal biometric system, more than one biometric is used for authentication. Both mono- and multimodal systems perform two major operations: enrolment and authentication. During enrolment, the distinctive information of each biometric is stored in the database for later verification. During authentication, the presented information is compared with the stored information, and depending on the ratio of similar to non-similar data, the user is identified as genuine or an imposter.
2.1 Invariant Moment Fusion System
The binarization method extracts the core reference point, at which the Zernike and invariant moments are calculated; these moments are invariant to translation, rotation, and scaling. The final features for authentication are evaluated from the correlation between the moments, which reduces the amount of storage required.
2.2 Fingerprint
2.2.1 Morlet Enhancement and Compression
The Morlet fingerprint image enhancement and compression [8] consists of two processing stages: wavelet analysis and smoothing. In wavelet analysis, Fourier transforms are applied to the 2D Morlet wavelet and to the original image separately, and the transformed image is obtained from the product of these transforms. The corrected two-dimensional continuous wavelet transform (2D CWT) is obtained by applying the inverse Fourier transform to this product. During the smoothing stage, the orientation and frequency images [22] of the 2D CWT image are estimated and fed to a Gabor filter in order to remove noise.
The steps involved in the algorithm are as follows:
-
1.
The image is decomposed using Morlet wavelet.
-
2.
Ridge segmentation is done to identify the broken ridges.
-
3.
The ridge orientation is estimated.
-
4.
The frequency is estimated using orientation image.
-
5.
The final image is reconstructed based on adjoining the chosen filtered blocks.
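The wavelet analysis of step 1 can be sketched with numpy FFTs: compute the FFT of the image, multiply by the wavelet's frequency response, and take the real part of the inverse FFT. The Morlet parameters below (`scale`, `k0`) are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def morlet_2d_freq(shape, scale=4.0, k0=(5.5, 0.0)):
    """2D Morlet wavelet evaluated directly in the frequency domain.

    The Fourier transform of a scaled Morlet wavelet is (up to
    normalization) a Gaussian centered at the scaled center frequency.
    """
    h, w = shape
    ky = np.fft.fftfreq(h) * 2 * np.pi
    kx = np.fft.fftfreq(w) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    return np.exp(-0.5 * scale**2 * ((KX - k0[0] / scale) ** 2 +
                                     (KY - k0[1] / scale) ** 2))

def wavelet_analysis(image, scale=4.0):
    """Analysis stage: FFT of the image, multiply by the wavelet
    spectrum, inverse FFT, keep the real part."""
    F = np.fft.fft2(image)
    psi_hat = morlet_2d_freq(image.shape, scale)
    return np.real(np.fft.ifft2(F * np.conj(psi_hat)))

# toy usage: a synthetic vertical ridge pattern
img = np.sin(np.linspace(0, 20 * np.pi, 128))[None, :] * np.ones((128, 1))
enhanced = wavelet_analysis(img)
```

Because the wavelet spectrum is band-pass, ridge-frequency content is amplified relative to slowly varying background, which is the enhancement effect the chapter describes.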
2.2.2 Morlet Wavelet
2.2.2.1 2D Continuous Wavelet Transforms
2D CWT is performed by convolving a wavelet function with the image. For f(x, y) ∈ L²(R²), the 2D CWT in the time domain is given as:
where s is the "dilation" parameter used to change the scale and a, b are the translation parameters used to slide in space. The factor of s^{1/2} is a normalization factor that keeps the total energy of the scaled wavelet constant.
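With these parameters, the 2D CWT takes the standard form (a reconstruction assuming the usual normalization indicated by the s^{1/2} factor):

```latex
W(s, a, b) = \frac{1}{\sqrt{s}} \iint f(x, y)\,
  \psi^{*}\!\left(\frac{x - a}{s},\; \frac{y - b}{s}\right) dx\, dy
```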
The 2D CWT in the frequency domain is given as:
where w1 and w2 are the frequency variables of the image, F(w1, w2) is the low-frequency spectrum, and ϕ(w1, w2) is the phase modulation, which defines the spectrum of the deformed image. The Fourier transform of the Morlet wavelet is applied to the image, the transform is evaluated at discrete points depending on the scale, and the real part of the inverse Fourier transform is displayed.
The decomposition of the fingerprint image by the 2D Morlet wavelet is shown in Fig. 12.1a. The resultant transformed image has good contrast and enhanced ridges with compression.
2.2.3 Ridge Orientation
The orientation image represents an intrinsic property of the fingerprint image and defines invariant coordinates for ridges and furrows in a local neighborhood, as shown in Fig. 12.1b. A ridge center maps to a peak in the projection, and the projection waveform facilitates the detection of ridge pixels. The ridges in the fingerprint image are identified with the help of eight different masks and are separated from the fingerprint image by the following equations:
where σ is the standard deviation and I(x, y) is an integrated image.
By viewing ridges as an oriented texture, a number of methods have been proposed to estimate the orientation field of fingerprint images [22]. Given a transformed image, N, the main steps for calculating dominant directions are as follows:
-
1.
Divide N into blocks of size w × w.
-
2.
Compute the gradients and apply the Gaussian filter Gxy. The gradient operators are simple Sobel operators, and the Gaussian filter is applied as follows:
-
3.
Estimate the local orientation of each block centered at pixel (i, j)
where the degree of smoothing is governed by the variance σ².
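Steps 2 and 3 above can be sketched with numpy: compute gradients, then the least-squares orientation of each block, θ = ½·atan2(2·ΣGxGy, Σ(Gx² − Gy²)). The chapter additionally smooths the gradient products with a Gaussian filter; that step is omitted here for brevity, and the block size is an assumed value.

```python
import numpy as np

def block_orientation(img, w=16):
    """Estimate the dominant gradient orientation per w x w block
    using the least-squares estimate
    theta = 0.5 * atan2(2 * sum(Gx*Gy), sum(Gx^2 - Gy^2))."""
    gy, gx = np.gradient(img.astype(float))   # row- and column-gradients
    rows = (img.shape[0] // w) * w
    cols = (img.shape[1] // w) * w
    theta = np.zeros((rows // w, cols // w))
    for i in range(0, rows, w):
        for j in range(0, cols, w):
            bx = gx[i:i + w, j:j + w]
            by = gy[i:i + w, j:j + w]
            num = 2.0 * (bx * by).sum()
            den = (bx * bx - by * by).sum()
            theta[i // w, j // w] = 0.5 * np.arctan2(num, den)
    return theta

# vertical ridges: intensity varies along x only, so theta should be ~0
img = np.sin(np.linspace(0, 16 * np.pi, 64))[None, :].repeat(64, axis=0)
theta = block_orientation(img)
```

The ridge orientation is perpendicular to this gradient orientation; adding π/2 per block converts one to the other.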
2.2.4 Frequency Image
The frequency of the fingerprint image is estimated from the orientation image O(x, y) by Eq. 12.7, as shown in Fig. 12.1c. Each block is rotated and cropped based on its orientation, and median filtering is then applied for smoothing.
where \( W\left(u,v\right)=\frac{u}{\sqrt{2}},\frac{v-\frac{u}{\sqrt{2}}}{2} \)
F(u, v) is the wavelet-transformed image, and I(u, v) ensures that the valid ridge frequency is non-zero. Ridge spacing in the range of 3–25 pixels is considered valid.
2.2.5 Enhanced Image
The Gabor filter optimally captures both local orientation and frequency information and is used to smooth the fingerprint image. By tuning a Gabor filter to a specific frequency and direction, the local frequency and orientation information can be extracted as texture information from the image; this provides smoothing as part of enhancement by removing noise, as shown in Fig. 12.1d.
where σx and σy determine the shape of the filter envelope and f represents the frequency of the image.
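An even-symmetric Gabor kernel of the kind described above can be sketched as follows; the σx, σy, frequency, and kernel-size values are illustrative assumptions.

```python
import numpy as np

def gabor_kernel(theta, f, sigma_x=4.0, sigma_y=4.0, size=15):
    """Even-symmetric Gabor filter tuned to orientation theta (radians)
    and ridge frequency f (cycles per pixel):

        g(x, y) = exp(-0.5 * (x'^2/sx^2 + y'^2/sy^2)) * cos(2*pi*f*x')

    where (x', y') are the coordinates rotated by theta.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-0.5 * (x_t**2 / sigma_x**2 + y_t**2 / sigma_y**2))
    return env * np.cos(2 * np.pi * f * x_t)

g = gabor_kernel(theta=0.0, f=0.1)
```

Enhancement then amounts to convolving each block with the kernel tuned to that block's estimated orientation and frequency.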
2.2.6 Determination of Reference Point and Regions of Interest (ROI)
The reference point is determined in order to evaluate the ROI of the fingerprint image, which is used to extract the Zφ moments. This simplifies the extraction process by reducing its complexity.
The Otsu method is used to define the threshold for binarizing the image. The intra-class variance is defined as a weighted sum of the variances of the two classes:
Weights ω1 and ω2 are the probabilities of the two classes separated by the threshold, and σ1² and σ2² are the variances of these classes, respectively. Minimizing the intra-class variance is equivalent to maximizing the inter-class variance:
which is expressed in terms of the class probabilities ωi and class means μi. The class probability ωi(t) is computed from the histogram:
while the class mean μi(t) is:
where x(i) is the value at the center of the ith histogram bin. Similarly, ω2(t) and μ2(t) are computed from the right-hand side of the histogram.
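The Otsu threshold search described above can be sketched with numpy: compute the histogram once, then evaluate the between-class variance σb²(t) = ω1(t)ω2(t)(μ1(t) − μ2(t))² for every candidate threshold. This is a generic implementation, not the chapter's exact code.

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold t maximizing the between-class variance,
    which is equivalent to minimizing the intra-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # probability of each level
    omega = np.cumsum(p)                   # w1(t); w2(t) = 1 - w1(t)
    mu = np.cumsum(p * np.arange(256))     # cumulative first moment
    mu_total = mu[-1]
    # between-class variance for every candidate threshold t
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)
    return int(np.argmax(sigma_b2))

# bimodal toy image: dark background plus a bright blob
img = np.full((64, 64), 30, dtype=np.uint8)
img[16:48, 16:48] = 200
t = otsu_threshold(img)
```

Pixels above `t` form the binarized foreground from which the connected components and the high-curvature region are then extracted.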
The algorithm to binarize, detect, and crop the ROI of the fingerprint is as follows:
-
1.
Compute the histogram and the probability of each intensity level.
-
2.
Set up initial ωi(0) and μi(0).
-
3.
For t = 1 to maximum intensity, do:
-
3.1
Update ωi and μi
-
3.2
Compute σb2(t) using Eq. 12.11
-
3.3
Record the threshold at which σb²(t) is maximum.
-
4.
The region labeled with four connected components is chosen; it determines the high-curvature region used to determine the ROI.
-
5.
The median of the region is taken as the reference point, and the image is cropped to a size of 120 × 120, as shown in Fig. 12.2.
2.2.7 Invariant and Zernike Moment Analysis
The algebraic invariants and Zernike moments are calculated from the reference point of the fingerprint and are invariant to scale, position, and rotation. Algebraic invariants are applied to the moment generating function under a rotation transformation. Nonlinear centralized moments and absolute orthogonal moment invariants are calculated with respect to the reference point. The fingerprint Zφ invariants [18] are shown in Table 12.1.
Invariant Moments
Central moments of order 3 or less provide translational invariance. For a 2D continuous function f(x, y), the moment of order (p + q) is defined as:
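In standard notation, this moment is:

```latex
m_{pq} = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}
  x^{p}\, y^{q}\, f(x, y)\, dx\, dy, \qquad p, q = 0, 1, 2, \ldots
```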
The second-order central moments are used for image orientation and for the scaling invariant:
A set of seven invariant moments derived from the second and third moments is a set of absolute orthogonal moment invariants proposed by Hu [23].
Rotational invariant moment: φ1 = ξ2,0 + ξ0,2.
Moment of inertia (pixel intensity to physical density for rotation invariant).
Skew invariants distinguish between mirror and identical images.
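Hu's construction can be sketched from first principles: central moments, scale-normalized moments (written η here), and the first invariant φ1 = η20 + η02, which is checked below for exact invariance under a 90° rotation. This is a generic illustration, not the chapter's implementation.

```python
import numpy as np

def central_moment(img, p, q):
    """mu_pq = sum over pixels of (x - xbar)^p (y - ybar)^q f(x, y)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (x * img).sum() / m00
    ybar = (y * img).sum() / m00
    return ((x - xbar) ** p * (y - ybar) ** q * img).sum()

def eta(img, p, q):
    """Scale-normalized central moment: eta_pq = mu_pq / mu_00^gamma,
    with gamma = (p + q) / 2 + 1."""
    gamma = (p + q) / 2 + 1
    return central_moment(img, p, q) / central_moment(img, 0, 0) ** gamma

def hu_phi1(img):
    """First Hu invariant: phi1 = eta20 + eta02 (rotation invariant)."""
    return eta(img, 2, 0) + eta(img, 0, 2)

img = np.zeros((32, 32))
img[8:24, 10:22] = 1.0            # an off-center rectangle
phi1 = hu_phi1(img)
phi1_rot = hu_phi1(np.rot90(img)) # rotation swaps eta20 and eta02
```

The remaining six Hu invariants follow the same pattern from the second- and third-order η moments.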
Zernike Moments
The Zernike moments are based on a set of complex polynomials {Vnm(x, y)} that form a complete orthogonal set over the unit disk x² + y² ≤ 1, expressed in polar coordinates, where n is a non-negative integer, m is an integer such that n − |m| is even, and |m| ≤ n.
The radial polynomial:
The Zernike moment is:
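The radial polynomial above can be sketched directly from its standard factorial series (assuming the usual definition; n ≥ 0, n − |m| even, |m| ≤ n):

```python
from math import factorial

def radial_poly(n, m, rho):
    """R_nm(rho) = sum_{s=0}^{(n-|m|)/2} (-1)^s (n-s)! /
       (s! ((n+|m|)/2 - s)! ((n-|m|)/2 - s)!) * rho^(n-2s)."""
    m = abs(m)
    assert (n - m) % 2 == 0 and m <= n, "need n-|m| even and |m| <= n"
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s)
           * factorial((n + m) // 2 - s)
           * factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )

# sanity checks: R_00 = 1 and R_20(rho) = 2*rho^2 - 1
```

The Zernike moment of order (n, m) is then the projection of the image, resampled onto the unit disk around the reference point, onto Vnm = R_nm(ρ)·exp(imθ).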
2.3 Face Fusion System
The architecture of the face fusion system is shown in Fig. 12.3. The eigenfaces are extracted from the face and used for authentication [17]. First, the mean and the difference of each image in the training set are computed using Eqs. 12.18 and 12.19. The entire set of centralized images T is then merged using the mean to obtain the matrix A, which is used to compute the surrogate covariance matrix L using Eq. 12.20. The diagonal elements of the covariance matrix yield the eigenfaces using Eq. 12.21. The eigen elements are sorted, and those with values greater than 1 are eliminated. Finally, the six invariant features are extracted from the faces using Eq. 12.22.
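A numpy sketch of the surrogate-covariance eigenface step: with far fewer images than pixels, the small matrix L = A·Aᵀ (one centered image per row) is diagonalized instead of the full pixel covariance, and its eigenvectors are lifted back to pixel space. The toy data below are random placeholders, not face images.

```python
import numpy as np

def eigenfaces(images):
    """images: (num_images, num_pixels) array.  Returns the mean face,
    eigenvalues, and eigenfaces from the surrogate covariance L = A A^T."""
    mean = images.mean(axis=0)
    A = images - mean                      # centered images, one per row
    L = A @ A.T                            # small surrogate covariance
    vals, vecs = np.linalg.eigh(L)         # eigh: L is symmetric
    order = np.argsort(vals)[::-1]         # sort by decreasing eigenvalue
    vals, vecs = vals[order], vecs[:, order]
    faces = A.T @ vecs                     # lift back to pixel space
    faces /= np.linalg.norm(faces, axis=0, keepdims=True) + 1e-12
    return mean, vals, faces

rng = np.random.default_rng(0)
imgs = rng.random((5, 64))                 # 5 toy "faces" of 64 pixels
mean, vals, faces = eigenfaces(imgs)
```

Each face is then represented by its projection onto the leading eigenfaces, which is the low-dimensional feature the fusion stage consumes.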
Handling high dimensionality efficiently is what makes a good face recognition algorithm. The fusion of sample tested face features is shown in Table 12.2.
2.3.1 Fusion
The data sets are computed independently from the described invariants of the face and fingerprint [18]. The variation distance of the moments is calculated using Eqs. 12.23 and 12.24 and is used for enrolment and for comparison during authentication.
2.3.2 Authentication
Multimodal biometric authentication is one of the new breed of authentication systems in which more than one biometric is used to validate the user. The overall architecture of our authentication system is shown in Fig. 12.4. The invariant moments extracted from the trained set of inputs are fused and enrolled in the database. During authentication, the fingerprint and face images scanned by the user are fused and compared with the fused value in the database. Matching is performed by calculating the correlation r between the distance di of the enrolled moments α and the verification moments β by Eq. 12.25. The correlation between the fused values computed using Eq. 12.25 and the variation computed using Eq. 12.26 determine whether the user is legitimate.
The resulting difference value is compared with the threshold value to validate the user using Eq. 12.22. The threshold value depends on the sensitivity of the system. If the difference is low, the similarity is high and crosses the threshold, so the user is authenticated; otherwise, the user is rejected. This multimodal biometric authentication system performs well and provides more than 99% accuracy.
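The matching step can be sketched as a Pearson correlation between enrolled and test moment vectors, compared against the per-modality similarity thresholds stated in the abstract (0.90 for fingerprint, 0.70 for face). The moment vectors below are made-up numbers for illustration only.

```python
import numpy as np

def correlation(alpha, beta):
    """Pearson correlation r between enrolled moments alpha and
    verification moments beta."""
    a = np.asarray(alpha, float)
    b = np.asarray(beta, float)
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def authenticate(fp_enrolled, fp_test, face_enrolled, face_test,
                 fp_thresh=0.90, face_thresh=0.70):
    """Accept only if both modalities clear their similarity threshold."""
    return (correlation(fp_enrolled, fp_test) >= fp_thresh and
            correlation(face_enrolled, face_test) >= face_thresh)

fp_e = [0.41, 1.92, 0.03, 2.71, 0.88, 1.10, 0.52]
fp_t = [0.40, 1.95, 0.05, 2.69, 0.90, 1.08, 0.50]   # near-identical print
face_e = [3.1, 0.7, 1.9, 2.4, 0.2, 1.1]
face_t = [3.0, 0.8, 2.0, 2.3, 0.3, 1.2]
ok = authenticate(fp_e, fp_t, face_e, face_t)
```

Requiring both modalities to pass is the conservative AND rule; a fused single-score rule, as the chapter uses, trades some of that strictness for robustness to one noisy modality.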
3 Experimental Results
The fingerprint image database used in this experiment is the FVC2002 database, which contains four distinct data sets: DB1, DB2, DB3, and DB4.
The performance is evaluated in terms of the false acceptance rate (FAR) and the false rejection rate (FRR).
FAR is the rate at which an imposter is accepted as a genuine user, and FRR is the rate at which a genuine user is rejected as an imposter. They are calculated using Eqs. 12.28 and 12.29, respectively.
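Eqs. 12.28 and 12.29 reduce to simple counting over match attempts; a sketch with made-up match scores:

```python
def far_frr(genuine_scores, imposter_scores, threshold):
    """FAR: fraction of imposter attempts accepted as genuine.
    FRR: fraction of genuine attempts rejected as imposters.
    A score >= threshold counts as an accept."""
    far = sum(s >= threshold for s in imposter_scores) / len(imposter_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine = [0.97, 0.95, 0.88, 0.99, 0.93]     # illustrative match scores
imposter = [0.40, 0.55, 0.91, 0.30, 0.62]
far, frr = far_frr(genuine, imposter, threshold=0.90)
```

Sweeping the threshold and recording (FAR, FRR) pairs yields the ROC curve; the crossing point FAR = FRR is the EER reported below.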
The equal error rate (EER), the point where FAR and FRR are equal, is used as a performance indicator for evaluating the recognition rate.
The receiver operating characteristic (ROC) curve is used as another performance indicator. It plots the genuine acceptance rate (GAR = 1 − FRR) against the FAR. The miss probability and false alarm probability are also evaluated.
Finally, the EER is evaluated, and the results shown in Figs. 12.5, 12.6, and 12.7 demonstrate that the proposed system performs well in comparison with other image-based approaches.
The DCT coefficients used by Amornraksa [15], the WFMT features used by Jin [16], Sha's Gabor filter approach [13], and Ju's invariant moments with BPNN [24] are compared with the proposed method; the results in Table 12.3 show that the proposed method provides higher accuracy.
4 Conclusion
The combined Morlet enhancement with fusion of the Zernike and invariant moment features of fingerprint and face, fused by evaluating the distance, mean, and correlation, reduces both the storage of features and the error rate. The binarization approach using the high-curvature region accurately determines the reference point used to extract the moments, and the result is invariant to affine transformations under various input conditions. Maintaining the combined features as a single identification record reduces the amount of biometric data stored for authentication. The analysis shows that the multimodal biometric system using Zφ moment invariants improves the verification accuracy up to 97% compared with other approaches, while the maximum FAR and FRR remain below 1%. This system demonstrates high reliability, robustness, and good performance in personnel authentication systems.
References
M.S. Obaidat, N. Boudriga, Security of e-Systems and Computer Networks (Cambridge University Press, Cambridge, UK, 2007)
M.S. Obaidat, B. Sadoun, Verification of computer users using keystroke dynamics. IEEE Trans. Syst. Man Cybern. B 27(2), 261–269 (1997)
M.S. Obaidat, B. Sadoun, Keystroke dynamics based identification, in Biometrics: Personal Identification in Networked Society, ed. by A. Jain et al. (Springer, Kluwer, 1999), pp. 213–229
W. Stallings, Cryptography and Network Security- Principles and Practices (Prentice-Hall, Upper Saddle River, 2003)
T. Jea, V. Govindaraju, A minutia-based partial fingerprint recognition system. Pattern Recogn. 38(10), 1672–1684 (2005)
T. Jea, V.K. Chavan, V. Govindaraju, J.K. Schneider, Security and matching of partial fingerprint recognition systems. Proc. SPIE 5404, 39–50 (2004)
D. Maio, D. Maltoni, A.K. Jain, S. Prabhakar, Handbook of Fingerprint Recognition (Springer, Berlin, 2003)
P. Viswanathan, P. Venkata Krishna, Fingerprint enhancement and compression method using Morlet wavelet. Int. J. Signal Imaging Syst. Eng. 3(4), 261–268 (2010)
S. Prabhakar, J. Wang, A.K. Jain, S. Pankanti, R. Bolle, Minutiae verification and classification for fingerprint matching, in Proc. 15th International Conference on Pattern Recognition, vol. 1, Barcelona, 3–8 September 2000, pp. 25–29
J. Liu, Z. Huang, K. Chan, Direct minutiae extraction from gray-level fingerprint image by relationship examination. Proc. Int. Conf. Image Process. 2, 427–430 (2000)
P. Viswanathan, P. Venkata Krishna, Morlet Wavelet fingerprint invariant automated authentication system. Int. J. Recent Trends Eng. 4(1), 1–5 (2010)
C. Chen, Decision level fusion of hybrid local features for face recognition, in 2008 International Conference on Neural Networks and Signal Processing (IEEE, 2008), pp. 199–204
L.F. Sha, F. Zhao, X.O. Tang, Improved finger code for filter bank-based fingerprint matching. Proc. Int. Conf. Image Process. 2, 895–898 (2003)
M. Tico, E. Immonen, P. Ramo, P. Kuosmanen, J. Saarinen, Fingerprint recognition using wavelet features. Proc. IEEE Int. Symp. Circuits Syst. 2, 21–24 (2001)
T. Amornraksa, S. Achaphetpiboon, Fingerprint recognition using DCT features. Electron. Lett. 42(9), 522–523 (2006)
A.T.B. Jin, D.N.C. Ling, O.T. Song, An efficient fingerprint verification system using integrated wavelet and Fourier-Mellin invariant transform. Image Vis. Comput. 22(6), 503–513 (2004)
D. Maio, D. Maltoni, Direct gray scale minutia detection in fingerprints. Trans. PAMI 19(1), 27–40 (1997)
P. Viswanathan, P. Venkata Krishna, Multimodal biometric invariant moment fusion authentication system, in Information Processing and Management, BAIP 2010, Springer CCIS, vol. 70 (2010), pp. 136–144
G.L. Marcialis, F. Roli, Score-level fusion of fingerprint and face matchers for personal verification under "stress" conditions, in 14th International Conference on Image Analysis and Processing (ICIAP 2007) (IEEE, 2007)
A. Rattani, D.R. Kisku, M. Bicego, M. Tistarelli, Feature level fusion of face and fingerprint biometrics (IEEE, 2007)
T.-Y. Jea, V. Govindaraju, A minutia-based partial fingerprint recognition system. Pattern Recogn. 38(10), 1672–1684 (2005)
C.I. Watson, G.T. Candela, P.J. Grother, Comparison of FFT fingerprint filtering methods for neural network classification. NISTIR 5493 (1994) Available: https://ws680.nist.gov/publication/get_pdf.cfm?pub_id=900727
M.K. Hu, Visual pattern recognition by moment invariants. IRE Trans. Info. Theory IT-8, 179–187 (1962)
J.C. Yang, D.S. Park, Fingerprint verification based on invariant moment features and nonlinear BPNN. Int. J. Control. Autom. Syst. 6(6), 800–808 (2008)
L. O'Gorman, J.V. Nickerson, An approach to fingerprint filter design. Pattern Recogn. 22(1), 29–38 (1989)
© 2019 Springer Nature Switzerland AG
Viswanatham, P., Venkata Krishna, P., Saritha, V., Obaidat, M.S. (2019). Multimodal Biometric Invariant Fusion Techniques. In: Obaidat, M., Traore, I., Woungang, I. (eds) Biometric-Based Physical and Cybersecurity Systems. Springer, Cham. https://doi.org/10.1007/978-3-319-98734-7_12
Print ISBN: 978-3-319-98733-0. Online ISBN: 978-3-319-98734-7.