Abstract
This paper implements the fusion of the palmprint and iris biometric traits. The region of interest (ROI) of the palm is detected and segmented using a valley detection algorithm, and the iris ROI is extracted using the Neighbor-Pixels Value Algorithm (NPVA). The statistical local binary pattern (SLBP) is used to extract local features from both the iris and the palm. To enhance the palm features, a combination of the discrete cosine transform (DCT) and the histogram of oriented gradients (HOG) is applied, while Gabor–Zernike moments are used to extract the iris features. The experiments were carried out in identification mode, with a Gaussian fuzzy membership function used as the classifier in the matching stage of the fused palm–iris system. The CASIA palmprint and iris datasets were used in this research work, and the accuracy of the proposed system was found to be satisfactory.
1 Introduction
Pattern recognition classifies data based on knowledge already gained during the cognition process [1]; it determines the class to which an object or pattern belongs. Patterns can be one-dimensional (e.g., signals) or two-dimensional (e.g., images). Biometrics is one application of pattern recognition: a biometric system analyzes and measures characteristics of the human body. There are three types of biometrics: physical, behavioral, and cognitive. Physical biometrics are bodily traits such as the face, fingerprint, and palm. Behavioral biometrics are activities of the human body such as gait, keystroke dynamics, and voice. Cognitive biometrics exploit the fact that each person's nervous tissue responds differently to a signal, as in the electrocardiogram (ECG), electromyogram (EMG), electroencephalogram (EEG), and electrodermal response (EDR). In this paper, two physical biometrics (palmprint and iris) are investigated.
The iris is the ring-shaped region of the eye that controls the intensity of light entering through the pupil; its radius therefore expands and contracts according to the illumination. The iris carries many features, such as tiny crypts, nevi, the anterior border layer, the collarette, and texture features [2]. For the same person, the right iris has different features from the left iris, and the iris is a touchless biometric. The palmprint is another physical biometric trait, located between the finger roots and the wrist of the hand. Shu and Zhang [3] were the first authors to describe the palm as a biometric trait, in August 1998.
The paper is organized as follows: the next section reviews previous research on the fusion of palm and iris. Section 3 describes the methodologies of the iris–palm fusion system. Section 4 discusses the implementation and results, and the paper ends with the conclusion and references.
2 Literature Survey
Jia et al. [4] presented a palm feature extraction method named the histogram of oriented lines (HOL). The HOL method is derived from the HOG technique [5], which was introduced by N. Dalal and B. Triggs in 2005 for human detection. HOL is insensitive to illumination changes and extracts the palm lines along with their orientation features. Jia et al. used the Euclidean distance in both verification and identification modes for the palmprint trait. In verification mode, error rates of 0.31 and 0.64% were obtained on the PolyU M_B and PolyU II databases, respectively. In identification mode, the recognition rates were calculated by matching samples from the testing templates; rates of 99.97 and 100% were obtained on the PolyU II and PolyU M_B databases, respectively.
Fathi et al. [6] presented a face recognition system that extracts both global and local features. For the global features, the Zernike moment is applied to the output of a Gabor filter; for the local features, the histogram of oriented gradients (HOG) is used. The global and local features are then combined to produce the feature vector of the face image. A nearest neighbor classifier based on the Euclidean distance is used for matching. The accuracies were 97.8, 98, and 97.7% on the Yale, ORL, and AR databases, respectively.
Mahesh and Raj [7] proposed the magnitudes of the Zernike moments for generating the feature vectors of face images, experimenting on both 2D and 3D faces. The features were selected according to the order of the Zernike moments, so the feature-vector length varied from 16 to 42 features. The ORL dataset was split into 60% for training and 40% for testing, and the MATLAB neural network tool (nprtool) was used for matching. On the ORL database, the highest recognition rate was 99.50% with 36 features using a radial basis function neural network (RBFNN); on the 3D face database, the best result was 99.71% with 42 features using a multilayer perceptron neural network (MLPNN).
Gayathri and Ramamoorthy [8] extracted the iris and palmprint features by applying a Gabor filter with four orientations and three spatial frequencies, and fused the palm and iris after the features were described by the Gabor filter. The Gabor outputs for the palm and iris images were integrated into a single matrix using a second-level wavelet transform; in other words, the second-level wavelet transform was applied to the palm and iris images, which were then combined into one matrix to generate the feature vector. The fusion system achieved 99.2% accuracy with a KNN classifier.
Kihal et al. [9] presented a multimodal system of iris and palmprint that extracts global features using wavelet transforms: the iris features were extracted with a four-level Haar wavelet transform, and the palmprint features with a four-level Daubechies wavelet transform. The Hamming distance was used in the matching phase for the iris and palmprint modalities separately. The researchers proposed and compared three levels of fusion for the iris and palmprint traits: feature level, score level, and decision level. At the feature level, the iris and palm feature vectors were combined into a single vector. At the score level, the iris and palmprint feature vectors were evaluated individually, and the decision was made from each vector's score using a weighted sum rule. Finally, at the decision level, the fusion was based on the error rates of the iris and palmprint feature vectors. Khachane et al. [10] presented a fuzzy rule-based classification for distinguishing normal sperm from abnormal.
3 Methodologies
In this paper, the palmprint and iris biometric traits were fused at the feature level, as shown in Fig. 1. The palm is segmented from the rest of the hand using the algorithm proposed in the authors' previous work [11], and the iris is localized using the Neighbor-Pixels Value Algorithm (NPVA) [12]. The texture features of the iris and palm are extracted by calculating the mean and standard deviation of the local binary pattern (LBP) filter response. The DCT and HOG are applied together to extract palm texture features. A Gabor filter with eight orientations and five scales is applied to the iris ROI to generate forty sub-images, and the Zernike moment (ZM) is then applied to each sub-image to generate the iris feature vector. A Gaussian fuzzy membership function is used as the classifier in the identification system.
3.1 Preprocessing
The region of interest (ROI) of the palm was detected and segmented by applying the valley detection algorithm [13] and thresholding segmentation. To extract the iris region, the Neighbor-Pixels Value Algorithm (NPVA), proposed by the authors in [12], computes the radius and center of the iris region. After iris detection, the iris region is converted into rectangular form using rubber sheet normalization [14]. The iris images were resized to 64 × 384 pixels and the palm images to 64 × 64 pixels.
Further, three enhancement techniques were applied to the palm ROI: contrast-limited adaptive histogram equalization (CLAHE), the Sobel filter, and a Gaussian filter mask. CLAHE is used to enhance the intensity contrast of the palm and iris ROIs [15]. Histogram equalization (HE) was applied to the iris ROI to enhance the contrast of the iris pixels; it redistributes the pixel intensity values over the range 0–255 and is a powerful image processing technique.
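The histogram equalization step described above can be sketched with NumPy. The paper's implementation was in MATLAB; this Python version is purely illustrative, the function name is ours, and it assumes an 8-bit, non-constant image:

```python
import numpy as np

def histogram_equalization(img):
    """Spread the grey levels of an 8-bit image over the full 0-255 range
    by mapping each level through the normalised cumulative histogram.
    Assumes img is uint8 and not a constant image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # count at the first occupied grey level
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

CLAHE differs from plain HE in that equalization is applied per tile with a clip limit on each local histogram; image libraries such as OpenCV provide it directly.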
3.2 Feature Extraction
In pattern recognition and image processing, feature extraction converts a matrix (image) into a set of mathematical parameters in the form of a vector (feature vector) or feature template. Such features can be edges, palm lines, fingerprint minutiae, neighbor-pixel-based features (local binary patterns), texture features, or shape-based features. In this experiment, three methods were used to extract the features of the palm and iris.
3.2.1 SLBP
The local binary pattern (LBP) was introduced by Ojala et al. in 2002 [16]. In this research, the LBP is used as a filter for the palm and iris images. The LBP is applied at each pixel of an image except the first and last rows and columns. The LBP filter compares each pixel with its eight neighbors: if a neighbor's intensity is greater than or equal to that of the central pixel, it is assigned one; otherwise, zero. This produces eight binary digits, which are converted to a decimal number that replaces the center pixel, as shown in Fig. 2 [17].
To obtain the SLBP, the mean and standard deviation are calculated from the LBP matrix, and these two values form the SLBP feature vector. The SLBP method is the feature extraction technique common to both the palm and the iris.
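A compact NumPy sketch of the LBP filter and the two-value SLBP descriptor described above (an illustrative transcription; the function and variable names are ours):

```python
import numpy as np

def lbp_filter(img):
    """3x3 LBP: compare every interior pixel with its 8 neighbours;
    a neighbour >= centre contributes a 1-bit, giving a code in 0..255."""
    img = img.astype(np.int32)
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    # Neighbour offsets (dy, dx), one per bit position.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= centre).astype(np.int32) << bit
    return code

def slbp_features(img):
    """SLBP: the mean and standard deviation of the LBP response."""
    code = lbp_filter(img)
    return np.array([code.mean(), code.std()])
```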
3.2.2 Composite of DCT and HOG
The discrete cosine transform (DCT) converts a signal or set of pixels into the frequency domain. Besides transforming the image, it can bring out the edges of the palm image. Because the palm contains several kinds of features (principal lines, wrinkles, etc.), the composite of DCT and HOG was proposed for extracting the palm features.
Based on the following equation, the discrete cosine transform (DCT) is computed:

\( x\left( a \right) = w\left( a \right)\sum\nolimits_{b = 1}^{N} {y\left( b \right)\cos \frac{{\pi \left( {2b - 1} \right)\left( {a - 1} \right)}}{2N}} ,\quad a = 1,2, \ldots ,N \)

where \( w\left( 1 \right) = 1/\sqrt N \) and \( w\left( a \right) = \sqrt {2/N} \) for \( a > 1 \); N is the length of y, and x and y are the same size. The indices a and b run from 1 because MATLAB vectors are 1-indexed. The DCT output contains both negative and positive values, so the negative values are converted into positive (absolute) values.
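A direct NumPy transcription of this 1-based DCT formula (illustrative; MATLAB's built-in dct computes the same transform):

```python
import numpy as np

def dct_1based(y):
    """DCT-II of a 1-D signal using the 1-based indexing of the equation:
    x(a) = w(a) * sum_b y(b) * cos(pi*(2b-1)*(a-1)/(2N))."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    a = np.arange(1, n + 1).reshape(-1, 1)   # output index a = 1..N
    b = np.arange(1, n + 1).reshape(1, -1)   # input index b = 1..N
    w = np.where(a == 1, 1.0 / np.sqrt(n), np.sqrt(2.0 / n))
    basis = np.cos(np.pi * (2 * b - 1) * (a - 1) / (2 * n))
    return (w * basis) @ y
```

As in the text, np.abs can then be applied to fold the negative coefficients into positive values.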
The histogram of oriented gradients (HOG) was introduced in 2005 by Dalal and Triggs [5] for the purpose of human detection. In this experiment, HOG is used to extract palm texture features through the proposed combination of DCT and HOG: the DCT is first applied to the palm region image to analyze and describe its edges, and HOG is then applied to the DCT output.
The following steps describe the HOG algorithm:
Step 1. The palm region image is divided into cells of 8 × 8 pixels, and groups of four cells form blocks of 2 × 2 cells (16 × 16 pixels).
Step 2. A 50% overlap with the next block is used, as shown in Fig. 3, so each palm image contains 7 × 7 = 49 blocks.
Step 3. Nine directions are selected from the range between 0° and 180°. The gradient magnitude and orientation are computed using the following equations:

\( G_{x} = I\left( {x + 1,y} \right) - I\left( {x - 1,y} \right),\quad G_{y} = I\left( {x,y + 1} \right) - I\left( {x,y - 1} \right) \)

\( M\left( {x,y} \right) = \sqrt {G_{x}^{2} + G_{y}^{2} } ,\quad \theta \left( {x,y} \right) = \tan^{ - 1} \frac{{G_{y} }}{{G_{x} }} \)
Step 4. The histogram of each cell over the nine direction bins is computed.
The total number of HOG features is

\( L_{\text{HOG}} = NB \times CB \times P \quad (9) \)

where NB is the total number of blocks in the palm image, CB is assigned as four because each block of the palm image is divided into four cells, and P is the number of orientation bins, assigned as 9. From Eq. 9, the total number of HOG features is 49 × 4 × 9 = 1764.
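The cell/block arithmetic of Steps 1–4 can be checked with a few lines of Python (the function and its parameter names are ours):

```python
def hog_feature_length(img_size=64, cell=8, block_cells=2, bins=9):
    """Feature-vector length for HOG with 8x8-pixel cells, 2x2-cell blocks,
    a one-cell block stride (50% overlap) and 9 orientation bins."""
    cells_per_side = img_size // cell                   # 8 cells for a 64x64 ROI
    blocks_per_side = cells_per_side - block_cells + 1  # stride of 1 cell -> 7
    cells_per_block = block_cells * block_cells         # 4 cells per block
    return blocks_per_side * blocks_per_side * cells_per_block * bins
```

With the defaults above, hog_feature_length() evaluates to 49 × 4 × 9 = 1764, matching the count in the text.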
3.2.3 Gabor–Zernike Moments
The idea behind this technique is to combine the Gabor filter with the Zernike moments to extract the texture features of an iris image; this combination has achieved satisfactory results on the face and iris traits. In this implementation, a Gabor filter [18] with eight orientations and five scales was applied to the iris image, generating forty sub-images. The Zernike moments [19] were then applied to each sub-image with different orders and repetitions, and four features were selected from each sub-image, giving a feature-vector length of 160 features. The Zernike moment of order n with repetition m is obtained by the following equation:

\( Z_{nm} = \frac{n + 1}{\pi }\sum\limits_{x} {\sum\limits_{y} {f\left( {x,y} \right)V_{nm}^{*} \left( {x,y} \right)} } ,\quad x^{2} + y^{2} \le 1 \)
The magnitude of the Zernike moment \( Z_{nm} \) is unchanged when the image is rotated; therefore, \( \left| {Z_{nm} } \right| \) is a rotation-invariant feature. \( V_{nm} \left( {x,y} \right) \) is the Zernike polynomial on the unit circle \( x^{2} + y^{2} \le 1 \):

\( V_{nm} \left( {\rho ,\theta } \right) = R_{nm} \left( \rho \right)e^{jm\theta } \)
where n is a nonnegative integer representing the order of the Zernike moment and m represents the repetition, satisfying the conditions that n − |m| is even and \( \left| m \right| \le n \). Also, \( \rho = \sqrt {x^{2} + y^{2} } \) and \( \theta = \tan^{ - 1} \frac{y}{x} \).
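A minimal NumPy sketch of a Zernike moment magnitude on the unit disc, following the equations above (illustrative only; the full pipeline applies this to each of the forty Gabor sub-images, which is omitted here, and the function names are ours):

```python
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_nm(rho); requires n - |m| even."""
    m = abs(m)
    out = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s) /
             (factorial(s) * factorial((n + m) // 2 - s)
              * factorial((n - m) // 2 - s)))
        out += c * rho ** (n - 2 * s)
    return out

def zernike_magnitude(img, n, m):
    """|Z_nm| of a square image mapped onto the unit disc; the magnitude
    is invariant to rotation of the image."""
    size = img.shape[0]
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0                          # keep only pixels inside the disc
    v = radial_poly(n, m, rho[mask]) * np.exp(-1j * m * theta[mask])
    area = (2.0 / (size - 1)) ** 2             # area element of the sampling grid
    z = (n + 1) / np.pi * np.sum(img[mask] * np.conj(v)) * area
    return abs(z)
```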
3.3 Fusion
Fusion integrates the palm and iris biometric traits; in this experiment, it was performed at the feature level. The iris and palm feature vectors were composited to generate a single vector. Before compositing, max normalization was applied to bring the vector values into the same range [0, 1]. The max normalization steps are as follows:
Step 1. Convert the feature vector values into integer values.
Step 2. Find the maximum value of each of the iris and palm feature vectors.
Step 3. Divide the iris and palm feature vectors by their respective maximum values.
Step 4. Composite the iris and palm feature vectors into a single vector.
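The normalization and fusion steps can be sketched as follows (a simplified Python illustration; the integer conversion of Step 1 is omitted, and the function names are ours):

```python
import numpy as np

def max_normalize(v):
    """Scale a feature vector into [0, 1] by dividing by its maximum (Steps 2-3)."""
    v = np.asarray(v, dtype=float)
    return v / np.abs(v).max()

def fuse_features(iris_vec, palm_vec):
    """Feature-level fusion: concatenate the two normalised vectors (Step 4)."""
    return np.concatenate([max_normalize(iris_vec), max_normalize(palm_vec)])
```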
3.4 Fuzzy-Based Classification
The feature vectors are classified using the Gaussian fuzzy membership function [20, 21], which gives output in the range between 0 and 1. The fuzzy classification algorithm based on the Gaussian membership function proceeds as follows:
Step 1. Enroll all the feature vectors of the biometric traits as a matrix.
Step 2. Divide the feature matrix into training and testing sets, with 50% of the feature matrix assigned to train the model and the rest to test the system accuracy.
Step 3. Calculate the mean and standard deviation of the training samples of each specific label separately:
\( \mu = \frac{1}{n}\sum\nolimits_{i = 1}^{n} {H_{i} } ,\quad \sigma = \sqrt {\frac{1}{n}\sum\nolimits_{i = 1}^{n} {\left( {H_{i} - \mu } \right)^{2} } } \)

where H is a feature vector and n is the length of the vector H; \( \mu \) is the mean (average) of the specific features and \( \sigma \) is the standard deviation.
Step 4. Apply the Gaussian fuzzy membership function to the testing samples using the mean and standard deviation of the training samples:
\( H_{w} = e^{{ - \frac{{\left( {H_{i} - \mu } \right)^{2} }}{{2\sigma^{2} }}}} \)

where e is the mathematical constant, approximately 2.7183. \( H_{w} \) contains the Gaussian fuzzy membership values, which lie in the range between zero and one; \( H_{i} \) is the testing vector, and \( \mu \) and \( \sigma \) are the mean and the standard deviation, respectively. The average of each observation \( H_{w} \) of the testing vector is stored in another matrix for computing the accuracy.
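The classification stage above can be sketched in a few lines of Python (illustrative; the class models are the per-label mean and standard deviation vectors from Step 3, and all names are ours):

```python
import numpy as np

def gaussian_membership(h, mu, sigma):
    """Per-feature Gaussian fuzzy membership H_w = exp(-(H_i - mu)^2 / (2 sigma^2))."""
    sigma = np.where(sigma == 0, 1e-9, sigma)  # guard features with zero spread
    return np.exp(-((h - mu) ** 2) / (2.0 * sigma ** 2))

def classify(test_vec, class_stats):
    """Pick the label whose (mean, std) model yields the highest average membership."""
    scores = {label: gaussian_membership(test_vec, mu, sigma).mean()
              for label, (mu, sigma) in class_stats.items()}
    return max(scores, key=scores.get)
```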
4 Implementation and Results
The methodologies and algorithms of this paper were developed in MATLAB. The CASIA-Iris-Interval and CASIA-Palmprint databases [22] were used in the implementation. In this experiment, 24 subjects with eight samples each (8 × 24 = 192 images) were selected randomly from each of the CASIA-Palmprint and CASIA-Iris-Interval databases, for a total of 8 × 24 + 8 × 24 = 384 images.
The iris and palm feature vectors were classified with the Gaussian fuzzy membership function, with 50% of the vectors used for training and 50% for testing. The mean and standard deviation were computed from the training features and stored as two vectors for each person; each testing vector was then evaluated by the Gaussian fuzzy membership function with respect to these stored mean and standard deviation vectors.
From Table 1, the unimodal accuracies for the palm and iris were 97.92 and 93.75%, respectively, so the average unimodal accuracy is ((97.92 + 93.75)/2) = 95.83%. The recognition rate thus increased by 3.125% in the multimodal biometric system (Fig. 4).
5 Conclusion
A comparison between unimodal and multimodal biometric systems based on a fuzzy classifier was developed in MATLAB. The ROIs of the palm and iris were detected and extracted using the valley detection algorithm and the NPVA, respectively. SLBP was used to extract the local features of the iris and palm. To enhance the features, the composite of DCT and HOG was applied to the palm images; for the iris images, the composite of the Gabor filter and Zernike moments was applied to extract the iris features. Max normalization was applied to bring the feature vectors into the same range, and the palm and iris feature vectors were then integrated into a single vector. A fuzzy-based classifier was developed for this system. The recognition rate of the multimodal system is 98.96%, with an approximate classification time of 7.47 s, which is found to be satisfactory.
References
Murty MN, Susheela Devi V (2011) Pattern recognition: an algorithmic approach. Springer Science & Business Media, Berlin, pp 1–6
Wildes RP (1997) Iris recognition: an emerging biometric technology. In: Proceeding of the IEEE, vol 85, issue no. 9, September 1997
Shu W, Zhang D (1998) Automated personal identification by palmprint. Opt Eng 37(8) [S0091-3286(98)01908-4]
Jia W, Hu R-X, Lei Y-K, Zhao Y, Gui J (2014) Histogram of oriented lines for palmprint recognition. IEEE Trans Syst Man Cybern 44(3)
Dalal N, Triggs B (2005) Histograms of oriented gradients for human detection. In: IEEE computer society conference on Computer Vision and Pattern Recognition (CVPR'05)
Fathi A, Alirezazadeh P, Abdali-Mohammadi F (2016) A new Global-Gabor-Zernike feature descriptor and its application to face recognition. J Vis Commun Image Represent 38:65–72
Mahesh VGV, Raj ANJ (2015) Invariant face recognition using Zernike moments combined with feed forward neural network. Int J Biom 7(3):286–307
Gayathri R, Ramamoorthy P (2012) Feature level fusion of palmprint and iris. Int J Comput Sci Issues (IJCSI) 9(4):194–203
Kihal N, Chitroub S, Meunier J (2014) Fusion of iris and palmprint for multimodal biometric authentication. In: 2014 4th international conference on Image Processing Theory, Tools and Applications (IPTA). IEEE, New York, pp 1–6
Khachane MY, Manza RR, Ramteke RJ (2015) Fuzzy rule based classification of human spermatozoa. In: 2015 international conference on Electrical, Electronics, Signals, Communication and Optimization (EESCO). IEEE, New York, pp 1–5
Alsubari A, Ramteke RJ (2016) Extraction of palmprint texture features using combined DWT-DCT and local binary pattern. In: IEEE 2nd international conference on Next Generation Computing Technologies (NGCT), Dehradun, pp 748–753
Alsubari A, Ramteke R (2018) Multimodal of face and iris based on local binary pattern and Gabor-Zernike moments. Int J Adv Res Comput Sci 9(1)
Michael GKO, Connie T, Teoh ABJ (2008) Touch-less palmprint biometrics: novel design and implementation. Image Vis Comput 26:1551–1560 (Elsevier)
Masek L (2003) Recognition of human iris patterns for biometric identification. M.S. Dissertation, the University of Western Australia
Zuiderveld K (1994) Contrast limited adaptive histogram equalization. Graphic Gems IV. Academic Press Professional, San Diego, pp 474–485
Ojala T, Pietikäinen M, Mäenpää T (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 24(7):971–987
Alsubari A, Satange DN, Ramteke RJ (2017) Facial expression recognition using wavelet transform and local binary pattern. In: 2017 2nd international conference for Convergence in Technology (I2CT). IEEE, New York, pp 338–342
Štruc V, Pavešić N (2010) The complete Gabor-Fisher classifier for robust face recognition. EURASIP J Adv Sign Process 2010(1)
Khotanzad A, Hong YH (1990) Invariant image recognition by Zernike moments. IEEE Trans Pattern Anal Mach Intell 12(5):489–497
Ramteke RJ, Mehrotra SC (2006) Feature extraction based on moment invariants for handwriting recognition. In: 2006 IEEE conference on cybernetics and intelligent systems. IEEE, New York, pp 1–6
Wang LX, Mendel JM (1992) Fuzzy basis functions, universal approximation, and orthogonal least-squares learning. IEEE Trans Neural Networks 3(5):807–814
CASIA iris image database and CASIA palm image database. http://biometrics.idealtest.org/
Acknowledgements
The research work is supported and sponsored by SAP DRS-II (No.: F.4-7/2018/DRS-II(SAP-II)), UGC New Delhi, India.
© 2019 Springer Nature Singapore Pte Ltd.
Alsubari, A., Lonkhande, P., Ramteke, R.J. (2019). Fuzzy-Based Classification for Fusion of Palmprint and Iris Biometric Traits. In: Bhattacharyya, S., Pal, S., Pan, I., Das, A. (eds) Recent Trends in Signal and Image Processing. Advances in Intelligent Systems and Computing, vol 922. Springer, Singapore. https://doi.org/10.1007/978-981-13-6783-0_11