
1 Introduction

Biometric systems are analytical systems that identify or verify an individual by analysing his or her behavioural or physical characteristics. With the advancement of technology, it is becoming difficult for traditional security methods such as ID cards, badges and passwords to provide a sufficient level of security and protect vital information from impostors. Even unimodal biometric systems sometimes fall short: because they depend on a single biometric trait, they suffer from issues such as noisy or incorrect sensor data, lack of individuality, high error rates, non-universality, spoofing attacks and lack of invariant representation [1, 2]. In almost every field today, from forensics to e-banking, from issuing driving licences to entering an office or a country, security has become the most important concern. As the threat of impostors breaching a system increases, the methods of providing security must also be updated. For this reason, there has been a shift from unimodal to multimodal systems, which use two or more biometric modalities to perform the desired function. Multimodal systems overcome many of the complications from which unimodal systems suffer.

The biometric modalities on which the functioning of a biometric system depends fall into two groups: physical and behavioural. Physical traits are the inherent, highly stable and time-invariant characteristics of an individual, for example palmprint, footprint, iris, hand geometry, retina, fingerprint, height, hand vein, face and ears. Behavioural traits depend on the habits or behaviour of the person, for example voice, signature, keystroke dynamics, walking speed, arm or leg movement and gait [3]. When these traits are combined in a multimodal biometric system, they can be fused at three different levels [2, 4]:

  I. Feature extraction level fusion: The first possible level at which biometric modalities can be fused is the feature extraction level. The raw data collected from the sensors is the richest source of features/information, so fusion at this level gives the best results for verification and identification. However, it is also the most difficult level to fuse at: different sensors produce data in different forms, which may or may not be compatible for fusion, and likewise the features extracted from different modalities can take various forms, so their compatibility must be checked before fusion.

  II. Matching score-level fusion: The next level at which fusion can be performed is the matching score level, where the scores generated by the matching classifiers for the various feature vectors are fused instead of the feature vectors themselves. This is the most widely used approach to date, as it retains a good deal of information and is easy to implement. The matching score for each feature vector is generated independently by its classifier using the corresponding template stored in the database, and these score values are then fused to obtain a new matching score that the decision module uses to accept or reject the individual's claimed identity (a minimal code sketch of this step follows Fig. 1 below).

  III. Decision-level fusion: The last possible level of fusion is the decision level. A decision is taken independently for each modality based on its matching score, and these decisions are then combined, for example by majority voting, to produce the final acceptance or rejection. This level of fusion is the easiest to implement but does not work well under real-time constraints. All the fusion levels discussed here are also illustrated in Fig. 1.

    Fig. 1 Depicting various levels of fusion (FU: fusion module, MM: matching module, DM: decision module) [4]
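
As an illustration of matching score-level fusion, the following is a minimal Python sketch, not taken from the cited works, of min-max normalization followed by sum-rule and product-rule combination of two matcher scores; the function names, weights and acceptance threshold are illustrative assumptions only.

    import numpy as np

    def min_max_normalize(scores):
        """Map raw matcher scores to [0, 1] so scores from different matchers are comparable."""
        scores = np.asarray(scores, dtype=float)
        return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

    def fuse_scores(iris_score, palm_score, rule="sum", weights=(0.5, 0.5)):
        """Combine two normalized matching scores with the sum or product rule."""
        if rule == "sum":
            return weights[0] * iris_score + weights[1] * palm_score
        if rule == "product":
            return iris_score * palm_score
        raise ValueError(f"unknown fusion rule: {rule}")

    # Toy usage with made-up raw distances from two matchers (lower distance = closer match).
    raw_iris = np.array([0.31, 0.47, 0.28, 0.45])
    raw_palm = np.array([0.22, 0.40, 0.19, 0.36])
    # Convert distances to similarities (1 - normalized distance) so larger = better.
    iris_sim = 1.0 - min_max_normalize(raw_iris)
    palm_sim = 1.0 - min_max_normalize(raw_palm)
    fused = fuse_scores(iris_sim, palm_sim, rule="sum")
    accepted = fused >= 0.7          # illustrative threshold, not from the paper

In practice, the decision module compares the fused score against a threshold chosen on a validation set, and distance-based matcher outputs must first be converted to similarities so that larger values consistently indicate a better match.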

In this paper, various techniques used in multimodal biometric systems based on iris and palmprint are reviewed. A comparative analysis is made considering aspects such as the level of fusion, the fusion technique and various accuracy parameters. A novel approach is then presented for extracting the hand region from images with complex backgrounds. The extracted palm images are used along with the iris for verifying individuals.

Biometric systems work in two different modes depending on the needs of the application: identification mode and verification mode [5, 6]. In identification mode, the system compares the given biometric sample against all the templates present in the system's database to determine the unknown identity of the presented trait. This mode of operation is complex and time-consuming, but it is very helpful for negative recognition [7] in crucial areas such as forensics and criminal cases. In verification mode, the system confirms the identity claimed by an individual by comparing the given biometric trait against the template stored in the database for that identity. This mode takes less time, as fewer comparisons are required. Applications such as laptop or phone security, attendance systems, office entry control and e-banking are examples of verification mode. The biometric system presented in this paper will operate, and will be evaluated in future work, in verification mode.

2 Literature Review

The iris is the ring-shaped, flower-like portion of the human eye, bounded by the pupil on the inside and the sclera on the outside. It is among the most accurate biometric modalities and has been in use for over a decade. Many researchers, including J. Daugman, Leonard Flom and Aran Safir, have contributed and continue to contribute new methods for iris recognition. J. Daugman [8] developed the most successful iris recognition algorithm to date, with an accuracy of 99.9% and a very low FAR/FRR of 0.01/0.09, but the algorithm became commercial and hence very expensive, and it was also time-consuming. The iris was first used successfully as a biometric trait in 1987 by Flom and Safir [9]. After that, many techniques were proposed, such as encoding the iris code using 2-D Gabor filters [8], the circular Hough transform [10] and the RED (ridge energy detection) method [11, 12]. Among these, the RED algorithm gained popularity after Daugman's algorithm. A major problem with iris recognition using all the above methods was their failure in unconstrained environments. In 2009, Tan et al. [13] proposed a solution with a clustering-based algorithm for iris localization and an integro-differential constellation for pupil extraction. Several other methods were later presented to solve the same problem, such as 1-D and 2-D wavelet-based techniques [14], an algorithm based on K-means clustering and the circular Hough transform with the Canny edge detector [15], and a Fuzzy c-means clustering-based algorithm [16]. Among these, the Fuzzy c-means-based algorithm performed better, as it uses membership functions to form the clusters.

Palmprint, on the other hand, is relatively new in the biometric field but has many advantages over other biometric traits. It provides more information/features than the fingerprint, and the sensors and hardware required for palmprint are cheaper than those for traits such as iris or retina [17, 18]. The many features hidden in the palmprint are grouped into five categories: texture, line, geometric, point and statistical features. Combinations of these features give a very high accuracy rate in security systems. Among the techniques developed to use these features, Jain et al. [19] in 2001 used prominent principal lines together with feature points in the palm region. 2-D Gabor filters were used for feature extraction in [20]. The Sobel operator [21, 22] and an HMM (Hidden Markov Model) classifier [23] were used for line feature extraction. Techniques including PCA (principal component analysis) and ICA (independent component analysis) [24], DCT (discrete cosine transform) [25], the Fourier transform [26], the scale-invariant feature transform for contactless images [27] and the contourlet transform [28] were used for extracting texture features of the palmprint. The results of these methods showed that texture features give the most accurate results among all five categories.

Iris and palmprint are both very effective and reliable biometric traits, but each has limitations. Combining the two can overcome these limitations and yield a highly accurate and reliable security system. Work on the fusion of these two modalities includes the algorithm developed by Wu et al. [29] in 2007, which achieved 0.012% MTR and 0.006% EER; the authors used score-level fusion based on the sum and product rules. Another method, using feature-level fusion based on the wavelet packet transform, developed by Hariprasath and Prabakar [30] in 2012, gave a 93% accuracy rate. In the same year, R. Gayathri and Ramamoorthy [31] also used feature-level fusion with a wavelet-based technique for extracting texture features, achieving an accuracy rate of 99.2% and an FAR of 1.6%. Later, in 2014, Kihal et al. [32] showed that the quality of the images used strongly affects the results of a biometric system; they demonstrated this on three different datasets, performing all three levels of fusion on texture features of the iris and palmprint. In 2015, S. D. Thepade et al. worked in the transform domain [33], using the Haar, Walsh and Kekre transforms for extracting texture features and then performing score-level fusion, showing that the Kekre transform works better, with a GAR of approximately 51.80. Apurva et al. [34], on the other hand, worked in the spatial domain, using the RED algorithm for the iris and the Harris feature extraction algorithm for the palmprint, focusing on the geometric features of the palm; they used decision-level fusion to produce the final result of their biometric system. Table 1 presents a comparative study of the algorithms discussed above.

Table 1 Comparison between fusion algorithms

3 Proposed Work

A novel approach is presented in this paper for recognizing the human iris in an unconstrained environment. For palmprint segmentation, a new approach is proposed for background extraction using the IFCM (intuitionistic Fuzzy c-means) algorithm. This helps in extracting the hand image from any kind of unconstrained background, making the system more suitable for real-time security applications. Figure 2 shows the flow diagram of the major steps of the proposed method. As a multimodal system, it works on two modalities, iris and palmprint. Both traits are highly distinctive and rich in feature information, but they are quite different kinds of modalities and their feature sets also differ greatly, so fusing them at the feature extraction level would be very difficult and complex. The major steps of the proposed method are as follows:

Fig. 2 Demonstrating the sequence of steps in proposed method

  I. Extract the red channel from the hand image, since the red channel carries most of the important information of the image and can therefore be used on its own for background extraction. This reduces the overhead of the background extraction process and speeds it up. This is one of the pre-processing steps of the method.

  II. The IFCM algorithm is then applied to the extracted red channel of the hand image to divide it into two clusters, one for the background and one for the hand (a hedged code sketch of this segmentation and the subsequent matching appears after this list). This technique is chosen for clustering because it is expected to give better results than the Fuzzy c-means algorithm for our purpose, as it considers both membership and non-membership functions when forming the clusters. The image is divided into two clusters based on intensity values, and each pixel is then assigned to a cluster according to its degree of membership [16]. IFCM performs better than Fuzzy c-means because the use of non-membership functions overcomes some of the difficulties Fuzzy c-means faces in forming the clusters precisely.

  III. The ROI (palmprint) is then extracted from the clustered hand image using morphological operations and enhancement algorithms. The palmprint region contains several types of features: geometric, statistical, line, point and textural. In this method, the main focus is on texture and line features; texture features have been shown to give the most accurate results among the five categories, and line features, such as the principal lines, are also highly distinctive. The extracted features are then matched against the template stored in the dataset, and a matching score is generated using the Hamming distance method. These scores are then stored.

  IV. The iris image is then subjected to various pre-processing steps, namely localization, normalization and enhancement. These steps are essential for correctly extracting the iris region from an eye image; since the recognition results of the system depend on the iris segmentation, it must be performed accurately and carefully. In our system, pre-processing follows the RED (ridge energy detection) algorithm [11]: binary morphological operations locate the centre and radius of the pupil, and then, with the help of this and the local kurtosis statistics, the outer boundary of the iris is found. For this, the image is converted to polar coordinates using 21 candidate centres of reference around the pupil; the outer boundary then becomes horizontally oriented, and the best fit among these candidates determines the radius and centre. After this step, feature extraction can be performed easily.

  V. The RED algorithm then transforms the iris image into polar coordinates again, features are extracted using horizontal and vertical filtering, and the extracted features are compared against the template already stored in the dataset using the Hamming distance method. These matching scores are again stored.

  VI. The two matching scores are then fused using the sum rule and the product rule to produce a final matching score. Based on this score, the final decision is made for the verification process: if the scores match successfully, the person is verified as authentic; otherwise, he is not authentic and is either impersonating someone else or trying to hide his true identity.
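
To make steps I-III and VI more concrete, the following Python sketch, a hedged illustration rather than the paper's exact implementation, shows red-channel extraction, a simplified intuitionistic Fuzzy c-means segmentation of the hand from the background, and fractional Hamming distance matching of binary feature codes. The IFCM variant shown follows a common formulation in which a Yager-type non-membership and a hesitation degree are added to the ordinary FCM membership; the parameter values, file name and brighter-cluster heuristic are assumptions made for illustration.

    import numpy as np
    import cv2  # assumed available for reading the hand image

    def ifcm_segment(channel, n_clusters=2, m=2.0, alpha=0.85,
                     max_iter=100, tol=1e-5, seed=0):
        """Simplified intuitionistic fuzzy c-means clustering of pixel intensities."""
        x = channel.astype(np.float64).ravel()            # 1-D intensity vector
        rng = np.random.default_rng(seed)
        u = rng.random((n_clusters, x.size))
        u /= u.sum(axis=0)                                # initial fuzzy partition matrix

        for _ in range(max_iter):
            # Yager-type non-membership and the resulting hesitation degree
            non_member = (1.0 - u ** alpha) ** (1.0 / alpha)
            hesitation = 1.0 - u - non_member
            u_star = u + hesitation                       # intuitionistic (modified) membership

            um = u_star ** m
            centres = um @ x / um.sum(axis=1)             # membership-weighted cluster centres

            dist = np.abs(x[None, :] - centres[:, None]) + 1e-10
            u_new = 1.0 / dist ** (2.0 / (m - 1.0))
            u_new /= u_new.sum(axis=0)                    # standard FCM membership update

            if np.abs(u_new - u).max() < tol:
                u = u_new
                break
            u = u_new

        labels = u.argmax(axis=0).reshape(channel.shape)
        hand_cluster = int(np.argmax(centres))            # assume the brighter cluster is the hand
        return (labels == hand_cluster).astype(np.uint8)

    def hamming_score(code_a, code_b):
        """Fractional Hamming distance between two binary feature codes (lower = closer match)."""
        code_a = np.asarray(code_a, dtype=bool).ravel()
        code_b = np.asarray(code_b, dtype=bool).ravel()
        return np.count_nonzero(code_a ^ code_b) / code_a.size

    # Example usage ("hand.jpg" is a hypothetical file name):
    bgr = cv2.imread("hand.jpg")
    red = bgr[:, :, 2]                                    # OpenCV stores channels as B, G, R
    hand_mask = ifcm_segment(red)                         # 1 where the hand is, 0 for the background

The resulting binary mask would then be refined with morphological operations before cropping the palmprint ROI, and the palm and iris Hamming distances would be converted to similarities, normalized and fused with the sum or product rule as described in step VI.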

4 Software and Datasets

We will use Windows 10 with MATLAB R2013a on an Intel Core i5 processor. Two different palmprint datasets will be used to test the background extraction technique: the COEP palmprint database and the Touchless Palmprint Database version 1.0 provided publicly by IIT Delhi. Using both ensures that the technique can handle various kinds of unconstrained backgrounds. The COEP palmprint database is a publicly available dataset maintained by the College of Engineering, Pune, consisting of palmprint samples of 167 different people with 8 instances per person. The IIT Delhi touchless palmprint database contains left- and right-hand images from more than 230 subjects, with 5 image instances per hand. For palmprint feature extraction and matching, the IIT Delhi palmprint database will be used. For iris feature extraction and matching, the IITD Iris Image Database version 1.0 will be used; it contains 2240 images acquired from 224 different users, with 10 instances per user.

5 Conclusion

Biometric systems are gaining importance for providing sufficient security to vital information, and multimodal biometric systems are the current trend in security. Iris and palmprint, the two traits considered in this paper, are efficient, distinctive and reliable biometric modalities. This paper provides a short review of both modalities and their fusion, and presents a novel approach for using palmprint and iris together to verify the identity of an individual. In this approach, IFCM is used for extracting the hand image from an unconstrained background, morphological operations for extracting the ROI (palmprint) and the RED algorithm for iris feature extraction. Score-level fusion is used to combine the two modalities, and the final decision is then taken. As the IFCM technique has not previously been used for this purpose and is an improvement over the Fuzzy c-means algorithm, it is expected to give better results than previous work in this area. The technique will be evaluated on three databases: the palmprint database provided by COEP, the Touchless Palmprint Database version 1.0 provided by IIT Delhi and the Iris Image Database version 1.0 provided by IITD.