Abstract
Biometric-based authentication is one of the main strategies for protecting and controlling user access to important resources in any system or organization. The iris pattern is one of the best and most reliable biological features used in such systems. Extracting highly discriminative local features can increase the recognition accuracy of iris-based biometric systems, especially when the number of users is high. Most existing methods use combinations of simple handcrafted local feature models whose performance deteriorates as the number of users increases. In this paper, after identification and segmentation of the iris region, a new learning-based method is proposed to define and extract rotation- and illumination-invariant main local patterns of the iris texture. A metric-learning-based transform is then employed to improve the discrimination of these patterns in the recognition process. The proposed method was applied to more than 10,000 images from the CASIA-V4, UBIRIS and ICE data sets. Its identification accuracies are 99.7%, 98.13% and 99.26%, respectively, which is better than other methods in terms of both recognition accuracy and the number of images used.
1 Introduction
In modern societies, automatic authentication has become very important, especially in security applications. Research and development in recent decades have opened a new chapter in verification and authentication: automatic methods based on the individual characteristics of human beings, called biometrics. Modern biometric systems identify people with high accuracy and reliability at low cost.
The iris pattern is one of the best and most reliable biological features used in this field [1]. Extraction of high-discriminative features associated with iris texture is one of the main challenges that affects the accuracy of iris-based biometric systems, especially when the number of users is high.
In the present paper, after localization of the iris, two appropriate and reliable regions on the left and right side of each iris image are selected. Then, using learning-based high-discriminative local patterns descriptor, the iris texture features of both regions are extracted in the form of two separated histograms. General description of the iris can be achieved by concatenating these histograms. By applying the metric-learning-based transform on the extracted feature vectors, their discrimination is enhanced, so that the matching process can be done with higher accuracy.
This article is organized as follows: different iris recognition methods presented in recent years are discussed in Sect. 2. The proposed method is described in Sect. 3. The obtained results are given in Sect. 4, and the conclusions are presented in Sect. 5.
2 Related works
The widespread use of digital systems connected to the Internet increases the sensitivity of data storage and transfer, making user authentication increasingly necessary to prevent unauthorized access to data. Biometrics is the science of identifying individuals based on their biological and behavioural traits and has the potential to provide efficient authentication systems [2]. Local patterns of iris texture have been used as one of the most appropriate biometric features in recent years. Iris-based authentication systems include several stages, such as pre-processing, segmentation, normalization, iris feature extraction and pattern matching.
Segmentation of iris: the iris is the annular region between the pupil (inner boundary) and the sclera (outer boundary), and its inner and outer edges are nearly circular. Localizing the iris amounts to isolating the actual iris region in an image, i.e. finding its inner and outer edges. Different techniques have been proposed for localization and segmentation of the iris [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]. Farihan et al. [3] conducted the segmentation process using histograms and filtering. Employing different filters along with the Hough transform is another common technique for iris segmentation [4,5,6,7,8]. Detecting the outer edges of the iris and modelling its visible part with an elliptical curve has been used for iris segmentation in several studies [9,10,11]. Roy et al. [12] localized the iris region using an active contour method. Hollingsworth et al. [13] used a combination of the Canny operator and circular summation to localize the iris region. Li et al. [14] used a combination of K-means clustering and an improved Hough transform. Hu et al. [15] employed a model selection method using an SVM classifier and histogram of oriented gradients (HOG) features. Tan and Kumar [16] proposed a Zernike-moment-based approach for iris segmentation. Jan et al. [17] suggested a combined approach for iris localization: the inner contour was detected by integrating a sliding window with a multi-valued adaptive threshold technique, and the outer contour was localized using an edge-detecting operator in a sub-image centred at the pupil. Ibrahim et al. [18] presented a two-stage statistical method for locating the iris: in the first stage, a circular moving window was used to localize the pupil, and the iris boundary was then estimated from the gradient of the rows within the pupil. Koh et al. [19] used a combination of active contours and the Hough transform to determine the location of the iris. Jan et al. [20] extracted the inner and outer edges of the iris using a combination of the Hough transform, grey-level statistics, adaptive thresholding and geometric transformations.
Normalization of the iris image: once the iris region has been successfully isolated, it should be transferred to coordinates with fixed dimensions so that different images can be easily compared. Factors such as variations in lighting, camera distance, camera orientation and head orientation may change the apparent size of the iris. Most studies use the Daugman technique for size normalization [21,22,23]. In the model proposed by Daugman, the iris images are first resized so that they all have the same diameter, and every point in the iris region is then transformed to polar coordinates. Shin et al. [24] and Li et al. [14] used the retinex algorithm for normalization; they applied a Cartesian–polar transformation, polynomial embedded transformation and Cartesian reconstruction to parameterize the images.
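As an illustration, the polar remapping at the heart of Daugman's model can be sketched in a few lines. This is a minimal sketch, not the paper's implementation: the grid resolution, nearest-neighbour sampling and the perfectly circular (rather than fitted) boundaries are illustrative assumptions.

```python
import numpy as np

def rubber_sheet(img, cx, cy, r_pupil, r_iris, n_r=32, n_theta=128):
    """Daugman-style normalisation: sample the iris annulus on a fixed
    polar grid so every iris maps to the same n_r x n_theta rectangle."""
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    out = np.zeros((n_r, n_theta))
    for i, t in enumerate(np.linspace(0, 1, n_r)):
        radius = r_pupil + t * (r_iris - r_pupil)  # interpolate pupil -> limbus
        ys = np.clip((cy + radius * np.sin(thetas)).round().astype(int),
                     0, img.shape[0] - 1)
        xs = np.clip((cx + radius * np.cos(thetas)).round().astype(int),
                     0, img.shape[1] - 1)
        out[i] = img[ys, xs]           # nearest-neighbour sampling
    return out

# toy image whose intensity equals the distance from the centre (50, 50)
yy, xx = np.mgrid[0:100, 0:100]
img = np.hypot(yy - 50, xx - 50)
norm = rubber_sheet(img, cx=50, cy=50, r_pupil=10, r_iris=40)
```

In this toy image each unwrapped row is approximately constant, confirming that concentric rings around the pupil map to rows of the normalized rectangle.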
Feature extraction: after determining and normalizing the iris region, a feature vector should be extracted to describe the biometric features of the individual's iris. These vectors must be compact so that a high volume of data and computation is not imposed on the system; on the other hand, they need to be highly discriminative so that the authentication process can be performed correctly. Daugman introduced the first iris-based biometric system, based on Gabor wavelets, in 1994 [25]. Several newer feature extraction methods for iris biometrics have been presented in recent years, such as wavelet packet-based analysis [5], Zernike moments, histogram of oriented gradients (HOG) [14], fractal dimension [26], a combination of wavelet transform and genetic algorithms [27] and Gabor and wavelet filter banks [28,29,30,31,32,33,34,35]. Neighbourhood-based binary patterns (NBP) are another technique used for feature extraction in this field [36]. The co-occurrence matrix is another way of describing iris texture [37]. The scale-invariant feature transform (SIFT) is one of the most prominent feature descriptors and has been employed in many areas, including iris biometrics [38, 39]. Chen et al. [40] proposed a new iris recognition algorithm based on multiple greyscale adaptation. Rahulkar et al. [41] proposed a combined directional wavelet filter bank (CDWFB) to extract iris features. An SVM-based method for extracting iris features was proposed by Ali et al. [42]. Sun and Tan [43] introduced an ordinal measure-based method using multi-lobe differential filters (MLDF). Ma et al. [44] introduced a feature extraction method using a bank of spatial filters and combined bootstrap and Fisher linear discriminant learning to improve detection rates. In [45], features were extracted using the Haar wavelet and a two-dimensional Gabor filter.
In [46], a robust SURF feature descriptor was presented. In [47], a combination of Gabor features with SIFT key points was used for iris recognition. Tan et al. [48] used geometric keys to describe iris changes such as scale, rotation and translation, as well as environmental noise, from the original images. In addition, some methods employ feature-learning-based techniques such as bag-of-words [49], locality-constrained linear coding [50] and sparse representation coding [51]. These methods define the texture patterns of the iris region using a dictionary of learned texture primitives, which is learned by analysing the structural or statistical information of the arrangement of pixel intensities in the iris image.
Pattern matching is the final stage of iris recognition systems, in which the similarity of the obtained feature vectors is measured using various distance measures and classifiers such as SVM, ANN and KNN.
3 The proposed method
In this paper, a high-precision method is proposed for identifying individuals using their iris features. First, the iris region is detected in the input image by applying pre-processing and segmentation techniques. The iris region is then normalized into a fixed-size image of 256 × 256 pixels. Next, two highly reliable regions are determined within the iris region for extracting the biometric features. Afterwards, a clustering-based method is applied to these two regions to define and extract local patterns that fit the iris texture. To increase the recognition accuracy of the proposed method, the discrimination of the extracted local features is increased using a metric-learning-based technique, yielding a set of high-discriminative local features (HDLF). The HDLF of the input image is compared with the HDLFs of the reference images through nearest neighbour classification, and the best match is returned as the recognition system output. The block diagram of the proposed method is shown in Fig. 1. The details of the proposed method are given in the following subsections.
3.1 Pre-processing and image segmentation
Since the iris is located between the sclera and the pupil, and since the unique data are associated only with the iris texture, the inner and outer borders of the iris should be specified first. In this study, segmentation is carried out using a combination of the Canny edge detection algorithm and the Hough transform. To increase segmentation accuracy, the images are first smoothed using the fourth-order partial differential equations proposed by Yu-Li et al. [52]. After smoothing, the edges of the iris are extracted using the Canny operator; the pupil circle is then detected as the inner edge, and the circle between the iris and sclera is taken as the outer edge of the iris. The inner circle radius is considered slightly larger, and the outer circle radius slightly smaller, than their detected values, so that the selected region fits completely within the real iris region. This is because, in most images, the pupil is not perfectly circular and usually has a somewhat oval shape. This process therefore prevents pupil pixels from entering the final iris region and increases the reliability of the extracted features.
To address the challenges arising from varying distances between individuals and the camera, and hence different sizes of the iris region, the square surrounding the extracted iris is resized to 256 × 256 pixels, and the next steps are performed on this image.
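The circle-finding step can be illustrated with a simplified Hough vote. This is a sketch under stated assumptions: a plain gradient-magnitude threshold stands in for the Canny detector, the PDE smoothing stage is omitted, and a synthetic dark disc stands in for a real pupil.

```python
import numpy as np

def detect_circle(img, radii, edge_thresh=0.3):
    """Locate the strongest circular edge via a simple Hough vote.

    Gradient-magnitude thresholding plays the role of the edge
    detector here; each edge pixel votes for all candidate centres
    at each candidate radius."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    edges = mag > edge_thresh * mag.max()
    h, w = img.shape
    acc = np.zeros((len(radii), h, w))
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    for ri, r in enumerate(radii):
        cy = (ys[:, None] - r * np.sin(thetas)).round().astype(int)
        cx = (xs[:, None] - r * np.cos(thetas)).round().astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc[ri], (cy[ok], cx[ok]), 1)   # accumulate votes
    ri, y0, x0 = np.unravel_index(acc.argmax(), acc.shape)
    return y0, x0, radii[ri]

# synthetic "pupil": dark disc of radius 20 centred at (64, 64)
yy, xx = np.mgrid[0:128, 0:128]
img = np.where((yy - 64) ** 2 + (xx - 64) ** 2 <= 20 ** 2, 0.0, 1.0)
y0, x0, r = detect_circle(img, radii=range(15, 26))
```

The accumulator peak recovers both the centre and the radius of the disc to within a couple of pixels, which is the essence of the Canny-plus-Hough stage described above.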
Another source of error in iris-based biometric identification systems is occlusion of the iris region by the eyelids and eyelashes. Therefore, to increase the performance of iris-based biometric systems, a more reliable part of the iris should be selected for feature extraction. Since iris-based identification can be accomplished with just 37.5% of the iris features [24], only two regions on the left and right sides of the iris, which have less overlap with the eyelids, are used for biometric feature extraction. These regions were also employed in most state-of-the-art methods [24, 30, 50, 51]. This restriction not only speeds up processing, but also increases the reliability of the extracted features. To this end, two radial lines are drawn from the centre of the pupil at +35° and −55° relative to the horizontal axis, and the iris region between them is taken as the right-side territory for local feature extraction. Similarly, to determine the left-side territory, two radial lines are drawn from the centre of the pupil at +145° and +235° relative to the horizontal axis. In most images, moving from the centre of the iris towards the upper eyelid, the upper eyelid is not reached until an angle of about 35°; hence, this angle is selected for determining the reliable iris region. In addition, the angle of −55° is used for the lower half of the iris because the lower eyelid moves less than the upper eyelid. The same reasoning holds for the left region, with the corresponding angles being +145° and +235°. Figure 2 shows the output of this process.
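The two angular territories can be selected with a simple annular-sector mask. The pupil centre and annulus radii below are illustrative values, not outputs of the segmentation stage.

```python
import numpy as np

def sector_mask(shape, center, r_in, r_out, ang_lo, ang_hi):
    """Boolean mask of an annular sector. Angles are in degrees,
    measured counter-clockwise from the horizontal axis."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    dy, dx = yy - center[0], xx - center[1]
    r = np.hypot(dy, dx)
    ang = np.degrees(np.arctan2(-dy, dx))   # image rows grow downwards
    width = (ang_hi - ang_lo) % 360         # sector width, wrap-safe
    rel = (ang - ang_lo) % 360              # angle relative to ang_lo
    return (r >= r_in) & (r <= r_out) & (rel <= width)

# illustrative geometry: pupil centre (128, 128), iris annulus 40..110 px
right = sector_mask((256, 256), (128, 128), 40, 110, -55, 35)
left = sector_mask((256, 256), (128, 128), 40, 110, 145, 235)
```

The right mask covers the −55° to +35° sector and the left mask the +145° to +235° sector, so the upper and lower eyelid areas fall outside both masks.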
3.2 Learning-based rotation- and illumination-invariant local patterns extraction
Since iris biometric features lie in the fine patterns of the iris texture, a local pattern extraction technique should be applied to each pixel of the iris region. Local pattern extraction is a mapping from the pixel space to a pattern space that provides a better representation of the iris texture. Local patterns of the iris can be described by statistical or structural techniques such as SURF [46], SIFT [38], HOG [14], Gabor [47] and LBP [36]. However, these are handcrafted features that are not tailored to iris patterns. Therefore, we try to define and extract the main patterns of the iris biometric. To this end, a clustering-based local feature extraction technique is employed to describe rotation- and illumination-invariant main local patterns of the iris texture.
Hence, the neighbouring pixels shown in Fig. 3 are used to define the local pattern of each pixel in the form of a 24-element vector. For each pixel (central pixel PC in Fig. 3), the 24 neighbours at distances of one and two pixels construct this vector (G = {g1, g2, …, g24}). This type of pattern description is sensitive to image/camera rotation and illumination variation: when the image rotates, the locations of the grey values (gi) in the extracted pattern change and different patterns are obtained; likewise, when the lighting conditions change, the values of all neighbouring points (gi) change and a different pattern is produced for each pixel.
To address the first limitation, we rotate each block around the central pixel (PC) so that position P1 holds the minimum value among g1 to g8. In other words, we find the minimum value among the eight neighbours in the first layer (neighbours at a distance of one pixel, g1 to g8), and then apply a circular rotation to g1 to g8 so that the minimum value is placed at location P1. If the number of rotations for the first layer is K, we also rotate the second-layer points (g9 to g24) by 2K positions. If more than one pixel has the minimum value, the nearest one is selected. This process reduces the effect of head movement and, consequently, of the rotation of the iris region.
To address the second limitation and reduce the effect of different lighting conditions on the extracted rotation-invariant local patterns (P = {P1, P2, …, P24}), the value of the central pixel is subtracted from all neighbouring pixels. When the illumination changes uniformly, the values of all pixels change by the same amount, so the differences between them remain unchanged. This subtraction therefore eliminates uniform illumination changes affecting all pixels of the input image. Figure 3 shows the arrangement of neighbouring pixels and the steps performed to extract the final illumination- and rotation-invariant feature vector (L = {L1, L2, …, L24}) for each pixel.
To demonstrate the rotation invariance of the proposed method, two examples are shown in Fig. 4. In Fig. 4a, the location of the minimum value in the original block is g2, so one rotation (K = 1) of g1 to g8 is needed to bring this pixel to location P1, and g9 to g24 are rotated twice. If the original block is rotated by +45°, the block in Fig. 4b is obtained. In the rotated block, the location of the minimum value is g3, so two rotations (K = 2) of g1 to g8 are needed to bring this pixel to location P1, and g9 to g24 are rotated four times to compensate for the applied rotation. As Fig. 4 shows, the final vector (L) is the same for the original and rotated blocks.
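The pattern construction and the two invariance steps can be sketched as follows. The exact ring ordering of Fig. 3 is not reproduced in the text, so the clockwise perimeter walks below are an assumption, and ties at the minimum are broken by first occurrence rather than the paper's nearest-pixel rule.

```python
import numpy as np

# perimeter walks around the centre pixel (assumed ordering)
RING1 = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
         (1, 1), (1, 0), (1, -1), (0, -1)]
RING2 = [(-2, -2), (-2, -1), (-2, 0), (-2, 1), (-2, 2), (-1, 2),
         (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (2, -1),
         (2, -2), (1, -2), (0, -2), (-1, -2)]

def local_pattern(img, y, x):
    """Rotation- and illumination-invariant 24-vector L for pixel (y, x)."""
    g1 = np.array([img[y + dy, x + dx] for dy, dx in RING1], dtype=float)
    g2 = np.array([img[y + dy, x + dx] for dy, dx in RING2], dtype=float)
    k = int(np.argmin(g1))       # rotations needed to bring the minimum to P1
    p1 = np.roll(g1, -k)         # rotate the first ring by K positions
    p2 = np.roll(g2, -2 * k)     # second ring: 2K positions (twice the density)
    return np.concatenate([p1, p2]) - img[y, x]   # illumination invariance

rng = np.random.default_rng(0)
img = rng.random((9, 9))
pattern = local_pattern(img, 4, 4)
```

Rotating the image by 90° (a cyclic shift of 2 positions on the first ring and 4 on the second) or adding a constant brightness offset leaves the extracted vector unchanged, mirroring the argument made for Fig. 4.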
Another noteworthy point about this type of pattern description is that the total number of definable patterns is huge (equal to \( 256^{24} \)), making it impossible to extract the histogram of these patterns as a descriptor of the iris texture region. To solve this problem, we use the K-means unsupervised learning technique to cluster all existing patterns into a limited number of classes. If the number of classes is too high, the intra-class distance increases; if it is too small, the inter-class distance decreases and their discrimination declines. For this reason, the number of classes was experimentally set to 256. In order to extract local features that fit the iris texture, a large number of iris images (600 in this case) are employed to extract local patterns. These patterns are clustered into 256 groups, and the centres of these clusters are saved as the main patterns of the iris texture and used to describe the biometric features of all images.
Then, for each pixel, the extracted local feature vector is compared with all main patterns (the obtained cluster centres) and the nearest cluster’s centre is selected as the pattern of corresponding pixel.
After determining the main patterns of all pixels in the left- and right-side regions, a separate histogram of these patterns is constructed for each region. By concatenating these two histograms, a 512-bin feature vector (F) is extracted as a descriptor of the entire iris texture. This feature vector is extracted for each input image and for all reference images (images of authorized users).
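The dictionary learning and histogram construction described above can be sketched as follows. A tiny 8-atom dictionary and random vectors stand in for the paper's 256 clusters and real iris patterns, and the plain k-means below is a minimal stand-in for whatever K-means implementation was actually used.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means; stands in for the paper's 256-cluster dictionary."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centres[None]) ** 2).sum(-1)
        lab = d.argmin(1)                    # nearest-centre assignment
        for j in range(k):
            if np.any(lab == j):
                centres[j] = X[lab == j].mean(0)
    return centres

def pattern_histogram(patterns, centres):
    """Assign every local pattern to its nearest dictionary atom and
    return the normalised histogram of atom indices."""
    d = ((patterns[:, None, :] - centres[None]) ** 2).sum(-1)
    lab = d.argmin(1)
    h = np.bincount(lab, minlength=len(centres)).astype(float)
    return h / h.sum()

# toy setup: 24-D patterns, 8 clusters instead of the paper's 256
rng = np.random.default_rng(1)
train = rng.normal(size=(500, 24))           # stand-in training patterns
centres = kmeans(train, 8)
left_hist = pattern_histogram(rng.normal(size=(300, 24)), centres)
right_hist = pattern_histogram(rng.normal(size=(300, 24)), centres)
F = np.concatenate([left_hist, right_hist])  # 16 bins here; 512 in the paper
```

With 256 clusters per region, the concatenated descriptor has exactly the 512 bins stated above.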
3.3 Increasing discrimination of local features by metric learning
Although the extracted feature vector (Fi) can efficiently describe the iris texture of image i, the discrimination of these features should be increased to improve the recognition accuracy of the biometric system. Learning-based methods, such as metric learning, can be employed to learn an efficient transform (a transformation matrix W) that is applied to the extracted features to obtain high-discriminative local features (HDLF).
The purpose of metric learning is to obtain a transformation matrix (W) that reduces the distance between samples within each class and maximizes the distance between samples of different classes. In fact, metric learning becomes an optimization problem of finding the matrix W that maximizes the following expression:

\( J(W) = J_{1}(W) - J_{2}(W) \)  (2)

where J1(W) is the mean square difference between the samples of different classes (inter-class distance) and J2(W) is the mean square difference between samples of each class (intra-class distance). These distances can be calculated as:

\( J_{1}(W) = \operatorname{tr}\left( W^{T} H_{1} W \right) \)  (3)

\( J_{2}(W) = \operatorname{tr}\left( W^{T} H_{2} W \right) \)  (4)

where M is the total number of classes, K is the number of samples in class i (Ci), and N is the total number of samples over all classes (N = K × M). H1 and H2 are the inter-class and intra-class mean square error matrices of the extracted local features over all reference images, respectively.

Thus, by imposing the condition \( W^{T} W = I \), the metric learning problem is transformed into the following form:

\( W^{*} = \arg\max_{W} \operatorname{tr}\left( W^{T} (H_{1} - H_{2}) W \right), \quad \text{s.t.} \; W^{T} W = I \)  (5)

The constraint \( W^{T} W = I \) limits the scale of W so that the above optimization problem is well posed. Thus, W can be obtained by solving the following eigenvalue problem [53]:

\( (H_{1} - H_{2})\, \omega = \lambda\, \omega \)  (6)

If ω1 to ωk are the eigenvectors corresponding to the k largest eigenvalues λ1 ≥ λ2 ≥ ··· ≥ λk in Eq. (6), then the transformation matrix W is obtained as one step of the optimization problem stated in Eq. (2). To the best of our knowledge, there is no closed-form solution for this optimization problem; alternatively, it can be solved iteratively. Hence, in iteration t we obtain an intermediate transformation matrix \( W^{t} = [\omega_{1}, \omega_{2}, \ldots, \omega_{k}] \). This intermediate matrix is used to transfer the features to a new space; the optimization problem is then re-solved using these new features and the transformation matrix is updated. This process is repeated until the difference between transformation matrices in two consecutive iterations falls below a predefined threshold (experimentally set to 0.0001). The final transformation matrix used for extracting high-discriminative local features (W = Wt) is then obtained. The proposed transform learning technique is summarized as Algorithm 1.
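Since Algorithm 1 is not reproduced in this text, the following is a sketch of one reading of the iterative scheme: the pairwise mean-square differences are computed through the equivalent inter-/intra-class scatter matrices (H1, H2), and each iteration takes the k leading eigenvectors of H1 − H2 under the orthonormality constraint. Function names and the toy data are illustrative.

```python
import numpy as np

def scatter_matrices(X, y):
    """Inter-class (H1) and intra-class (H2) scatter of features X,
    equivalent (up to scale) to the pairwise mean-square differences."""
    mu = X.mean(0)
    d = X.shape[1]
    H1, H2 = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(0)
        H2 += (Xc - mc).T @ (Xc - mc)                 # within-class spread
        H1 += len(Xc) * np.outer(mc - mu, mc - mu)    # between-class spread
    return H1 / len(X), H2 / len(X)

def learn_metric(X, y, k, tol=1e-4, max_iter=50):
    """Maximise tr(W^T (H1 - H2) W) s.t. W^T W = I, re-solving in the
    transformed space until W stabilises (or max_iter is reached)."""
    Z, W_total, prev = X, np.eye(X.shape[1]), None
    for _ in range(max_iter):
        H1, H2 = scatter_matrices(Z, y)
        vals, vecs = np.linalg.eigh(H1 - H2)   # ascending eigenvalues
        W = vecs[:, ::-1][:, :k]               # k largest, as in Eq. (6)
        Z = Z @ W                              # move features to new space
        W_total = W_total @ W                  # compose the transforms
        if prev is not None and np.abs(W_total - prev).max() < tol:
            break
        prev = W_total
    return W_total

# toy data: 3 well-separated classes in 10 dimensions
rng = np.random.default_rng(3)
means = np.zeros((3, 10))
means[0, 0] = means[1, 1] = means[2, 2] = 5.0
X = np.vstack([rng.normal(m, 1.0, size=(20, 10)) for m in means])
y = np.repeat(np.arange(3), 20)
W = learn_metric(X, y, k=2)
```

The composed matrix keeps orthonormal columns (a product of column-orthonormal factors), and on the toy blobs it projects onto the subspace where the class means remain well separated.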
Although in some existing metric-learning-based methods, like locality preserving projection (LPP) approaches [54, 55], only the similarity (or distance) between K nearest samples was used, in the proposed method we employ all samples of each class to increase the discrimination power of the obtained transformation matrix (W).
The final transformation matrix (W) is also applied on the features of reference images for each person and on each new input image to extract high-discriminative local features.
3.4 Authentication by nearest neighbour algorithm
After extracting the high-discriminative local features of the reference and input images, nearest neighbour (KNN) classification is employed to determine the identity of the input image. To this end, the distance between the input image and all reference images is calculated, and the best match is selected as the identity of the input image.
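As an illustration, 1-NN matching with the Spearman distance (1 minus Spearman's rank correlation, the best-performing measure in Sect. 4) can be sketched as follows. The rank computation below assumes no tied values, and the gallery is synthetic.

```python
import numpy as np

def spearman_distance(a, b):
    """1 minus Spearman's rank correlation (assumes no tied values)."""
    ra = np.argsort(np.argsort(a)).astype(float)   # ranks of a
    rb = np.argsort(np.argsort(b)).astype(float)   # ranks of b
    ra -= ra.mean()
    rb -= rb.mean()
    return 1.0 - (ra @ rb) / (np.linalg.norm(ra) * np.linalg.norm(rb))

def identify(query, gallery, labels):
    """Nearest neighbour identification over the reference gallery."""
    d = [spearman_distance(query, g) for g in gallery]
    return labels[int(np.argmin(d))]

rng = np.random.default_rng(2)
gallery = rng.random((5, 16))                       # five reference descriptors
labels = ["u1", "u2", "u3", "u4", "u5"]
query = gallery[3] + 0.001 * rng.normal(size=16)    # slightly perturbed "u4"
pred = identify(query, gallery, labels)
```

Because the Spearman distance depends only on the ordering of histogram bins, small perturbations of the descriptor leave the match essentially unchanged.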
4 Experimental results
To evaluate the proposed authentication method, we applied it on three popular data sets: CASIA-V4 Thousand [56], UBIRIS [57] and ICE [58] databases.
First, to evaluate the proposed authentication method based on high-discrimination local features of the iris, 6000 images of 300 individuals from the CASIA-V4 Thousand data set [56] were used. Of these, 600 images from 30 persons were randomly selected for extracting and defining the main iris patterns. The proposed method was tested on the remaining images of 270 persons using fivefold evaluation (20% of each individual's images were taken as test images and the remaining 80% were used as reference images); the metric learning process was repeated separately for each fold. The images of this data set contain artefacts such as illumination changes, blur and severe occlusion by the upper eyelids and eyelashes.
For further evaluation, we tested the proposed method on the UBIRIS.v1 and ICE data sets. The UBIRIS.v1 database [57] is composed of 1877 images collected from 241 persons. ICE [58] contains 2953 images from 244 classes, consisting of left and right iris images (1528 left iris images from 120 classes and 1425 right iris images from 124 classes). The main characteristic of these databases is that they incorporate several noise factors to measure the performance of iris recognition systems.
Different parameters of the proposed method, such as the distance measure used in the KNN, the effect of using two separate histograms for the right and left iris regions and the effect of the metric learning method, were also investigated on the CASIA-V4 Thousand [56] data set. The results of these tests are shown in Figs. 5, 6 and 7. Figure 5 shows the results of applying different distance measures in the KNN classifier. In these tests, the concatenation of the two histograms, the metric learning algorithm and the nearest neighbour method were used for classification. According to the obtained results, the Spearman distance measure yields the highest recognition accuracy. Although the Cityblock, Jaccard, Chi-square and standard Euclidean measures also show relatively acceptable accuracy, each of them is at least 6% less accurate than the Spearman measure.
Figure 6 shows the results of using the combination versus the concatenation of the two histograms for the superior distance measures (Spearman, Cityblock, Jaccard, Chi-square and standard Euclidean) in the KNN classifier. In these tests, the metric learning algorithm and the nearest neighbour method were used for classification. According to the obtained results, the recognition accuracy obtained from concatenating the two histograms is much higher than that obtained from combining them, for all distance measures. The improvement in recognition accuracy was more than 23%, 6%, 14%, 8% and 6% for the Cityblock, Chi-square, Jaccard, Spearman and Euclidean distance measures, respectively.
Figure 7 shows the results of using the metric learning algorithm. In these tests, its effect was evaluated in two cases (concatenation and combination of the two histograms), with the Spearman distance measure and the nearest neighbour method used for classification. The results show that the metric learning algorithm improves the recognition accuracy by more than 10%.
The proposed method can be used for both verification and identification. In verification, we authenticate the input feature vector by calculating the similarity score (or distance) between the input sample and the feature vectors of the reference subject whose identity is claimed. If the obtained similarity is greater than a predefined threshold (set to the minimum similarity between all reference samples of each subject), the claimed identity is verified; otherwise, it is rejected. Performance in verification mode was investigated by constructing receiver operating characteristic (ROC) curves [59]. ROC curves of the proposed method in two cases (concatenation and combination of histograms) for the CASIA iris database are shown in Fig. 8a, plotting the true positive rate (TPR) against the false positive rate (FPR). The obtained results show that the performance of the proposed method is quite good in both cases, especially when the two histograms are concatenated.
In identification (recognition), the input feature vector is compared with all N reference samples and their similarity scores are obtained. These N scores are then sorted in ascending order and a rank is assigned to each sorted score. The subject with rank 1 is declared the identity of the input sample. For performance evaluation of the identification mode, the cumulative match curve (CMC) [59] was employed for different ranks. The correct recognition rate for different ranks is plotted in Fig. 8b, with results shown for two cases (concatenation and combination of histograms). The results show that the identification performance of the proposed method reaches 100% at rank 5 or higher with concatenated histograms; with combined histograms, this is reached at rank 8.
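The rank-based evaluation can be made concrete with a small CMC helper; the distance matrix below is a toy example, not data from the paper.

```python
import numpy as np

def cmc(distances, true_idx, max_rank):
    """Cumulative match characteristic: fraction of probes whose true
    subject appears within the top-r candidates, for r = 1..max_rank.
    distances[i, j] is the distance from probe i to gallery subject j."""
    order = np.argsort(distances, axis=1)            # ascending distance
    ranks = np.array([int(np.where(order[i] == t)[0][0]) + 1
                      for i, t in enumerate(true_idx)])
    return np.array([(ranks <= r).mean() for r in range(1, max_rank + 1)])

# toy 3-probe, 3-subject distance matrix
D = np.array([[0.1, 0.5, 0.9],
              [0.5, 0.2, 0.4],
              [0.9, 0.8, 0.1]])
curve = cmc(D, true_idx=[0, 2, 2], max_rank=3)
```

Here the second probe's true subject is only the second-closest match, so the rank-1 rate is 2/3 while the rank-2 rate already reaches 1.0 — the same saturation behaviour reported for Fig. 8b.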
Finally, the proposed method was compared with existing state-of-the-art methods on the CASIA data set; the results are shown in Table 1. According to Table 1, the proposed method is more efficient than the other existing methods. Although the accuracy of some methods equals that of the proposed method, the number of images used by the proposed method is at least twice that of the other methods. In addition, the equal error rate (EER) of the proposed method is low and comparable with other state-of-the-art methods. This reflects the importance of employing local features and metric learning algorithms to increase the discrimination of the features and enhance the recognition accuracy of biometric systems.
In addition, the proposed method was compared with existing methods on the UBIRIS and ICE data sets; the results are shown in Tables 2 and 3. According to the obtained results, the proposed method is more efficient than the other existing methods: it performs better than all of them on the CASIA and ICE data sets, and on the UBIRIS data set its result is comparable to that of Umer et al. [51] and better than the others.
Finally, the average execution time of the proposed method, along with some state-of-the-art methods, is given in Table 4. The proposed method was implemented in MATLAB (2013 release) on a PC with a 3.2 GHz Core i5 CPU and 4.0 GB RAM. It needs about 978 ms of CPU time to process one image of 640 × 480 pixels, which is similar to other existing methods. In real applications, the computation time could be significantly reduced by implementing the algorithm in C/C++. In this way, our method can be seen as an interesting option for handling large image sets.
5 Conclusion
In this paper, an efficient authentication system based on iris biometric features was introduced, especially for cases where the number of users is large. In this system, two highly reliable regions within the iris are first detected, and a combination of a new rotation- and illumination-invariant local pattern descriptor and a metric learning algorithm is used to extract high-discriminative local features. This technique is used to extract a separate histogram of the patterns present in each iris region, and the extracted histograms are concatenated to provide an efficient description of the iris biometric patterns of each image. Finally, a KNN classifier is used for pattern matching and recognition of individuals. The proposed method was tested and evaluated on the CASIA, UBIRIS and ICE data sets, achieving accuracies of 99.7%, 98.13% and 99.26%, respectively. These results are better than those of other techniques, both in terms of the number of images used and in terms of recognition accuracy.
In future work, we plan to study the effect of limiting the number of samples in each class to the K nearest samples, as employed in locality preserving projection (LPP) approaches, and to evaluate the method on very large data sets.
References
Blackburn T, Butavicius M, Graves I, Hemming D, Ivancevic V, Johnson R, Kaine A, McLindin B, Meaney K, Smith B, Sunde J (2003) Biometrics technology review 2002. Defence Science and Technology, DSTO-GD-0359, pp 1–59
Reddy Jillela R, Ross A (2015) Segmenting iris images in the visible spectrum with applications in mobile biometrics. Pattern Recogn Lett 57:4–16
Farihan A, Raffei M, Asmuni H, Hassan R, Othman R (2015) A low lighting or contrast ratio visible iris recognition using iso-contrast limited adaptive histogram equalization. Knowl Based Syst 74:40–48
Bhateja AK, Sharma S, Chaudhury S, Agrawal N (2016) Iris recognition based on sparse representation and k-nearest sub space with genetic algorithm. Pattern Recogn Lett 73:13–18
Li J, Tao B, Wang Y, Li X (2012) Research and implementation of iris recognition algorithm. Proc Eng 29:3353–3358
Farouk RM (2011) Iris recognition based on elastic graph matching and Gabor wavelets. Comput Vis Image Underst 115(8):1239–1244
Rai H, Yadav A (2014) Iris recognition using combined support vector machine and Hamming distance approach. Expert Syst Appl 41:588–593
Seetharaman K, Ragupathy R (2012) LDPC and SHA based iris recognition for image authentication. Egypt Inform J 13:217–224
Trucco E, Razeto M (2005) Robust iris location in close-up images of the eye. Pattern Anal Appl 8(3):247–255
Proenca H, Alexandre LA (2010) Iris recognition: analysis of the error rates regarding the accuracy of the segmentation stage. Image Vis Comput 28:202–206
Sankowski W, Grabowski K, Napieralska M, Zubert M, Napieralski A (2010) Reliable algorithm for iris segmentation in eye image. Image Vis Comput 28(2):231–237
Roy K, Bhattacharya P, Suen CY (2011) Iris recognition using shape-guided approach and game theory. Pattern Anal Appl 14(4):329–348
Hollingsworth K, Bowyer KW, Flynn P (2009) Pupil dilation degrades iris biometric performance. Comput Vis Image Underst 113:150–157
Li P, Liu X, Xiao L, Song Q (2010) Robust and accurate iris segmentation in very noisy iris images. Image Vis Comput 28(2):246–253
Hu Y, Sirlantzis K, Howells G (2015) Improving colour iris segmentation using a model selection technique. Pattern Recogn Lett 57:24–32
Tan C, Kumar A (2011) Automated segmentation of iris images using visible wavelength face images. In: IEEE conference on computer vision and pattern recognition workshops (CVPRW 2011), pp 9–14
Jan F, Usman I, Khan SA, Malik SA (2014) A dynamic non-circular iris localization technique for non-ideal data. Comput Electr Eng 40:215–226
Ibrahim MT, Khan TM, Khan SA, Khan MA, Guan L (2012) Iris localization using local histogram and other image statistics. Opt Lasers Eng 50:645–654
Koh J, Govindaraju V, Chaudhary V (2010) A robust iris localization method using an active contour model and Hough transform. In: Proceedings of 20th international conference on pattern recognition (ICPR), Istanbul, Turkey, pp 2852–2856
Jan F, Usman I, Agha S (2012) Iris localization in frontal eye images for less constrained iris recognition systems. Digit Signal Process 22:971–986
Wang Q, Zhang X, Li M, Dong X, Zhou Q, Yin Y (2012) AdaBoost and multi-orientation 2D Gabor-based noisy iris recognition. Pattern Recogn Lett 33(8):978–983
Seetharaman K, Ragupathy R (2012) Iris recognition for personal identification system. Proc Eng 38:1531–1546
Tan T, Zhang X, Sun Z, Zhang H (2012) Noisy iris image matching by using multiple cues. Pattern Recogn Lett 33:970–977
Shin KY, Nam GP, Jeong DS, Cho DH, Kang BJ, Park KR, Kim J (2012) New iris recognition method for noisy iris images. Pattern Recogn Lett 33(8):991–999
Daugman J (1994) Biometric personal identification system based on iris analysis. US Patent No. 5,291,560
Chen WK, Lee JC, Han WY, Shih CK, Chang KC (2013) Iris recognition based on bidimensional empirical mode decomposition and fractal dimension. Inf Sci 221:439–451
Roy K, Bhattacharya P, Suen C (2011) Towards non-ideal iris recognition based on level set method, genetic algorithms and adaptive asymmetrical SVMs. Eng Appl Artif Intell 24:458–475
Desoky AI, Ali HA, Abdel-Hamid NB (2012) Enhancing iris recognition system performance using templates fusion. Ain Shams Eng J 3(2):133–140
Rankin DM, Scotney BW, Morrow PJ, Pierscionek BK (2012) Iris recognition failure over time: the effects of texture. Pattern Recogn 45:145–150
Szewczyk R, Grabowski K, Napieralska M, Sankowski W, Zubert M, Napieralski A (2012) A reliable iris recognition algorithm based on reverse biorthogonal wavelet transform. Pattern Recogn Lett 33:1019–1026
Liu Y, He F, Zhu X, Chen Y, Han Y, Fu Y (2014) Video sequence-based iris recognition inspired by human cognition manner. J Bionic Eng 11:481–489
Mahabadi A, Mirzaei A (2008) A new iris recognition method for identification systems. Bina J Ophthalmol 14(1):50–59
Chen CH, Chu CT (2009) High performance iris recognition based on 1-D circular feature extraction and PSO–PNN classifier. Expert Syst Appl 36:10351–10356
Rahulkar AD, Waghmare LM, Holambe RS (2014) A new approach to the design of hybrid finer directional wavelet filter bank for iris feature extraction and classification using k-out-of-n: a post-classifier. Pattern Anal Appl 17(3):529–547
Bowyer KW, Hollingsworth K, Flynn PJ (2008) Image understanding for iris biometrics: a survey. Comput Vis Image Underst 110(2):281–307
Hamouchene I, Aouat S (2014) A new texture analysis approach for iris recognition. AASRI Proc 9:2–7
Umer S, Dhara BC, Chanda B (2016) Texture code matrix-based multi-instance iris recognition. Pattern Anal Appl 19(1):283–295
Sun S, Yang S, Zhao L (2013) Noncooperative bovine iris recognition via SIFT. Neurocomputing 120:310–317
Belcher C, Du Y (2009) Region-based SIFT approach to iris recognition. Opt Lasers Eng 47(1):139–147
Chen XZ, Wu CY, Xiong LL, Yang F (2012) The optimal matching algorithm for multi-scale iris recognition. Energy Proc 16:876–882
Rahulkar AD, Holambe RS (2012) Partial iris feature extraction and recognition based on a new combined directional and rotated directional wavelet filter banks. Neurocomputing 81:12–23
Ali H, Salami MJ, Wahyudi E (2008) Iris recognition system using support vector machine. In: International conference on computer and communication engineering, pp 516–521
Sun Z, Tan T (2009) Ordinal measures for iris recognition. IEEE Trans Pattern Anal Mach Intell 31(12):2211–2226
Ma L, Tan T, Wang Y, Zhang D (2003) Personal identification based on iris texture analysis. IEEE Trans Pattern Anal Mach Intell 25(12):1519–1533
Rai H, Yadav A (2014) Iris recognition using combined support vector machine and Hamming distance approach. Expert Syst Appl 41(2):588–593
Mehrotra H, Sa PK, Majhi B (2013) Fast segmentation and adaptive SURF descriptor for iris recognition. Math Comput Model 58:132–146
Liu Y, He F, Zhu X, Liu Z, Chen Y, Han Y, Yu L (2015) The improved characteristics of bionic Gabor representations by combining with SIFT key-points for iris recognition. J Bionic Eng 12:504–517
Tan CW, Kumar A (2014) Efficient and accurate at-a-distance iris recognition using geometric key-based iris encoding. IEEE Trans Inform Forensics Secur 9(9):1518–1526
Nithya AA, Lakshmi C, Krithiga RR (2016) Enhanced annular iris recognition using bag of vocabulary models. Int J Control Theory Appl 9(40):1095–1100
Wang J, Yang J, Yu K, Lv F, Huang T, Gong Y (2010) Locality-constrained linear coding for image classification. In: IEEE conference on computer vision and pattern recognition, pp 3360–3367
Umer S, Dhara BC, Chanda B (2017) A novel cancelable iris recognition system based on feature learning techniques. Inform Sci 406–407:102–118
You YL, Kaveh M (2000) Fourth-order partial differential equations for noise removal. IEEE Trans Image Process 9(10):1723–1730
Lu J, Zhou X, Tan YP, Shang Y, Zhou J (2014) Neighborhood repulsed metric learning for kinship verification. IEEE Trans Pattern Anal Mach Intell 36(2):331–345
Belkin M, Niyogi P (2002) Laplacian eigenmaps and spectral techniques for embedding and clustering. In: Advances in neural information processing systems, vol 14, Vancouver, BC, Canada
He X, Niyogi P (2004) Locality preserving projections. In: Advances in neural information processing systems, vol 16, Vancouver, BC, Canada
CASIA Iris Image Database. http://biometrics.idealtest.org/
Proenca H, Alexandre LA (2005) UBIRIS: a noisy iris image database. In: Proceedings of ICIAP 2005—international conference on image analysis and processing, vol 1, pp 970–977
Iris Challenge Evaluation (ICE) dataset. http://iris.nist.gov/ICE/S
Bolle RM, Connell JH, Pankanti S, Ratha NK, Senior AW (2005) The relation between the ROC curve and the CMC. In: IEEE Proceedings, pp 15–20
Hilal A, Beauseroy P, Daya B (2014) Elastic strips normalisation model for higher iris recognition performance. IET Biometr 3(4):190–197
Acknowledgements
The authors thank Prof. Tieniu Tan's research group at the Center for Biometrics and Security Research (CBSR), National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences, for providing access to the CASIA data set.
Fathi, A., Mohamadi, M. Metric-learning-based high-discriminative local features extraction for iris recognition. Pattern Anal Applic 22, 1427–1438 (2019). https://doi.org/10.1007/s10044-018-0713-4