1 Introduction

Iris recognition is a reliable and accurate biometric identification technology due to the uniqueness, aging invariance, and noninvasive nature of the iris. Moreover, it is a noncontact data acquisition technology.

Since Flom and Safir [1] first proposed the concept of iris recognition, many research works on automatic iris recognition have been published. These approaches comprise iris preprocessing and segmentation, iris code generation, and finally comparison and recognition [2]. An early automatic iris recognition method, based on multiscale Gabor wavelets and the extracted phase information of iris textures, was proposed by Daugman [3]. Wildes [4] employed a gradient-based binary edge map and the Hough transform to detect the iris and pupil boundaries; iris images were then classified using normalized correlation. More recently, many other automatic iris recognition algorithms have been proposed, based on the pioneering algorithms of Daugman [3] and Wildes [4]. Table 1 summarizes the state-of-the-art automatic iris recognition approaches. Preprocessing and segmentation generally consist of iris localization and iris normalization. For iris localization, which is the process of detecting the inner (iris/pupil) and outer (iris/sclera) boundaries in the eye image, several techniques have been proposed, such as the integro-differential operator [3, 5], a combination of the Hough transform and region-based active contours [6], and thresholding [7]. For iris normalization, most algorithms apply Daugman's rubber sheet model [3, 5, 7–10]. Most of these methods perform well under ideal conditions in a very constrained environment [2, 3]. However, iris recognition under nonideal real-world conditions still presents many challenges not solved by those algorithms. A nonideal dataset of eye images may contain occlusions such as eyelids and eyelashes, low contrast, specular reflections, poor focus, and nonuniform illumination. Besides, the off-axis eye image (an eye not oriented horizontally), which occurs frequently in real eye images, is another common problem to overcome in iris recognition [11]. Recently, some methods have been proposed [12–15] to segment the iris in nonideal eye images. The CASIA Ver.3 and Ver.4 [16] and UBIRIS Ver.2 [17] datasets have been used to evaluate the proposed segmentation methods.

Table 1 State-of-the-art of iris recognition.

On the other hand, different methods have been applied to extract features from normalized iris images, such as approaches based on Gabor filters [3], wavelet transforms [8–10, 18], curvelet transforms [5], and 1-D circular profiles [19]. Although the wavelet transform is popular, powerful, and familiar among iris image processing techniques, it has limitations in capturing directional information such as smooth contours and the directional edges of the image. This problem is addressed by the contourlet transform (CT) [22]. In addition to the multiscale and time-frequency localization properties of wavelets, the CT offers directionality and anisotropy. A 4-level CT method for iris feature extraction is described in [23], in which normalized images are partitioned into multiscale and multidirectional subbands. The normalized energies of the subbands are calculated as features to train a support vector machine (SVM) classifier. Due to downsampling and upsampling, the CT lacks shift invariance. To overcome this limiting factor, Cunha et al. [24] proposed a shift-invariant version of the CT, designated the nonsubsampled contourlet transform (NSCT).

Several methods for feature extraction, representing different aspects of the iris images, have been reported [8, 25]. To reduce the computational cost and improve classification performance, a selection of the most discriminative features is highly desirable.

To address the highlighted issues, the paper provides the following contributions:

  1.

    An overall contribution: a new scale-, shift-, and rotation-invariant iris recognition method in the time-frequency and spatial domains is proposed. The effectiveness of our approach is validated through a set of experiments using the CASIA datasets Ver.1 and Ver.4-lamp.

  2.

    Iris segmentation: to determine the pupil region among the labeled regions in the binary eye image, a pupillary boundary detection method is proposed. The contribution here is a new way of selecting the pupil region: the region with the largest area and the smallest eccentricity in the binary image is chosen. Moreover, some related works (Table 1) use only the upper and/or lower part of the iris texture to avoid the regions occluded by the eyelids and eyelashes, which results in a loss of significant information. To mitigate this problem, a four-ROI selection method applied after the detection of the limbic boundary is proposed.

  3.

    Feature extraction: after normalizing the selected regions of interest, textural features are extracted from the gray-level co-occurrence matrix (GLCM). The GLCM is calculated on both the spatial image and the frequency subbands of the NSCT decomposition. Moreover, numerical features are calculated directly on the NSCT frequency subbands and the spatial iris image.

  4.

    Feature selection: to reduce the influence of extreme values, the extracted features are transformed, normalized, and then fed into our feature selector. Selecting features with well-known automatic feature selectors is not accurate enough to obtain the best results; most of these selectors pick feature elements from all the feature-types, which cannot yield the best selection. To obtain a more accurate selection and further reduce the number of extracted features, a new two-step feature selection process, consisting of a filtering and a wrapper phase, is proposed. In the first step, some of the feature-types are removed using a simple filtering algorithm; in the second step, the minimal-redundancy and maximal-relevance (mRMR) algorithm, a wrapper-based feature selector, is applied.

2 Proposed Approach

The proposed iris recognition method includes five major phases: a) iris preprocessing and segmentation, b) feature extraction, c) feature transformation and normalization, d) feature selection, and e) classification.

2.1 Iris Preprocessing and Segmentation

For the purpose of iris recognition, some parts of the eye image, such as the eyelids, sclera, eyelashes, and pupil, should be removed. In addition, even for irises of the same eye, the apparent size may vary depending on the camera-to-eye distance as well as on light brightness. Therefore, the original eye image needs to be preprocessed to reduce the influence of these factors.

2.1.1 Localization

As shown in Fig. 1, to locate the inner (iris/pupil) and outer (iris/sclera) boundaries in an eye image, the following steps are performed: 1) reflection removal, 2) pupillary boundary detection, and 3) limbic boundary detection.

Figure 1 Block diagram of iris localization steps.

Reflection Removal

Specular reflections (light spots in the eye image) can cause problems in the localization process. As shown in Fig. 2(a–f), to localize the light-source reflections, the eye image is first binarized using a thresholding technique (a threshold of 190 was used in the experiments). The binarized eye image is then dilated to cover all possibly affected regions. Next, to fill the segmented reflections, the resulting mask (c) is complemented (d) and applied to the eye image to mark the reflection spots. Finally, the detected specular reflections are “inpainted” using the 8 surrounding neighbors (all steps are detailed in Algorithm 1).
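
A minimal sketch of these reflection-removal steps, assuming an 8-bit grayscale eye image; OpenCV's generic inpainting stands in for the paper's 8-neighbor interpolation, and the structuring-element size is an illustrative choice:

```python
import cv2

def remove_reflections(eye, threshold=190, dilate_size=5):
    # (b) binarize: specular spots exceed the fixed threshold
    _, mask = cv2.threshold(eye, threshold, 255, cv2.THRESH_BINARY)
    # (c) dilate the mask to cover all possibly affected pixels
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (dilate_size, dilate_size))
    mask = cv2.dilate(mask, kernel)
    # (f) fill the masked spots from their surroundings; cv2.inpaint stands in
    # for the paper's 8-neighbor interpolation
    return cv2.inpaint(eye, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```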

Figure 2 Reflection removal steps. (a) the original eye image. (b) binarized eye image after thresholding. (c) dilated version of (b). (d) complement of (c). (e) mask image. (f) inpainted image.


Pupillary Boundary Detection

To detect the pupillary boundary, the inpainted eye image is first binarized (Fig. 3(b)) using the threshold value M + 25 [27], where M is the minimum intensity value of the inpainted image. In addition to the pupil, other dark regions of the eye image, such as the eyelashes, fall below this threshold. To eliminate the regions corresponding to the eyelashes, a 2-D median filter with a 10 × 10 window is applied to the binary image. This reduces the number of candidate regions produced by thresholding [27] (Fig. 3(c)). The remaining regions in the median-filtered binary image are labeled, and the region with the largest area and the smallest eccentricity is selected as the pupil region. Finally, the pupil radius and centroid are calculated as follows:

Figure 3 Pupil boundary detection steps. (a) inpainted image. (b) binarized inpainted image. (c) smoothed image. (d) detected pupillary boundary.

$$ \mathrm{pupilRadius} = \left(\sqrt{4 A/\pi}\right)/2 $$
(1)
$$ \left(C_x, C_y\right) = \left( \int x\,dA \,/\, A,\ \int y\,dA \,/\, A \right) $$
(2)

where \( (C_x, C_y) \) denotes the center coordinates of the pupil and A is the area of the pupil. All steps are detailed in Algorithm 2.
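
The pupil-region selection of Algorithm 2 can be sketched as follows, assuming an 8-bit inpainted image; the product score used to combine the largest-area and smallest-eccentricity criteria is our own illustrative choice:

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.measure import label, regionprops

def detect_pupil(inpainted):
    M = int(inpainted.min())
    binary = inpainted < (M + 25)               # dark regions: pupil + eyelashes
    binary = median_filter(binary, size=10)     # suppress small eyelash regions
    regions = regionprops(label(binary))
    # favor large area and low eccentricity (illustrative combined score)
    pupil = max(regions, key=lambda r: r.area * (1.0 - r.eccentricity))
    radius = np.sqrt(4.0 * pupil.area / np.pi) / 2.0   # Eq. (1)
    cy, cx = pupil.centroid                            # Eq. (2), (row, col) order
    return (cx, cy), radius
```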

Limbic Boundary Detection

As shown in Algorithm 3, before locating the outer boundary, a gamma adjustment [28] is applied to the iris edge map (extracted by a Canny edge detector) to enhance the iris contrast. Then, the weak edge pixels are set to zero using non-maxima suppression, so that only the dominant edges are retained. Finally, a hysteresis threshold is applied to the image. Given the pupil center coordinates, the radius and center coordinates of the iris boundary can be deduced using the circular Hough transform (Fig. 4).
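
An illustrative sketch of this limbic-boundary search, assuming the iris radius lies within a band proportional to the pupil radius (our assumption); gamma adjustment is applied explicitly, while non-maxima suppression and hysteresis are folded into scikit-image's Canny implementation:

```python
import numpy as np
from skimage import exposure, feature, transform
from skimage.util import img_as_float

def detect_iris(inpainted, pupil_center, pupil_radius):
    img = exposure.adjust_gamma(img_as_float(inpainted), gamma=1.5)  # contrast
    # Canny performs gradient computation, non-maxima suppression, and
    # hysteresis thresholding internally
    edges = feature.canny(img, sigma=2.0, low_threshold=0.1, high_threshold=0.3)
    # search radii proportional to the pupil radius (our assumption)
    radii = np.arange(int(1.5 * pupil_radius), int(5.0 * pupil_radius))
    hough = transform.hough_circle(edges, radii)
    _, cxs, cys, rs = transform.hough_circle_peaks(hough, radii, total_num_peaks=5)
    # keep the candidate circle whose center lies closest to the pupil center
    i = min(range(len(cxs)), key=lambda k: (cxs[k] - pupil_center[0]) ** 2
                                           + (cys[k] - pupil_center[1]) ** 2)
    return (cxs[i], cys[i]), rs[i]
```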

Figure 4 Illustration of limbic boundary detection steps. (a) inpainted image. (b) result of applying the Canny edge detector. (c) result of gamma adjustment. (d) result of non-maxima suppression. (e) result of hysteresis thresholding. (f) result of applying the circular Hough transform to (e) and the detected limbic boundary.

2.1.2 Selecting Regions of Interest

As depicted in Fig. 5, to disregard the iris regions occluded by the eyelids and eyelashes and to avoid the loss of discriminative features, we adopt the method described in our previous work [26], in which four regions of interest (ROIs) are selected:

Figure 5 Selected areas for normalization.

  • Right side of the iris circle: a sector between angles −π/4 and π/4, with a radius equal to the iris radius (Fig. 5(a)).

  • Left side of the iris circle: a sector between angles 3π/4 and 5π/4, with a radius equal to the iris radius (Fig. 5(a)).

  • Bottom side of the iris circle: a sector between angles 5π/4 and −π/4, with a radius of 1/2 of the iris radius (Fig. 5(b)).

  • A disk around the pupil with a radius of 1/3 of the iris radius, covering the collarette area (Fig. 5(c)).

2.1.3 Normalization and Enhancement

To compensate for several external factors, such as illumination variations and imaging distance, the partial iris images are normalized using the “Daugman rubber sheet” model [3].
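
A minimal sketch of the rubber-sheet unwrapping for one ROI sector, assuming circular boundaries that lie inside the image; the output size and the default angular range (the right-side sector) are illustrative choices, not the paper's settings:

```python
import numpy as np

def rubber_sheet(image, pupil_c, pupil_r, iris_c, iris_r,
                 theta_range=(-np.pi / 4, np.pi / 4), shape=(64, 256)):
    n_r, n_theta = shape
    out = np.zeros(shape)
    for j, t in enumerate(np.linspace(theta_range[0], theta_range[1], n_theta)):
        # boundary points on the pupil and iris circles at angle t
        xp = pupil_c[0] + pupil_r * np.cos(t); yp = pupil_c[1] + pupil_r * np.sin(t)
        xi = iris_c[0] + iris_r * np.cos(t);   yi = iris_c[1] + iris_r * np.sin(t)
        for i, r in enumerate(np.linspace(0.0, 1.0, n_r)):
            # sample linearly between the two boundaries (nearest-neighbor)
            x = (1 - r) * xp + r * xi
            y = (1 - r) * yp + r * yi
            out[i, j] = image[int(round(y)), int(round(x))]
    return out
```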

Since the original iris image has low contrast and may have nonuniform illumination caused by the position of the light sources, some enhancement needs to be applied. Histogram equalization is used to enhance the normalized iris images. The enhancement involves tessellating the normalized iris into 32 × 32 tiles (Fig. 6(a)) and subjecting each tile to histogram equalization. Then a Wiener noise-removal filter is applied to the equalized output (Fig. 6(b)).
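
The enhancement step can be sketched as follows, assuming a normalized iris image whose dimensions are multiples of 32; the Wiener window size is an illustrative choice:

```python
import numpy as np
from scipy.signal import wiener
from skimage import exposure

def enhance(norm_iris, tile=32):
    out = np.empty(norm_iris.shape, dtype=float)
    for i in range(0, norm_iris.shape[0], tile):
        for j in range(0, norm_iris.shape[1], tile):
            # histogram-equalize each 32 x 32 tile independently
            out[i:i + tile, j:j + tile] = exposure.equalize_hist(
                norm_iris[i:i + tile, j:j + tile])
    return wiener(out, mysize=5)   # Wiener noise-removal filter
```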

Figure 6 Illustration of the iris enhancement step. (a) tiled normalized image. (b) enhanced iris image resulting from histogram equalization and Wiener filtering.

2.2 Feature Extraction

A reliable iris recognition system should extract features that are invariant to scaling, shift, and rotation. As described in [26], scale invariance is obtained by unwrapping the selected iris regions into four fixed-size rectangles. To achieve shift invariance, the enhanced images are transformed into the frequency domain using the NSCT, which is a shift-invariant transform that can capture the geometry of the iris texture. Finally, the GLCM is calculated on both the spatial image and the NSCT frequency subbands, which yields rotation invariance. The method is detailed in the following paragraphs.

2.2.1 Nonsubsampled Contourlet Transform

In the contourlet transform, the Laplacian pyramid (LP) is first used to capture point discontinuities, followed by a directional filter bank (DFB) that links point discontinuities into linear structures [29]. The overall result is an image expansion using basic elements such as contour segments, hence the name contourlet transform, which is implemented by a pyramidal directional filter bank (PDFB) [30]. The LP decomposition at each level generates a downsampled lowpass version of the original image, and the difference between the original image and the prediction results in a bandpass image. As stated in [24], “due to downsamplers and upsamplers present in both LP and DFB, contourlet transform is not shift-invariant”. To achieve the shift-invariance property, the NSCT was proposed.

The NSCT is built upon the nonsubsampled Laplacian pyramid (NSLP) and the nonsubsampled directional filter bank (NSDFB); thus, it is a fully shift-invariant, multiscale, and multidirection image decomposition that has a fast implementation.

2.2.2 Primary Features

The enhanced iris image is decomposed into 6 directional subbands using the NSDFB at 2 different scales. Next, textural features are extracted from the spatial iris image and all resulting NSCT frequency subbands. The textural features f1-f22 listed in Table 2 are computed on the basis of the statistical distribution of pixel intensities at a given position relative to others in a matrix of pixels called the GLCM [25]. Since the GLCM is computed for different orientations, the rotation of the iris can be captured by one of the matrices. Feature extraction based on the GLCM is a second-order statistic that can be employed to analyze an image as a texture. Although the GLCM captures properties of a texture, it cannot be used directly for further analysis, such as the comparison of two textures; thus, the numeric features f1-f22, which carry significant information about the textural characteristics, are obtained from the GLCM in different directions [25, 31, 33]. Moreover, the numerical features f23-f26 are calculated directly on the NSCT frequency subbands and the spatial iris image.
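
A sketch of the GLCM feature computation for one image (spatial or subband), assuming 8-bit quantization and the orientations 0°, 90°, and 135° used in Section 3; graycoprops covers only a few of the features f1-f26, and the remaining ones must be derived from the matrix directly:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img8bit):
    glcm = graycomatrix(img8bit, distances=[1],
                        angles=[0, np.pi / 2, 3 * np.pi / 4],   # 0°, 90°, 135°
                        levels=256, symmetric=True, normed=True)
    feats = []
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        feats.extend(graycoprops(glcm, prop).ravel())
    # features such as sum of variance or IDMN are derived from `glcm` directly
    return np.array(feats)
```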

Table 2 Textural features.

2.3 Feature Transformation and Normalization

The extracted features are transformed and normalized in order to reduce the influence of extreme values. The transformation methods applied to each feature are described in [34]. After a thorough experimental evaluation of each transform operator over the extracted features, it was empirically verified that the best classification results were attained by applying \( x_{ij} = 1/\sqrt{y_{ij}} \), where \( y_{ij} \) denotes the ij-th element of the feature matrix Y, and \( X = \{ x_{ij};\ i = 1, 2, \dots, N;\ j = 1, 2, \dots, M \} \) (where N and M denote the number of subjects and features, respectively) is the transformed feature matrix. This transform was therefore adopted in the overall iris recognition system. To avoid features in greater numeric ranges dominating those in smaller numeric ranges, each feature of the transformed matrix X is independently normalized to the (0, 1) range by applying

$$ \overline{x}_{ij} = x_{ij} / \left( \max\left(\mathbf{x}_j\right) - \min\left(\mathbf{x}_j\right) \right) $$
(3)

where \( \mathbf{x}_j \) denotes the vector of values of the j-th feature [35].
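
A compact sketch of this transformation and normalization stage; the small epsilon guarding against zero-valued features is our own addition:

```python
import numpy as np

def transform_and_normalize(Y, eps=1e-12):
    X = 1.0 / np.sqrt(Y + eps)              # x_ij = 1 / sqrt(y_ij)
    rng = X.max(axis=0) - X.min(axis=0)     # max(x_j) - min(x_j), per feature
    return X / (rng + eps)                  # Eq. (3)
```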

2.4 Feature Selection

Selecting features with well-known automatic feature selectors is not accurate enough to obtain the best results; most of these selectors pick feature elements from all the feature-types, which does not yield the best selection. To obtain a more accurate selection and further reduce the number of extracted features, a new two-step feature selection process, consisting of a filtering and a wrapper phase, is proposed. Filter-based methods are in general faster than wrapper strategies; on the other hand, wrapper strategies are found to be more accurate [36]. In the first step (detailed in Algorithm 4), the feature-types (each consisting of several feature elements) with minimum redundancy are selected from the entire set of feature-types in Table 2. As shown in Table 3, two prominent groups of features are selected in this step: 1) Group I, composed of features f10, f11, f12, f13, f14, and f15; and 2) Group II, composed of features f21, f22, f23, f24, f25, and f26 (see Table 2). In the second step, the minimal-redundancy and maximal-relevance (mRMR) criterion [37] is used to select the most discriminative feature elements from these two groups of feature-types. Moreover, in this second step we compared the result of mRMR with sequential forward selection (SFS) [38], sequential backward selection (SBS) [38], sequential floating forward selection (SFFS) [39], sequential floating backward selection (SFBS) [39], and differential evolution based feature selection (DEFS) [36].

Table 3 Classification accuracy of each feature, some combinations of two features, and the best combinations attained by applying Algorithm 4.

Minimal-Redundancy and Maximal-Relevance

The mRMR method uses the mutual information between a feature and a class to infer the feature's relevance for the class. The mutual information of two random variables measures the mutual dependence between them. The maximal-relevance criterion searches for a feature set S satisfying [37]:

$$ \max\ D(S, c), \qquad D = \frac{1}{\left| S \right|} \sum_{x_i \in S} I(x_i; c) $$
(4)

where \( I(x_i; c) \) denotes the mutual information between feature \( x_i \) and class c. mRMR also uses the mutual information between features as a measure of each feature's redundancy. The minimal-redundancy feature set can be determined under the condition

$$ \min\ R(S), \qquad R = \frac{1}{\left| S \right|^2} \sum_{x_i, x_j \in S} I(x_i, x_j) $$
(5)

where \( I(x_i, x_j) \) indicates the mutual information between features \( x_i \) and \( x_j \). The “minimal-redundancy and maximal-relevance” (mRMR) criterion combines measures (4) and (5) as follows:

$$ \max\ \varPhi(D, R), \qquad \varPhi = D - R $$
(6)
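
A greedy sketch of the mRMR criterion of Eqs. (4)-(6), estimating mutual information with scikit-learn; this incremental form follows the usual first-order mRMR search and is not taken from any implementation in the paper:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr(X, y, k):
    relevance = mutual_info_classif(X, y)            # I(x_i; c), Eq. (4)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for i in range(X.shape[1]):
            if i in selected:
                continue
            # mean mutual information with already-selected features, Eq. (5)
            redundancy = np.mean([mutual_info_regression(X[:, [j]], X[:, i])[0]
                                  for j in selected])
            score = relevance[i] - redundancy        # Phi = D - R, Eq. (6)
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected
```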

Sequential Floating Feature-Selection Approaches

Sequential forward selection (SFS), the simplest of the sequential strategies, is a greedy search algorithm that iteratively builds an optimal subset of features by adding one feature per iteration if the value of a chosen objective function increases. Sequential backward selection (SBS) is similar to SFS but works in the opposite direction: it starts with the superset of all features and sequentially removes one feature at a time if the value of the objective function increases.

The main drawback of these sequential approaches is that they gravitate toward local minima, owing to their inability to reevaluate the usefulness of features that were previously added or discarded; once a feature is added to or removed from the final set of features, the decision cannot be changed. Therefore, two extensions of the SFS and SBS algorithms were proposed [39]. Sequential floating forward selection (SFFS) finds an optimal subset through insertions (appending a new feature to the subset of previously selected features) and deletions (discarding a feature from the subset of already selected features) applied to the SFS algorithm. Sequential floating backward selection (SFBS) is similar to SFFS but works in the opposite direction: it finds an optimal subset of features through insertions (appending an already deleted feature back to the subset of selected features) and deletions applied to the SBS algorithm [39].
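
A minimal SFS sketch illustrating the greedy forward strategy described above; the scoring function (e.g., cross-validated accuracy over a feature subset) is assumed to be supplied by the caller:

```python
def sfs(n_features, k, score_fn):
    """Greedily add the feature that most improves score_fn, up to k features."""
    selected, best = [], float("-inf")
    while len(selected) < k:
        candidates = [(score_fn(selected + [i]), i)
                      for i in range(n_features) if i not in selected]
        score, i = max(candidates)
        if score <= best:        # no feature improves the objective: stop
            break
        selected.append(i)
        best = score
    return selected
```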

Differential Evolution Feature Selection (DEFS)

The DEFS approach uses a combination of the differential evolution (DE) optimization method and a repair mechanism based on feature-distribution measures. This method utilizes the DE float-number optimizer in the combinatorial optimization problem of feature selection. To make the solutions generated by the float optimizer suitable for feature selection, a roulette-wheel structure is constructed and supplied with the probabilities of the feature distribution. These probabilities are built up during the iterations by identifying the features that contribute to the most promising solutions [36].

2.5 Classification

For the classification stage we used an SVM [40]. Furthermore, k-nearest neighbor (KNN), naïve Bayes (NB), and artificial neural network (ANN) classifiers are used to compare the efficiency of the system.

3 Performance Assessment

To assess the performance of the proposed algorithm, several experiments were conducted using different publicly available datasets. All experiments were carried out in identification mode: the features of a test iris image were compared against the features of the whole dataset. Left-eye images of the CASIA datasets Ver.1 and Ver.4-lamp were used; these are popular iris datasets widely adopted to evaluate iris recognition systems [16]. CASIA Ver.1 contains a total of 756 iris images from 108 subjects, captured in two sessions at least one month apart. CASIA Ver.4-lamp was collected in one session using a hand-held iris sensor; a lamp was turned on/off close to the subject to create different illumination conditions. It contains 16,213 iris images from 411 subjects. As stated in a note on CASIA Ver.4-lamp [16], “Elastic deformation of iris texture due to pupil expansion and contraction under different illumination conditions is one of the most common and challenging issues in the iris recognition”. CASIA Ver.4-lamp offers eye images in nonideal conditions, providing suitable data to assess the effects of iris image normalization and robust iris feature representation.

In our experiments, a two-level NSCT decomposition was adopted, with 2 and 4 directions for the respective pyramidal levels. Three GLCMs were calculated on all NSCT frequency subbands and on the spatial image, at orientations of 0°, 90°, and 135°. The normalized iris images were decomposed by the NSPDFB. We selected “pyrexc” and “pkva” as the NSLP and NSDFB filters in the PDFB decomposition [24], given their empirically assessed superior performance. The SVM-KM toolbox [41] with a Gaussian kernel was used in the classification phase. The Gaussian degree and C parameters were set to 6 and 100, respectively, as they produced the best empirical results. Experiments were carried out over 2000 images of 200 randomly selected classes, with 10 images per class, for CASIA Ver.4, and over 756 images of 108 classes for CASIA Ver.1. To verify the reliability of the results, all assessments were obtained by leave-one-out cross-validation (LOOCV). Moreover, to characterize the performance of the proposed method, well-known measures such as accuracy, area under the curve (AUC), equal error rate (EER), sensitivity, specificity, and F-measure were used. In particular, the F-measure, or balanced F-score, is a weighted average of precision and recall, where precision is the fraction of retrieved instances that are relevant and recall is the fraction of relevant instances that are retrieved.
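
The evaluation protocol can be sketched as follows, with scikit-learn's SVC standing in for the SVM-KM MATLAB toolbox used in the paper; C = 100 follows the text, while the "Gaussian degree" parameter maps only loosely onto scikit-learn's gamma and is left at its default here:

```python
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

def loocv_accuracy(X, y):
    # RBF-kernel SVM, C = 100 as in the text
    clf = SVC(kernel="rbf", C=100)
    # one test image per fold; the mean of the fold scores is the LOOCV accuracy
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    return scores.mean()
```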

3.1 Evaluation of the Proposed Scheme for Iris Localization and Region of Interest Selection

To validate the performance of the proposed localization scheme, we applied the method to eye images with different occlusions and artifacts, such as eyelid and eyelash obstruction, specular reflections, contrast changes, nonuniform illumination, rotation, and scale. As illustrated in Fig. 7(a–i), the proposed method performs well despite the artifacts and can localize the inner and outer boundaries accurately. However, we observed that it does not properly localize the outer boundary when the contrast between the iris and the sclera is low (see Fig. 7(h)). To alleviate the loss of significant data, four iris ROIs were selected in our segmentation method. The detection error trade-off (DET) curves in Fig. 8 compare different iris localization approaches on CASIA Ver.4-lamp. Each curve is denoted by the symbols r and Θ, which represent normalized polar coordinates. Four cases are considered: 1) Θ = (0, 2π), r = IrisR, corresponding to a disk around the iris with the iris radius, which covers the whole iris region; 2) Θ = (0, 2π), r = 1/3 × IrisR, corresponding to a disk around the iris with 1/3 of the iris radius, similar to Fig. 5(c); 3) ΘLeft = (3π/4, 5π/4), ΘRight = (−π/4, π/4), r = IrisR, which refers to a configuration similar to Fig. 5(a); and 4) our proposed method detailed in Section 2.1.2. The results shown in Fig. 8 illustrate the superior performance of the proposed method over the other approaches.

Figure 7 Illustration of some randomly selected iris segmentation results for CASIA Ver.4-lamp. (a), (b), and (c) contain artifacts; moreover, (c) shows robustness of the segmentation method to left rotation. (d) and (e) suffer from occlusion. (f) shows robustness to right rotation and suffers from makeup. The pupils in (g) and (h) are larger and smaller than normal size, respectively. (i) shows high amounts of blur and robustness to scaling.

Figure 8 The DET curves comparing different iris ROIs in the localization process over CASIA Ver.4-lamp. The parameters of method I are Θ = (0,2π), r = IrisR; of method II, Θ = (0,2π), r = 1/3 × IrisR; and of method III, ΘLeft = (3π/4, 5π/4), ΘRight = (−π/4, π/4), r = IrisR.

3.2 Performance Assessment of Using Different Time-Frequency Transforms

This section is devoted to analyzing the impact, on the overall iris recognition performance, of the different time-frequency transforms (wavelet, contourlet, and NSCT) applied in feature extraction. According to the AUC curves of Fig. 9 and the F-measure and average accuracy values shown in Table 4, the NSCT provided the best results, which may be correlated with its redundant structure. It gives the highest matching performance (0.9801) with the lowest number of features (224). For the wavelet transform, the best AUC value of 0.9543 was obtained with 200 features; this AUC is lower than that obtained with the NSCT. For the contourlet transform, the best average AUC of 0.9794 was attained with 250 features, which in comparison with NSCT-based feature extraction requires more features and yields lower accuracy and F-measure values.

Figure 9 Comparison between the AUC curves of the proposed method with the NSCT, contourlet, and wavelet transforms on CASIA Ver.4-lamp.

Table 4 Comparison of the feature extraction method using different time-frequency transforms.

3.3 Evaluation of Feature Importance

To deal with the high-dimensionality problem in iris recognition, the proposed two-step feature selection method was used (Fig. 10). As shown in Table 3, after analyzing different combinations of features, two feature groups were selected using Algorithm 4: 1) Group I, composed of features f10, f11, f12, f13, f14, and f15 (see Table 2); 2) Group II, composed of features f21, f22, f23, f24, f25, and f26 (see Table 2). Starting with 2240 feature elements extracted from the four ROIs, a total of 896 elements (the features of Groups I and II) were selected in the first step of the feature selector. Next, in the second step of the feature selection strategy, these two groups of features were fed into the different feature selectors mentioned in Section 2.4. Figure 11 shows the proportion of features selected from the full set resulting from the first step. mRMR was able to select a subset of 224 features that contained the discriminant information and gave a lower EER for recognition. However, as illustrated in Fig. 11, the highest average accuracy, attained with a combination of SBS and SVM, requires more features than mRMR with SVM; in fact, SBS selected 448 features, twice the number selected by mRMR.

Figure 10 Two-step feature selector. Step 1: selection of two prominent groups of features; step 2: selection of feature elements.

Figure 11 Performance of different feature selection methods. Dark blue: selected features; light blue: total features. Homom: homogeneity (MATLAB); Homo: homogeneity; MaxProb: maximum probability; SOSvar: sum of squares variance; Savg: sum of average; Svar: sum of variance; IDN: inverse difference normalized; IDMN: inverse difference moment normalized; STD: standard deviation; Mean: mean; Var: variance; EOF: energy of the fast Fourier transform.

3.4 Performance Evaluation of the Proposed Scheme

Figures 12 and 13 compare the performance obtained by the proposed method with different combinations of the feature selectors and classifiers mentioned in Sections 2.4 and 2.5. In the experiments, four types of classifiers were considered: NB, KNN, ANN, and SVM. They are capable of handling large-scale classification problems. Moreover, six of the best feature selection approaches were used: DEFS, mRMR, and the sequential methods (SBS, SFBS, SFFS, and SFS). The results are expressed as box-whisker plots showing the average, median, and first and third quartile values of the accuracies and F-measures. The horizontal lines outside each box identify the upper and lower whiskers, and dots denote outliers. According to the results shown in Figs. 12 and 13, the proposed combination of mRMR and SVM outperformed the others (accuracy of 0.9996 with 224 features). Although the highest accuracy (0.9997) was attained with a combination of SBS and SVM, it came at the cost of requiring a larger number of features (mRMR: 224; SBS: 448). Moreover, some of the other combinations (e.g., DEFS with ANN) also attained acceptable results.

Figure 12 Accuracy of the iris recognition method, corresponding to 6 feature selectors (DEFS, SFS, SBS, SFBS, SFFS, and mRMR) and 4 classifiers (KNN, NB, ANN, and SVM).

Figure 13 F-measure of the iris recognition method, corresponding to 6 feature selectors (DEFS, SFS, SBS, SFBS, SFFS, and mRMR) and 4 classifiers (KNN, NB, ANN, and SVM).

Indeed, as shown in Figs. 12 and 13, the SVM classifiers present the lowest interquartile ranges of accuracy and F-measure.

3.5 Comparison with State-of-the-art Methods

Table 5 summarizes the results of existing state-of-the-art iris recognition methods tested on at least one of the following datasets: CASIA Ver.1, Ver.3-lamp, and Ver.4-lamp. Regarding CASIA Ver.1, some accuracy results in Table 5 are higher than the average accuracy value of our method (see Footnote 1). Considering that there are no reported performance results based on CASIA Ver.4-lamp, and given the similarity of Ver.3-lamp and Ver.4-lamp [16], we compared our results with Ver.3-lamp. The proposed method attains better results than our previous work [26] with a lower number of features. Moreover, it performs at the state of the art, as can be observed from the results in Table 5.

Table 5 Comparison with other methods for CASIA Ver.1, Ver.3-lamp and Ver.4-lamp (the results are taken from the published works).

4 Conclusion

In this paper, a new iris recognition method based on the NSCT and the GLCM was proposed. This method has several advantages over other approaches. First, the proposed iris localization algorithm performs well under nonconstrained conditions, such as the rotation, scale, and illumination conditions present in CASIA Ver.4-lamp (see Fig. 7). Second, some of the summarized works use only the upper and/or lower part of the iris image to avoid the regions occluded by the eyelids and eyelashes, which results in a loss of significant data; the proposed method selects four ROIs to make use of the most significant data in the iris texture. Third, the extracted features are invariant to scaling, shift, and rotation, which are among the most important properties in iris recognition. Fourth, to reduce the effect of extreme values in the feature matrix, the extracted feature set is transformed and normalized, which improved the recognition rate. A two-step feature selection process consisting of a filtering and a wrapper phase was proposed. Moreover, we inferred from the proposed feature selector that homogeneity, inverse difference normalized, inverse difference moment normalized, sum of variance, sum of squares variance, sum of average, maximum probability, standard deviation, energy of the fast Fourier transform, and their combinations are the best features for the iris recognition problem. Finally, LOOCV was used to estimate the accuracy of the proposed method. The obtained average accuracies on CASIA Ver.4-lamp and Ver.1 were 99.96% and 99.97%, respectively.