1 Introduction

Recent developments in science and technology emphasize biometric-based security systems for personal verification and identification, owing to their high reliability and accuracy. Biometric technology has a wide range of applications, including unique identity detection, computer network log-on, electronic data security, ATMs, credit card purchases, mobile phones, medical records management, airport and border security, and crime control.

With the increasing demands for safety and security, it is necessary to have secure identification systems. Among the available biometric traits, we focus on the most accurate, reliable, stable and unique characteristic – the iris, which remains unchanged for the entire lifetime of a person. It is also observed that the irises of twins are completely different; similarly, the two irises of the same person are not identical. The iris is a thin, circular region between the pupil and the sclera, containing many features such as freckles, stripes, coronas, arching ligaments and the zigzag collarette area, as shown in the eye image in Fig. 1. The pupil is the central transparent area surrounded by the iris; it appears black in colour, as light rays entering the eye through the pupil are absorbed by the tissues inside the eye. Since the center of the iris lies inside the pupil, the search space for finding the iris center is minimized if the pupil is localized accurately. In general, the behaviour of an iris recognition system depends mainly on iris localization and segmentation.

The most intensive task in iris recognition is isolating the iris from the pupil, sclera, eyelashes and eyebrows; this is called iris segmentation. The inner and outer boundaries of the iris must be determined accurately, as they form the basis for normalization, feature extraction, code generation and matching.

Fig. 1.
figure 1

Eye image for iris recognition

2 Related Works

Most of the algorithms assume the shape of the pupil to be a circle. A denoising mask using improved fractional differentiation [19] preserves edges better than existing methods, though it requires more iterations to ensure stability and convergence. Medical images are enhanced adaptively based on the dynamic gradient feature of the image [3, 10]. Local descriptors named principal patterns of fractional-order differential gradients [14] are designed for face recognition. Although local shape and spatial information are preserved for face images, the computational complexity of the method still needs improvement. Images enhanced by a fractional differential mask using Newton's interpolation [6] retain more textural information than those produced by the traditional fractional differential operator, but the results are based on only a few grayscale images chosen for experimentation.

An edge gradient algorithm combined with an improved Hough transform is used to locate the pupil center [9], based on the assumption that the pupil is circular, which may not hold in all situations. Six fractional differential masks [27] were defined with constant fractional orders, which may need to change if the area features of the image are altered. Automatic pupil segmentation based on the threshold, area and eccentricity of a local histogram is implemented on an FPGA using a non-iterative scheme [22], but this approach fails to locate the pupil region when the eccentricity values are equal. A multiscale approach for edge detection [7] is extended to locate the pupil center, the lengths of the semi-axes and the orientation. The drawback of this method is its assumption that the pupil is elliptic and that the image is blurred by a Gaussian kernel, which may not hold in all situations.

The pupil is localized automatically using histogram thresholding and a mask filter applied to the region with the highest probability [18], where the technique for choosing the structuring-element value and constructing the region could be optimized. Fuzzy linear discriminant analysis with wavelet transforms [4, 25] is used to extract the iris features; the pupil is then recognized on a per-pixel basis by measuring its area and diameter. Another solution localizes the pupil using eccentricity along with gray levels [23]; the iris is then localized by finding the gradient of the gray-level profile extracted from a directional decomposition of the eye image. Though it addresses the problem of eyelids and eyelashes, it fails to detect the pupil region in the presence of specular reflections; it is also computationally expensive because of its iterative nature. A graph-cut method for segmenting the pupil region for iris recognition [15] considers only gray-level information, so its performance may deteriorate when noise with the same gray level as the pupil is present in the image.

Most of the techniques reported in the literature for locating the pupil boundary are circular-edge based or histogram based. Circular-edge based techniques such as the Hough transform are sensitive to noise and have high computational complexity; they also perform poorly on low-contrast images and in the presence of specular reflections. Histogram-based techniques fail when other parts of the eye share the pupil's gray level, or when the eyes are partially open with dense eyelashes. The fractional differential masks obtained using the Gauss and Lagrange interpolation formulas are found to be more suitable for image enhancement than for segmentation.

In this paper, a new mathematical approach for automatic pupil segmentation is proposed, using a fractional differential mask newly derived with Stirling's interpolation. Stirling's formula gives more accurate results than other interpolation formulas while using only a few terms of the function values. The method is non-iterative, and the pupil region is segmented based on a dynamic threshold; the approach works well in the presence of specular reflections and partially opened eyes, when the eyes are occluded by eyelashes and eyelids, and even in the presence of spectacles.

3 Theoretical Background of Fractional Differentials

Recent research in engineering applications is concentrating on fractional calculus. The fractional derivative plays an important role in fields such as solid mechanics, astrophysics, nanoplasmonics, biology, electricity, modeling, viscoelasticity and robotics. Most edge detection operators in image processing, such as Sobel, Prewitt and Roberts, are based on integer-order differentiation; non-integer (fractional) differentiation, which generalizes integer-order differentiation, is now employed in the fields of signal and image analysis. Fractional differentials are good descriptors of natural phenomena [27], as they non-linearly enhance complex texture features while preserving the low-frequency details in smooth areas and the high-frequency marginal details elsewhere. Traditional fractional differential operators are, however, usually unsuccessful in processing pixels that are corrupted by noise or have small correlations.

3.1 Definition

The definitions of the fractional derivative given by Grunwald-Letnikov (G-L), Riemann-Liouville (R-L) and Caputo [21] are the most popular in Euclidean space. The R-L definition serves analytical purposes, while the G-L derivative is used mainly for discrete computation in digital image processing applications. The G-L ν-order differential [20, 21] of a signal F(t) is

$$ D^{\nu} F(t) = \lim_{h \to 0} F^{(\nu)}(t) = \lim_{h \to 0} h^{-\nu} \sum\nolimits_{m=0}^{n-1} (-1)^{m} \frac{\Gamma(\nu + 1)}{m!\,\Gamma(\nu - m + 1)}\, F(t - mh) $$
(1)

where n = (t−a)/h, h is the step size and \({\it \Gamma}(t)\) is the gamma function, with \({\it \Gamma} (t)=(t-1)!\) for positive integers t. For all ν \(\in \) R (R denotes the real numbers and [ν] the integer part of ν), the signal F(t) is defined on the interval [a, t], where \(a<t\), a \(\in \) R, t \(\in \) R, and has continuous derivatives up to order m (m \(\in \) Z, Z denotes the integers). When ν > 0, m is not less than [ν]. Geometrically, the fractional derivative of a signal f(t) is the fractional slope; physically, it is the fractional flow or speed, while the fractional order ν is the fractional equilibrium coefficient. It is also the generalized amplitude modulation and phase modulation of the signal [3, 27].
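As a quick numerical check, the binomial-type coefficients \((-1)^m\,\Gamma(\nu+1)/(m!\,\Gamma(\nu-m+1))\) appearing in Eq. (1) can be generated directly. The sketch below is illustrative only and is not part of the original method:

```python
from math import gamma

def gl_coefficients(nu, n):
    """Coefficients of the Grunwald-Letnikov fractional difference in Eq. (1):
    (-1)^m * Gamma(nu+1) / (m! * Gamma(nu-m+1)), for m = 0 .. n-1."""
    return [(-1) ** m * gamma(nu + 1) / (gamma(m + 1) * gamma(nu - m + 1))
            for m in range(n)]

# The first three coefficients reduce to 1, -nu and nu*(nu - 1)/2,
# matching the leading terms of the expansions in Eqs. (2) and (3).
print(gl_coefficients(0.3, 3))
```
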

A Grunwald-Letnikov based fractional differential mask has been used for enhancing retinal images [12]; the degree of texture enhancement and the effect of noise are controlled through the fractional differential. The G-L fractional differential mask is also used to smooth iris images, which plays a key role in highlighting the essential features of the iris image for segmentation; the pupil is then segmented using wavelet transforms [13]. This method segments the pupil efficiently regardless of its shape and of noise in the image.

3.2 Fractional Differential Filter

For a two-dimensional signal f(x, y), the ν-order differentials along the x and y directions of the signal are

$$ \frac{\partial^{\nu} f(x,y)}{\partial x^{\nu}} \cong f(x,y) + (-\nu)\, f(x-1,y) + \frac{(-\nu)(-\nu+1)}{2}\, f(x-2,y) + \ldots + \frac{\Gamma(n-\nu-1)}{(n-1)!\,\Gamma(-\nu)}\, f(x-n+1,y) $$
(2)
$$ \frac{\partial^{\nu} f(x,y)}{\partial y^{\nu}} \cong f(x,y) + (-\nu)\, f(x,y-1) + \frac{(-\nu)(-\nu+1)}{2}\, f(x,y-2) + \ldots + \frac{\Gamma(n-\nu-1)}{(n-1)!\,\Gamma(-\nu)}\, f(x,y-n+1) $$
(3)

We observe that the sum of the non-zero coefficients \(1\), \(-\nu\), \((-\nu)(-\nu+1)/2\), …, \(\Gamma(n-\nu-1)/\left((n-1)!\,\Gamma(-\nu)\right)\) is not zero, which is the explicit difference between fractional and integer-order differentials. In other words, the low-frequency component of a signal vanishes under integer-order differentiation but not under fractional differentiation. This is the motivation for using fractional differentiation in the field of image processing.
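The non-vanishing coefficient sum can be verified numerically. The following sketch (illustrative, not from the paper) contrasts the integer case ν = 1, where the first-difference coefficients 1 and −1 cancel, with a fractional order, where the truncated sum stays away from zero:

```python
from math import gamma

def coeff(nu, m):
    # m-th coefficient of the fractional difference in Eqs. (2) and (3)
    return (-1) ** m * gamma(nu + 1) / (gamma(m + 1) * gamma(nu - m + 1))

# Integer order nu = 1: coefficients 1, -1 sum to zero (low frequencies killed)
s_int = sum(coeff(1.0, m) for m in range(2))

# Fractional order nu = 0.5: the partial sum of coefficients remains nonzero,
# so the low-frequency component of the signal is preserved
s_frac = sum(coeff(0.5, m) for m in range(8))
print(s_int, s_frac)
```
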

Any k × k mask whose entries are fractional-order differential coefficients is known as a fractional differential mask (filter). Figure 2 shows the 3 × 3 fractional differential masks along the x and y directions obtained from Eqs. (2) and (3).

Fig. 2.
figure 2

Fractional differential mask in (a) x direction and (b) y direction

4 Formulation of New Segmentation Mask

The performance of the existing fractional differential operators needs to be improved for image processing applications, so we use Stirling's interpolation to shape a fractional differential operator for image segmentation. Consider any point between t − mh − h and t − mh + h: for ν \(\in\) [−1, 1], let ξ = t − mh + (ν/2)h. Then the signal F(t) in (1), using Stirling's interpolation, becomes

$$ \begin{aligned} F(\xi) = F(x_0) & + \left[\frac{\xi - x_0}{h}\right]\left[\frac{F(x_1) - F(x_{-1})}{2}\right] + \frac{1}{2!}\left[\frac{\xi - x_0}{h}\right]^2 \left[F(x_1) - 2F(x_0) + F(x_{-1})\right] \\ & + \frac{1}{3!}\left[\frac{\xi - x_0}{h}\right]\left[\left(\frac{\xi - x_0}{h}\right)^2 - 1\right]\left[\frac{F(x_2) - 2F(x_1) + 2F(x_{-1}) - F(x_{-2})}{2}\right] + \ldots \end{aligned} $$
(4)

where x0 = t − mh, x1 = t − mh − h, x−1 = t − mh + h, x2 = t − mh − 2h and x−2 = t − mh + 2h. Substituting these values and ξ into Eq. (4) and simplifying, we get

$$ \begin{aligned} F(\xi) = \left(1 - \frac{\nu^2}{4}\right) F(t - mh) & + \left(\frac{16\nu + 6\nu^2 - \nu^3}{48}\right) F(t - mh - h) + \left(\frac{\nu^3 - 4\nu}{96}\right) F(t - mh - 2h) \\ & + \left(\frac{6\nu^2 - 16\nu + \nu^3}{48}\right) F(t - mh + h) - \left(\frac{\nu^3 - 4\nu}{96}\right) F(t - mh + 2h) + \ldots \end{aligned} $$
(5)

Equation (5) gives the signal value F(ξ) at any new point. Note that the new signal F(ξ) is a linear combination of F(x−2), F(x−1), F(x0), F(x1) and F(x2), so it carries information from its neighborhood points. Since the shortest gray-level changing distance is one pixel (i.e., h = 1), replacing F(t) in (1) by F(ξ) from (5) and expanding gives the following new approximation:
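Since Eq. (5) is an interpolation, its five leading weights should sum to 1 for any ν: the two \(\pm(\nu^3-4\nu)/96\) terms cancel, and \(\nu^2/4\) cancels against the quadratic parts of the remaining weights. A small sketch, purely for verification and not part of the original method:

```python
def stirling_weights(nu):
    """Weights of the five neighbours in Eq. (5)."""
    return [
        1 - nu ** 2 / 4,                         # F(t - mh)
        (16 * nu + 6 * nu ** 2 - nu ** 3) / 48,  # F(t - mh - h)
        (nu ** 3 - 4 * nu) / 96,                 # F(t - mh - 2h)
        (6 * nu ** 2 - 16 * nu + nu ** 3) / 48,  # F(t - mh + h)
        -(nu ** 3 - 4 * nu) / 96,                # F(t - mh + 2h)
    ]

# Interpolation weights form a partition of unity for any order nu
for nu in (0.3, -0.7, 1.0):
    assert abs(sum(stirling_weights(nu)) - 1.0) < 1e-12
```
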

$$ \begin{aligned} \frac{d^{\nu} F(t)}{dt^{\nu}} = {} & \left(-\frac{\nu}{3} + \frac{\nu^2}{12} + \frac{\nu^3}{48} + \frac{\nu^4}{96}\right) F(t+h) + \left(1 + \frac{17\nu^2}{48} - \frac{7\nu^3}{48} - \frac{5\nu^4}{192} + \frac{\nu^5}{192}\right) F(t) \\ & + \left(-\frac{2\nu}{3} + \frac{14\nu^2}{48} + \frac{5\nu^4}{96} + \frac{\nu^5}{96}\right) F(t-h) + \left(\frac{\nu}{24} - \frac{\nu^3}{96}\right) F(t+2h) + \ldots \\ & + \frac{\Gamma(m-\nu)}{\Gamma(-\nu)\,\Gamma(m+1)} \left[ \begin{aligned} & \left(1 - \frac{\nu^2}{4}\right) F(t-mh) + \left(\frac{16\nu + 6\nu^2 - \nu^3}{48}\right) F(t-mh-h) \\ & + \left(\frac{\nu^3 - 4\nu}{96}\right) F(t-mh-2h) + \left(\frac{6\nu^2 - 16\nu + \nu^3}{48}\right) F(t-mh+h) \\ & - \left(\frac{\nu^3 - 4\nu}{96}\right) F(t-mh+2h) \end{aligned} \right] + \ldots \end{aligned} $$
(6)

Equation (6) is the new fractional differentiation of F(t) based on Stirling's interpolation; it gives an approximate value, since the expansion is truncated. Let the coefficients of F(t + h), F(t), F(t − h), …, F(t − nh) in Eq. (6) be denoted as

$$ \begin{aligned} a_{-2} &= \frac{\nu}{24} - \frac{\nu^3}{96} \\ a_{-1} &= -\frac{\nu}{3} + \frac{\nu^2}{12} + \frac{\nu^3}{48} + \frac{\nu^4}{96} \\ a_0 &= 1 + \frac{17\nu^2}{48} - \frac{7\nu^3}{48} - \frac{5\nu^4}{192} + \frac{\nu^5}{192} \\ a_1 &= -\frac{2\nu}{3} + \frac{14\nu^2}{48} + \frac{5\nu^4}{96} + \frac{\nu^5}{96} \\ &\;\;\vdots \\ a_n &= \left(\frac{\nu}{24} - \frac{\nu^3}{96}\right) \frac{\Gamma(n-\nu-3)}{\Gamma(-\nu)\,\Gamma(n-2)} + \left(-\frac{\nu}{3} + \frac{\nu^2}{12} + \frac{\nu^3}{48} + \frac{\nu^4}{96}\right) \frac{\Gamma(n-\nu-2)}{\Gamma(-\nu)\,\Gamma(n-1)} \\ &\quad + \left(1 + \frac{17\nu^2}{48} - \frac{7\nu^3}{48} - \frac{5\nu^4}{192} + \frac{\nu^5}{192}\right) \frac{\Gamma(n-\nu-1)}{\Gamma(-\nu)\,\Gamma(n)} \\ &\quad + \left(-\frac{2\nu}{3} + \frac{14\nu^2}{48} + \frac{5\nu^4}{96} + \frac{\nu^5}{96}\right) \frac{\Gamma(n-\nu)}{\Gamma(-\nu)\,\Gamma(n+1)} \end{aligned} $$

Using the above values, Eq. (6) becomes,

$$ \frac{d^{\nu} F(t)}{dt^{\nu}} \cong a_{-2} F(t + 2h) + a_{-1} F(t + h) + a_0 F(t) + a_1 F(t - h) + a_2 F(t - 2h) + \ldots + a_n F(t - nh) $$
(7)
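The leading coefficients a₋₂ … a₁ of Eq. (7) are fixed polynomials in ν and can be tabulated once per chosen order. The sketch below (illustrative only) evaluates them and checks that they collapse to the identity filter at ν = 0, where the ν-order derivative should return the signal unchanged:

```python
def stirling_frac_coeffs(nu):
    """Leading coefficients a_-2, a_-1, a_0, a_1 of Eq. (7),
    using the polynomial forms stated in the text."""
    a_m2 = nu / 24 - nu ** 3 / 96
    a_m1 = -nu / 3 + nu ** 2 / 12 + nu ** 3 / 48 + nu ** 4 / 96
    a_0 = 1 + 17 * nu ** 2 / 48 - 7 * nu ** 3 / 48 - 5 * nu ** 4 / 192 + nu ** 5 / 192
    a_1 = -2 * nu / 3 + 14 * nu ** 2 / 48 + 5 * nu ** 4 / 96 + nu ** 5 / 96
    return a_m2, a_m1, a_0, a_1

# At nu = 0 the operator reduces to the identity: only a_0 survives
assert stirling_frac_coeffs(0.0) == (0.0, 0.0, 1.0, 0.0)
print(stirling_frac_coeffs(0.3))
```
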

To ensure that the fractional differential masks are rotation invariant (isotropic), the proposed improved 3 × 3 differential masks are constructed in eight symmetric directions (0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°), namely (i) negative y-direction, (ii) positive y-direction, (iii) positive x-direction, (iv) negative x-direction, (v) upper-left diagonal, (vi) lower-left diagonal, (vii) upper-right diagonal and (viii) lower-right diagonal, as shown in Fig. 3 below:

Fig. 3.
figure 3

Proposed mask for segmentation in eight directions

5 Proposed Method

The pupil is the large dark region of the eye, differing from the other parts not only in physical characteristics but also in gray level. This gray-level difference motivated us to segment the pupil from the other parts of the eye. In this paper, we use the newly designed fractional differential mask to locate the pupil: the true pupil region is isolated from the iris image by correlating the Stirling's interpolation based fractional differential mask with the image (correlation in linear filtering). The threshold used to detect the pupil region is estimated dynamically from the gradient magnitude. These two factors play the key role in accurate pupil segmentation. The algorithm of the new Stirling's interpolation based fractional differential approach for pupil segmentation is described below:

New Pupil Segmentation Algorithm

  • Step 1: Input a gray scale image f. If it is a colour image, convert it to gray scale.

  • Step 2: Input the fractional differential order ν for the 3 × 3 differential mask, w.

  • Step 3: Calculate the values of coefficients a0, a1 and a-1 and define the mask, w in x and y directions.

  • Step 4: Calculate the gradients in x and y direction based on the defined mask using correlation in linear filtering.

  • Step 5: Calculate the magnitude of the gradient Gm using the formula,

  • \({\text{G}}_{{\text{m}}} = \sqrt {g_{x}^{2} + g_{y}^{2} }\)

  • Step 6: Compute the minimum value for each column of Gm.

  • Step 7: Compute the threshold T which is the average of all values computed in Step 6. //dynamic threshold

  • Step 8: Check whether the gradient magnitude is greater than the given threshold, T.

    figure a
  • Step 9: Display the segmented pupil region.

  • Step 10: Display the pupil boundary in the original image using the segmented region.
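The steps above can be sketched in Python/NumPy as follows. This is an illustrative reconstruction, not the authors' MATLAB implementation: the exact 3 × 3 mask layout of Fig. 4 and the post-threshold logic hidden in the Step 8 inset are not reproduced, so the row/column arrangement of a₋₁, a₀, a₁ and the returned binary map are assumptions:

```python
import numpy as np

def _correlate3(img, w):
    """Correlate a 2-D image with a 3x3 mask, edge-padded (Step 4)."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, wid = img.shape
    for i in range(3):
        for j in range(3):
            out += w[i, j] * p[i:i + h, j:j + wid]
    return out

def pupil_gradient_map(img, nu=0.3):
    """Steps 2-8 of the proposed algorithm (sketch)."""
    # Step 3: coefficients a_-1, a_0, a_1 (polynomials from Eq. 7)
    a_m1 = -nu / 3 + nu ** 2 / 12 + nu ** 3 / 48 + nu ** 4 / 96
    a_0 = 1 + 17 * nu ** 2 / 48 - 7 * nu ** 3 / 48 - 5 * nu ** 4 / 192 + nu ** 5 / 192
    a_1 = -2 * nu / 3 + 14 * nu ** 2 / 48 + 5 * nu ** 4 / 96 + nu ** 5 / 96

    # Assumed mask layout: coefficient rows for x, transposed for y
    wx = np.array([[a_m1] * 3, [a_0] * 3, [a_1] * 3])
    wy = wx.T

    img = np.asarray(img, dtype=float)
    gx = _correlate3(img, wx)          # Step 4: gradients via correlation
    gy = _correlate3(img, wy)
    gm = np.hypot(gx, gy)              # Step 5: gradient magnitude

    # Steps 6-7: dynamic threshold = mean of per-column minima of Gm
    T = gm.min(axis=0).mean()
    return gm > T                      # Step 8: binary map against T
```

The returned binary map is then post-processed (the inset of Step 8, not shown in the text) to retain the pupil region and trace its boundary on the original image (Steps 9-10).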

Here, a fractional differential order of 0.3 is chosen experimentally for pupil segmentation; the corresponding 3 × 3 masks are given below:

Fig. 4.
figure 4

Fractional differential mask for ν = 0.3 in positive (a) x direction and (b) y direction

6 Results and Discussion

We have evaluated the validity of the proposed method on two public databases: CASIA Version 1.0 [5] and MMU version 2 [16]. The proposed algorithm is implemented in MATLAB 7.50 on a computer with an E5 2670v2 processor, 64 GB RAM and an 8 TB hard disk. The output images of the proposed method are obtained using the masks shown in Fig. 4. It is evident from Fig. 5 that the proposed method outperforms well-known segmentation algorithms [7] such as Canny, Sobel and Laplacian of Gaussian in segmenting the pupil region. These existing methods identify the pupil region along with considerable noise, which makes them unsuitable for pupil segmentation.

The performance of the proposed method is measured using the accuracy rate (ACrate) [17], which is based on the accuracy error (Aerr), defined as

$$ {\text{A}}_{\text{err}} = \frac{\left| N_{pact} - N_{pdet} \right|}{N_{pact}} \times 100 $$
(8)

where Npact and Npdet are the number of actual and detected pupil pixels, respectively. The actual and detected pupil pixels are obtained using functions in the image processing tool ImageJ [11]. If Aerr is less than 10%, then the detected pupil is marked as the true pupil. ACrate is defined as follows:

$${\text{AC}}_{{{\text{rate}}}} = \frac{{\text{N}_{\text{success}} }}{{\text{N}_{\text{total}} }} \times 100 $$
(9)

where Nsuccess is the total number of eye images in which the pupil has been localized successfully and Ntotal is the total number of images in the database. The detailed descriptions of the experimental results are as follows:
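Both measures are straightforward to compute. An illustrative helper (not from the paper's code) applying Eqs. (8) and (9):

```python
def accuracy_error(n_actual, n_detected):
    """Eq. (8): relative pupil-pixel error, in percent."""
    return abs(n_actual - n_detected) / n_actual * 100

def accuracy_rate(n_success, n_total):
    """Eq. (9): fraction of images whose pupil was localized
    successfully (A_err < 10%), in percent."""
    return n_success / n_total * 100

# E.g. detecting 950 pixels against 1000 ground-truth pixels gives a
# 5% error, so the detection counts as a true pupil (under 10%).
print(accuracy_error(1000, 950))
```
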

6.1 Experimental Setup 1

In the first setup, the proposed algorithm is tested on the CASIA Version 1.0 iris database, which contains 756 eye images of 108 persons (7 images per person) at a resolution of 320 × 280 pixels. Tables 1 and 2 show the accuracy rates and accuracy error values of pupil segmentation by the proposed method for sample images from the CASIA V1 database.

Table 1. Pupil segmentation comparison with existing methods on CASIA Version 1.0 (the results given by respective authors)
Table 2. Accuracy error values of the proposed method for sample images from CASIA Version 1.0 database

The numbers quoted under the images in Tables 2 and 3 are the names of the images given in the databases.

Each image in the MMU database contains a white spot in the pupil region due to specular reflection, so it is necessary to reduce the effect of this white spot by means of filters. A few images in this database are also occluded by eyelashes or have dense eyebrows. To remove these noises, most existing methods use separate filters or pre-process the images before applying their technique. The greatest advantage of the proposed method is that the newly defined fractional differential mask itself acts as a filter, removing these noises even in the presence of spectacles, and segments the pupil region more accurately. The computational time of the proposed method is 1.037 s on average. A comparative analysis of the proposed method with existing methods is shown in Tables 1 and 4 for the two databases.

6.2 Experimental Setup 2

In the next setup, the proposed algorithm is tested on the MMU 2.0 iris database, which contains 995 eye images contributed by 100 persons at a resolution of 320 × 240 pixels. Tables 3 and 4 show the accuracy error values and accuracy rates of pupil segmentation by the proposed method for sample images from the MMU 2 database.

Table 3. Accuracy error values of the proposed method for sample images from MMU 2.0 database
Table 4. Pupil segmentation comparison with existing methods on MMU 2.0 (the results given by respective authors)
Fig. 5.
figure 5

Segmentation results of the existing and proposed method from CASIAV1.0 and MMU2 database

Table 5. Statistical measures to evaluate the proposed method

The results in Tables 1 and 4 show that the proposed algorithm achieves higher accuracy than the existing techniques on both databases. The statistical measures computed from the accuracy error are listed in Table 5, and the accuracy error values in Tables 2 and 3 are calculated using Eq. (8). These values indicate that the accuracy error of all images in both databases falls below 10%. Based on the average accuracy error shown in Table 5, an accuracy rate of 99.98% is achieved for the CASIA V1 images and 99.99% for the MMU2 images.

7 Conclusion

In this paper, a new method for segmenting the pupil region from iris images is proposed, based on applying Stirling's interpolation to the fractional derivative. The pupil region is segmented from the eye image using a dynamic threshold, which overcomes the drawbacks of histogram-based thresholding. The performance of the proposed method does not deteriorate even in the presence of dense eyelashes, eyelids and spectacles. The research also reveals that the proposed mask acts as a filter or preprocessor to remove noise. Experimental results prove that the proposed method is highly accurate compared to the reported ones. Hence, the proposed method can be chosen for iris recognition based security applications and also in the field of ophthalmology.

Future work will involve extending this method for a complete iris recognition system.