1 Introduction

Glaucoma is a disease of the eye causing optic nerve damage. The severity of the disease can be understood by the fact that it is the second leading cause of blindness in the world (Narasimhan and Vijayarekha 2011; Bulletin of the World Health Organization 2004). There is a gradual loss of vision as the disease progresses, which occurs over a long interval of time. Glaucoma is called the “silent thief of sight”, as the patient remains largely unaware of the disease until it has reached an advanced stage (National Eye Institute 2014). The disease is incurable; however, with proper and timely treatment its progression can be slowed. Therefore, early detection plays an indispensable part in the diagnosis of glaucoma.

Figure 1 illustrates a retinal image showing the optic disk (OD), optic cup (OC) and neuroretinal rim (NRR). The OD, also known as the optic nerve head (ONH), is the point where the optic nerve fibers leave the eye and where the retinal blood vessels enter and exit. The central retinal artery and vein pass through the OD. A normal OD is orange-pink in color but appears pale in the presence of pathologies. The OC is a cup-like structure that lies in the center of the OD. It is the central excavation of the optic nerve head and is devoid of nerve fibers (Bhartiya et al. 2010). The neuroretinal rim forms the outer boundary of the optic cup and lies between the edge of the OD and the OC. The NRR is the region of the OD that contains the retinal nerve fiber axons. The shape of the NRR changes in case of a pathological OD.

Fig. 1 Retinal image showing the OD, OC, NRR and the ISNT quadrants

The intraocular fluid present in the eye exerts pressure on the eye, known as the intraocular pressure (IOP). In a normal eye the drainage system works properly, so the intraocular fluid flows out easily and a pressure build-up is prevented. In glaucoma, however, the drainage system becomes blocked and the intraocular fluid cannot drain easily. This causes a rise in the IOP, resulting in damage to the optic nerve fibers. As the intensity of the damage increases, the OD begins to hollow and develops a cupped shape (Jackson 2014). The OC expands slowly with the progression of the disease, causing a gradual loss of vision. Focal enlargement of the OC and thinning of the NRR can be considered the two basic characteristics of a glaucomatous image (Bourne 2006). The increase in the cup area serves as an indicator for glaucoma and is measured by the cup-to-disk ratio (CDR). The CDR value is small for a normal OD, while it is large for a glaucomatous disk. The neuroretinal rim can be divided into four quadrants: inferior (I), superior (S), nasal (N) and temporal (T), as shown in Fig. 1. In a healthy eye, the inferior (I) rim is usually thicker than the superior (S) rim, which is thicker than the nasal (N) rim, and the temporal (T) rim is the thinnest. This is known as the ISNT rule (Harizman et al. 2006; Bourne 2006). The CDR and the ISNT rule are the two most widely used parameters for the assessment of glaucoma. Precise segmentation of the OD and OC therefore plays a vital role in evaluating the CDR and the NRR area accurately.

2 Literature survey

Numerous methods have been reported in the literature for the screening of glaucoma, and many different methods are available for OD detection (Aquino et al. 2010; Morales et al. 2012, 2013). Morphological and edge detection techniques followed by the circular Hough transform are proposed in Aquino et al. (2010) to obtain a circular approximation of the OD boundary. The method requires a pixel located within the OD to extract an OD-containing sub-image. The OD boundary was extracted in both the red and green planes of the RGB sub-image, and the better of the two segmentation results was finally selected. The OD boundary was then approximated by a circle using the circular Hough transform. The method was able to find the OD correctly in 1186 out of the 1200 images of the Messidor database, yielding a success rate of about 99 %. The method also highlights the poorer segmentation performance of the circular approximation as compared to the elliptical approximation. An OD segmentation method based on region growing and morphological operations is proposed in Priyadharshini and Anitha (2014). Different operations such as grayscale conversion and histogram equalization are used in the pre-processing stage to enhance the input image. The blood vessels are removed using morphological operations and median filtering is then used to smooth the image. A region growing algorithm is applied to obtain the segmented OD. The method is simple and involves low computational complexity, but its performance in detecting the OD over a diverse range of images is questionable. The method proposed in Morales et al. (2012) utilizes principal component analysis (PCA) and the stochastic watershed transformation for OD segmentation. PCA is used in the pre-processing stage, prior to segmentation, to obtain an intensity image that illustrates the different features of the retina more vividly. The stochastic watershed transformation is then applied on this image to obtain the OD. The stochastic watershed differs from the classical watershed in that a number of marker-controlled watershed realizations are performed to estimate a probability density function (pdf) of the image, which is used to filter out the non-significant features. The method was evaluated on the 110 images of the DRIONS database and achieved an accuracy of 0.9901. It shows an improvement for OD segmentation over other classical watershed segmentation methods, although it involves more complexity than the classical approaches.

There are very few methods in the literature on OC segmentation as compared to OD segmentation. Wyawahare and Patil (2014) used thresholding and the Canny edge detection method to segment the OC. Three thresholds for the R, G and B plane images were obtained by finding the maximum intensity values in the individual planes. The approximate cup region was found based on these thresholds. The Canny edge detector was used to obtain the OC boundary, which was later approximated by a circle. The method achieved an accuracy of 96.5 % when evaluated on 370 images from the MESSIDOR database, showing an improvement over other existing methods. An OC segmentation method based on the radial gradient is proposed by Ingle and Mishra (2013). The images were enhanced by applying CLAHE to the individual R, G and B plane images. An initial threshold obtained from the R, G and B planes was used to produce an enhanced RGB color image. The G channel of the resulting image was extracted and the radial gradient was applied on it. The approximate cup region was found from the gradient magnitude image and component labeling was then used to obtain the OC boundary. The main drawback of the method is its higher computational complexity compared to linear-gradient-based methods.

The CDR is the most commonly used feature for the assessment of glaucoma. Mishra et al. (2011) used the CDR for the analysis of glaucoma. Morphological operations and image inpainting were used for illumination correction and blood vessel removal, and an active contour model was used for the detection of the OC and OD. The CDR was calculated as the ratio of the OC area to the OD area. The method was evaluated on the Messidor and other OD databases and provided promising results for the detection of glaucoma. A robust method for glaucoma detection based on the CDR is proposed in Rajaiah and Britto (2014). The method makes use of operations such as linear discriminant analysis (LDA), image inpainting and morphological operations for OD segmentation. The OC was segmented using the watershed transformation. The CDR was calculated from the segmented OD and OC as the ratio of the vertical cup height to the vertical disk height. The method was tested on a database of 50 images and obtained a success rate of 90 %. The literature describing the use of features besides the CDR, such as the ISNT rule, PPA, the ISNT ratio and the displacement of the vascular bundle, for the screening of glaucoma includes Narasimhan and Vijayarekha (2011), Kavitha et al. (2010), Ahmad et al. (2014) and de la Fuente-Arriaga et al. (2014). The method proposed in Narasimhan and Vijayarekha (2011) uses the CDR and the ISNT ratio for glaucoma evaluation. A K-means clustering technique was used to extract the OD and OC, which were finally approximated by ellipses. The CDR was evaluated as the ratio of the OC area to the OD area, and the ISNT ratio was calculated by measuring the area of the blood vessels in the ISNT quadrants. The performance of the method was analyzed with three different classifiers, KNN, SVM and a Bayes classifier, and a maximum classification rate of 95 % for glaucoma detection was achieved with the SVM classifier. Kavitha et al. (2010) used the CDR and the ISNT rule for glaucoma evaluation. The OD was extracted using a manual threshold method, component analysis and region-of-interest (ROI) based segmentation, and component labeling was used to obtain the OD boundary. The OC was detected from the green plane using the component analysis method and its boundary was obtained through an active contour. The method uses different features such as the CDR, the asymmetry between the left and right eye, the NRR area and the ISNT rule to diagnose glaucoma. It was tested on image datasets from a local hospital and shows promising results for glaucoma detection; the component analysis method performs well in OC localization even for low-contrast images. In Ahmad et al. (2014), the CDR and the ratio of the NRR area in the ISNT quadrants are used for glaucoma evaluation. The OD and OC were segmented using a mean-thresholding morphological method, and their edges were found with a Canny filter. The CDR was computed as the ratio of the OC area to the OD area. The NRR area was found by performing an AND-operation on the resulting OD and OC images, and the NRR area in the ISNT quadrants was evaluated by applying the extracted NRR image to the corresponding masks. The method was evaluated on three publicly available databases, DMED, FAU and Messidor, and achieved an accuracy of 97.5 %.

The proposed method involves low computational complexity as it does not include any classifier or clustering techniques. It also provides good classification accuracy, as the classification is based on two features, the CDR and the ISNT rule. The method has been implemented on a diverse range of images, including four publicly available databases and a local hospital database.

3 Theoretical background

3.1 Mathematical morphology

Mathematical morphology (MM) is a tool for image analysis based on the mathematical theory of sets, where the sets represent objects in an image. In MM, a binary image is treated as a set consisting of its foreground pixels. Morphological operations involve the interaction of an image with a sub-image that is relatively small compared to the image; this sub-image is termed the structuring element (SE) (Gonzalez and Woods 2006).

3.1.1 Erosion

The erosion of a binary image I by the SE S, denoted by \(I \ominus S\), is defined as the set of all points z such that S, translated by z, is contained in I.

$$\begin{aligned} I \ominus S = \{ z|(S)_z \subseteq I\} \end{aligned}$$

where \((S)_z\) is the translation of S by z, given as,

$$\begin{aligned} (S)_z = \{ x|x = c + z, \ {\text {for }} \ c \in S\} \end{aligned}$$

Erosion has the effect of shrinking objects by etching away their boundaries.

3.1.2 Dilation

The dilation of a binary image I by the SE S, denoted by \(I \oplus S\), is defined as the set of all displacements z such that \(\hat{S}\), translated by z, and I overlap by at least one element.

$$\begin{aligned} I \oplus S = \{ z|(\hat{S})_z \cap I \ne \phi \} \end{aligned}$$

where \(\hat{S}\) is the reflection of S about its origin and \((\hat{S})_{z}\) is the translation of \(\hat{S}\) by z. Dilation can be used to fill in holes and connect disjoint objects.

3.1.3 Opening

The opening of a binary image I by the SE S, denoted by \(I \circ S\), is defined as the erosion of I by S, followed by dilation of the result by S.

$$\begin{aligned} I \circ S = (I \ominus S) \oplus S \end{aligned}$$

Opening can be used to smooth the contour of an object and eliminate objects that are too small to contain the SE.

3.1.4 Closing

The closing of a binary image I by the SE S, denoted by \(I \bullet S\), is defined as the dilation of I by S, followed by erosion of the result by S.

$$\begin{aligned} I \bullet S = (I \oplus S) \ominus S \end{aligned}$$

Closing can be used to fill in holes and small gaps.
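To make the four operations above concrete, the following minimal sketch (not from the paper) applies them to small binary toy images with scikit-image; the \(3 \times 3\) structuring element and the toy shapes are illustrative choices only.

```python
import numpy as np
from skimage.morphology import (binary_erosion, binary_dilation,
                                binary_opening, binary_closing)

S = np.ones((3, 3), dtype=bool)          # 3x3 structuring element

# A 5x5 square with an isolated noise pixel beside it.
A = np.zeros((11, 11), dtype=bool)
A[3:8, 3:8] = True
A[1, 1] = True

print(binary_erosion(A, S).sum())        # 9: square shrinks to 3x3, noise pixel removed
print(binary_dilation(A, S).sum())       # square grows, noise pixel grows as well
print(binary_opening(A, S).sum())        # 25: square restored, noise pixel eliminated

# A 5x5 square with a one-pixel hole: closing fills the hole.
B = np.zeros((11, 11), dtype=bool)
B[3:8, 3:8] = True
B[5, 5] = False
print(binary_closing(B, S).sum())        # 25: the hole is filled
```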

3.1.5 Morphological reconstruction

Morphological reconstruction is a morphological tool that is often used to constrain the growth or diminution of objects to some predefined boundaries. Contrary to other morphological operations that require an image and an SE, morphological reconstruction requires two images and an SE. The two images are termed the marker image and the mask image. The transformation starts from the marker image and is constrained by the mask image, while the SE specifies the connectivity (Gonzalez and Woods 2006).
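As a hedged illustration of this idea, the sketch below uses scikit-image's grayscale reconstruction by dilation on a toy array; the marker, mask and connectivity are illustrative assumptions, not the paper's data.

```python
import numpy as np
from skimage.morphology import reconstruction

# Mask: a bright blob (value 5), a low-valued background ring (value 1)
# and a separate bright peak (value 4) that is not marked.
mask = np.array([[0, 0, 0, 0, 0],
                 [0, 5, 5, 1, 0],
                 [0, 5, 5, 1, 0],
                 [0, 1, 1, 4, 0],
                 [0, 0, 0, 0, 0]], dtype=float)

# Marker: a single seed inside the bright top-left blob, zero elsewhere.
marker = np.zeros_like(mask)
marker[1, 1] = 5

# Reconstruction by dilation grows the marker under the mask until stability:
# the marked blob is recovered in full, while the unmarked peak at (3, 3)
# is flattened down to the value of its surroundings.
rec = reconstruction(marker, mask, method='dilation')
print(rec)
```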

3.2 Region growing

Region growing is a region-based segmentation method that assembles pixels or sub-regions into larger regions based on predefined criteria for growth. The first step in region growing is to select a set of seed points. A seed is a pixel whose characteristics are representative of the object to be segmented, and the seed set may have one or more members. The region growing process may start from a single seed point or a set of seed points. The regions are iteratively grown by appending all unallocated neighboring pixels that satisfy a predefined criterion. One such criterion is the use of an intensity threshold: a pixel that satisfies the threshold criterion is allocated to the region. This iterative process continues until all pixels that satisfy the predefined criterion have been allocated to a region (Adams and Bischof 1994; Jebasudha and Kaleeswari 2012).
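The following minimal sketch (not the authors' implementation) shows a single-seed region growing routine with 4-connectivity and an intensity-difference criterion, as described above; the toy image and threshold are illustrative.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, threshold):
    """Grow a boolean region from a single seed on a 2-D grayscale image."""
    h, w = image.shape
    seed_val = float(image[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        # Visit the 4-connected neighbours of the current pixel.
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not region[nr, nc]
                    and abs(float(image[nr, nc]) - seed_val) <= threshold):
                region[nr, nc] = True       # pixel satisfies the criterion
                queue.append((nr, nc))
    return region

# Toy usage: a bright 3x3 patch grown from its centre pixel.
img = np.zeros((7, 7)); img[2:5, 2:5] = 200
print(region_grow(img, (3, 3), threshold=25).sum())   # -> 9
```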

3.3 Watershed transformation

The watershed transform is a region-based segmentation algorithm introduced by Vincent and Soille (1991) that often produces stable segmentation results. It is based on the concept of immersion (Kaur and Aayushi 2014). Here, an image is visualized as a topographic surface where the intensity values are treated as heights. Three types of points are considered in such a topographic interpretation: (a) points that belong to a regional minimum; (b) points at which a water drop, if placed there, would definitely fall to a single minimum; and (c) points at which water would be equally likely to fall to more than one minimum. For a specific regional minimum, the set of points fulfilling condition (b) is called the watershed (or catchment basin) of that minimum, while the points fulfilling condition (c) are termed watershed lines. The main objective of the segmentation algorithm is to find the watershed lines.

Watershed segmentation is often applied to the gradient of an image rather than to the image itself, as the small values of the gradient correspond nicely to the interiors of the objects of interest while the large values mark their boundaries. When applied to the gradient image, the watershed transform often leads to over-segmentation due to the presence of a large number of regional minima in the gradient. Marker-controlled watershed segmentation is an approach to control over-segmentation through the use of markers. Through a procedure called minima imposition, the markers (internal and external) are used to modify the gradient image so that a regional minimum occurs only at the marked locations (Gonzalez et al. 2004).
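A hedged sketch of marker-controlled watershed segmentation with scikit-image is given below; the toy image and marker positions are illustrative assumptions, and passing labelled markers to the watershed function stands in for the explicit minima-imposition step.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

# Toy image: a dark disk on a bright background.
rr, cc = np.mgrid[0:100, 0:100]
image = np.where((rr - 50) ** 2 + (cc - 50) ** 2 < 30 ** 2, 40, 200).astype(float)

gradient = sobel(image)                 # object boundary gets high gradient values

# Markers: label 1 inside the object, label 2 in the background.
markers = np.zeros_like(image, dtype=np.int32)
markers[50, 50] = 1                     # internal marker
markers[5, 5] = 2                       # external marker

labels = watershed(gradient, markers)   # ridge lines follow the high gradient
print(np.unique(labels))                # -> [1 2], one region per marker
```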

4 Proposed method

The proposed method investigates the input retinal images for the presence of glaucoma. These images are obtained from various sources including HRF, MESSIDOR, DRIONS-DB, DIARETDB1 and from a local eye hospital. Inclusion of data from various sources increases the diversity of the dataset and hence tests the system to its full ability. The proposed method is applied on 163 normal images and 81 glaucomatous images. The various stages involved in this process are shown in Fig. 2.

Fig. 2 Block diagram of the proposed method

The input images from the various databases are cropped manually to obtain the ROI, which consists of the OD region. The OD and OC are segmented using two methods, the region growing method and the watershed transform. The use of two methods ensures a high degree of reliability in the segmentation process. The results obtained from the two methods are combined to obtain a single segmented OD and OC, which are later approximated using a circular approximation. Combining the two methods yields better segmentation results than either method alone. OD and OC segmentation is followed by CDR evaluation and evaluation of the NRR area in the ISNT quadrants. The two features, the CDR and the ISNT rule, are used to classify an image as normal or glaucomatous. The results shown in the following sections are obtained using the same image (from the HRF database) for all the cases.

4.1 Optic disk segmentation

A simple and efficient method for OD segmentation is presented in this paper. The red channel of the RGB color space is used for segmentation, as the OD appears bright and homogeneous in this channel, as can be seen in Fig. 3b. Retinal images generally suffer from uneven illumination, which gives rise to a shading effect. This shading effect occurs due to the presence of low-frequency noise content, which can be reduced by using a median filter (Dougherty 2011). Figure 3c shows the median-filtered image.

Fig. 3 a Original cropped fundus image, b red plane image, c median-filtered image, d opening, e segmented OD obtained by region growing method

The OD region is obstructed by blood vessels, which makes segmentation difficult. A grayscale morphological opening operation is used to remove the blood vessels. The structuring element (SE) is chosen such that only the blood vessels are removed and the object of interest is not altered; objects that are too small to contain the SE are removed by the opening operation. The amount of blood vessels removed depends on the size of the SE. If the SE is too small, then only the thin blood vessels are removed, which results in a segmented OD area smaller than the actual OD area. On the other hand, if the SE is too large, then the bigger vessels are removed along with some area around the OD, which results in a segmented OD area larger than the actual OD area. Different structuring elements have been tested experimentally in the morphological opening operation to obtain good segmentation results. It is found that, for the datasets used in the present work, a disk-shaped structuring element of radius 34 produces better segmentation results in most of the images. Figure 3d shows the processed image after elimination of the blood vessels. After the removal of the blood vessels, the OD is segmented simultaneously using the region growing method and the watershed transformation, and the final result is obtained by combining the outputs of the two methods.
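A minimal sketch of this pre-processing chain is given below, assuming rgb is a cropped RGB fundus sub-image held as a NumPy array; the \(5 \times 5\) median window is an illustrative choice, while the disk-shaped SE of radius 34 follows the value reported above.

```python
import numpy as np
from skimage.filters import median
from skimage.morphology import opening, disk

def preprocess_od(rgb):
    red = rgb[:, :, 0]                         # red channel: OD is bright and homogeneous
    smoothed = median(red, np.ones((5, 5)))    # median filtering to reduce shading/noise
    vessel_free = opening(smoothed, disk(34))  # grayscale opening removes the blood vessels
    return vessel_free
```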

4.1.1 OD segmentation using region growing method

The OD appears as the brightest feature in the R-plane image, as can be seen in Fig. 3b. The approximate region of the OD can therefore be found by locating the region with the maximum intensity in this image, and any pixel within this region can be specified as a seed point. Here, only a single seed point is used to start the region growing process. A threshold is used to append the neighboring pixels to the seed point: if the intensity of the seed point is S, then a pixel with intensity N is appended if \(\left| {S - N} \right| \le T\), where T specifies the threshold. Thus, a pixel is appended to the growing region if the absolute difference between the seed point's intensity and the pixel's intensity is at or below the threshold T. For a lower threshold, fewer pixels are appended to the region, yielding a segmented OD area smaller than the actual OD area. For a higher threshold, more pixels are appended, yielding a segmented OD area larger than the actual OD area. Different threshold values have been tested experimentally for OD segmentation, and it is found that for the datasets used in the present work a threshold value of 25 produces better segmentation results in most of the cases.

Fig. 4 a Complemented image, b gradient magnitude image, c internal markers, d external marker, e modified gradient magnitude image, f regional minima of the modified gradient magnitude image, g segmented OD

The resultant image obtained from the above procedure may contain regions that are not related to the OD. Therefore, morphological reconstruction is applied on this image to extract only those pixels that are part of the OD. As discussed in Sect. 3.1.5, morphological reconstruction requires two images and an SE: the marker serves as the starting point of the transformation, the mask constrains the transformation, and the SE defines the connectivity. Here, a binary image with the seed point as foreground is used as the marker image, while the resultant image obtained from appending the neighboring pixels to the seed point is used as the mask image. A \(3 \times 3\) array of 1s is used as the SE. If I, J and s denote the mask, marker and SE, respectively, then the reconstruction of I from the marker J, represented by \({h_{k + 1}}\) at convergence, is obtained by the following iterative procedure:

  1. Initialize \({h_1} = J\), here \((k=1)\)
  2. Repeat \({h_{k + 1}} = ({h_k} \oplus s) \cap I\) until \({h_{k + 1}} = {h_k}\),

where \(({h_k} \oplus s) \cap I\) specifies the dilation of \({h_k}\) with the SE s, followed by a logical AND operation of the result with the mask I. The iteration continues until \({h_{k + 1}} = {h_k}\). The final segmented result is represented by \({h_{k + 1}}\) and is shown in Fig. 3e.
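For binary images, the iterative procedure above can be sketched directly as follows; marker and mask_img are assumed to be the seed image and the region-growing output described above, and the \(3 \times 3\) SE follows the text.

```python
import numpy as np
from skimage.morphology import binary_dilation

def reconstruct_binary(marker, mask_img, se=np.ones((3, 3), dtype=bool)):
    """Binary morphological reconstruction: grow `marker` inside `mask_img`."""
    h = marker.copy()
    while True:
        h_next = binary_dilation(h, se) & mask_img   # (h_k ⊕ s) ∩ I
        if np.array_equal(h_next, h):                # stop when h_{k+1} = h_k
            return h_next
        h = h_next
```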

4.1.2 OD segmentation using watershed transform

The watershed transform interprets the regional minima in an image as the objects to be segmented and develops watershed ridge lines around these objects. So, to make the OD a regional minimum, the image obtained after the removal of blood vessels (Fig. 3d) is complemented (Fig. 4a). The gradient magnitude is used to pre-process this image prior to segmentation; the gradient magnitude image shown in Fig. 4b is evaluated using the Sobel operator. The gradient magnitude image is then modified with the help of markers. The internal markers correspond to the OD region and are evaluated by finding the regional minima in the complemented image (Fig. 4a). Regional minima are connected components of pixels with a constant intensity value whose external boundary pixels all have a higher value. A threshold \(T = \min (I(x,y)) + 1\), where \(\min (I(x,y))\) is the minimum value of the complemented image (Fig. 4a), is set to extract only those regional minima that are of interest. All the minima in the image (Fig. 4a) whose depth is less than or equal to the threshold are highlighted as white (i.e., intensity value of 255) in the resultant image, as shown in Fig. 4c. The external marker is a circle of constant diameter centered on the centroid of the image; the size of the circle depends on the image size. It is seen that the OD constitutes about 50 % of the area of the cropped fundus image, so the size of the circle is taken to be greater than 50 % of the image size. The external marker is shown in Fig. 4d. The internal and external markers are used to modify the gradient magnitude image, as shown in Fig. 4e, so that regional minima occur only at the marked locations, as can be seen in Fig. 4f. The watershed transform is applied on the modified gradient image to get the segmented output shown in Fig. 4g.
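A hedged sketch of this marker construction is given below, assuming vessel_free is the opened red-channel image of Fig. 3d. Labelled markers passed to scikit-image's watershed play the role of the explicit minima imposition, and the external-marker radius used here (a circle whose diameter is 55 % of the smaller image dimension) is an illustrative choice consistent with the "greater than 50 % of the image size" guideline above.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed
from skimage.draw import circle_perimeter

def segment_od_watershed(vessel_free):
    comp = vessel_free.max() - vessel_free       # complement: the OD becomes a minimum
    gradient = sobel(comp.astype(float))         # gradient magnitude (Sobel)

    internal = comp <= comp.min() + 1            # minima with depth <= T = min + 1

    h, w = comp.shape
    radius = int(0.55 * min(h, w) / 2)           # external circle surrounding the OD
    rr, cc = circle_perimeter(h // 2, w // 2, radius, shape=comp.shape)

    markers = np.zeros((h, w), dtype=np.int32)
    markers[internal] = 1                        # internal marker -> label 1
    markers[rr, cc] = 2                          # external marker -> label 2

    labels = watershed(gradient, markers)        # marker-controlled watershed
    return labels == 1                           # binary OD mask
```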

4.2 Optic cup segmentation

The segmentation of the OC is considerably more difficult than OD segmentation, due to the high density of blood vessels encompassing the cup boundary. The occasionally uncertain boundary between the OD and OC makes it difficult to distinguish between them. The color information of the cup region can be used to detect the OC. It is found that the cup region appears homogeneous and well contrasted against the background in the ‘a’ plane of the Lab color space; the cup region appears dark in this plane, as can be seen in Fig. 5a. Median filtering is performed on the ‘a’ plane image to remove the low-frequency noise components, as shown in Fig. 5b. A morphological closing operation is used to remove the interfering blood vessels. Closing tends to enlarge the boundaries of foreground regions (regions of maximum intensity) in an image and shrink background holes in such regions. Thus, if the SE is too small, then only the thin vessels encompassing the OC are removed, resulting in a segmented OC area smaller than the actual area. On the other hand, if the SE is too large, then the boundaries of the foreground (OC) region become enlarged, resulting in a segmented OC area larger than the actual area. Experimentally, it is found that for the datasets used in the present work, a disk-shaped structuring element of radius 13 produces better segmentation results in most of the cases. The image obtained after eliminating the blood vessels is shown in Fig. 5c. Once the blood vessels are removed, the segmentation operation is carried out: the OC is segmented simultaneously using region growing and watershed segmentation, and the results are combined to obtain the final output.
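A minimal sketch of this OC pre-processing is given below, assuming rgb is the cropped fundus sub-image; the \(5 \times 5\) median window is an illustrative choice, while the disk-shaped SE of radius 13 follows the value reported above.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.filters import median
from skimage.morphology import closing, disk

def preprocess_oc(rgb):
    a_plane = rgb2lab(rgb)[:, :, 1]               # 'a' plane: cup is dark and well contrasted
    smoothed = median(a_plane, np.ones((5, 5)))   # suppress noise in the 'a' plane
    vessel_free = closing(smoothed, disk(13))     # closing removes the dark vessels on the cup
    return vessel_free
```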

Fig. 5 a ‘a’-plane image, b median-filtered image, c closing, d complemented image, e segmented OC

4.2.1 OC segmentation using region growing

A single-seeded region growing method is used for the segmentation of the OC. The a-plane image obtained after the removal of blood vessels (Fig. 5c) is complemented, as shown in Fig. 5d. The optic cup appears as the brightest region in this image, so the approximate region of the OC can be located by finding the region with the maximum intensity value, and any pixel in this region can be used as the seed location. After specifying the seed point, the OC is segmented using the region growing algorithm discussed in Sect. 4.1.1. As in the case of OD segmentation, the threshold used in the region growing algorithm is set experimentally; for the datasets used in the present work, a threshold value of 25 produces better segmentation results in most of the cases. The segmented result is shown in Fig. 5e.

4.2.2 OC segmentation using watershed transform

Marker-controlled watershed segmentation is used for OC segmentation. The gradient magnitude shown in Fig. 6a is obtained by applying the Sobel operator on the vessel-removed image of Fig. 5c. The internal and external markers are used to modify the gradient magnitude image. The internal markers are evaluated by finding the regional minima in Fig. 5c following the procedure described in Sect. 4.1.2. The external marker is a circle of constant diameter centered on the centroid of the image; its size is evaluated as discussed in Sect. 4.1.2 and is approximately 25 % of the image size. The internal and external markers are shown in Fig. 6b, c, respectively. Using minima imposition, the gradient image is modified to have regional minima only at the marked locations, as shown in Fig. 6d. The watershed segmentation algorithm is applied on this modified gradient image to get the segmented OC shown in Fig. 6e.

Fig. 6 a Gradient magnitude image, b internal marker, c external marker, d modified gradient magnitude image, e segmented OC obtained by Watershed transform

4.3 Post-processing

The segmented OD and OC obtained from the two methods, shown in Fig. 7a, b, respectively, are combined to get the final segmented results shown in Fig. 7c. The results are combined using a logical OR operation. In the literature, the OD and OC are generally represented using a circular or elliptical approximation. Here, the final results are approximated by circles, as it is easier to approximate the OD and OC by a circle than by an ellipse. An elliptical approximation requires the estimation of a center, two radii and an orientation, which increases the complexity of the system, whereas a circular approximation involves only a single center and radius. The center of the circle is evaluated as the centroid of the segmented OD or OC, and the radius as half of the OD or OC width (from the segmented OD and OC) along the horizontal or vertical direction, whichever is larger. The final approximated OD and OC are shown in Fig. 7d.
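A hedged sketch of this post-processing step is shown below: the two binary masks are combined with a logical OR, and the combination is approximated by a circle whose center is the centroid of the combined mask and whose radius is half of the larger of its horizontal and vertical extents.

```python
import numpy as np

def circular_approximation(mask_rg, mask_ws):
    combined = mask_rg | mask_ws                      # logical OR of the two segmentations
    rows, cols = np.nonzero(combined)
    cy, cx = rows.mean(), cols.mean()                 # centroid of the combined mask
    height = rows.max() - rows.min() + 1              # vertical extent
    width = cols.max() - cols.min() + 1               # horizontal extent
    radius = max(height, width) / 2.0                 # half of the larger extent

    # Fill a circle of that radius centred on the centroid.
    yy, xx = np.mgrid[0:combined.shape[0], 0:combined.shape[1]]
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
```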

Fig. 7 Segmented OD and OC using: a region growing method, b Watershed transform, c combination of a and b, d circular approximation of the final result

4.4 Detection of glaucoma

A brief study of the literature discussed in Sect. 2 highlights the CDR and the ISNT rule as the two most widely used features for glaucoma evaluation. The proposed method uses both the CDR and the ISNT rule for the diagnosis of glaucoma, as their combined evaluation increases the detection accuracy.

4.4.1 Evaluation of CDR

The cup area increases slowly in glaucoma, which leads to a gradual loss of vision. The increase in cup area is examined by evaluating the CDR value. Clinically, the CDR is defined as the ratio of the vertical cup diameter (VCD) to the vertical disk diameter (VDD), as shown in Fig. 8.

Fig. 8 Fundus image showing the evaluation of CDR

The CDR value is computed from Eq. 1.

$$\begin{aligned} {\text {CDR}} = \frac{{{\text {Vertical \,cup \,diameter}}}}{{{\text {Vertical \,disk \,diameter}}}} \end{aligned}$$
(1)

The CDR value is less than 0.5 for a normal OD, whereas it exceeds 0.5 in case of a glaucomatous disk (Khan et al. 2013).
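A minimal sketch of the CDR evaluation of Eq. 1 on the final circular OD and OC masks is given below; the 0.5 decision threshold is the value quoted above.

```python
import numpy as np

def cup_to_disk_ratio(od_mask, oc_mask):
    vdd = np.count_nonzero(od_mask.any(axis=1))   # vertical disk diameter: rows touched by the disk
    vcd = np.count_nonzero(oc_mask.any(axis=1))   # vertical cup diameter: rows touched by the cup
    return vcd / vdd

# cdr = cup_to_disk_ratio(od_mask, oc_mask)
# suspicious = cdr > 0.5    # a CDR above 0.5 suggests a glaucomatous disk
```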

4.4.2 Evaluation of NRR area in the ISNT quadrants

The NRR area plays an important role in glaucoma analysis. According to the ISNT rule, for a normal OD, the NRR area in the four quadrants follows the order:

$$\begin{aligned} {\text {Inferior}} \ge {\text {Superior}} \ge {\text {Nasal}} \ge {\text {Temporal}} \end{aligned}$$

Glaucoma causes damage to the superior and inferior nerve fibers prior to the temporal and nasal fibers. This results in thinning of the superior and inferior rims, and thus violation of the ISNT rule (Kavitha et al. 2010; Harizman et al. 2006).

Fig. 9 NRR area obtained from the segmented OD and OC

The NRR region shown in Fig. 9 is extracted by performing an XOR operation on the binary images of the segmented OD and OC. The NRR area in the ISNT quadrants is then calculated by applying different masks to the extracted NRR region. Figure 10a–d shows the masks used for extracting the NRR area in the ISNT quadrants. The NRR area in each quadrant is obtained by performing an AND operation of the extracted NRR image with the corresponding mask. The results obtained are shown in Fig. 10e–h.
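A hedged sketch of this computation is given below: the rim is the XOR of the OD and OC masks, and each quadrant area is obtained by AND-ing the rim with a 90-degree sector mask centered on the disk center. Which horizontal sector is nasal and which is temporal depends on eye laterality; a right-eye image with the nasal side on the right is assumed here for illustration.

```python
import numpy as np

def isnt_areas(od_mask, oc_mask, centre):
    rim = od_mask ^ oc_mask                           # NRR = OD XOR OC
    cy, cx = centre
    yy, xx = np.mgrid[0:od_mask.shape[0], 0:od_mask.shape[1]]
    angle = np.degrees(np.arctan2(yy - cy, xx - cx))  # 0° = right, 90° = down (image rows)

    superior = (angle < -45) & (angle >= -135)        # upper sector
    inferior = (angle >= 45) & (angle < 135)          # lower sector
    nasal    = (angle >= -45) & (angle < 45)          # right sector (right eye assumed)
    temporal = ~(superior | inferior | nasal)         # left sector

    return {q: int((rim & m).sum())                   # rim pixels in each quadrant
            for q, m in zip('ISNT', (inferior, superior, nasal, temporal))}

def satisfies_isnt(areas):
    return areas['I'] >= areas['S'] >= areas['N'] >= areas['T']
```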

Fig. 10 a–d Masks for evaluating the NRR area in the superior, inferior, nasal and temporal quadrants, e–h NRR area obtained in the superior, inferior, nasal and temporal quadrants

5 Results and discussions

The assessment of the proposed method has been carried out on four publicly available databases: HRF (Budai and Odstrcilik 2011), MESSIDOR (MESSIDOR-TECHNO-VISION Project 2014), DRIONS-DB (Feijoo et al. 2009) and DIARETDB1 (Kauppi et al. 2007). Images from a local eye hospital (Sri Sankaradeva Netralaya) are also used for the evaluation. The high-resolution fundus (HRF) database supports comparative studies of automatic segmentation algorithms on retinal fundus images. It contains 15 images of healthy patients, 15 images of patients with diabetic retinopathy and 15 images of glaucomatous patients. The 45 retinal images are available in JPG format with a resolution of \(3504 \times 2336\) pixels, captured by a Canon CR-1 fundus camera with a 45 degree field of view (FOV). The Messidor database mainly focuses on the computer-assisted diagnosis of diabetic retinopathy. It consists of 1200 color fundus images acquired by 3 ophthalmologic departments using a Topcon TRC NW 6 non-mydriatic retinograph with a 45 degree FOV. The images are available with 8 bits per color plane at resolutions of \(1440\times 960\), \(2240 \times 1488\) or \(2304 \times 1536\), and are provided in TIFF format. The DRIONS-DB database is intended for the assessment of optic nerve head (ONH) segmentation. It consists of 110 color digital retinal images that were acquired with an analog color fundus camera and later digitized using an HP-PhotoSmart-S20 high-resolution scanner. The images are available in RGB format with 8 bits/pixel at a resolution of \(600 \times 400\). About 25 of the patients have chronic simple glaucoma and the remainder have eye hypertension. The DIARETDB1 database is used for benchmarking diabetic retinopathy detection methods. It contains 89 color fundus images, 5 of which are considered normal, containing no signs of diabetic retinopathy, while the other 84 images show signs of diabetic retinopathy. The images were captured using a 50 degree FOV digital fundus camera with varying settings.

The images obtained from the various sources are classified as normal or glaucomatous based on the two features, the CDR and the ISNT rule. Table 1 shows, for ten representative images from the HRF and Messidor databases, the CDR evaluated by the proposed method and the compliance with the ISNT rule. An image is classified as normal if its CDR value is less than 0.5 and it satisfies the ISNT rule. On the other hand, if the CDR value of an image is greater than 0.5 and it violates the ISNT rule, then the image is classified as glaucomatous. It is seen that large ODs have large cups, and so the CDR may give erroneous results in such cases (Garway-Heath et al. 1998; Kavitha et al. 2010). The ISNT rule provides better classification accuracy in such cases as it is independent of the OD size (Jonas et al. 1998). Thus, in case of a mismatch between the two features, the classification decision is made on the basis of the ISNT rule.
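A minimal sketch of this decision rule is given below; the function name and the boolean encoding of ISNT compliance are illustrative assumptions.

```python
def classify(cdr, isnt_ok, cdr_threshold=0.5):
    """Combine the CDR and ISNT features; the ISNT rule decides on a mismatch."""
    cdr_says_glaucoma = cdr > cdr_threshold
    isnt_says_glaucoma = not isnt_ok
    if cdr_says_glaucoma == isnt_says_glaucoma:            # both features agree
        return 'glaucomatous' if cdr_says_glaucoma else 'normal'
    return 'glaucomatous' if isnt_says_glaucoma else 'normal'   # ISNT has priority
```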

Table 1 Results of glaucoma evaluation

The classification performance of the proposed method is assessed using the sensitivity, specificity and accuracy measures. Sensitivity is the percentage of glaucomatous images correctly classified as glaucomatous, whereas specificity is the percentage of normal images correctly classified as normal. Accuracy is the percentage of correctly classified images from the total set of images. These parameters are computed as:

$$\begin{aligned} {\text {Sensitivity}}= & {} \frac{{{\text {TP}}}}{{{\text {TP}} + {\text {FN}}}} \end{aligned}$$
(2)
$$\begin{aligned} {\text {Specificity}}= & {} \frac{{{\text {TN}}}}{{{\text {TN}} + {\text {FP}}}}\end{aligned}$$
(3)
$$\begin{aligned} {\text {Accuracy}}= & {} \frac{{{\text {TP}} + {\text {TN}}}}{{{\text {TP}} + {\text {FN}} + {\text {TN}} + {\text {FP}}}} \end{aligned}$$
(4)

where

  • True positive (TP) is the number of glaucomatous images classified as glaucomatous.

  • False negative (FN) is the number of glaucomatous images classified as normal.

  • True negative (TN) is the number of normal images classified as normal.

  • False positive (FP) is the number of normal images classified as glaucomatous.
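A minimal sketch of Eqs. 2–4 computed from these counts is given below; the example figures (75 of 81 glaucomatous and 154 of 163 normal images correctly classified) are taken from the overall results reported below.

```python
def performance(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)                    # Eq. 2
    specificity = tn / (tn + fp)                    # Eq. 3
    accuracy = (tp + tn) / (tp + fn + tn + fp)      # Eq. 4
    return sensitivity, specificity, accuracy

print(performance(tp=75, fn=6, tn=154, fp=9))
# -> (0.9259..., 0.9447..., 0.9385...), i.e. 92.59 %, 94.47 %, 93.85 %
```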

Table 2 Results for detection of glaucoma images
Table 3 Results for detection of normal images
Table 4 Comparison of the proposed method with other techniques
Table 5 Comparison of the proposed method with other techniques implemented on the same datasets

Tables 2 and 3 show the sensitivity and specificity of the proposed method for the different databases used. The proposed method correctly detects 75 out of 81 glaucomatous retinal images, giving an overall sensitivity of 92.59 %. Of the 163 normal images, 154 were correctly classified as normal, giving an overall specificity of 94.47 %. The accuracy of the proposed method was found to be 93.85 %. Table 4 compares the proposed method with other methods available in the literature in terms of sensitivity, specificity and accuracy. The methods listed in Table 4 have been evaluated either on publicly available databases or on a local hospital database, whereas the proposed method has been evaluated on a diverse range of images from different public databases and a local hospital database. The combined performance of the proposed method in terms of sensitivity, specificity and accuracy is better than that of the other methods.

From Table 4 it can be seen that the comparison of the proposed method with other existing methods is done on different datasets. Thus, for the sake of a fair comparison, two of the methods were implemented on the same datasets, consisting of HRF, Messidor, DRIONS-DB, DIARETDB1 and the local hospital database, and their results were compared with the proposed method. It is difficult to implement all of the methods due to the lack of sufficient information. Table 5 shows the comparison of the proposed method with the two methods implemented on the same dataset. As observed from the results in Table 5, the combined performance of the proposed method in terms of sensitivity, specificity and accuracy is better than that of the other two methods.

Fig. 11 Example of: a false-positive and b false-negative

The proposed method was unsuccessful in determining the classification state of some of the images. This may be due to one of the following two conditions:

  • The proposed method fails if the OD contains other pathologies.

  • The method fails if the input retinal image is of low contrast such that the border between the OC and OD cannot be distinguished easily.

The method produces some false positives and false negatives. Figure 11a shows an example of a false positive, where a normal image is classified as glaucomatous by the proposed method. The CDR value evaluated by the proposed method in this case is 0.57, which is larger than the normal value (0.5), and the ISNT rule is also not satisfied. This error may be due to over-segmentation of the OC, so that the segmented OC area is larger than the actual area. Figure 11b shows an example of a false negative, where a glaucomatous image is classified as normal. The CDR value evaluated in this case is 0.26, which is smaller than the threshold value (0.5), and the image also satisfies the ISNT rule, yielding an incorrect classification result. This error may be due to inefficient segmentation of the OC: because of the low contrast of the OD region, the OC segmentation algorithm is unable to segment the entire OC region, and as a result the segmented OC area is much smaller than the actual area.

Table 6 Comparison of the algorithm classification results to the ophthalmologist’s diagnosis

6 Subjective evaluation

The results obtained by the proposed method are also evaluated by an ophthalmologist. A batch of 122 images from the total set of 244 images is used for this evaluation. The images are taken from the HRF, DRIONS-DB and local hospital databases, for which a priori information regarding the classification state of the images is available. Table 6 compares the results obtained by the proposed method with the ophthalmologist's diagnosis. The ophthalmologist was unable to diagnose some of the images due to their poor quality. From Table 6 it can be observed that the results obtained by the proposed method show a high correspondence with the ophthalmologist's diagnosis.

7 Conclusion

Glaucoma is a severe disease of the eye which leads to blindness. Timely detection of glaucoma can stop further progression of the disease; however, the diagnosis of glaucoma is based on time-consuming manual observations. Therefore, the development of computer-assisted detection techniques can aid ophthalmologists in diagnosing the disease in a timely and cost-effective manner. In this paper, a method for glaucoma detection has been proposed. The proposed method uses the CDR and the ISNT rule as the two main features to detect glaucoma. The OD and OC are segmented using two different methods whose results are combined to get the final segmented OD and OC, which are later approximated by circles. The CDR is calculated as the ratio of the vertical cup diameter to the vertical disk diameter, and the ISNT rule is evaluated by finding the NRR area in the ISNT quadrants. The proposed method is applied on retinal images obtained from various sources, including the HRF, Messidor, DRIONS-DB, DIARETDB1 and local hospital databases. The method was able to successfully detect glaucoma in 75 out of 81 images, achieving a sensitivity of 92.59 %. The proposed method is simple and computationally efficient, and hence can be used as a helpful tool in glaucoma screening applications. The method can be further improved by implementing an OD localization algorithm, which would make it fully automatic.