Introduction

Clinicians use computer-assisted diagnosis (CAD) to support the detection of disease. In recent years, it has evolved into a powerful tool in ophthalmology. CAD allows the efficient screening of images to assist doctors in the detection of disease. Successful disease detection requires a thorough investigation of the image features, which in turn depends on precise image segmentation. However, segmenting an image into distinct regions is widely regarded as a difficult task. Since the optic disc is one of the most significant clinical characteristics of the retinal fundus image for glaucoma detection, it is crucial to understand how it is obtained and how it is used. For this reason, segmenting the optic disc from the retinal fundus image is essential for glaucoma detection.

Glaucoma slowly damages the eyes in the early stages without causing noticeable vision loss. By the time the patient notices the vision changes, the condition may be very advanced. If the patient is untreated or inadequately treated at an early stage, the disease leads to permanent vision loss, so it is important to detect glaucoma early. The proposed work presents three techniques for extracting the optic disc from fundus images to aid ophthalmologists in the early detection of glaucoma.

Threshold-based methods are straightforward techniques that segment retinal images based on intensity levels [1, 2].

Figure 1 depicts three categories of optic disc segmentation. Clustering-based methods partition the retinal image into different clusters, where each cluster is made up of pixels with similar properties [3,4,5]. Clustering methods fall into two basic categories: hierarchical methods and partition-based methods. Hierarchical methods are based on the concept of trees: the whole image is considered the root node, and the internal nodes represent different segments or clusters. In partition-based methods, an objective function is minimized by iteratively applying an optimization procedure.

Fig. 1

Optic disc segmentation methods

Region-based segmentation methods divide the image into different sectors that share common characteristics [6, 7]. These approaches are classified into two basic methods: region-growing methods and region-merging methods.

Region-growing methods: these methods divide the image into different areas based on seed values. The seed values are the initial pixels from which a region starts growing. The growing process continues until it reaches pixels that are clearly different from the pixels already in the region. The initial pixels can be selected manually by the user from a portion of the image based on certain conditions, or selected automatically based on the particular application. A minimal sketch of this idea is given below.
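As a minimal illustration of region growing in MATLAB (using the Image Processing Toolbox function grayconnected; the file name, seed location, and tolerance are hypothetical values, not parameters used in this work):

```matlab
% Grow a region from a manually chosen seed pixel: neighbouring pixels
% are added while their grey level stays within +/- tol of the seed value.
I    = rgb2gray(imread('fundus.jpg'));            % hypothetical input image
seed = [480, 520];                                % seed pixel as [row, col] (assumed)
tol  = 25;                                        % intensity tolerance (assumed)
BW   = grayconnected(I, seed(1), seed(2), tol);   % region grown from the seed
imshowpair(I, BW, 'montage');                     % show image and grown region
```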

Region-merging methods combine adjacent regions if they are similar in some characteristics. Region merging usually starts from a uniform region, called the seed area. Some applications discover the seed area automatically and then segment the image into regions in which each region is clearly dissimilar to the others.

The following paragraphs summarize the work carried out by various authors. In [8], the authors showed that optic disc extraction using a modified U-Net CNN gives moderate results on fundus images. The work relies on a deep learning convolutional neural network technique; its principal limitation is run time, as it takes a long time to deliver a result. Jayanthi et al. [9] reviewed various strategies suitable for optic disc segmentation from fundus images and also proposed a technique to extract the optic disc based on a convolutional neural network. The authors obtained moderate results on the publicly available DRIONS-DB and RIM-ONE datasets.

Almazroa et al. [10] introduced a novel optic disc extraction technique that applies a level set strategy to a localized OD image. The main target of this work is the development of an automated image processing framework for glaucoma screening. The authors introduced a disc algorithm to extract the disc from retinal images for glaucoma investigation. For inferior-quality images, the authors used a double level set strategy and achieved an accuracy of 83.9%. Veena et al. [11] introduced a deep learning approach based on Convolutional Neural Networks (CNNs) for optic disc extraction from the input image. The authors evaluated the proposed strategy on the DRISHTI-GS dataset and obtained good results.

An automated disc extraction method, together with the removal of blood vessels to aid the segmentation process, was demonstrated by Bharkad et al. [12]. The authors also described how to enhance the brightness of the image. Zahoor et al. [13] developed a method for extracting the optic disc from a fundus image by combining the Circular Hough Transform with a polar transform. Shallen Bengani and colleagues [14] developed a disc extraction technique by combining transfer learning and semi-supervised learning, achieving a dice measure of 96%; however, the technique requires considerable time to train both models. Jin, Baixin, and colleagues [15] proposed a full network pipeline to distinguish the disc region in fundus images; the proposed method is based on texture features retrieved from the image. Syna Sreng and colleagues [16] presented a two-stage solution based on deep learning techniques: DeepLabv3+, built on several deep neural networks, is used to segment the optic disc region of the image, and the subsequent classification is performed with a combination of transfer learning and support vector machines, while the optic disc extraction itself relies on a pretrained deep neural network. On the MICCAI REFUGE fundus image database, Xuesheng Bian et al. [17] proposed a model to extract the optic disc and optic cup using cascaded neural network techniques, achieving an accuracy of 93%.

In this section, we look at some of the issues with the existing approaches. The Student Sight Savers Program in the United States of America (USA) has provided glaucoma screenings to more than 41,000 people. Within the program, intraocular pressure (IOP) measurement, a questionnaire form, and a visual function assessment are the screening approaches employed. Of the three screening methods used by Sight Savers, the frequency doubling technology (FDT) test had the highest sensitivity, at only 58.1%. Sensitivity is the capacity of a test to provide a positive result when the disease is present, while specificity is the ability of a test to provide a negative result when the patient tested is healthy [18]. IOP measurement with a non-contact tonometer achieved a sensitivity of only 22.1%. As a result, alternative strategies for disease detection are being evaluated.

The thresholding method is employed in the proposed approach for disc segmentation. Thresholding is the most efficient and straightforward segmentation method in image processing [19, 20]. These techniques are widely used in medical image processing, since the grey levels of the structures of interest in medical images differ from those of the background. In many applications, the grey levels of pixels belonging to an object are completely different from the grey levels of pixels belonging to the background. Thresholding determines a specific intensity in a grayscale image that separates two distinct portions of the image, such as the object and the background, based on their grey level. These grey levels therefore act as "detectors" that distinguish the background from an object considered to be in the foreground of the image. Clustering-based methods and region-based segmentation methods are also used, because they do not require the user to initialize any parameters and are based on the density estimation of the surrounding pixels.

Materials and Methods

This work presents three categories of methods to extract the optic disc from fundus images.

Thresholding Method: Otsu Thresholding

This method works on grayscale images. It iterates through all possible grey-level values (from 0 to 255) and, for each candidate threshold, measures the distribution of the pixels on either side of it. The value that minimizes the within-class variance is taken as the optimal threshold, and each pixel of the image falls into the foreground or the background based on this threshold. A MATLAB sketch of this procedure is given after the algorithm below.

Algorithm: Optic disc segmentation using Otsu Thresholding.

Input: Test image, I

Output: Segmented optic disc from test image

Procedure:

1. Input a test image, I

2. Convert the RGB image I into a grayscale image

3. For each threshold value T, calculate the background and foreground statistics: Background Weight, Background Mean, Background Variance, Foreground Weight, Foreground Mean, Foreground Variance

4. Calculate the 'Within-Class Variance' for each T:

Within-Class Variance = (Background Weight × Background Variance) + (Foreground Weight × Foreground Variance), i.e., the sum of the two class variances weighted by their associated weights

5. Find the lowest within-class variance among the candidates and take the associated threshold, T, as the optimal threshold for segmentation of the disc.

6. Generate a binary image: pixels with a level < T are background, and all those with a level ≥ T are foreground

7. Perform the morphological image filling and closing operations on the resulting image to obtain an accurate optic disc

8. Mark the boundary of the disc using the viscircles function

9. Extract the optic disc.
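One possible MATLAB sketch of the steps above is shown here (assuming the Image Processing Toolbox; the file name and the disk-shaped structuring element of radius 10 are illustrative assumptions). The loop computes the within-class variance for every candidate threshold, as in steps 3-5:

```matlab
I  = imread('roi.png');                 % hypothetical ROI image (RGB)
G  = rgb2gray(I);                       % step 2: RGB -> grayscale
counts = imhist(G, 256);                % grey-level histogram (256 bins)
p  = counts / sum(counts);              % normalized histogram (column vector)

wcv = inf(1, 256);                      % within-class variance per threshold
for T = 1:255                           % candidate thresholds (steps 3-4)
    wB = sum(p(1:T));  wF = sum(p(T+1:end));               % class weights
    if wB == 0 || wF == 0, continue; end
    mB = sum((0:T-1)' .* p(1:T)) / wB;                     % background mean
    mF = sum((T:255)' .* p(T+1:end)) / wF;                 % foreground mean
    vB = sum(((0:T-1)' - mB).^2 .* p(1:T)) / wB;           % background variance
    vF = sum(((T:255)' - mF).^2 .* p(T+1:end)) / wF;       % foreground variance
    wcv(T) = wB * vB + wF * vF;                            % weighted sum (step 4)
end
[~, Topt] = min(wcv);                   % step 5: optimal threshold (grey level)

BW = G >= Topt;                         % step 6: binarize (>= T is foreground)
BW = imclose(imfill(BW, 'holes'), strel('disk', 10));      % step 7: fill + close
BW = bwareafilt(BW, 1);                 % keep the largest blob (assumed disc)

s  = regionprops(BW, 'Centroid', 'EquivDiameter');         % disc geometry
imshow(I); viscircles(s(1).Centroid, s(1).EquivDiameter/2);  % step 8: boundary
disc = G;  disc(~BW) = 0;               % step 9: extract the disc pixels
```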

Clustering-Based Method: K-Means Clustering (KM)

Based on pixel assignment, clustering techniques are classified into two types. (1) Hard clustering: a simple clustering technique that assigns each pixel to exactly one cluster. (2) Soft clustering: a popular segmentation clustering technique that assigns each pixel to one or more clusters based on its degree of belonging. In this work, a hard clustering technique, k-means clustering, is used for image segmentation; a sketch follows the algorithm below.

Algorithm: Optic disc segmentation using k-means clustering.

Input: Test image, I

Output: Segmented optic disc from test image

Procedure:

1. Input a test image, I

2. Extract red, green, and blue channel images from I and generate data vectors

3. Initialize number of clusters K=8

4. Select K pixels randomly from the data vectors as initial centroids

5. Assign each pixel to the closest centroid, forming K clusters

6. Compute the new centroid of each cluster

7. If any pixel is reassigned to a new closest centroid, go to step 6; otherwise, go to step 8

8. Perform the morphological image filling and closing operations on the resulting image to obtain an accurate optic disc

9. Mark the boundary of the disc using the viscircles function

10. Extract the optic disc.
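A minimal MATLAB sketch of the k-means procedure above (assuming the Statistics and Machine Learning Toolbox; the file name, the choice of the brightest-centroid cluster as the disc, and the structuring-element size are illustrative assumptions):

```matlab
I   = imread('roi.png');                         % hypothetical ROI image
X   = double(reshape(I, [], 3));                 % step 2: one RGB vector per pixel
K   = 8;                                         % step 3: number of clusters
[idx, C] = kmeans(X, K, 'Start', 'sample', ...   % steps 4-7: built-in assign/update loop
                  'MaxIter', 100, 'Replicates', 3);

% The optic disc is the brightest structure, so pick the cluster whose
% centroid has the highest mean intensity (an assumption for this sketch).
[~, discCluster] = max(mean(C, 2));
BW = reshape(idx == discCluster, size(I,1), size(I,2));

BW = imclose(imfill(BW, 'holes'), strel('disk', 10));      % step 8: fill + close
BW = bwareafilt(BW, 1);                                    % keep the largest blob
s  = regionprops(BW, 'Centroid', 'EquivDiameter');         % step 9: mark boundary
imshow(I); viscircles(s(1).Centroid, s(1).EquivDiameter/2);
disc = I;  disc(repmat(~BW, [1 1 3])) = 0;                 % step 10: extract the disc
```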

Region-Based Method: Statistical Region Merging (SRM)

In this work, the statistical region-merging algorithm is used for optic disc segmentation. A common approach in region-merging methods is to divide the image into 2 × 2 or 4 × 4 blocks and then compare adjacent blocks, combining them if they are similar in some characteristics. The merging process continues as long as adjacent blocks satisfy the uniformity criterion. A merged region is then extracted from the image, and a further seed is used to build another region. A sketch of the merging test follows the algorithm below.

Algorithm: Optic disc segmentation using statistical region merging.

Input: Test image, I; segmentation parameter, Q (if Q is small, few segments are generated; if Q is large, many segments are generated)

Output: Segmented optic disc from test image

Procedure:

1. Input a test image, I

2. Divide image I into 2 × 2 blocks and add all regions to the Processing List

3. Starting from the first region in the image, generate a region adjacency map.

4. For each region in the Processing List do:

Consider its adjacent region and test for similarity.

If the regions are similar, merge them and update the region adjacency map.

5. Repeat step 4 until the number of merged regions equals Q.

6. Convert the segmented RGB image I to a binary image BW using level thresholding.

7. Perform the morphological image filling and closing operations on the resulting image to obtain an accurate optic disc

8. Mark the boundary of the disc using the viscircles function

9. Extract the optic disc.
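The statistical similarity test applied in step 4 can be sketched as follows; this is the commonly used simplified SRM predicate, and the constants (g = 256 grey levels, delta = 1/(6·numPixels²)) are assumptions taken from the usual SRM formulation rather than from this work:

```matlab
% Simplified SRM merging predicate for step 4: two adjacent regions with
% mean intensities mean1, mean2 and sizes n1, n2 pixels are merged when
% the difference of their means is within the statistical deviation bound.
function tf = srmMergeTest(mean1, n1, mean2, n2, Q, numPixels)
    g     = 256;                                          % number of grey levels
    delta = 1 / (6 * numPixels^2);                        % confidence parameter
    b     = @(n) g * sqrt(log(2 / delta) / (2 * Q * n));  % deviation bound per region
    tf    = abs(mean1 - mean2) <= sqrt(b(n1)^2 + b(n2)^2);
end
```

Increasing Q tightens the bound, so fewer merges occur and more segments remain, consistent with the role of Q described above.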

Results

The proposed approaches are tested using the publicly available High-Resolution Fundus (HRF) and DRIONS-DB image databases [21, 22]. The HRF database contains 45 images, each 3504 × 2336 pixels at 72 dpi with a bit depth of 24 bits. The DRIONS-DB database contains 110 fundus photos, each with a resolution of 96 dots per inch (dpi) and a bit depth of 24 bits. Two ophthalmologists' annotations of the optic disc area are used to evaluate the efficacy of this strategy. The Dice coefficient is a performance metric that gauges how closely two areas are alike, i.e., the similarity between the segmented optic disc area Y and the ophthalmologists' ground-truth region X. The Dice coefficient ranges from zero to one; the closer its value is to one, the more accurate the proposed method.

$${\text{Dice}} = \frac{{2 \times {\text{Area}}\left( {X \cap Y} \right)}}{{{\text{Area}}\left( X \right) + {\text{Area}}\left( Y \right)}}.$$
(1)
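As a minimal sketch, Eq. (1) can be transcribed directly in MATLAB for two binary masks of equal size (X: ground truth, Y: segmented disc):

```matlab
% Dice coefficient between ground-truth mask X and segmented mask Y
% (both logical arrays of the same size); direct transcription of Eq. (1).
diceCoeff = 2 * nnz(X & Y) / (nnz(X) + nnz(Y));
```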

Figure 2 shows the information related to the input image and the Region of Interest (ROI). Figure 2a illustrates a fundus eye image taken from the HRF dataset; the same image is used for testing the proposed systems. Figure 2b illustrates the optic disc regions marked by two ophthalmologists: the green line represents the area marked by ophthalmologist 1 and the blue line represents the disc area marked by ophthalmologist 2. After the patient's eye image is captured with a fundus camera, the ophthalmologists mark the disc regions. Figure 2c illustrates the optic disc segmented using the imfreehand function (marked by ophthalmologist 1) for similarity testing with each of the proposed methods. Figure 2d illustrates the optic disc segmented using the imfreehand function (marked by ophthalmologist 2) for Dice coefficient calculation with the disc area generated by each of the proposed methods. Figure 2e illustrates the density distribution graph of the fundus image. Figure 2f illustrates the highest intensity area identified for ROI extraction by setting a threshold of 150 on the green channel image. Figure 2g illustrates the selected area in the fundus image for ROI extraction. Figure 2h illustrates the Region of Interest (ROI) extracted from the fundus image: a 960 × 960 ROI is cropped around the centroid point obtained from the selected area. Figure 2i illustrates the density distribution graph of the ROI. Figure 2j illustrates the scaled ROI data together with the threshold used for ROI extraction and the threshold (110) used in a simple thresholding method to extract the optic disc. Figure 2k illustrates the disc area extracted from the ROI as marked by ophthalmologist 1. Figure 2l illustrates the disc area extracted from the ROI as marked by ophthalmologist 2. Figure 2m illustrates a histogram of the ROI. A code sketch of this ROI extraction follows Fig. 2.

Fig. 2

Input image and ROI
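The ROI extraction just described (threshold of 150 on the green channel, centroid of the selected area, 960 × 960 crop) can be sketched in MATLAB as follows; the file name and the clamping of the crop window to the image borders are illustrative assumptions:

```matlab
I    = imread('fundus.jpg');                 % hypothetical full fundus image
G    = I(:,:,2);                             % green channel
BW   = G > 150;                              % highest-intensity area (Fig. 2f)
BW   = bwareafilt(BW, 1);                    % keep the largest bright blob (Fig. 2g)
s    = regionprops(BW, 'Centroid');          % centroid of the selected area
c    = round(s(1).Centroid);                 % centroid as [x y]
half = 480;                                  % half of the 960 x 960 ROI size
rows = max(1, c(2)-half+1) : min(size(I,1), c(2)+half);
cols = max(1, c(1)-half+1) : min(size(I,2), c(1)+half);
roi  = I(rows, cols, :);                     % extracted 960 x 960 ROI (Fig. 2h)
```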

Figure 3a illustrates the disc area selected by Otsu Thresholding. Figure 3b illustrates the optic disc regions marked by the two ophthalmologists (green line: ophthalmologist 1; blue line: ophthalmologist 2) and by the proposed method (red line) on the second ROI (960 × 960), which is extracted from the ophthalmologists' annotated image. Figure 3c illustrates the optic disc segmented from the ROI. Figure 3d illustrates the Dice coefficient obtained for ophthalmologist 1's marked area and the proposed system's extracted area. Figure 3e illustrates the Dice coefficient obtained for ophthalmologist 2's marked area and the proposed system's extracted area.

Fig. 3

Otsu Thresholding results

Figure 4a illustrates cluster 1 generated by the K-means algorithm. Figure 4b illustrates cluster 2. Cluster 3 is depicted in Fig. 4c and cluster 4 in Fig. 4d. Figure 4e illustrates the eroded image. Figure 4f depicts the image generated by morphological operations (imopen followed by imclose). Figure 4g illustrates the optic disc regions marked by the two ophthalmologists (green line: ophthalmologist 1; blue line: ophthalmologist 2) and by the proposed method (red line) on the second ROI (960 × 960), which is extracted from the ophthalmologists' annotated image. Figure 4h illustrates the optic disc segmented from the ROI. Figure 4i illustrates the Dice coefficient obtained for ophthalmologist 1's marked area and the proposed system's extracted area. Figure 4j illustrates the Dice coefficient obtained for ophthalmologist 2's marked area and the proposed system's extracted area.

Fig. 4

K-means clustering results

Figure 5a illustrates the Statistical Region Merging segmented image. Figure 5b illustrates the disc part extracted from the segmented image. Figure 5c illustrates the optic disc regions marked by the two ophthalmologists (green line: ophthalmologist 1; blue line: ophthalmologist 2) and by the proposed method (red line) on the second ROI (960 × 960), which is extracted from the ophthalmologists' annotated image. Figure 5d illustrates the optic disc segmented from the ROI. Figure 5e illustrates the Dice coefficient obtained for ophthalmologist 1's marked area and the proposed system's extracted area. Figure 5f illustrates the Dice coefficient obtained for ophthalmologist 2's marked area and the proposed system's extracted area.

Fig. 5

Statistical region-merging results

Table 1 illustrates the average Dice coefficient obtained by each proposed method against the ground truth of Ophthalmologist 1 on the HRF and DRIONS-DB dataset images, together with the average of each method over both datasets. Figure 6 shows the corresponding bar chart. Table 2 illustrates the average Dice coefficient obtained by each proposed method against the ground truth of Ophthalmologist 2, together with the average for each method. Figure 7 shows the corresponding bar chart.

Table 1 Average dice coefficient for OD contour detection w.r.t Ophthalmologist 1
Fig. 6

Dice coefficient obtained for Ophthalmologist 1 marked area and proposed systems

Table 2 Average dice coefficient for OD contour detection w.r.t Ophthalmologist 2
Fig. 7

Dice coefficient obtained for Ophthalmologist 2 marked area and proposed systems

Ophthalmologist        Proposed Systems
                       Otsu        KM          SRM
Ophthalmologist 1      0.82185     0.8572      0.9116
Ophthalmologist 2      0.7958      0.8432      0.8937
Average                0.80882     0.8502      0.90265

Table 3 and Figs. 8 and 9 summarize the performance comparison of nine methods; the proposed systems are superior to the others in terms of location accuracy and computational time.

Table 3 Computational complexity (run time) and dice factor of the proposed framework and existing approaches
Fig. 8

Bar chart of computational complexity (run time) and dice factor of the proposed framework and existing approaches

Fig. 9

Line chart of computational complexity (run time) and dice factor of the proposed framework and existing approaches

Conclusions

This work presented methods for optic disc extraction from fundus images. The proposed techniques are tested on the HRF and DRIONS-DB databases. Region-of-interest detection followed by three optic disc extraction methods (Otsu, KM, and SRM) is presented. The results obtained from the proposed methods illustrate good performance. Furthermore, the proposed methods are applied to all types of retinal images, including healthy as well as glaucomatous images. The proposed methods Otsu, KM, and SRM work well on the HRF dataset with Dice coefficients of 82%, 86%, and 93%, respectively, and 78%, 85%, and 91%, respectively, on the DRIONS-DB dataset; this efficiency makes the proposed methods suitable for automatic extraction of the optic disc from fundus images.