Abstract
Diabetic retinopathy (DR) is caused by damage to the small blood vessels of the retina in the posterior part of the eye of the diabetic patient. The main stages of diabetic retinopathy are non-proliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR). Retinal fundus photographs are widely used in the diagnosis and treatment of various eye diseases in clinics, and are also one of the main resources for mass screening of diabetic retinopathy. In this work, we propose a computer-based approach for the detection of diabetic retinopathy stages using fundus images. Image preprocessing, morphological processing techniques and texture analysis methods are applied to the fundus images to detect features such as the area of hard exudates, the area of the blood vessels and the contrast. Our protocol uses a total of 140 subjects covering two stages of DR and normal. Our extracted features are statistically significant (p < 0.0001) with distinct mean ± SD as shown in Table 1. These features are then used as input to an artificial neural network (ANN) for automatic classification. The detection results are validated by comparing them with the grading of expert ophthalmologists. We demonstrate a classification accuracy of 93%, sensitivity of 90% and specificity of 100%.
Introduction
Blindness is an outcome of diabetic retinopathy, and its prevalence is set to continue rising. An estimated 50–65 new cases of blindness per 100,000 people occur every year [1]. Screening diabetic patients for the development of diabetic retinopathy can potentially reduce the risk of blindness in these patients by 50%. Early detection of DR enables laser therapy to be performed to prevent or delay visual loss and may be used to encourage improvement in diabetic control. Current methods of detection and assessment of diabetic retinopathy are manual, expensive and require trained ophthalmologists [2–5]. Hence, automatic detection of diabetic retinopathy at an early stage can help prevent blindness.
Diabetic retinopathy is a complication of diabetes and a leading cause of blindness. It occurs when diabetes damages the tiny blood vessels inside the retina, the light-sensitive tissue at the back of the eye [6]. These tiny blood vessels leak blood and fluid onto the retina, forming features such as microaneurysms, hemorrhages, hard exudates, cotton wool spots or venous loops [7]. Diabetic retinopathy can be broadly classified as nonproliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR) [6]. Depending on the presence of the features listed above on the retina, the stage of DR can be identified [7]. A normal retina does not have any of these features and is shown in Fig. 1(a). In the NPDR stage, the disease can advance from mild through moderate to severe, with various levels of the above features but little growth of new blood vessels [6]. Figure 1(b–d) shows fundus images of the three stages of NPDR. PDR is the advanced stage, where the fluids sent by the retina for nourishment trigger the growth of new blood vessels. They grow along the retina and over the surface of the clear, vitreous gel that fills the inside of the eye. If they leak blood, severe vision loss and even blindness can result [6, 7]. Figure 1(e) shows the fundus image of a PDR eye. We propose a system to identify these two stages of diabetic retinopathy against the normal case.
Since 1982, the quantification of diabetic retinopathy and the detection of features such as exudates and blood vessels on fundus images have been studied [8–12]. Computer-based algorithms to detect the individual features of the fundus image were developed. In all these works, image processing techniques such as image preprocessing, segmentation, 2D matched filters and image thresholding were widely used. These systems cannot integrate all the features of diabetic retinopathy into a single entity for the automatic detection and classification of the different stages against normal subjects. Algorithms for the computer-based identification of microaneurysms were developed [13], but that work does not address the automatic identification of DR stages and also lacks a large database. An automatic system for the detection of symptoms pertaining to the abnormality was proposed in [14–16]. Global and local threshold values were used to segment exudate lesions from red-free images. These methods cannot be used to develop features for an automatic detection system, as it is difficult to set a constant threshold value. The system proposed in [17] can detect and quantify only microaneurysms, which is useful only for detecting mild stages.
An algorithm to detect the optic disk, blood vessels and fovea was developed in [18]. This algorithm can be applied only to normal retinal images. Ege et al. located the optic disc, fovea and four red and yellow abnormalities (microaneurysms, hemorrhages, exudates and cotton-wool spots) in 38 color fundus images [19]. In that work the symptoms of the abnormalities were graded beforehand by an ophthalmologist; hence it is not useful for the development of an automated system for the detection of diabetic retinopathy. An image analysis system for the automatic diagnosis of diabetic retinopathy was developed in [20]. The methods developed in that work cannot handle large variations in the features of abnormal retinal images.
Many investigations have been carried out on the computer-assisted analysis of retinal fundus images [21, 22], but these systems could not accurately distinguish the different stages of diabetic retinopathy. The automatic detection of microaneurysms, hard exudates, cotton wool spots, and hemorrhages for pathology detection has been studied [23–26]; these methods failed when applied to large databases of abnormal fundus images. Many further algorithms and techniques have been proposed to extract features from fundus images [27–29].
A novel semi-automatic algorithm was applied to detect and measure exudates in fundus images [30]. The methods proposed in that paper use morphological image processing techniques; this method also fails with a constant threshold value when converting to a binary image. A prototype for automated diagnosis and understanding of retinal images was presented [31]. In recent years many researchers have proposed systems for the automatic identification of features of diabetic retinopathy, which are useful for treatment [23, 32–35]. These methods are useful only for subjective analysis of diabetic retinopathy. The methods proposed in all the research discussed above are not reliable and robust, as they do not provide any objective measurement of the features. They are mainly useful in the analysis of specific features on the retina, but do not provide a system as a whole for the automatic detection of the different stages of diabetic retinopathy. The investigations and algorithms developed so far are unable to detect an early stage of retinopathy (NPDR) accurately.
We propose a system for the automated classification of normal, NPDR and PDR retinal images by automatically detecting the blood vessels, hard exudates and texture. The proposed system is shown in the block diagram in Fig. 2. Objective measurements such as blood vessel area, exudate area and contrast are computed from the processed retinal images. These objective measurements are finally fed to an Artificial Neural Network (ANN) classifier for automatic classification. The Materials and methods section explains the methods in detail, and results are tabulated and explained in the Results section. Discussion and conclusions are presented in the Discussion and Conclusions sections respectively.
Materials and methods
The images for this work were taken with a TOPCON non-mydriatic retinal camera, model TRC-NW200. The built-in CCD camera provides up to 3.1 megapixels of high quality imaging. The inbuilt imaging software is used to store the images in JPEG format. The data set was acquired at the Department of Ophthalmology, Kasturba Medical College, Manipal, India. The images were photographed and certified by the doctors in the department. The ethics committee, consisting of senior doctors, approved the data for this research purpose. The images were taken at a resolution of 560 × 720 pixels. In this work, retinal fundus photographs of normal, NPDR and PDR cases were used. A total of 140 fundus images were collected for this study, from subjects in the age group of 24–57 years. The data set was divided into two sets: 90 arbitrarily chosen samples were used for training and the remaining 50 samples for testing.
Image processing techniques
The fundus images were processed using different image processing techniques such as image enhancement, morphological image processing and texture analysis. They are explained briefly in the following sections.
Image preprocessing
Each image was preprocessed to remove the non-uniform background. Non-uniform illumination and variation in the pigment color of the eye are two major causes of this non-uniformity. It was corrected by applying adaptive histogram equalization to the image before the other image processing operations [36]. This technique adjusts local variations in contrast by increasing the contrast in low-contrast areas and lowering it in high-contrast areas.
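The paper cites a standard adaptive histogram equalization routine [36]; a minimal tile-wise sketch in NumPy (simplified: it equalizes each tile independently, without the bilinear interpolation or clip limit that production CLAHE implementations add) might look like:

```python
import numpy as np

def equalize_tile(tile, n_levels=256):
    """Histogram-equalize one tile via its cumulative distribution."""
    hist = np.bincount(tile.ravel(), minlength=n_levels)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                      # normalise to [0, 1]
    return (cdf[tile] * (n_levels - 1)).astype(np.uint8)

def adaptive_hist_eq(img, tile=64):
    """Simplified adaptive equalisation of a grayscale uint8 image:
    equalise each tile x tile block independently."""
    out = np.empty_like(img)
    h, w = img.shape
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            block = img[r:r + tile, c:c + tile]
            out[r:r + tile, c:c + tile] = equalize_tile(block)
    return out
```

The tile size is a placeholder; the effect is the one described above, stretching contrast locally rather than over the whole image.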
Morphological image processing
Morphological image processing is a type of processing in which the spatial form or structure of objects within an image is modified. In the morphological dilation and erosion operations, the state of any given pixel in the output image is determined by applying a rule to the corresponding pixel and its neighbors in the input image. With dilation, an object grows uniformly in spatial extent, whereas with erosion an object shrinks uniformly [37].
An essential part of the dilation and erosion operations is the structuring element (SE) used to probe the input image. A structuring element is a matrix consisting of only 0s and 1s that can have any arbitrary shape and size. Figure 3 shows diamond, disc and octagon shaped structuring elements with radius R = 3, where R for the octagon SE specifies the distance from the structuring element origin to the sides of the octagon, as measured along the horizontal and vertical axes, and R for the disc SE specifies the radius. Dilation followed by erosion is called a close operation. Closing an image with a compact structuring element without holes (zeros), such as a square or circle, smooths the contours of objects, eliminates small holes in objects, and fuses short gaps between objects. An open operation consists of erosion followed by dilation. Opening an image smooths the contours of objects, eliminates small objects, and breaks narrow strokes. The close operation tends to increase the spatial extent of an object, while the open operation decreases its spatial extent [36].
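As a concrete illustration (not from the paper), the open and close operations can be reproduced with SciPy's `ndimage` morphology routines; the `disk` helper below is our own utility that builds the disc-shaped SE described above:

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Disc-shaped structuring element: a boolean matrix of 0s and 1s."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x**2 + y**2 <= radius**2

img = np.zeros((15, 15), dtype=bool)
img[4:11, 4:11] = True        # a 7 x 7 square object...
img[7, 7] = False             # ...with a one-pixel hole

se = disk(2)
# Close = dilation then erosion: fills the small hole, fuses short gaps.
closed = ndimage.binary_closing(img, structure=se)
# Open = erosion then dilation: removes structures the SE cannot fit inside.
opened = ndimage.binary_opening(img, structure=se)
```

Closing is extensive (output contains the input) and opening is anti-extensive (output is contained in the input), matching the spatial-extent behaviour described above.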
Texture analysis
Texture is a measure of properties such as smoothness, coarseness, and regularity of pixels in an image. Texture can also be defined as a mutual relationship among intensity values of neighboring pixels repeated over an area larger than the size of the relationship [38]. Conventional texture recognition systems can be grouped into three classes: structural, statistical and spectral. Structural texture analysis is more complex than the statistical approach [38]. Statistical approaches yield characterizations of textures as smooth, coarse, grainy and so on. Statistical algorithms are based on the relationship between intensity values of pixels; measures include entropy, contrast, and correlation based on the gray level cooccurrence matrix. In statistical methods, we describe features using a spatial gray level dependency (SGLD) matrix. For a two-dimensional image f(x,y) with N discrete gray levels, the spatial gray level dependency matrix P(d, Φ) is defined for each distance d and angle Φ as

P(d, Φ) = [p_{i,j}],  i, j = 0, 1, …, N − 1    (1)

where p_{i,j} is the relative number of times the gray level pair (i,j) occurs when pixels separated by the distance d along the angle Φ are compared. Each element is normalized by the total number of occurrences, giving the cooccurrence matrix P. The spatial gray level dependency matrix is therefore also called a cooccurrence matrix, as shown in Eq. (1). Commonly used features obtained from the cooccurrence matrix are energy, entropy, correlation, inertia and local homogeneity [38].
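A direct (unoptimized) NumPy construction of the SGLD matrix of Eq. (1), with the number of gray levels, the distance d and the angle Φ as parameters, could look like the following sketch; the function name and defaults are our own:

```python
import numpy as np

def cooccurrence(img, d=1, angle=0, levels=8):
    """Spatial gray level dependency (co-occurrence) matrix P(d, phi).
    P[i, j] = relative frequency of the gray-level pair (i, j) for
    pixels separated by distance d along the given angle (degrees)."""
    dy = int(round(-d * np.sin(np.deg2rad(angle))))
    dx = int(round(d * np.cos(np.deg2rad(angle))))
    P = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[img[y, x], img[y2, x2]] += 1
    return P / P.sum()      # normalise by the total number of pairs
```

For example, for the 2 × 2 image [[0, 0], [1, 1]] with d = 1 and Φ = 0°, the matrix has P[0,0] = P[1,1] = 0.5.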
Feature extraction
Features, namely blood vessels, exudates, and texture, were extracted from the fundus image. A brief description of these features is given below.
Blood vessel detection
The detection of blood vessels is very important in the identification of diabetic retinopathy through an image processing approach. Morphological image processing techniques have been widely used in the detection of blood vessels. In this work, the green channel of the RGB fundus image was used to obtain the traces of the blood vessels. The algorithm uses morphological operations to smooth the background, allowing the veins to be seen clearly. Diamond and disc structuring elements (SE) were used in this work.
The shape and size of the SE are set according to the image structures that are to be extracted, determined by prior knowledge through visually examining the images. After each morphological opening, the image is subtracted from the image produced by the previous morphological opening and its intensity is adjusted. The intensity adjustment maps the values in the intensity image I(x,y) to new values in the image J(x,y) such that 1% of the data is saturated at the low and high intensities of I(x,y). This increases the contrast of the output image J(x,y), as it spreads pixel intensities more evenly over the intensity range. After the series of openings, the background of the processed image is less noisy than the original and the veins can be seen more clearly. The perimeter and area of the features can then be easily extracted. The perimeter pixels of the objects were obtained from a binary image [23]. A pixel is considered a perimeter pixel if it satisfies both of these criteria in a 4-connected neighborhood:
-
The pixel is on (non-zero).
-
One (or more) of the pixels in its neighborhood is off.
The area of the features is determined by thresholding the image, making the background black and the features white. Otsu’s method chooses the threshold to convert a grayscale image to binary by minimizing the intra-class variance of the black and white pixels [23]. Both the perimeter and the area are counted in white pixels. Figure 4(b) shows the blood vessel detected images of the normal, NPDR and PDR fundus images of Fig. 4(a).
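The thresholding and measurement steps can be sketched as follows; this is an illustrative NumPy implementation of Otsu's method and of the 4-connected perimeter criterion above, not the authors' code:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method for a uint8 grayscale image: pick the threshold
    that minimises intra-class (equivalently, maximises between-class)
    variance of the resulting black and white pixel populations."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_between = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                          # mean of class 0
        m1 = (sum_all - sum0) / (total - w0)    # mean of class 1
        between = w0 * (total - w0) * (m0 - m1) ** 2
        if between > best_between:
            best_between, best_t = between, t
    return best_t

def area_and_perimeter(binary):
    """Area = number of white pixels; perimeter pixels are white pixels
    with at least one 'off' 4-connected neighbour (the criterion above)."""
    b = np.pad(binary.astype(bool), 1)
    core = b[1:-1, 1:-1]
    neigh_off = ~(b[:-2, 1:-1] & b[2:, 1:-1] & b[1:-1, :-2] & b[1:-1, 2:])
    return int(core.sum()), int((core & neigh_off).sum())
```

For a solid 4 × 4 square, for instance, the area is 16 pixels and the perimeter is the 12 border pixels.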
Detection of hard exudates
Hard exudates in the retinal fundus image are distinct yellow-white intra-retinal deposits which vary in size at different stages of retinopathy. It has been shown that the green channel of the RGB image distinguishes exudates from other features better [18]. The exudates appear as the brightest, high-contrast regions in the fundus image. The method proposed by Walter et al. was adapted for the detection of exudates in this work [29]. The blood vessels, appearing as dark regions in the fundus images, had to be removed during the detection of the exudates. They were eliminated by a morphological closing operation using the octagon structuring element shown in Fig. 3(c). After the closing operation on the green channel of the fundus image with an octagon structuring element, we obtain an image with no blood vessels and some candidate exudate regions. The size of the structuring element was chosen to be larger than the maximum width of the blood vessels in the fundus image. The local variance of each pixel within a window W of the closed image was then computed. The variance image contains all the bright regions with varying intensity values, but our system has to detect only the brightest objects and their borders. Applying a threshold to the variance image therefore yields a binary image with the brightest objects and the borders of the white objects thus detected. In order to fill the gaps within the borders of the detected exudate areas, a dilation operation was applied to the binary image.
However, this results in a false indication of exudate area due to the dilation of smaller bright objects. To correct this, the dilated image was morphologically eroded with the same structuring element that was used for the dilation, bringing the detected smaller bright elements back to their original extent. The resulting image still contains a bright spot corresponding to the optic disc, which can be removed by detecting the optic disc.
The optic disc was detected using the intensity component of the fundus image [18, 29]. The variance image was averaged to find the brightest pixel in the image, which was taken as the optic disc center. Normally the diameter of the optic disc is about 2.5 mm; in our images the optic disc occupies nearly 80 × 80 pixels around the optic disc center. The circular area around the optic disc center in the final image of the exudate detection steps is identified and blocked out. Finally, the area occupied by the detected exudates alone was computed by summing all white pixels. Figure 4(c) shows the detection of exudates in retinal images for normal, NPDR and PDR respectively (Fig. 4(a)).
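Under the assumptions above (closing with an SE wider than any vessel, local variance over a window W, thresholding, then dilation and erosion with the same SE), the exudate-candidate pipeline might be sketched with SciPy as follows; the window size and variance threshold are placeholders, not values from the paper, and the optic-disc blocking step is omitted:

```python
import numpy as np
from scipy import ndimage

def local_variance(img, win=7):
    """Variance of each pixel's win x win neighbourhood, computed as
    Var = E[X^2] - E[X]^2 with two mean filters."""
    f = img.astype(float)
    mean = ndimage.uniform_filter(f, win)
    mean_sq = ndimage.uniform_filter(f * f, win)
    return mean_sq - mean * mean

def exudate_candidates(green, se, var_thresh, win=7):
    """Sketch of the pipeline: grayscale closing to suppress vessels,
    threshold the local-variance image, then dilate and erode the
    binary result with the same SE."""
    closed = ndimage.grey_closing(green, footprint=se)
    mask = local_variance(closed, win) > var_thresh
    mask = ndimage.binary_dilation(mask, structure=se)
    return ndimage.binary_erosion(mask, structure=se)
```

The variance image is large along the borders of bright regions, which is why thresholding it picks out the brightest objects and their outlines.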
Contrast
The quantity contrast measures the amount of intensity variation in the image. It is given by [38]

Contrast = Σ_{i,j} (i − j)^2 p_{i,j}

where p_{i,j} are the elements of the co-occurrence matrix shown in Eq. (1). Contrast is 0 for a constant image. This value is lower for NPDR than for normal and PDR due to the presence of more exudates. In PDR the variation is higher than in NPDR due to the presence of more blood vessels and haemorrhages.
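Given the normalised co-occurrence matrix P from Eq. (1), the contrast feature is a one-liner:

```python
import numpy as np

def glcm_contrast(P):
    """Contrast = sum over i, j of (i - j)^2 * p_{i,j} for the
    normalised co-occurrence matrix P."""
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())
```

A purely diagonal P (a constant image) gives contrast 0, while mass far from the diagonal (large intensity jumps between neighbouring pixels) gives a large contrast.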
Artificial neural network classification
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system [39]. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. Data enters at the inputs and passes through the network, layer by layer, until it arrives at the outputs. During normal operation, that is when it acts as a classifier, there is no feedback between layers; hence, it is termed a feedforward neural network.
If the nature of the classification is more complex, a multilayer feedforward neural network with sigmoid activation functions is more suitable [39]. The sigmoid transfer function is a prerequisite for the use of the powerful backpropagation learning algorithm. In the present case, the nature of the boundaries between the different classes is not clearly known, and therefore a multilayer network with sigmoid activation functions was chosen as the classifier. Figure 5 shows the block diagram of the neural network classifier used. During the training phase, the connection weights of the last two layers are modified according to the ‘delta rule’ of the backpropagation algorithm [39]. We used a four-layer feedforward neural network: four input neurons, two hidden layers with eight neurons each, and two output neurons. The output neurons encode the three classes as ‘01’ for Normal, ‘10’ for NPDR and ‘11’ for PDR. The network was trained with the given set of training data and later tested with the remaining test samples. During the training phase, each output of the ANN is a real value in the range 0–1.0, whereas the ‘desired’ output is either 0 or 1.0. During the recall phase, the output signal is quantized to binary levels by comparing it with a threshold value of 0.5. The mean square error target of the ANN was set to 0.001.
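A forward-pass sketch of the 4–8–8–2 architecture described above, with sigmoid activations and the 0.5 recall threshold; the weights here are random placeholders, since training by backpropagation is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 4 inputs -> two hidden layers of 8 neurons -> 2 outputs, as in the text.
sizes = [4, 8, 8, 2]
weights = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Feedforward pass with a sigmoid activation at every layer."""
    for W, b in zip(weights, biases):
        x = sigmoid(x @ W + b)
    return x

def classify(features):
    """Recall phase: threshold the two real-valued outputs at 0.5 to get
    the code '01' = Normal, '10' = NPDR, '11' = PDR."""
    out = forward(np.asarray(features, dtype=float))
    bits = tuple((out > 0.5).astype(int))
    return {(0, 1): "Normal", (1, 0): "NPDR", (1, 1): "PDR"}.get(bits, "unknown")
```

With trained weights, the four input neurons would receive the extracted features (blood vessel area, perimeter, exudate area, contrast).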
Results
The features, namely blood vessel area, blood vessel perimeter, hard exudate area and contrast (texture), corresponding to the three classes were extracted using the proposed algorithms. The ranges of values for these parameters are shown in Table 1. The blood vessel area shows a gradual increase from normal to PDR. Similarly, the number of pixels on the borders of the blood vessels is lowest for the normal group and highest for PDR. It can also be seen from the results that the exudate area is zero for all the normal images, indicating that there are no exudates in them. The contrast values are high for the normal and PDR cases, due to the high variation in pixel intensity.
The p-value was obtained using an ANOVA (ANalysis Of VAriance between groups) test. ANOVA uses variances to decide whether the group means are different. The results of the ANOVA test of perimeter and area for the different kinds of images are shown in Table 1 and Fig. 6. It can be seen from Table 1 that our features are clinically significant (p < 0.0001). The features for the three classes are also shown graphically as box plots in Fig. 6, from which it is clear that the median values are distinct for each group for the three parameters.
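The one-way ANOVA test is available in SciPy; the sketch below uses hypothetical group samples for one feature (the paper's actual feature values are in Table 1):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Hypothetical per-group samples of one feature (e.g. blood vessel area),
# drawn with well-separated means for illustration only.
normal = rng.normal(10, 1, 30)
npdr = rng.normal(14, 1, 30)
pdr = rng.normal(20, 1, 30)

# f_oneway returns the F statistic and the p-value for the null
# hypothesis that all three group means are equal.
f_stat, p_value = f_oneway(normal, npdr, pdr)
```

Well-separated group means relative to the within-group variance yield a very small p-value, which is the pattern reported in Table 1.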
The results of the ANN classification are shown in Table 2. The training consisted of 4,000 iterations. Table 3 shows the sensitivity, specificity, accuracy and positive predictive value for the three classes of eye images using the neural network classifier.
Our results show that the classifier is able to identify all the images of the normal class. In the case of NPDR and PDR, our classifier identifies the correct class up to 90% of the time. The sensitivity of the system is 90% and the specificity is 100%, indicating that the result is clinically significant.
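The reported metrics follow the standard definitions; the confusion-matrix counts below are hypothetical numbers chosen only to illustrate the formulas, not the paper's actual test results:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    accuracy = (TP+TN)/total, positive predictive value = TP/(TP+FP)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sens, spec, acc, ppv

# Hypothetical example: 30 abnormal images, 27 flagged; 20 normal
# images, all classified correctly.
sens, spec, acc, ppv = diagnostic_metrics(tp=27, fn=3, tn=20, fp=0)
```

With these illustrative counts, sensitivity is 0.90 and specificity is 1.0, the same shape of result as reported above.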
Discussion
An automated screening system was developed to analyze digital colour retinal images for important features of non-proliferative diabetic retinopathy (NPDR) [30]. Recursive region growing segmentation algorithms combined with the Moat Operator were used to automatically detect features of NPDR. The algorithm achieved a sensitivity of 77.5% and a specificity of 88.7% for the detection of haemorrhages and microaneurysms. The sensitivity and specificity for exudate detection were 88.5% and 99.7%, respectively.
Wang et al. used colour features with a Bayesian statistical classifier to classify each pixel into lesion or non-lesion classes [21]. Experimental results indicate that they were able to achieve 100% accuracy in identifying all the retinal images with exudates, while maintaining 70% accuracy in correctly classifying the truly normal retinal images as normal.
A computer system was developed using image processing and pattern recognition techniques to detect early lesions of diabetic retinopathy (hemorrhages and microaneurysms, hard exudates, and cotton-wool spots) [35]. This system was able to diagnose diabetic retinopathy with an accuracy of more than 90%. Classification of nonproliferative diabetic retinopathy (NPDR) based on three types of lesions, namely hemorrhages and microaneurysms, hard exudates, and cotton-wool spots, was proposed [32]. This method classified the different stages with accuracies of 82.6, 82.6, and 88.3%, respectively.
The recent work by Wong et al. [25] classified normal and three stages of diabetic retinopathy (moderate, severe, and very severe or PDR) with an accuracy of 84% and a sensitivity of 91%.
Our system is more comprehensive than the other works discussed so far. It automatically detects the three classes with an average accuracy of 93%. It also detects diabetic retinopathy at an early stage (NPDR) with an accuracy of 90%, and hence helps in preventing the loss of vision.
In this work, we used 140 fundus images. The clinical efficiency of our system could be improved by taking more retinal images under uniform lighting conditions with the same orientation. The system could also identify unknown classes better by using a few more features, such as microaneurysms and hemorrhages.
Conclusions
In this work we have investigated and proposed a computer-based system to identify normal, NPDR and PDR cases. The proposed system demonstrated a classification accuracy of 93%, a sensitivity of 90% and a specificity of 100%. The results demonstrated here indicate that the system can help the ophthalmologist detect diabetic retinopathy at an early stage and hence enables laser therapy to be performed to prevent or delay visual loss. The accuracy of the system can be further improved using more input features, more diverse images and good environmental illumination conditions.
References
Olson, J. A., Strachana, F. M., Hipwell, J. H., Goatman, K. A., McHardy, K. C., Forrestera, J. V., and Sharp, P. F., A comparative evaluation of digital imaging, retinal photography and optometrist examination in screening for diabetic retinopathy. Diabet. Med. 20(7):528–534, 2003.
The Early Treatment Diabetic Retinopathy Study Research Group, Photocoagulation for diabetic macular edema, early treatment diabetic retinopathy study report no. 1. Arch. Ophthalmol. 103:1796–1806, 1985.
Ferris, F. L. III., How effective are treatments for diabetic retinopathy? JAMA 269:1290–1291, 1993.
Williams, R., Nussey, S., Humphery, R., and Thomson, G., Assessment of non-mydriatic photography in detection of diabetic retinopathy. Br. Med. J. 293:1140–1142, 1986.
Higgs, E. R., Harney, B. A., Kelleher, A., and Reckless, J. P., Detection of diabetic retinopathy in the community using a non-mydriatic camera. Diabet. Med. 8(6):551–555, 1991.
Ronald, P. C., and Peng, T. K., A textbook of clinical ophthalmology: a practical guide to disorders of the eyes and their management, 3rd edition. World Scientific Publishing Company: Singapore, 2003.
Frank, R. N., Diabetic retinopathy. Prog. Retin. Eye Res. 14(2):361–392, 1995.
Akita, K., and Kuga, H., A computer method of understanding ocular fundus images. Pattern Recognition 15(6):431–443, 1982.
Tamura, S., and Okamoto, Y., Zero-crossing interval correction in tracing eye-fundus blood vessels. Pattern Recognition 21(3):227–233, 1988.
Ward, N. P., Tomlinson, S., and Taylor, C. J., Image analysis of fundus photographs—the detection and measurement of exudates associated with diabetic retinopathy. Ophthalmology. 96:80–86, 1989.
Chaudhuri, S., Chatterjee, S., Katz, N., Nelson, M., and Goldbaum, M., Detection of blood vessels in retinal images using two dimensional matched filters. IEEE Trans. Medical. Imaging. 8:263–269, 1989.
Phillips, R., Spencer, T., Ross, P., Sharp, P., and Forrester, J., Quantification of diabetic maculopathy by digital imaging of the fundus. Eye. 5:130–137, 1991.
Spencer, T., Phillips, R. P., Sharp, P. F., and Forrester, J. V., Automated detection and quantification of microaneurysms in fluorescein angiograms. Graefe’s Arch. Clin. Exp. Ophthalmol. 230:36–41, 1991.
Phillips, R., Forrester, J., and Sharp, P., Automated detection and quantification of retinal exudates. Graefe’s Arch. Clin. Exp. Ophthalmol. 231:90–94, 1993.
Goldbaum, M., Moezzi, S., Taylor, A., Chatterjee, S., Jeff, B., Edward, H., and Ramesh, J., Automated diagnosis and image understanding with object extraction, object classification, and inferencing in retinal images. Proceedings of IEEE International Conference of Image Processing 3:695–698, 1996.
Gardner, G., Keating, D., Williamson, T., and Elliott, A., Automatic detection of diabetic retinopathy using an artificial neural network: a screening tool. Br. J. Ophthalmol. 80:940–944, 1996.
Frame, A. J., Undrill, P. E., Cree, M. J., Olson, J. A., McHardy, K. C., Sharp, P. F., and Forrester, J. F., A comparison of computer based classification methods applied to the detection of microaneurysms in ophthalmic fluorescein angiograms. Comput. Biol. Med. 28:225–238, 1998.
Sinthanayothin, C., Boyce, J., and Williamson, C. T., Automated localization of the optic disk, fovea, and retinal blood vessels from digital colour fundus images. Br. J. Ophthalmol. 38:902–910, 1999.
Ege, B., Larsen, O., and Hejlesen, O., Detection of abnormalities in retinal images using digital image analysis. In Proceedings of the 11th Scandinavian Conference on Image Processing. pp. 833–840, 1999.
Sinthanayothin, C., Image analysis for automatic diagnosis of diabetic retinopathy. PhD Thesis, King’s College of London. 1999.
Wang, H., Hsu, W., Goh, K., and Lee, M., An effective approach to detect lesions in colour retinal images. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2:181–187, 2000.
Hunter, A., Lowell, J., Owens, J., and Kennedy, L., Quantification of diabetic retinopathy using neural networks and sensitivity analysis. In Proceedings of Artificial Neural Networks in Medicine and Biology. pp. 81–86, 2000.
Kandiraju, N., Dua, S., and Thompson, H. W., Design and implementation of a unique blood vessel detection algorithm towards early diagnosis of diabetic retinopathy. In Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC’05). IEEE Computer Society: Los Alamitos, CA. pp. 26–31, 2005.
Huiqi, L., and Opas, C., Automatic location of optic disc in retinal images. In Proceedings of the International Conference on Engineering in Medicine and Biology Society. pp. 3144–3148, 2001.
Lee, S. C., Lee, E. T., Kingsley, R. M., Wang, Y., Russell, D., Klein, R., and Warn, A., Comparison of diagnosis of early retinal lesions of diabetic retinopathy between a computer system and human experts. Arch. Ophthalmol. 119(4):509–515, 2001.
Osareh, A., Mirmehdi, M., Thomas, B., and Markham, R., Medical image understanding and analysis, BMVA Press: Surrey, UK. 2001.
Osareh, A., Mirmehdi, M., Thomas, B., and Markham, R., Classification and localisation of diabetic-related eye disease. In 7th European Conf. on Computer Vision. pp. 502–516, 2002.
Osareh, A., Mirmehdi, M., Thomas, B., and Markham, R., Comparative exudate classification using support vector machines and neural networks. In The 5th International Conf. on Medical Image Computing and Computer-assisted Intervention. pp. 413–420, 2002.
Walter, T., Klein, J.-C., Massin, P., and Erginay, A., A contribution of image processing to the diagnosis of diabetic retinopathy—detection of exudates in color fundus images of the human retina. IEEE Trans. Med. Imaging 21(10):1236–1243, 2002.
Sinthanayothin, C., Boyce, J. F., Williamson, T. H., and Cook, H. L., Automated detection of diabetic retinopathy on digital fundus image. Diabet. Med. 19:105–112, 2002.
Osareh, A., Mirmehdi, M., Thomas, B., and Markham, R., Automated identification of diabetic retinal exudates in digital colour images. Br. J. Ophthalmol. 87(10):1220–1223, 2003.
Larsen, M., Godt, J., Larsen, N., Lund-Andersen, H., Sjolie, A. K., Agardh, E., Kalm, H., Grunkin, M., and Owens, D. R., Automated detection of fundus photographic red lesions in diabetic retinopathy. Invest. Ophthalmol. Vis. Sci. 44(2):761–766, 2003.
Usher, D., Dumskyj, D., Himaga, D., Williamson, T. H., Nussey, S., and Boyce, J., Automated detection of diabetic retinopathy in digital retinal images: a tool for diabetic retinopathy screening. Diabet. Med. 21(1):84–90, 2004.
Zhang, X., and Chutatape, A., Detection and classification of bright lesions in color fundus images. International Conference on Image Processing 1:139–142, 24–27 October 2004.
Englmeier, K. H., Schmid, K., Hildebrand, C., Bichler, S., Porta, M., Maurino, M., and Bek, T., Early detection of diabetes retinopathy by new algorithms for automatic recognition of vascular changes. Eur. J. Med. Res. 9(10):473–488, 2004.
Gonzalez, R. C., and Woods, R. E., Digital image processing, Second edition. Prentice Hall: New Jersey, 2001.
Pratt, W. K., Digital image processing, Third edition. Wiley: New York, 2001.
Kulkarni, A. D., Artificial neural networks for image understanding. Van Nostrand Reinhold: New York, ISBN 0-442-00921-6.
Haykin, S., Neural networks, a comprehensive foundation, Second edition, Pearson Education: Upper Saddle River, NJ, 1999.
Acknowledgement
This project was supported by Tote Fund, Singapore.
Nayak, J., Bhat, P.S., Acharya U, R. et al. Automated Identification of Diabetic Retinopathy Stages Using Digital Fundus Images. J Med Syst 32, 107–115 (2008). https://doi.org/10.1007/s10916-007-9113-9