Abstract
Automated classification of retinal arteries (A) and veins (V) is of great importance for the management of eye diseases and systemic diseases. Traditional color fundus images usually provide a large field of view of the retina in color, but often fail to capture the finer vessels and capillaries. In contrast, the newer Optical Coherence Tomography Angiography (OCT-A) images provide a clear, grayscale view of the retinal microvascular structure down to the capillary level, but alone they carry no A/V information. For the first time, this study presents a new approach for A/V classification in OCT-A images, guided by the corresponding fundus images, so that the strengths of both modalities can be integrated. To this end, we first estimate the vascular topologies of the paired color fundus and OCT-A images; we then propose a topological message passing algorithm to register the OCT-A onto the color fundus image; finally, the integrated vascular topology map is categorized into arteries and veins by a clustering approach. The proposed method has been applied to a local dataset containing both fundus and OCT-A images, and it reliably identified individual arteries and veins in OCT-A. The experimental results show that, despite the lack of color and intensity information in OCT-A, the method produces promising results. In addition, we will release our database to the public.
1 Introduction
Vascular morphological changes in the retina are frequently associated with a variety of diseases such as diabetic retinopathy (DR), age-related macular degeneration (AMD), and pathological myopia, as well as systemic diseases such as cardiovascular disease and hypertension [1, 2]. A number of studies [3, 4] have suggested that different systemic diseases, and their severity, can affect arteries and veins differently. For example, a low artery-vein ratio (AVR) is a direct characteristic sign of DR [5], whilst a high AVR has been linked to high levels of cholesterol [6, 7]. However, the calculation of AVR requires the detection and classification of retinal vessels into arteries (A) and veins (V). Manual annotation of A/V is time-consuming and prone to human error [8]. There is thus an increasing demand for automated methods for A/V detection and classification.
In the past decade, extensive effort has been made to automate the process of A/V detection and classification in color fundus images. Vazquez et al. [9] utilized a tracking strategy based on the minimal path approach to support A/V classification. Dashtbozorg et al. [7] proposed an A/V classification method based on graph nodes and intensity features. Estrada et al. [1] employed a graph method to estimate the vessel topology, with domain-specific knowledge to classify A/V. Huang et al. [10] introduced an A/V classification framework using a linear discriminant analysis classifier. Zhao et al. [11] formalized A/V classification as a pairwise clustering problem. Ma et al. [12] proposed multi-task neural networks for retinal vessel segmentation and A/V classification.
However, all the aforementioned methods are only applicable to color fundus images. Color fundus imagery is unable to capture fine vessels and capillaries, which are prominent in the center of the retina, i.e., the fovea and parafovea; see Fig. 1(a–b). Fluorescein angiography (FA) can resolve the whole retinal vasculature including capillaries, but it is invasive and has side effects [13, 14]. In contrast, optical coherence tomography angiography (OCT-A) is an emerging non-invasive imaging modality that enables observation of microvascular details down to the capillary level, as shown in Fig. 1(c). It thus opens up a new avenue to study the relation between retinal vessels and various vessel-related diseases. In particular, the microvasculature distributed within the parafovea is of great interest, as abnormalities there often indicate diseases such as early-stage DR and hypertension [4, 15, 16].
OCT-A images offer great potential for the quantification of fine vessels and capillaries for improved diagnosis and monitoring of disease. However, existing methods cannot discriminate arteries from veins in OCT-A for two reasons. First, most of the vessels have relatively small calibres, making accurate segmentation challenging for current methods. Second, A/V classification is impossible due to the lack of color and intensity features, on which most A/V classification algorithms are based [17, 18].
To overcome the above issues, we propose a novel A/V classification method for OCT-A images guided by the color information in fundus images. Our method consists of three phases: vessel topology estimation, registration of the vessel structures extracted from OCT-A and color fundus images, and A/V classification on the registered vasculature. This paper makes three main contributions. 1) This is the first attempt to use color fundus images to guide A/V classification in OCT-A, thereby exploiting their complementary information for the task; 2) We use a topological message passing approach to achieve multi-modal image registration, so as to obtain a comprehensive vascular topology map; 3) We have made our dataset publicly available; it contains paired fundus and OCT-A images with manual annotations of vessel topology and A/V classification.
2 Method
The proposed new framework comprises three steps: (1) Estimation of vascular topology in both color fundus and corresponding OCT-A images, (2) Fusion of the topology maps using message passing, and (3) Classification of A/V in OCT-A images. Figure 2 provides an overview of the proposed method.
2.1 Vascular Topology Estimation
Figure 3 illustrates the pipeline of topology estimation in an OCT-A image. We first segment the blood vessels in the fundus and OCT-A images, respectively, using a pre-trained CS-Net [19], which has been validated for vessel segmentation on both fundus and OCT-A images. An undirected graph with weighted edges is represented as \(G =(V, E, \omega )\), where V is a set of nodes, the edge set \({E\subseteq V\,\times \,V}\) indicates all the connections of the relevant nodes, and \(\omega \) represents the similarity among the nodes in V. A \(|V|\,\times \,|V|\) symmetric matrix \(A=\{a_{ij}\}\), named the adjacency matrix, is used to represent the weighted graph G. The value of \(a_{ij}\) is derived from a similarity measure defined in the feature space of the nodes [20]. Here, we define \(a_{ij}=0\) for \(i\,=\,j\), which indicates that the generated graph G contains no self-loops. The Dominant Sets approach is then employed to detect which nodes (and the segments they represent) belong to the same vessel branch. A dominant set can be determined by solving the standard quadratic program below:

\(\max _{\mathbf {x}}\ \mathbf {x}^{T}A\mathbf {x} \quad \text {s.t.}\quad \mathbf {x}\in \Delta =\left\{ \mathbf {x}\in \mathbb {R}^{|V|}\,:\,\mathbf {x}\ge 0,\ \mathbf {1}^{T}\mathbf {x}=1\right\} ,\)  (1)
where A is the adjacency matrix of G, and \(\mathbf {x^*}\) is a strict local solution of Eq. (1). If the i-th element of \(\mathbf {x^*}\) is larger than zero, then the i-th node of G is in the dominant set S identified by \(\mathbf {x^*}\). Effective optimization approaches for solving Eq. (1) can be found in [21, 22]. The nodes in the dominant set S of G (and the segments they represent) are identified as belonging to the same vessel branch. Other vessel branches in G are identified by iteratively solving the dominant set problem on the updated graph \(G = G \setminus S\). Figure 3(d) shows the resulting vascular network with topological information.
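The iterative branch-extraction loop above can be sketched with discrete replicator dynamics, the optimizer commonly used for Eq. (1) [21, 22]. This is a minimal NumPy sketch, not the authors' implementation: the function names, the positive-support threshold, and the toy affinity values are our own illustrative choices.

```python
import numpy as np

def dominant_set(A, tol=1e-6, max_iter=1000):
    """Extract one dominant set from a symmetric, zero-diagonal
    affinity matrix A via discrete replicator dynamics."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                # start at the simplex barycentre
    for _ in range(max_iter):
        x_new = x * (A @ x)                # replicator update
        s = x_new.sum()
        if s == 0:                         # no positive affinities left
            break
        x_new /= s                         # stay on the simplex
        if np.linalg.norm(x_new - x, 1) < tol:
            x = x_new
            break
        x = x_new
    return np.flatnonzero(x > 1e-5)        # nodes with positive support

def peel_branches(A):
    """Iteratively extract dominant sets (vessel branches),
    removing each set from the graph, i.e. G = G \\ S."""
    remaining = np.arange(A.shape[0])
    branches = []
    while remaining.size > 1:
        S = dominant_set(A[np.ix_(remaining, remaining)])
        if S.size == 0:
            break
        branches.append(remaining[S].tolist())
        remaining = np.delete(remaining, S)
    if remaining.size:
        branches.append(remaining.tolist())
    return branches
```

On a toy affinity matrix with two tightly connected node pairs, `peel_branches` recovers the two groups, mirroring how segments of the same vessel branch are grouped and then removed from G.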
2.2 Topology Maps Fusion via Message Passing
The proposed classification of A/V in OCT-A requires the guidance of topology information from the fundus image. As such, we map the vascular topology of the OCT-A image onto that of the fundus image to form a comprehensive topology map. This involves a registration process and a topological consistency enforcement process.
We first use a rigid registration algorithm [23] to match the significant points between OCT-A and fundus images. We consider two significant points from the different image modalities within a six-pixel tolerance as a matched pair. However, this coarse registration does not guarantee spatial consistency, and may lead to a large portion of potential mismatches because of the non-linear deformation between OCT-A and fundus. To address this, we propose a novel method, inspired by Gaussian Process Regression (GPR) [24], to remove correspondences that are inconsistent with a locally smooth deformation model.
In practice, to find a geometrically consistent set of correspondences \(S_n\) between fundus image \( I^{F}\) and OCT-A \( {I^{O}} \), we first define a set of correspondences \(S^0_{n} = \left\{ {\mathbf {p}_l^{F}\leftrightarrow \mathbf {p}_l^{O}}\right\} _{1\le l\le L }\) of the L overlapped significant points. In the example of Fig. 4 (Iteration #1), the selected \(\mathbf {p}_l^{F}\) points are shown in green. We treat \(S^0_{n}\) as a reliable set and use the GPR to estimate the mean \(m_{S_n^0}(\mathbf {p}^{F})\) and covariance \(\sigma _{S_n^0}^2(\mathbf {p}^{F})\) of the location of a point \(\mathbf {p}^{F}\) in \(I^{O}\), computed by

\(m_{S_n^0}(\mathbf {p}^{F}) = \mathbf {k}^{T}\varGamma _{S_n^0}^{-1}\mathbf {P}_{S_n^0}^{O}, \qquad \sigma _{S_n^0}^{2}(\mathbf {p}^{F}) = k(\mathbf {p}^{F},\mathbf {p}^{F}) + \beta ^{-1} - \mathbf {k}^{T}\varGamma _{S_n^0}^{-1}\mathbf {k},\)  (2)
where \(\mathbf {k}\) is the vector \([k(\mathbf {p}_1^F,\mathbf {p}^F),...,k(\mathbf {p}_L^F,\mathbf {p}^F)]^T\), and k is a kernel function which defines a mapping composed of an affine and a non-linear transformation as in [24, 25], \(\beta ^{-1}\) denotes a measurement noise variance, \(\varGamma _{S_n^0}\) is the \(L\,\times \,L\) symmetric matrix with elements \(\varGamma _{i,j} = k\left( \mathbf {p}_i^F, \mathbf {p}_j^F\right) +\beta ^{-1}\delta _{i,j}\), and \(\mathbf {P}_{S_n^0}^O\) is the \(L\,\times \,D\) matrix \([\mathbf {p}_1^O,...,\mathbf {p}_L^O]^T\), where D is the dimension of the image.
Afterwards, all correspondences that are consistent with this GPR are added to \(S_n^0\). A correspondence is treated as valid if the Mahalanobis distance between \(\mathbf {p}^{O}\) and \({m_{S_n^0}}(\mathbf {p}^F)\) is small enough. This gives an augmented correspondence set \(S_n^1\), such as the one depicted in Fig. 4 (Iteration #2). The regression of Eq. (2) is then recomputed using \(S_n^1\), and the process is repeated until the set stabilizes, typically after 3 to 4 iterations, as shown in Fig. 4 (Iteration #3). This finally yields the set \(S_n\) of geometrically consistent correspondences \((\mathbf {p}_i^F,\mathbf {p}_i^O)\). Typically, all the significant points \(\mathbf {p}^F\) in the fundus image are contained in the stabilized set, so the number of correspondences usually equals the number of significant points in \(I^{F}\).
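The iterative GPR gating above can be sketched as follows. This is a simplified illustration under stated assumptions: we use an isotropic squared-exponential kernel in place of the paper's affine-plus-non-linear kernel [24, 25], share one scalar predictive variance across both coordinates so that the Mahalanobis gate reduces to a scaled Euclidean distance, and all names and parameter values are hypothetical.

```python
import numpy as np

def rbf_kernel(P, Q, length_scale):
    """Squared-exponential kernel between point sets P (m x D) and Q (n x D)."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

def gpr_filter(pF, pO, seed_idx, beta_inv=1.0, length_scale=40.0,
               chi2_thresh=9.21, n_iter=5):
    """Iteratively grow a geometrically consistent correspondence set.
    pF, pO: L x 2 arrays of coarsely matched significant points in the
    fundus / OCT-A images; seed_idx: indices of the initial reliable set."""
    S = set(seed_idx)
    for _ in range(n_iter):
        idx = sorted(S)
        K = rbf_kernel(pF[idx], pF[idx], length_scale)
        Gamma_inv = np.linalg.inv(K + beta_inv * np.eye(len(idx)))
        k = rbf_kernel(pF[idx], pF, length_scale)        # |S| x L
        mean = k.T @ Gamma_inv @ pO[idx]                 # predicted OCT-A locations
        var = 1.0 + beta_inv - np.einsum('ij,ik,kj->j', k, Gamma_inv, k)
        var = np.maximum(var, 1e-12)
        d2 = ((pO - mean) ** 2).sum(1) / var             # per-point gate statistic
        S_new = set(np.flatnonzero(d2 < chi2_thresh).tolist())
        if S_new == S:                                   # set has stabilized
            break
        S = S_new
    return sorted(S)
```

On synthetic data where four points share a common translation and a fifth is grossly displaced, the filter keeps the consistent four and rejects the outlier.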
We now turn to enforcing the topological consistency constraint by message passing on the result introduced in Sect. 2.1. Topological consistency means that if a subtree of the fundus vessel network contains a set of significant points \(\mathbf {p}_m^F\) with an edge set \(E^F = \left\{ E_{ij}^F, {i,j}\in m\right\} \), then we can find the corresponding point set \(\mathbf {p}_m^O\) and edge set \(E^O = \left\{ E_{ij}^O, {i,j}\in m\right\} \) in the OCT-A. For each subtree in the fundus image, the corresponding OCT-A subtree is found by searching for the subtree containing all the edges in \(E^O\). Topological message passing is achieved by linking corresponding subtree pairs, as shown in Fig. 2(c).
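The subtree-linking step above amounts to a set-containment search. In this minimal sketch, the data layout (dicts of edge lists keyed by subtree id, and a fundus-to-OCT-A point-id map derived from the correspondences \(S_n\)) is our own illustrative choice:

```python
def match_subtrees(fundus_trees, octa_trees, point_map):
    """Link each fundus subtree to the OCT-A subtree containing all of
    its mapped edges. fundus_trees / octa_trees: dict of subtree id ->
    list of (i, j) edges over significant-point ids; point_map sends
    fundus point ids to their registered OCT-A point ids."""
    links = {}
    for f_id, f_edges in fundus_trees.items():
        # Translate fundus edges into OCT-A point ids (order-insensitive).
        mapped = {frozenset((point_map[i], point_map[j]))
                  for i, j in f_edges
                  if i in point_map and j in point_map}
        for o_id, o_edges in octa_trees.items():
            o_set = {frozenset(e) for e in o_edges}
            if mapped and mapped <= o_set:   # E^O fully contained
                links[f_id] = o_id
                break
    return links
```

Each returned link connects a fundus subtree to the OCT-A subtree it should pass its (eventual A/V) label to.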
2.3 A/V Classification in OCT-A
A topology map is obtained after mapping the topology results from an OCT-A image onto its corresponding fundus image: the complete vessel network is divided into several subgraphs, each with a label. The final goal is to separate these labels into artery and vein categories.
Again, the dominant sets-based classification method (DOS) introduced in Sect. 2.1 is employed to classify these individual branches into two clusters, A and B. Note that the features suggested in [7] are utilized to compute the weights \(\omega \). For each subgraph i, the probability of its belonging to cluster A, \(P^i_A\), is computed from the number of vessel pixels classified by DOS as A: \(P^i_A=n^i_A/(n^i_A+n^i_B)\), where \(n^i_A\) and \(n^i_B\) are the numbers of pixels classified as A and B, respectively. In this work, a subgraph is labeled 'artery' if its average intensity value in the fundus image is larger than 0.48; otherwise, it is labeled 'vein'.
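The final intensity-based assignment can be sketched as below. The 0.48 threshold follows the text, while the data layout (centreline pixel lists per subgraph and a fundus intensity map registered to OCT-A coordinates) is a hypothetical illustration:

```python
import numpy as np

def classify_subgraphs(subgraph_pixels, fundus_intensity, thresh=0.48):
    """Assign each topological subgraph an 'artery'/'vein' label.
    subgraph_pixels: dict of subgraph id -> list of (row, col) centreline
    pixels; fundus_intensity: 2-D array in [0, 1] registered to OCT-A."""
    labels = {}
    for sid, pixels in subgraph_pixels.items():
        rows, cols = zip(*pixels)
        mean_intensity = float(np.mean(fundus_intensity[rows, cols]))
        # Arteries appear brighter than veins in the fundus image.
        labels[sid] = 'artery' if mean_intensity > thresh else 'vein'
    return labels
```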
3 Experiments
Datasets: The proposed A/V classification method was evaluated on 22 pairs of OCT-A and fundus images from 22 eyes with pathological myopia. OCT-A images were acquired using an AngioVue spectral-domain OCT angiography system (Optovue, Fremont, USA), with a resolution of \(400\,\times \,400\) pixels; all analyzed OCT-A images were \(6\,\times \,6\) mm\(^2\) scans. Color fundus images were captured using a Canon CR-2 camera (Canon, Japan) with a 45\(^\circ \) field-of-view and a resolution of \(1623\,\times \,1642\) pixels. One senior image expert was invited to manually label the vascular topology (each single tree marked with a distinct color) and the arteries and veins on both image types. We have made our dataset publicly available at https://imed.nimte.ac.cn/ACRO.html.
3.1 Evaluation of A/V Classification
Figure 5 demonstrates the results of A/V classification on two example OCT-A images using the proposed method. Overall, our method correctly distinguishes most of the arteries (red) and veins (blue) when compared with the corresponding manual annotations. To demonstrate the effectiveness of the proposed A/V classification more objectively, Table 1 reports the sensitivity (Se) and balanced accuracy (Acc) scores at the centerline pixel level. Se indicates the ability of a given method to detect arteries, while Acc indicates the overall classification performance and thus reflects the trade-off between sensitivity and specificity. Our method obtains promising A/V classification performance on OCT-A, with Se = 0.906 and Acc = 0.890.
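The reported metrics can be reproduced from centerline-pixel label pairs as follows; treating arteries as the positive class is our reading of the text, and the 'A'/'V' label encoding is an illustrative assumption:

```python
def av_metrics(pred, gold):
    """Centerline pixel-level sensitivity (arteries as positives) and
    balanced accuracy for A/V classification. pred, gold: equal-length
    sequences of 'A'/'V' labels over matched centerline pixels."""
    tp = sum(p == 'A' and g == 'A' for p, g in zip(pred, gold))
    fn = sum(p == 'V' and g == 'A' for p, g in zip(pred, gold))
    tn = sum(p == 'V' and g == 'V' for p, g in zip(pred, gold))
    fp = sum(p == 'A' and g == 'V' for p, g in zip(pred, gold))
    se = tp / (tp + fn) if tp + fn else 0.0      # sensitivity
    sp = tn / (tn + fp) if tn + fp else 0.0      # specificity
    return se, (se + sp) / 2                     # balanced accuracy
```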
We have also evaluated our method against state-of-the-art A/V classification methods on four public retinal vessel datasets, INSPIRE [26], VICAVR [9], DRIVE [27], and WIDE [1], to validate the superiority of our method in A/V classification. The results clearly show that the proposed method obtains competitive performance in terms of Se and Acc. The results of the existing methods were quoted from the corresponding papers.
3.2 Evaluation of Topology Estimation
Our A/V classification relies on the prior results of topology reconstruction, and incorrect topology estimation may cause incorrect A/V classification. Therefore, it is important to establish the performance of the topology estimation. The last two columns of Fig. 6 illustrate the results of vascular topology estimation on OCT-A. It can be seen that our method is able to trace most vascular structures correctly: only a few significant points were incorrectly traced, marked by red circles in the last column of Fig. 6. These errors are located at crossovers, where the method may suffer from failures at the tiny-vessel segmentation and skeletonization stages, leading to misrepresentation of the topological structures. The percentage of correctly identified (True Positive, TP) relevant significant points, i.e., bifurcations (BIF) and crossovers (CRO), is presented in Table 2. Compared to color fundus images, OCT-A lacks color and intensity information, yet our method still achieves promising results: the accuracy of significant point identification is 87.1% in OCT-A images.
4 Conclusion
An automated method for classifying vessels as arteries or veins in OCT-A is indispensable for understanding disease progression and facilitating the management of many diseases. In this paper, we have demonstrated a novel A/V classification method for OCT-A, guided by the color and intensity information of the corresponding fundus images. We exploited the observation that paired fundus and OCT-A images share partial vascular topological networks. After estimating the vascular topology of both the color fundus and OCT-A images, Gaussian process regression was applied to register and translate A/V classification results from the color fundus image to the OCT-A image. The proposed algorithm accurately estimated topology and classified vessel types in OCT-A. The significance of our method is that it is the first attempt to classify A/V in OCT-A. Future work will focus on testing the proposed technique for the diagnosis of eye-related diseases in clinical settings.
References
Estrada, R., Tomasi, C., Schmidler, S., Farsiu, S.: Tree topology estimation. IEEE Trans. Pattern Anal. Mach. Intell. 37(8), 1688–1701 (2015)
Zheng, Y., et al.: Automatic 2-D/3-D vessel enhancement in multiple modality images using a weighted symmetry filter. IEEE Trans. Med. Imaging 37(2), 438–450 (2017)
Vázquez, S., et al.: Improving retinal artery and vein classification by means of a minimal path approach. Mach. Vis. Appl. 24(5), 919–930 (2013)
Alam, M., Toslak, D., Lim, J.I., Yao, X.: Color fundus image guided artery-vein differentiation in optical coherence tomography angiography. Invest. Ophthalmol. Vis. Sci. 59(12), 4953–4962 (2018)
Estrada, R., Tomasi, C., Schmidler, S.C., Farsiu, S.: Tree topology estimation. IEEE Trans. Pattern Anal. Mach. Intell. 37(8), 1688–1701 (2015)
Zhao, Y., Rada, L., Chen, K., Harding, S.P., Zheng, Y.: Automated vessel segmentation using infinite perimeter active contour model with hybrid region information with application to retinal images. IEEE Trans. Med. Imaging 34(9), 1797–1807 (2015)
Dashtbozorg, B., Mendonça, A.M., Campilho, A.: An automatic graph-based approach for artery/vein classification in retinal images. IEEE Trans. Image Process. 23(3), 1073–1083 (2013)
Zhao, Y., et al.: Retinal vascular network topology reconstruction and artery/vein classification via dominant set clustering. IEEE Trans. Med. Imaging 39(2), 341–356 (2020)
Vázquez, S., Cancela, B., Barreira, N., Saez, M.: Improving retinal artery and vein classification by means of a minimal path approach. Mach. Vis. Appl. 24(5), 919–930 (2013)
Huang, F., Dashtbozorg, B., Romeny, B.M.H.: Artery/vein classification using reflection features in retina fundus images. Mach. Vis. Appl. 29(1), 23–34 (2017). https://doi.org/10.1007/s00138-017-0867-x
Zhao, Y., et al.: Retinal artery and vein classification via dominant sets clustering-based vascular topology estimation. In: Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds.) MICCAI 2018. LNCS, vol. 11071, pp. 56–64. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00934-2_7
Ma, W., Yu, S., Ma, K., Wang, J., Ding, X., Zheng, Y.: Multi-task neural networks with spatial activation for retinal vessel segmentation and artery/vein classification. In: Shen, D., et al. (eds.) MICCAI 2019. LNCS, vol. 11764, pp. 769–778. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32239-7_85
Talu, S., Calugaru, D.M., Lupascu, C.A.: Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis. Int. J. Ophthalmol. 8(4), 770 (2015)
Zhao, Y., et al.: Intensity and compactness enabled saliency estimation for leakage detection in diabetic and malarial retinopathy. IEEE Trans. Med. Imaging 36(1), 51–63 (2017)
Zahid, S., et al.: Fractal dimensional analysis of optical coherence tomography angiography in eyes with diabetic retinopathy. Invest. Ophthal. Vis. Sci. 57(11), 4940–4947 (2016)
Zhao, Y., et al.: Automated tortuosity analysis of nerve fibers in corneal confocal microscopy. IEEE Trans. Med. Imaging 39, 2725–2737 (2020)
Niemeijer, M., et al.: Automated measurement of the arteriolar-to-venular width ratio in digital color fundus photographs. IEEE Trans. Med. Imaging 30(11), 1941–1950 (2011)
Xie, J., Zhao, Y., Zheng, Y., Su, P., Liu, J., Wang, Y.: Retinal vascular topology estimation via dominant sets clustering. In: International Symposium on Biomedical Imaging, pp. 1458–1462. IEEE (2018)
Mou, L., et al.: CS-Net: channel and spatial attention network for curvilinear structure segmentation. In: Shen, D., et al. (eds.) MICCAI 2019. LNCS, vol. 11764, pp. 721–730. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32239-7_80
Xie, J., et al.: Topology reconstruction of tree-like structure in images via structural similarity measure and dominant set clustering. In: Conference on Computer Vision and Pattern Recognition, vol. 10, pp. 8505–8513 (2019)
Pavan, M., Pelillo, M.: Dominant sets and pairwise clustering. IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 167–172 (2006)
Zemene, E., Pelillo, M.: Interactive image segmentation using constrained dominant sets. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9912, pp. 278–294. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46484-8_17
Chen, J., Tian, J., Lee, N., Zheng, J., Smith, R.T., Laine, A.F.: A partial intensity invariant feature descriptor for multimodal retinal image registration. IEEE Trans. Biomed. Eng. 57(7), 1707–1718 (2010)
Quiñonero-Candela, J., Rasmussen, C.E.: A unifying view of sparse approximate gaussian process regression. J. Mach. Learn. Res. 6(Dec), 1939–1959 (2005)
Serradell, E., Glowacki, P., Kybic, J., Moreno-Noguer, F., Fua, P.: Robust non-rigid registration of 2D and 3D graphs. In: Conference on Computer Vision and Pattern Recognition, pp. 996–1003. IEEE (2012)
Qureshi, T., Habib, M., Hunter, A., Al-Diri, B.: A manually-labeled, artery/vein classified benchmark for the drive dataset. In: 2013 IEEE 26th International Symposium on Computer-Based Medical Systems, pp. 485–488 (2013)
Estrada, R., Allingham, M.J., Mettu, P.S., Cousins, S.W., Tomasi, C., Farsiu, S.: Retinal artery-vein classification via topology estimation. IEEE Trans. Med. Imaging 34(12), 2518–2534 (2015)
Acknowledgment
This work was supported by Zhejiang Provincial Natural Science Foundation of China (LQ20F030002, LZ19F010001), Ningbo “2025 S&T Megaprojects” (2019B10033, 2019B10061), National Natural Science Foundation of China (61906181).
Xie, J. et al. (2020). Classification of Retinal Vessels into Artery-Vein in OCT Angiography Guided by Fundus Images. In: Martel, A.L., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. MICCAI 2020. Lecture Notes in Computer Science(), vol 12266. Springer, Cham. https://doi.org/10.1007/978-3-030-59725-2_12