Abstract
Registration of pre-operative and post-recurrence brain images is often needed to evaluate the effectiveness of brain glioma treatment. While recent deep learning-based deformable registration methods have achieved remarkable success with healthy brain images, most of them cannot accurately align images with pathologies due to absent correspondences in the reference image. In this paper, we propose a deep learning-based deformable registration method that jointly estimates regions with absent correspondence and bidirectional deformation fields. A forward-backward consistency constraint is used to aid the localization of the resection and recurrence regions from voxels with absent correspondences in the two images. Results on 3D clinical data from the BraTS-Reg challenge demonstrate that our method improves image alignment compared to traditional and deep learning-based registration approaches, with or without a cost function masking strategy. The source code is available at https://github.com/cwmok/DIRAC.
1 Introduction
Registration of pre-operative and post-recurrence brain MRI scans plays a significant role in discovering accurate imaging markers and elucidating imaging signatures for aggressively infiltrated tissue, which are crucial to the treatment planning and diagnosis of intracranial tumors, especially brain gliomas [11, 26]. To better understand the location and extent of the tumor and its biological activity after resection, pre-operative and follow-up structural brain MRI scans of a patient first need to be aligned accurately. However, deformable registration between the pre-operative and follow-up scans, including post-resection and post-recurrence, is challenging due to possible large deformations and absent correspondences caused by the tumor’s mass effect [7], resection cavities, tumor recurrence and tissue relaxation in the follow-up scans.
Conventional registration methods mostly deal with the absent correspondence issue by (1) excluding pathological regions from the similarity measure [3, 5], (2) replacing the pathological images with a quasi-normal appearance [9, 18, 30] or (3) using a joint registration and segmentation framework [4, 27]. Excluding the pathological regions often requires manual delineation [3] or an initial seed [4, 17] of the tumor regions in brain scans, which are prohibitively costly to acquire in terms of labour and resources. Replacing the pathological image with a quasi-normal appearance, alternatively, avoids the prerequisite of a prior pathological segmentation. However, modeling the tumor-to-quasi-normal appearance with a statistical model [9, 18] often requires extra image scans, i.e., scans from a healthy population. Moreover, existing approaches based on quasi-normal images require accurate registration to a common atlas space for quasi-normal reconstruction; ironically, accurate alignment of images suffering from mass effect is very hard to achieve without the reconstruction. Therefore, the registration and reconstruction problems in quasi-normal approaches need to be interleaved in a costly iterative optimization process. Alternatively, an unsupervised approach [27] accommodating resection and retraction of tissue was proposed for registering pre-operative and intra-operative brain images. It alternates between registering the brain scans using the demons algorithm with an anisotropic diffusion smoother and segmenting the resection using the level set method in the regions with high image intensity disagreement. Chitphakdithai et al. [4] extended this idea to a simultaneous registration and resection estimation approach with the expectation-maximization algorithm and a prior on post-resection image intensities. Nevertheless, these methods rely on costly iterative optimization, which can take up to \(\sim 3.5\) h per case [17].
While recent deep learning-based deformable registration (DLDR) methods have achieved remarkable registration speed and superior registration accuracy [2, 6, 10, 13, 15, 23,24,25], these registration algorithms are incapable of accurately registering pre-operative and post-recurrence images due to the absent correspondence problem. A learning-based registration method for images with pathology was presented in [8], which dealt with missing correspondence by jointly estimating the vector-momentum parameterized stationary velocity field (vSVF) and a quasi-normal image to drive the registration. Nevertheless, the reconstruction of the quasi-normal image requires explicit tumor segmentation in the training phase. Moreover, the large deformation caused by the mass effect of the tumor is difficult to model without resorting to complex multi-stage warping pipelines.
In this paper, we present an unsupervised joint registration and segmentation learning framework for pre-operative and post-recurrence registration, in which a large-deformation image registration network and a forward-backward consistency constraint are leveraged to estimate the valid and absent correspondence regions along with the dense deformation fields in a bidirectional manner. Instead of using manual delineation or image intensity disagreement to segment the pathological regions, our method leverages the forward-backward consistency of the bidirectional deformation fields to explicitly locate regions with absent correspondence and exclude them from the similarity measure in an unsupervised manner. We present extensive experiments on a pre-operative and post-recurrence brain MR dataset, demonstrating that our method achieves high registration accuracy in brain MR scans with pathology.
2 Methods
Our goal is to establish a dense non-linear correspondence between the pre-operative scan and the post-recurrence scan of the same subject, where regions without valid correspondence are excluded in the similarity measure during optimization. Our method builds on the previous DLDR method [24] and extends it to accommodate the absent correspondence issue in the pre-operative and post-recurrence scans.
2.1 Bidirectional Deformable Image Registration
Let B and F be the pre-operative (baseline) scan and the post-recurrence (follow-up) scan, respectively, defined over an n-D mutual spatial domain \(\varOmega \subseteq \mathbb {R}^n\). In this paper, we focus on 3D deformable registration, i.e., \(n = 3\) and \(\varOmega \subseteq \mathbb {R}^3\), and assume that B and F are affinely aligned to a common space.
Figure 1 depicts an overview of our method. We parametrize the deformable registration problem as a bidirectional registration problem \(\boldsymbol{u}_{bf} = f_\theta (B, F)\) and \(\boldsymbol{u}_{fb} = f_\theta (F, B)\) with a CNN, where \(\theta \) is a set of learning parameters and \(\boldsymbol{u}_{bf}\) represents the displacement field that transforms B to align with F, i.e., \(B(x+\boldsymbol{u}_{bf}(x))\) and F(x) define similar anatomical locations for each voxel \(x\in \varOmega \) (except voxels with absent correspondence). The proposed method works with any CNN-based DLDR method. To accommodate the large deformations and variations of anatomical structures caused by the tumor’s mass effect, we instantiate \(f_\theta \) with the conditional deep Laplacian pyramid image registration network (cLapIRN) [24], which is capable of large deformation estimation and rapid hyperparameter tuning of the smoothness regularization in a wide range of applications [12]. Despite its multi-resolution optimization strategy, vanilla cLapIRN is incapable of accurately registering images with absent correspondence, i.e., missing correspondence caused by tumor resection and recurrence, edema and cavities. Therefore, instead of measuring the similarity of B and F at every voxel \(x \in \varOmega \), our method estimates the regions with absent correspondence in both the B and F domains using the bidirectional displacement fields and the forward-backward consistency constraint, and only measures the similarity on regions with valid correspondence during optimization.
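As a concrete sketch of the warping operation \(B(x+\boldsymbol{u}_{bf}(x))\), the following NumPy/SciPy helper resamples a volume at the displaced coordinates. This is a hypothetical illustration for clarity only; the method itself uses a differentiable warping layer in PyTorch.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, disp):
    """Resample `image` at x + u(x), i.e. realize B(x + u_bf(x)).

    image: (D, H, W) volume.
    disp:  (3, D, H, W) displacement field u.
    Linear interpolation with edge clamping.
    """
    # identity grid of voxel coordinates, shape (3, D, H, W)
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in image.shape],
                                indexing="ij")).astype(float)
    coords = grid + disp  # sampling locations x + u(x)
    return map_coordinates(image, coords, order=1, mode="nearest")
```

A zero displacement field returns the image unchanged, while a constant displacement of one voxel along the first axis shifts the sampling grid accordingly.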
2.2 Forward-Backward Consistency Constraint
Conventionally, regions with absent correspondence can be detected by comparing the appearance or image intensities of the warped scan to the target scan or an atlas [18, 27]. However, corresponding regions in the pre-operative and post-recurrence scans may have different intensity profiles, which makes such approaches less robust in practice. Therefore, we depart from approaches with spatial priors and instead extend the forward-backward consistency [19, 21, 28, 29]. We design a forward-backward consistency constraint to locate regions with absent correspondence in the baseline and follow-up scans. The forward-backward (inverse consistency) error \(\delta _{bf}\) from B to F is defined as:
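Following the standard forward-backward residual from the optical flow literature [19, 21, 28, 29] (the precise form shown here is a reconstruction, not a verbatim quote of the paper's equation):

\[
\delta_{bf}(x) = \big\| \boldsymbol{u}_{bf}(x) + \boldsymbol{u}_{fb}\big(x + \boldsymbol{u}_{bf}(x)\big) \big\|_2^2,
\]

which vanishes when the forward displacement at \(x\) is exactly undone by the backward displacement at the forward-mapped location.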
We estimate the regions with absent correspondence by checking the consistency of the forward and backward displacement fields. For any voxel x, if there is a significant violation of inverse consistency at x, i.e., \(\delta _{bf}(x)>\tau _{bf}\), then either x has no valid correspondence or the displacement field is not accurately estimated. \(\tau _{bf}\) is the pre-defined threshold and is defined as follows:
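A plausible threshold, mirroring the occlusion criterion of [28, 29] (the grouping of terms and the additive constant \(\beta\) are assumptions in this reconstruction):

\[
\tau_{bf}(x) = \alpha \Big( \big\|\boldsymbol{u}_{bf}(x)\big\|_2^2 + \big\|\boldsymbol{u}_{fb}\big(x + \boldsymbol{u}_{bf}(x)\big)\big\|_2^2 \Big) + \beta .
\]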
where the first term grants a tolerance interval that allows estimation errors to increase with the overall complexity of the registration and \(\alpha \) is a constant. Then, we create a binary mask \(\boldsymbol{m}_{bf}\) to mark voxels with absent correspondence as follows:
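From the description that follows, the mask construction can be written as:

\[
\boldsymbol{m}_{bf}(x) =
\begin{cases}
1, & \text{if } (\boldsymbol{A} \star \delta_{bf})(x) > \tau_{bf},\\
0, & \text{otherwise.}
\end{cases}
\]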
where \(\boldsymbol{A}\) denotes an averaging filter of size \((2p+1)^3\) and \(\star \) denotes a convolution operator with zero-padding p. Since the estimated registration fields fluctuate during learning, we apply an averaging filter to the estimated forward-backward error to stabilize the estimation of the binary mask and to alleviate the effect of outliers on the mask estimation. The mask \(\boldsymbol{m}_{fb}\) in the backward-to-forward direction is defined symmetrically, with \(\boldsymbol{u}_{fb}\) and \(\boldsymbol{u}_{bf}\) exchanged. We set \(\alpha =0.015\) and \(p=4\) in all our experiments. The values of \(\alpha \) and p are determined by measuring the forward-backward error of the pathological regions from a vanilla cLapIRN model.
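The error-and-mask computation of this subsection can be sketched in NumPy/SciPy as follows. This is a minimal illustration: `fb_error` implements the standard forward-backward residual, and `absent_mask` takes the threshold \(\tau\) as a given input rather than computing it, since its exact formula is an assumption here.

```python
import numpy as np
from scipy.ndimage import map_coordinates, uniform_filter

def fb_error(u_bf, u_fb):
    """delta_bf(x) = || u_bf(x) + u_fb(x + u_bf(x)) ||^2.

    u_bf, u_fb: (3, D, H, W) displacement fields.
    For an exactly inverse-consistent pair the residual is zero.
    """
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in u_bf.shape[1:]],
                                indexing="ij")).astype(float)
    coords = grid + u_bf  # x + u_bf(x)
    # resample each component of u_fb at the forward-mapped locations
    u_fb_at = np.stack([map_coordinates(c, coords, order=1, mode="nearest")
                        for c in u_fb])
    return np.sum((u_bf + u_fb_at) ** 2, axis=0)

def absent_mask(delta, tau, p=4):
    """m_bf: 1 where the smoothed error exceeds the threshold tau.

    The averaging filter A has size (2p+1)^3 with zero padding,
    matching the description in the text.
    """
    smoothed = uniform_filter(delta, size=2 * p + 1, mode="constant", cval=0.0)
    return (smoothed > tau).astype(np.uint8)
```

For a pair of constant fields \(\boldsymbol{u}_{bf} = -\boldsymbol{u}_{fb}\), the residual is identically zero and no voxel is flagged; voxels where the smoothed residual exceeds the threshold are marked as having absent correspondence.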
2.3 Inverse Consistency
Since the decision of regions with absent correspondence is highly dependent on the inverse consistency error in our method, we further enforce the inverse consistency on the regions with valid correspondence. Mathematically, the inverse consistency loss \(\mathcal {L}_{\text {inv}}\) is defined as:
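Written out with the \((1-\boldsymbol{m})\) masking, a plausible reconstruction is:

\[
\mathcal{L}_{\text{inv}} = \frac{1}{N}\sum_{x\in\Omega}\Big[\big(1-\boldsymbol{m}_{bf}(x)\big)\,\delta_{bf}(x) + \big(1-\boldsymbol{m}_{fb}(x)\big)\,\delta_{fb}(x)\Big].
\]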
where the measure of inverse consistency error \(\delta \) is masked with the regions with valid correspondence \((1-\boldsymbol{m})\) via elementwise multiplication.
2.4 Objective Function
Given the deformation fields \(\phi _{bf} = Id + \boldsymbol{u}_{bf}\) and \(\phi _{fb} = Id + \boldsymbol{u}_{fb}\), where Id is the identity transform, the objective of our proposed method is to compute the optimal deformation fields that minimize the dissimilarity of \(B(\phi _{bf})\) and F, as well as of B and \(F(\phi _{fb})\), in regions with valid correspondence. Specifically, we adopt the negative local cross-correlation (NCC) with masks to exclude regions without valid correspondence from the similarity measure, as shown in Eq. 5.
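One plausible form of this masked similarity term, under the assumption that each direction's mask gates its own NCC map, is:

\[
\mathcal{L}_{\text{sim}} = -\frac{1}{N}\sum_{x\in\Omega}\Big[\big(1-\boldsymbol{m}_{bf}(x)\big)\,\text{NCC}\big(B(\phi_{bf}), F\big)(x) + \big(1-\boldsymbol{m}_{fb}(x)\big)\,\text{NCC}\big(B, F(\phi_{fb})\big)(x)\Big].
\]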
To encourage smooth solutions and penalize implausible ones, we adopt a diffusion regularizer:
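In its standard bidirectional form, the diffusion regularizer penalizes the squared spatial gradients of both displacement fields:

\[
\mathcal{L}_{reg} = \frac{1}{N}\sum_{x\in\Omega}\Big( \big\|\nabla \boldsymbol{u}_{bf}(x)\big\|_2^2 + \big\|\nabla \boldsymbol{u}_{fb}(x)\big\|_2^2 \Big).
\]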
The complete loss function is therefore:
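Collecting the terms described here, a plausible assembly of the complete objective (the exact normalization of the mask penalty is an assumption) is:

\[
\mathcal{L} = \mathcal{L}_{\text{sim}} + \lambda_{reg}\,\mathcal{L}_{reg} + \lambda_{inv}\,\mathcal{L}_{\text{inv}} + \frac{\lambda_{m}}{N}\sum_{x\in\Omega}\big(\boldsymbol{m}_{bf}(x)+\boldsymbol{m}_{fb}(x)\big).
\]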
where \(\lambda _{reg}\), \(\lambda _{inv}\) and \(\lambda _{m}\) are hyperparameters balancing the loss terms. N denotes the number of voxels in the mutual spatial domain \(\varOmega \), and the last term avoids the trivial solution in which all voxels are marked in \(\boldsymbol{m}_{bf}\) and \(\boldsymbol{m}_{fb}\). During training, we follow the conditional registration framework in [24] to sample \(\lambda _{reg} \in [0,1]\), and we set \(\lambda _{reg} = 0.3\) in the inference phase. Formally, the optimal learning parameters \(\theta ^*\) are estimated by minimizing the complete loss function \(\mathcal {L}\) over a training dataset D, as follows:
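In symbols, this is the standard empirical risk minimization:

\[
\theta^* = \underset{\theta}{\arg\min}\ \mathbb{E}_{(B,F)\sim D}\big[\mathcal{L}(B, F; \theta)\big].
\]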
3 Experiments
Data and Pre-processing. We evaluate our method on the brain tumor MR registration task using the 3D clinical dataset from the BraTS-Reg challenge [1], which consists of 160 pairs of pre-operative and follow-up brain MR scans of glioma patients taken at different timepoints. Each timepoint contains native T1, contrast-enhanced T1-weighted (T1ce), T2-weighted and FLAIR MRI. 140 pairs of scans are associated with 6 to 50 manual landmarks in both scans, and 20 pairs have landmarks in the follow-up scan only. All scans have undergone standard pre-processing, including skull stripping, affine spatial normalization and resampling to \(1 \ \text {mm}^3\) isotropic resolution. We use DeepMedic [14] to segment the tumor core in each pre-operative scan. The tumor segmentation map is used in cost function masking for the baseline methods. For learning-based methods, we further resample the scans to size \(160\,\times \,160\,\times \,80\) with \(1.5\,\times \,1.5\,\times \,1.94 \ \text {mm}^3\) voxel spacing in the training phase and upsample the solutions to \(1 \ \text {mm}^3\) isotropic resolution with bilinear interpolation in the evaluation. We perform 5-fold cross-validation, dividing the 140 pairs of scans into 5 folds of equal size. In each configuration, we join 4 folds of data and the additional 20 pairs of scans as the training and validation sets, and use the remaining fold as the test set. Specifically, each configuration splits the dataset into 122, 10 and 28 cases for the training, validation and test sets.
Implementation. Our proposed method and the other baseline methods are implemented with PyTorch 1.9 and deployed on the same machine, equipped with an Nvidia Titan RTX GPU and an Intel Core (i7-4790) CPU. We build our method on top of the official implementation of the 3-level cLapIRN with default parameters available in [22]. We set \(\lambda _{reg}\), \(\lambda _{inv}\) and \(\lambda _{m}\) to 0.3, 0.5 and 0.01, respectively. We use the Adam optimizer with a fixed learning rate of 0.0001. All learning-based methods are trained from scratch.
Measurement. We register each pre-operative scan to the corresponding follow-up scan of the same patient, propagate the landmarks of the follow-up scan using the resulting deformation field and measure the mean target registration error (TRE) of the paired landmarks as the Euclidean distance in millimetres. We divide the landmarks into two sets: (1) landmarks within 30 mm of the tumor region (Near tumor) and (2) landmarks outside the 30 mm tumor region (Far from tumor), using the tumor segmentation maps and morphological dilation. We further measure the robustness of the registration. Following [1], the robustness for a pair of scans is defined as the relative number of successfully registered landmarks, i.e., 1 if the distance between target and warped landmarks is reduced after registration for all landmarks and 0 if it is reduced for none. As the local deformation at voxel p is invertible if and only if the Jacobian determinant at p (\(|J_\phi |(p)\)) is larger than zero, we also measure the percentage of voxels with Jacobian determinant smaller than or equal to 0 (denoted as \(\%|J_\phi |_{\le 0}\)). We also measure the elapsed time in seconds for the computation of each case in the inference phase (\(\text {T}_\text {test}\)).
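The two main metrics can be sketched as follows. This is a simplified illustration: TRE as the mean Euclidean landmark distance, and the folding ratio from a finite-difference Jacobian of \(\phi = Id + \boldsymbol{u}\); the challenge's official evaluation code may differ in detail.

```python
import numpy as np

def mean_tre(landmarks_target, landmarks_warped):
    """Mean target registration error, in the same units as the input."""
    return float(np.mean(np.linalg.norm(landmarks_target - landmarks_warped,
                                        axis=1)))

def pct_nonpositive_jacobian(disp):
    """Percentage of voxels with Jacobian determinant <= 0.

    disp: (3, D, H, W) displacement field u; phi = Id + u, so
    J_phi = I + grad(u), approximated with central differences.
    """
    # grads[i, j] = d u_i / d x_j, shape (3, 3, D, H, W)
    grads = np.stack([np.stack(np.gradient(disp[i], axis=(0, 1, 2)))
                      for i in range(3)])
    jac = grads + np.eye(3)[:, :, None, None, None]
    det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))
    return 100.0 * float(np.mean(det <= 0))
```

A zero displacement field yields a Jacobian determinant of one everywhere and hence a folding percentage of zero.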
Baseline Methods. We compare our method (denoted as DIRAC) with a conventional approach (denoted as Elastix [16]) and two cutting-edge DLDR methods (denoted as VM [2] and cLapIRN [24]). For Elastix, we use the official implementation in the SimpleElastix library [20], which includes a 3-level iterative optimization scheme. For VM and cLapIRN, we use their official implementations with the best parameters reported in their papers. We also report the results of each method with cost function masking using the tumor core segmentation map (denoted with the postfix -CM). Note that, as opposed to conventional methods, the cost function masking strategy in learning-based methods is defined as excluding the similarity measure of the tumor region during the training phase, with the tumor segmentation hidden during the inference phase. All DLDR methods are trained from scratch with T1ce MR scans as input, except for our variant (denoted as DIRAC-D), which employs both the T1ce and T2-weighted scans of each case as input.
Results and Discussions. Figure 3 shows box-and-whisker plots of the average TRE of registered landmarks across the 140 subjects, for landmarks inside the 30 mm tumor boundary (Group 1, left) and for the remaining landmarks (Group 2, right). Among deformable image registration methods with a single MR modality as input, our method DIRAC has the lowest mean registration errors of 3.31 and 1.91 mm in Groups 1 and 2, respectively, improving on our baseline method cLapIRN significantly by 0.42 mm (−11%) and 0.17 mm (−8%). Among the alternative methods, those with cost function masking (-CM) show significant improvement over their baselines in Group 1, while the gain in Group 2 is less pronounced, suggesting that implicitly or explicitly enforcing smooth deformations inside the masked tumor regions is effective for registration near the tumor. Table 1 gives a comprehensive summary of the registration error, robustness, local invertibility and runtime results across the 140 subjects. As opposed to the alternative methods using cost function masking, our methods (DIRAC and DIRAC-D) achieve the best overall results in a fully unsupervised manner without sacrificing the runtime advantage of learning-based methods. Comparing DIRAC and DIRAC-D, our variant DIRAC-D, which leverages an additional MR modality, slightly improves the registration error by 1.5% and 2.6% in Groups 1 and 2, respectively. Figure 2 shows qualitative examples of the registration results for each method and the regions with absent correspondence estimated by our method.
The results demonstrate our method is capable of accurately locating the regions without valid correspondence, i.e., the tumor and cerebral edema in the baseline scan of subject 2, and explicitly excluding these regions in similarity measure during the training phase further reduces artefacts in the patient-specific registration.
4 Conclusion
We have proposed an unsupervised deformable registration method for pre-operative and post-recurrence brain MR registration, which is capable of joint registration and segmentation of regions with absent correspondence. We introduce a novel forward-backward consistency constraint and a pathology-aware symmetric loss function. Compared to existing deep learning-based methods, our method addresses the absent correspondence issue in patient-specific registration and shows significant improvement in registration accuracy near the tumor regions. Compared to conventional methods, our method inherits the runtime advantage of deep learning-based approaches and does not require any manual interaction or supervision, demonstrating immense potential for fully-automated patient-specific registration.
References
Baheti, B., et al.: The brain tumor sequence registration challenge: establishing correspondence between pre-operative and follow-up MRI scans of diffuse glioma patients. arXiv preprint arXiv:2112.06979 (2021)
Balakrishnan, G., Zhao, A., Sabuncu, M.R., Guttag, J., Dalca, A.V.: An unsupervised learning model for deformable medical image registration. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9252–9260 (2018)
Brett, M., Leff, A.P., Rorden, C., Ashburner, J.: Spatial normalization of brain images with focal lesions using cost function masking. Neuroimage 14(2), 486–500 (2001)
Chitphakdithai, N., Duncan, J.S.: Non-rigid registration with missing correspondences in preoperative and postresection brain images. In: Jiang, T., Navab, N., Pluim, J.P.W., Viergever, M.A. (eds.) MICCAI 2010. LNCS, vol. 6361, pp. 367–374. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15705-9_45
Clatz, O., et al.: Robust nonrigid registration to capture brain shift from intraoperative MRI. IEEE Trans. Med. Imaging 24(11), 1417–1427 (2005)
Dalca, A.V., Balakrishnan, G., Guttag, J., Sabuncu, M.R.: Unsupervised learning for fast probabilistic diffeomorphic registration. In: Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds.) MICCAI 2018. LNCS, vol. 11070, pp. 729–738. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00928-1_82
Dean, B.L., et al.: Gliomas: classification with MR imaging. Radiology 174(2), 411–415 (1990)
Han, X., et al.: A deep network for joint registration and reconstruction of images with pathologies. In: Liu, M., Yan, P., Lian, C., Cao, X. (eds.) MLMI 2020. LNCS, vol. 12436, pp. 342–352. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59861-7_35
Han, X., Yang, X., Aylward, S., Kwitt, R., Niethammer, M.: Efficient registration of pathological images: a joint PCA/image-reconstruction approach. In: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pp. 10–14. IEEE (2017)
Heinrich, M.P.: Closing the gap between deep and conventional image registration using probabilistic dense displacement networks. In: Shen, D., et al. (eds.) MICCAI 2019. LNCS, vol. 11769, pp. 50–58. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32226-7_6
Heiss, W.D., Raab, P., Lanfermann, H.: Multimodality assessment of brain tumors and tumor recurrence. J. Nucl. Med. 52(10), 1585–1600 (2011)
Hering, A., et al.: Learn2Reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning. arXiv preprint arXiv:2112.04489 (2021)
Hu, X., Kang, M., Huang, W., Scott, M.R., Wiest, R., Reyes, M.: Dual-stream pyramid registration network. In: Shen, D., et al. (eds.) MICCAI 2019. LNCS, vol. 11765, pp. 382–390. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32245-8_43
Kamnitsas, K., et al.: Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med. Image Anal. 36, 61–78 (2017)
Kim, B., Kim, J., Lee, J.-G., Kim, D.H., Park, S.H., Ye, J.C.: Unsupervised deformable image registration using cycle-consistent CNN. In: Shen, D., et al. (eds.) MICCAI 2019. LNCS, vol. 11769, pp. 166–174. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32226-7_19
Klein, S., Staring, M., Murphy, K., Viergever, M.A., Pluim, J.P.: Elastix: a toolbox for intensity-based medical image registration. IEEE Trans. Med. Imaging 29(1), 196–205 (2009)
Kwon, D., Niethammer, M., Akbari, H., Bilello, M., Davatzikos, C., Pohl, K.M.: PORTR: pre-operative and post-recurrence brain tumor registration. IEEE Trans. Med. Imaging 33(3), 651–667 (2013)
Kwon, D., Zeng, K., Bilello, M., Davatzikos, C.: Estimating patient specific templates for pre-operative and follow-up brain tumor registration. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9350, pp. 222–229. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24571-3_27
Liu, P., Lyu, M., King, I., Xu, J.: SelFlow: self-supervised learning of optical flow. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4571–4580 (2019)
Marstal, K., Berendsen, F., Staring, M., Klein, S.: SimpleElastix: a user-friendly, multi-lingual library for medical image registration. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 134–142 (2016)
Meister, S., Hur, J., Roth, S.: UnFlow: unsupervised learning of optical flow with a bidirectional census loss. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)
Mok, T.C., Chung, A.: Official implementation of conditional deep Laplacian pyramid image registration network. http://github.com/cwmok/Conditional_LapIRN. Accessed 01 Mar 2021
Mok, T.C., Chung, A.: Fast symmetric diffeomorphic image registration with convolutional neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4644–4653 (2020)
Mok, T.C.W., Chung, A.C.S.: Conditional deformable image registration with convolutional neural network. In: de Bruijne, M., et al. (eds.) MICCAI 2021. LNCS, vol. 12904, pp. 35–45. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-87202-1_4
Mok, T.C.W., Chung, A.C.S.: Large deformation diffeomorphic image registration with Laplacian pyramid networks. In: Martel, A.L., et al. (eds.) MICCAI 2020. LNCS, vol. 12263, pp. 211–221. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59716-0_21
Price, S.J., Jena, R., Burnet, N.G., Carpenter, T.A., Pickard, J.D., Gillard, J.H.: Predicting patterns of glioma recurrence using diffusion tensor imaging. Eur. Radiol. 17(7), 1675–1684 (2007)
Risholm, P., Samset, E., Talos, I.-F., Wells, W.: A non-rigid registration framework that accommodates resection and retraction. In: Prince, J.L., Pham, D.L., Myers, K.J. (eds.) IPMI 2009. LNCS, vol. 5636, pp. 447–458. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-02498-6_37
Sundaram, N., Brox, T., Keutzer, K.: Dense point trajectories by GPU-accelerated large displacement optical flow. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010. LNCS, vol. 6311, pp. 438–451. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15549-9_32
Wang, Y., Yang, Y., Yang, Z., Zhao, L., Wang, P., Xu, W.: Occlusion aware unsupervised learning of optical flow. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4884–4893 (2018)
Yang, X., Han, X., Park, E., Aylward, S., Kwitt, R., Niethammer, M.: Registration of pathological images. In: Tsaftaris, S.A., Gooya, A., Frangi, A.F., Prince, J.L. (eds.) SASHIMI 2016. LNCS, vol. 9968, pp. 97–107. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46630-9_10
Mok, T.C.W., Chung, A.C.S. (2022). Unsupervised Deformable Image Registration with Absent Correspondences in Pre-operative and Post-recurrence Brain Tumor MRI Scans. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13436. Springer, Cham. https://doi.org/10.1007/978-3-031-16446-0_3