Abstract
Purpose
Image fusion of different imaging modalities provides valuable information to clinicians. In this paper, we propose an automatic multimodal registration method to register intra-operative ultrasound (US) images to preoperative magnetic resonance imaging (MRI) in the context of image-guided neurosurgery.
Methods
We employed a refined correlation ratio as the similarity metric for our intensity-based image registration method. We take the MRI as the fixed image (\(I_\mathrm{f}\)) and the US as the moving image (\(I_\mathrm{m}\)), and transform \(I_\mathrm{m}\) to align with \(I_\mathrm{f}\). We utilized the covariance matrix adaptation evolutionary strategy to find the optimal affine transformation registering \(I_\mathrm{m}\) to \(I_\mathrm{f}\).
Results
We applied our method to the publicly available REtroSpective Evaluation of Cerebral Tumors (RESECT) database and the Montreal Neurological Institute's Brain Images of Tumors for Evaluation (BITE) database and validated the results qualitatively and quantitatively. Qualitative validation was conducted (by the three authors) by overlaying pre- and post-registration US and MRI to allow visual assessment of the alignment. Quantitative validation was performed using the corresponding landmarks provided in the databases for the preoperative MRI and the intra-operative US. The average mean target registration error (mTRE) was reduced from \(5.40\pm 4.27\) to \(2.77\pm 1.13\) mm in 22 patients of the RESECT database and from \(4.12\pm 2.03\) to \(2.82\pm 0.72\) mm in the BITE database. A nonparametric statistical analysis using the Wilcoxon rank sum test shows a significant difference between pre- and post-registration mTREs, with a p value of \(0.0058\,(p<0.05)\) for the RESECT database and \(0.0483\,(p<0.05)\) for the BITE database.
Conclusions
The proposed fully automatic registration method significantly improved the alignment of MRI and US images and can therefore be used to reduce the misalignment of US and MRI caused by brain shift, calibration errors, and errors in the patient-to-MRI transformation matrix.
Introduction
In medical imaging, we often have chronological images of tissues (which are usually collected with different imaging modalities) that need to be aligned [1, 2]. Fusion of the information of those corresponding images is proven to provide useful information to clinicians [3,4,5]. Even though registration based on manually selected homologous landmarks can be performed on images, the corresponding images are often misaligned due to reasons such as tissue deformation and errors in landmark selection. For example, in image-guided surgery, deformation of the organs, such as the brain, can invalidate surgical planning [6,7,8,9].
Image registration is the process of aligning corresponding misaligned images acquired at different times and/or with different sensors [10]. Image registration methods can be categorized into various classes, such as automatic or interactive [11]. Automatic image registration is generally faster and avoids erroneous actions by the user [12]. Another classification can be made based on the method used: intensity-based or feature-based. Intensity-based image registration methods generally work better for smaller deformations, whereas feature-based methods generally work better when the initial misalignment is large [13, 14].
An automatic intensity-based image registration method consists of several components. One image is chosen as the template or fixed image (\(I_\mathrm{f}\)); the other is called the moving image (\(I_\mathrm{m}\)). During the registration process, \(I_\mathrm{m}\) moves to be registered to \(I_\mathrm{f}\). The movement of \(I_\mathrm{m}\) can be restricted and modeled by a spatial transformation, whose type is selected based on the application [15,16,17]. When there is no deformation in the scene, we can simply use a rigid transformation, which has only six degrees of freedom [18,19,20]. When one image is deformed with respect to the other, we can use transformations with more parameters, for instance affine or free-form B-spline transformations [21,22,23]. The registration method also needs a similarity metric to evaluate the similarity of the two images after the transformation. On one end of the spectrum, the similarity metric can assume a restrictive equality relationship between image intensities and simply subtract the two images, as in the sum of squared differences (SSD). On the other end of the spectrum, it can assume a general information-based similarity between images, as in mutual information (MI) [24]. Correlation ratio (CR) assumes a functional relationship between the intensities of the two images and provides a compromise between these two extremes. The original CR proposed by Roche et al. [25, 26] was calculated globally over the entire volumes, whereas the CR used in [27] is calculated in small local patches. More importantly, unlike the original CR, we bin the intensities of the reference image and calculate histograms using Parzen windows, which allows us to reliably calculate CR from small patches. The third component of a registration method maximizes the similarity of the images by varying the parameters of the chosen transformation [25,26,27].
We propose an automatic intensity-based image registration method using a refined version of CR. The proposed method is an extended version of MARCEL [28], which was itself based on RaPTOR (Robust PaTch-based cOrrelation Ratio) [29]. Our similarity metric measures the similarity of the images locally, over corresponding patches. We modeled the movement of \(I_\mathrm{m}\) with an affine transformation and utilized the covariance matrix adaptation evolutionary strategy (CMA-ES) [30] for optimization. We applied our method to the RESECT (REtroSpective Evaluation of Cerebral Tumors) database [31] and validated the results. Recent work has successfully performed US–US registration on the RESECT database [32]. In order to show the performance of our method on different MRI and ultrasound data, we also applied it to the BITE database [33], which was collected using different ultrasound and MRI machines.
The contributions of this paper are threefold. First, ARENA uses CMA-ES, which does not need the gradient of the cost function; this gradient becomes noisy when the patch size (i.e., the number of samples) is small, so the optimization step in ARENA is less susceptible to patch size and noise. Second, we show for the first time that the US and MRI images of the RESECT database can be automatically registered. Third, although CMA-ES has been successfully used for registration of other imaging modalities [34,35,36,37], this paper is the first to use it for registration of MRI and US, and we show that it works even for patients with a very large initial misalignment between US and MRI.
The main difference between MARCEL [28] and ARENA is that MARCEL uses gradient descent optimization, while ARENA uses CMA-ES. Gradient descent optimization needs the derivative of the cost function and also requires tuning the step size, whereas CMA-ES does not need the gradient and further does not need tuning of the optimization parameters. In addition, MARCEL was only tested on 5 subjects in the RESECT database, whereas ARENA is tested on all 22 subjects. This further justifies the use of CMA-ES and its ability to converge to the correct solution in all tested cases. Consequently, ARENA has the following improvements compared to MARCEL. First, ARENA is less computationally expensive and is also more straightforward to implement due to the simplicity of the CMA-ES optimization method. Second, ARENA is less sensitive to parameter tuning compared to MARCEL and RaPTOR.
This paper is organized as follows. In section two, we elaborate on our method and derive its equations. In section three, qualitative and quantitative validations of the method are presented. In section four, we discuss the advantages and disadvantages of our method and avenues for future work. Finally, we provide a brief conclusion in section five.
Methods
Let \(I_\mathrm{m}\) and \(I_\mathrm{f}\) be, respectively, the moving and fixed images. In our registration problem, we fix \(I_\mathrm{f}\) and move \(I_\mathrm{m}\) so that it matches \(I_\mathrm{f}\). We transform \(I_\mathrm{m}\) with \(\mathbf{T}\). The optimal \(\mathbf{T}\), applied to \(I_\mathrm{m}\) at each point \(\mathbf{x}\) in the image space, gives the best alignment of \(I_\mathrm{f}\) and \(I_\mathrm{m}\). The alignment of \(I_\mathrm{f}\) and \(I_\mathrm{m}\) is measured by a dissimilarity metric D, and the best alignment under \(\mathbf{T}\) corresponds to the minimum achievable D. In other words, our goal is to minimize the following cost function:

$$ C(\mathbf{T}) = D\big(I_\mathrm{f}, \mathbf{T}(I_\mathrm{m})\big) + R(\mathbf{T}) \qquad (1) $$

where \(R(\mathbf{T})\) is a regularization term to enforce a smooth transformation and C is the cost function. Minimizing C by varying \(\mathbf{T}\) provides the transformation that aligns the fixed and moving images.
Dissimilarity metric
As explained for Eq. 1, D measures the alignment of the input images, i.e., the fixed and moving images. Since CR is an asymmetric similarity metric, the order in which CR is computed matters. To allow either \(I_\mathrm{m}\) or \(I_\mathrm{f}\) to be the first or second image in CR, we label our input images as X and Y. D in Eq. 1 and in Eq. 2 is the amended version of RaPTOR [29]. D varies from zero to one: when X and Y are identical, \(D=0\), and when X and Y have no similarity, \(D=1\); D is therefore a dissimilarity metric. In Eq. 2, \(\eta \) is CR, the similarity metric proposed by Roche et al. [25]. The similarity metric needs to identify corresponding features of X and Y locally, so we calculate CR in \(N_P\) corresponding patches of X and Y:

$$ D = \frac{1}{N_P}\sum _{i=1}^{N_P}\left[1-\eta \left(Y|X;\varvec{\varOmega }_i\right)\right] \qquad (2) $$
where \(\varvec{\varOmega }_i\) represents the space of patch i. The CR \(\eta \) used in Eq. 2 is defined as:

$$ \eta \left(Y|X\right) = 1-\frac{1}{N\sigma ^2}\sum _{j=1}^{N_b} N_j\,\sigma _j^2, \qquad N_j=\sum _{t=1}^{N}\lambda _{t,j},\quad \mu _j=\frac{1}{N_j}\sum _{t=1}^{N}\lambda _{t,j}\, i_t,\quad \sigma _j^2=\frac{1}{N_j}\sum _{t=1}^{N}\lambda _{t,j}\, i_t^2-\mu _j^2 \qquad (3, 4) $$
where N is the total number of voxels in Y, \(\sigma ^2 = \mathrm{Var}[Y]\), \(i_t\) is the intensity of voxel number t in Y, \(N_b\) is the total number of bins, and \(\lambda _{t,j}\) is the contribution of sample t in bin j as proposed in [29].
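As a concrete illustration, the correlation ratio for one patch pair can be sketched as below. This sketch uses hard intensity binning instead of the Parzen-windowed \(\lambda _{t,j}\) weighting used in the paper, so it is a simplified variant and not the authors' implementation:

```python
import numpy as np

def correlation_ratio(x, y, n_bins=32):
    """Estimate eta(Y|X) = 1 - E[Var(Y|X)] / Var(Y) for one patch pair,
    using hard binning of the X intensities (simplified: no Parzen windows)."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    total_var = y.var()
    if total_var == 0.0:
        return 0.0                       # featureless patch, no relationship
    # Assign each X intensity to one of n_bins bins.
    bins = np.minimum((n_bins * (x - x.min()) / (np.ptp(x) + 1e-12)).astype(int),
                      n_bins - 1)
    cond_var = 0.0                       # accumulates N_j * sigma_j^2 over bins
    for j in range(n_bins):
        yj = y[bins == j]
        if yj.size:
            cond_var += yj.size * yj.var()
    return 1.0 - cond_var / (y.size * total_var)
```

When Y is (even a nonlinear) function of X, the within-bin variance of Y vanishes and \(\eta \) approaches one; for unrelated intensities it stays near zero.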
Patches whose voxel intensities are approximately constant (i.e., have very small variance) should obviously be discarded from the calculation of D in Eq. 2 because they do not contain any image feature. Therefore, after selecting patches in X and Y, we apply a gamma correction, as explained in [38], to increase the variance of the patch intensities. The gamma correction applies an empirical transformation, \(i_n=\exp (\gamma i_0)\), to the voxels of the patches, where \(i_n\) is the new intensity of the voxel, \(\gamma \) is the correction parameter, which we heuristically set to 50, and \(i_0\) is the old intensity of the voxel. We normalize the intensities of the patches right after the gamma correction. Every pair of patches in which \(\sigma ^2 < T\) is then discarded; heuristically, we found \(T=1\) to be the best value.
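The gamma correction and low-variance patch rejection above can be sketched as follows. The pre-scaling of intensities to [0, 1] and the min–max normalization step are assumptions of this sketch, and the numeric threshold T must match whichever normalization convention is actually used:

```python
import numpy as np

def gamma_correct(patch, gamma=50.0):
    """Apply the exponential correction i_n = exp(gamma * i_0), then
    normalize. Intensities are assumed pre-scaled to [0, 1]; the min-max
    rescaling used here is one possible normalization choice."""
    p = np.exp(gamma * np.asarray(patch, dtype=float))
    return (p - p.min()) / (p.max() - p.min() + 1e-12)

def keep_patch_pair(px, py, threshold):
    """Discard pairs in which either patch has variance below the
    threshold T; such patches carry no usable image feature."""
    return bool(px.var() >= threshold and py.var() >= threshold)
```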
Transformation
We used an affine transformation to model the movement of the moving image. In our formulation, no regularization is needed in Eq. 1, since the affine transformation has only 12 parameters to be optimized and the images provide many cues for reliably optimizing those parameters. The affine transformation matrix is defined as:

$$ \mathbf{T} = \begin{pmatrix} a_1 & a_2 & a_3 & a_4 \\ a_5 & a_6 & a_7 & a_8 \\ a_9 & a_{10} & a_{11} & a_{12} \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (5) $$
As one can see in Eq. 5, the affine transformation has twelve parameters which are \(a_i,\, 1\le i \le 12\). In general, these twelve parameters can be any real number.
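A minimal sketch of building and applying such a transformation, assuming the twelve parameters fill the top three rows of a 4×4 homogeneous matrix (the exact parameter layout is an assumption of this sketch):

```python
import numpy as np

def affine_matrix(a):
    """Build the 4x4 homogeneous affine transform from the twelve free
    parameters a_1..a_12 (assumed to fill the top 3x4 block row-wise)."""
    T = np.eye(4)
    T[:3, :] = np.asarray(a, dtype=float).reshape(3, 4)
    return T

def transform_points(T, pts):
    """Apply T to an (n, 3) array of points via homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    return (homo @ T.T)[:, :3]
```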
Optimization
The formulation in section two defines the registration procedure as an optimization problem. Image registration is, in general, an ill-posed problem and consequently entails optimizing a highly non-convex objective function [39]. To tackle this problem, we deployed CMA-ES as our optimizer. In Eq. 1, C is the cost function built on the dissimilarity metric D. The affine transformation parameters \(a_i,\, 1\le i \le 12\) in Eq. 5 are the variables the optimization algorithm adjusts to minimize C in Eq. 1.
CMA-ES is analogous to natural selection in biological evolution [40]. In each iteration (generation), \(\lambda \) new candidate solutions (offspring) \(x_k^{(g+1)},\, 1\le k \le \lambda \) are generated from the best \(\mu \) out of the \(\lambda \) members of the previous generation (parents) \(x_{i:\lambda }^{(g)},\, 1\le i \le \mu \).
There are \(N=12\) degrees of freedom in the optimization, set by the affine transformation parameters. Hence, the parameter settings for \(\lambda \) and \(\mu \) are \(\lambda =4+\lfloor 3\ln (N)\rfloor \) and \(\mu =\lfloor \lambda /2\rfloor \). The CMA-ES update equation from generation g to \(g+1\) is presented in Eq. 6:

$$ x_k^{(g+1)} = \sum _{i=1}^{\mu } w_i\, x_{i:\lambda }^{(g)} + \sigma ^{(g)}\varvec{B}^{(g)}\varvec{D}^{(g)} z_k^{(g+1)},\qquad 1\le k\le \lambda \qquad (6) $$
where \(w_i,\, 1\le i \le \mu \) are the recombination weights of the selected parents, calculated as in Eq. 7:

$$ w_i = \frac{\ln (\mu +1)-\ln i}{\sum _{j=1}^{\mu }\left[\ln (\mu +1)-\ln j\right]} \qquad (7) $$
In Eq. 6, \(\sigma ^{(g)}\in \mathbb {R}^+\) is the step size at generation g. The so-called covariance matrix \(\varvec{C}^{(g)}\) at generation g is a symmetric positive definite \(N\times N\) matrix, and its relationship with the parameters defined above is presented in Eq. 8:

$$ \varvec{C}^{(g)} = \varvec{B}^{(g)}\varvec{D}^{(g)}\left(\varvec{B}^{(g)}\varvec{D}^{(g)}\right)^{\mathsf T} \qquad (8) $$
For detailed explanations and equations of \(\sigma ^{(g)}\), \(\varvec{B}^{(g)}\), \(\varvec{D}^{(g)}\), \(z_k^{(g+1)}\), and \(\varvec{C^{(g)}}\) one can refer to [40, 41].
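The population sizing above, together with one simplified sampling step, can be sketched as below. The log-decreasing weights are one common choice (the exact weighting in [40, 41] may differ slightly), and the full covariance and step-size adaptation of CMA-ES is omitted here:

```python
import math
import numpy as np

N = 12                                   # affine degrees of freedom
lam = 4 + math.floor(3 * math.log(N))    # population size lambda
mu = lam // 2                            # number of selected parents

# Log-decreasing recombination weights, normalized to sum to one
# (a common choice; not necessarily the exact weighting of [40, 41]).
w = np.log(mu + 1) - np.log(np.arange(1, mu + 1))
w = w / w.sum()

def next_generation(parents, step_size, rng):
    """One simplified (mu, lambda) sampling step with identity covariance;
    real CMA-ES also adapts the covariance matrix C and step size sigma."""
    mean = w @ parents                   # weighted recombination of the mu best
    return mean + step_size * rng.standard_normal((lam, N))
```

For \(N=12\), this gives \(\lambda =4+\lfloor 3\ln 12\rfloor =11\) and \(\mu =5\).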
Patient data
We applied the proposed image registration method to the RESECT database [31] and the BITE database [33]. The RESECT database is an open-source clinical database that contains 23 surgical cases of low-grade glioma resection operated on at St. Olavs University Hospital. With the primary goal of helping develop image processing techniques for brain shift correction, the dataset provides, for each patient, preoperative T1w and T2-FLAIR MRI scans; intra-operative 3D ultrasound volumes obtained before, during, and after tumor resection; and corresponding anatomical landmarks between MRI–US pairs and US–US pairs. To demonstrate our proposed algorithm, we used the preoperative T2-FLAIR MRI and the US volume before tumor resection, since this stage often sets the tone for the total brain shift after craniotomy. More specifically, 22 patients from the RESECT dataset were used, for which 15–16 pairs of MRI–US homologous landmarks were manually tagged. The BITE database consists of 14 patients with preoperative T1w MRI and pre-resection US. As one of the patients' landmarks was outside the image, we excluded that patient from our experiment (as is also done in other publications [29, 42, 43]).
Registration procedure
For each patient, we first up-sampled the MRI volume (\(\mathrm{resolution}=1\times 1\times 1 \,\mathrm{mm}^3\)) to the resolution of the corresponding US volume (\(\mathrm{resolution}=0.24\times 0.24\times 0.24 \,\mathrm{mm}^3\)) because of the US images' considerably higher resolution. We set the US as \(I_\mathrm{m}\) and the MRI as \(I_\mathrm{f}\), and then ran the image registration algorithm on each patient. For better performance, we used up to four levels of Gaussian pyramids to tackle the large misalignment present in some of the cases. We use \(\lfloor q/p \rfloor \) patches of size \(7\times 7\times 7\) in this work to calculate CR, where q is the number of nonzero voxels in the US image, p is the number of pixels in each US slice, and \(\lfloor \cdot \rfloor \) denotes the floor operator.
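The patch-count rule \(\lfloor q/p \rfloor \) can be sketched as follows, assuming slices lie along the third axis of the volume (an assumption of this sketch):

```python
import numpy as np

def num_patches(us_volume):
    """Number of 7x7x7 patches used for CR: floor(q / p), where q is the
    number of nonzero voxels in the US volume and p is the number of
    pixels in one US slice (slices assumed along the third axis)."""
    q = np.count_nonzero(us_volume)
    p = us_volume.shape[0] * us_volume.shape[1]
    return q // p
```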
Results
Qualitative validation
We evaluated the quality of the registration by visually inspecting the images before and after registration. We compared the alignment of corresponding anatomical brain features, for instance sulci and tumor boundaries, in the MRI and US images before and after registration. Each patient's data include the brain tumor in both the MRI and US images, and we checked whether the alignment of the tumor boundary improved as well.
Figure 1 shows overlaid US and MRI slices in the sagittal view for Patients 5, 12, 19, and 21 in the RESECT database [31]. The columns show the images before and after registration, respectively, and each row corresponds to an individual patient. The arrows guide the reader to the improvements after registration. The first and second rows show significant improvements in the tumor and sulci regions, while the second and third rows show improvements around the tumor region.
Quantitative validation
Corresponding homologous landmarks were manually selected in the US and MRI volumes of the RESECT database by two experts. Let N be the total number of corresponding landmark pairs \((\mathbf{x}_i, \mathbf{x}'_i)\) in the US and MRI volumetric images. We used the provided landmarks to calculate the mean target registration error (mTRE) [44]. The mTRE for each patient is defined as in Eq. 9:

$$ \mathrm{mTRE} = \frac{1}{N}\sum _{i=1}^{N}\left\| \varvec{T}\,\mathbf{x}_i - \mathbf{x}'_i\right\| \qquad (9) $$
where \(\varvec{T}\) is the optimal affine transformation derived after implementing the image registration algorithm.
The initial mTRE of each patient before registration and the number of landmarks for each patient are reported in Table 1. Each patient has N landmarks, and the affine transformation has twelve parameters. In this table, the minimum achievable mTRE is the lowest mTRE attainable using an affine transformation for the registration. To find it, we set up a system of linear equations in which the provided landmarks are known and the twelve affine transformation parameters are the unknowns; since the number of known landmarks exceeds the number of unknowns (\(N>12\)), the system is overdetermined. We solved this overdetermined problem with least squares (LS) and report the LS solution for each patient in Table 1. It is worth mentioning that while the minimum achievable mTREs are calculated in a similar way to the fiducial registration error (FRE) [45], they are not equal to FRE: FRE is a root mean square error (RMSE), whereas we calculate the mean of the root square errors so that it can be compared to the initial and final mTRE values calculated before and after registration, respectively.
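The least-squares estimate of the optimal achievable affine transformation, and the resulting mTRE, can be sketched as below (a generic LS fit on synthetic landmarks, not the authors' code):

```python
import numpy as np

def fit_affine_lstsq(src, dst):
    """Least-squares 3x4 affine mapping src landmarks onto dst landmarks.
    Each landmark pair contributes three equations, so with enough pairs
    the system is overdetermined and solved in the LS sense."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # (N, 4) design matrix
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (4, 3) LS solution
    return X.T                                     # 3x4 affine

def mtre(affine, src, dst):
    """Mean Euclidean distance between transformed src and dst landmarks."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    moved = np.hstack([src, np.ones((len(src), 1))]) @ affine.T
    return float(np.linalg.norm(moved - dst, axis=1).mean())
```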
Recently, the Correction of Brainshift with Intra-Operative Ultrasound (CuRIOUS) 2018 Challenge (curious2018.grand-challenge.org) was held in conjunction with the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2018 Conference and addressed the same problem of registering pre-resection US to MRI in the RESECT database. The participating papers used methods such as Multilayer Perceptron (MLP) [46], Demons [47], Linear Correlation of Linear Combination (\(\mathrm{LC}^2\)) [48], Spatial Transformer Network (STN) using a 3D Convolutional Neural Network (CNN) [49], Self-Similarity Context descriptors (SSC) [50], Scale-Invariant Feature Transform (SIFT) [51], symmetric block matching using Normalized Cross-Correlation (NCC) [52], and \(\mathrm{LC}^2\) using Bound Optimization BY Quadratic Approximation (BOBYQA) [53]. ARENA is compared to the first- [48] and second-place [50] participants in this challenge.
ARENA improved the alignment for every patient. In Table 1, the initial mTREs show a rather high standard deviation, which our method reduced substantially. One can interpret this as the ability of the method to improve a wide range of misaligned images with high mTRE values. Figure 2 shows the data of Table 1 graphically.
Furthermore, the proposed method was also applied to the BITE database (Table 2) and compared with SSC [43] and \(\mathrm{LC}^2\) [42]. Figure 3 shows the results of Table 2 graphically. Note that the SSC method utilized a nonlinear deformable transformation with \(10^7\) DOFs, which allows more complex deformation than an affine transformation with 12 DOFs.
The SSC method applied to the BITE database [43] uses different transformations and similarity metrics than the one applied to the RESECT database [50]: in [43], the authors utilized a complex transformation with \(10^7\) degrees of freedom (DOFs), whereas in [50], they applied both linear and nonlinear methods to correct the brain shift. Likewise, the rigid registrations performed with \(\mathrm{LC}^2\) in [42] and [48] differ from each other. In [42], the method registered 2D US slices to 3D MRI images, using rigid registration as the initialization and cubic splines for the deformable registration, whereas in [48], the authors performed a 3D registration by initializing with a translation, then performing a rigid registration, and finally applying the method with an affine transformation. In Tables 1 and 2, we report the results of the rigid initialization performed before the principal registration in [42, 48].
In addition to the validation above, we performed a statistical analysis of our results using the Wilcoxon rank sum test, a nonparametric method [54]. The null hypothesis \(H_0\) is that the method did not reduce the mTRE, i.e., that the pre- and post-registration mTREs come from distributions with the same median; the alternative hypothesis \(H_1\) is that the post-registration mTREs are lower. Using the initial mTREs before registration and the results of ARENA in Table 1, the Wilcoxon rank sum test yields a p value of 0.0058. At the conventional significance level of \(\alpha =0.05\), \(p=0.0058\) means we reject \(H_0\) in favor of \(H_1\), i.e., the improvement is statistically significant. With the same setting, using the data in Table 2 and the pre- and post-registration mTREs for ARENA, a p value of 0.0483 \((p<0.05)\) was achieved.
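A stdlib sketch of the one-sided rank sum test with the normal approximation is given below; it assumes distinct values (no tie correction), so it is an illustration rather than the exact implementation behind the reported p values [54]:

```python
import math

def rank_sum_p_value(pre, post):
    """One-sided Wilcoxon rank sum test via the normal approximation.
    Small p supports that `post` values are stochastically lower than
    `pre`. Assumes all values are distinct (no tie correction)."""
    n1, n2 = len(pre), len(post)
    combined = sorted(list(pre) + list(post))
    rank_of = {v: i + 1 for i, v in enumerate(combined)}
    W = sum(rank_of[v] for v in pre)          # rank sum of the pre group
    mean = n1 * (n1 + n2 + 1) / 2.0           # E[W] under H0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z), one-sided
```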
Discussion and future work
We showed the minimum achievable mTRE values with an affine transformation to provide a lower bound for the mTRE values; we have not used these values to optimize or improve ARENA. We achieved mTRE values very close to this minimum in some patients (e.g., Patient 24). However, the average minimum achievable mTRE is 0.93 mm, which is on the order of the inter-observer variability in landmark selection (below 0.5 mm [31]). Therefore, it is expected that our final mTRE values are larger than this minimum achievable error.
In this work, we proposed using a simple affine transformation to correct for brain shift. Nonlinear transformations offer more flexibility and would allow the deformation to be recovered more accurately. Before settling on the affine transformation, we tried simple translation, rigid transformation, and rigid transformation with scaling parameters, and noticed that none of them was able to improve the mTRE for all patients. The affine transformation was the least general transformation model that gave a significant improvement in mTRE. It is also simpler and faster than nonlinear transformations, and practical in a wide range of applications.
CR and its derivatives RaPTOR and ARENA are asymmetric similarity metrics, meaning that reversing the order of images changes the similarity value and likely the results. We set the US and MRI as moving and fixed images, respectively, since this provided better results for ARENA. Since ARENA uses affine transformation, it can be simply inverted if clinicians prefer to deform the MRI to align with US.
Symmetric image registration methods are generally considered superior to asymmetric techniques [55, 56]. However, we found that the asymmetric method used in this paper outperforms a symmetric version of ARENA. In addition, swapping the roles of the fixed and moving images substantially degrades the results; one reason is that CR is an asymmetric similarity measure whose value changes substantially with the order of the images.
Image registration with an affine transformation performs well for structural images, but for functional data, such as tractography, nonlinear deformation is necessary to preserve the continuity of the tracts [57]. Except for the patients with large initial mTREs in the RESECT and BITE databases, ARENA performs close to SSC and \(\mathrm{LC}^2\). ARENA was used with exactly the same settings for both databases, which was not the case for either SSC or \(\mathrm{LC}^2\) [42, 43, 48, 50]. In addition, the \(\mathrm{LC}^2\) method is computationally expensive and was implemented on a GPU. The population size \(\lambda \) in the CMA-ES optimization is an important factor: a larger \(\lambda \) leads to better performance at the cost of more computation time.
The CMA-ES implementation in MATLAB is not optimized and is relatively slow on conventional CPUs; more specifically, the optimization takes 2–5 min per hierarchical level. Nevertheless, it is fast enough for IGNS settings, where neurosurgeons generally spend about 10–20 min between the collection of US images and the resection of the tumor. As a next step, we plan to implement our method on GPU in order to further accelerate the registration process. Finally, we aim to test our method on more datasets in different applications.
Conclusion
Herein, we presented ARENA, an affine registration method to align US and MRI volumetric images. We applied our method on the RESECT database and the BITE database, validated our method qualitatively and quantitatively, and compared to two recently published registration methods. The qualitative results show that the registered images have improvements in alignment of salient image features. ARENA has consistently improved the mTRE in all patients in both databases and is therefore a potentially promising registration method for use during IGNS.
References
Damas S, Cordón O, Santamaría J (2011) Medical image registration using evolutionary computation: an experimental survey. IEEE Comput Intell Mag 6(4):26–42
Ma J, Zhou H, Zhao J, Gao Y, Jiang J, Tian J (2015) Robust feature matching for remote sensing image registration via locally linear transforming. IEEE Trans Geosci Remote Sens 53(12):6469–6481
James AP, Dasarathy BV (2014) Medical image fusion: a survey of the state of the art. Inf Fusion 19:4–19
Li S, Kang X, Fang L, Hu J, Yin H (2017) Pixel-level image fusion: a survey of the state of the art. Inf Fusion 33:100–112
Yang Y, Que Y, Huang S, Lin P (2016) Multimodal sensor medical image fusion based on type-2 fuzzy logic in nsct domain. IEEE Sens J 16(10):3735–3745
Golby AJ (2015) Image-guided neurosurgery. Academic Press, Cambridge
Besharati Tabrizi L, Mahvash M (2015) Augmented reality–guided neurosurgery: accuracy and intraoperative application of an image projection technique. J Neurosurg 123(1):206–211
Maurer CR, Fitzpatrick JM (1993) A review of medical image registration. Interact Image Guid Neurosurg 1:17–44
Gerard IJ, Kersten-Oertel M, Petrecca K, Sirhan D, Hall JA, Collins DL (2017) Brain shift in neuronavigation of brain tumors: a review. Med Image Anal 35:403–420
Nag S (2017) Image registration techniques: a survey. arXiv preprint arXiv:1712.07540
Maintz JA, Viergever MA (1998) A survey of medical image registration. Med Image Anal 2(1):1–36
Gong M, Zhao S, Jiao L, Tian D, Wang S (2014) A novel coarse-to-fine scheme for automatic image registration based on sift and mutual information. IEEE Trans Geosci Remote Sens 52(7):4328–4338
Johnson HJ, Christensen GE (2002) Consistent landmark and intensity-based image registration. IEEE Trans Med Imaging 21(5):450–461
Zitova B, Flusser J (2003) Image registration methods: a survey. Image Vis Comput 21(11):977–1000
Chandrashekar G, Sahin F (2014) A survey on feature selection methods. Comput Electr Eng 40(1):16–28
Rueckert D, Aljabar P (2010) Nonrigid registration of medical images: theory, methods, and applications [applications corner]. IEEE Signal Process Mag 27(4):113–119
Sotiras A, Davatzikos C, Paragios N (2013) Deformable medical image registration: a survey. IEEE Trans Med Imaging 32(7):1153–1190
Yan CX, Goulet B, Pelletier J, Chen SJ-S, Tampieri D, Collins DL (2011) Towards accurate, robust and practical ultrasound-ct registration of vertebrae for image-guided spine surgery. Int J Comput Assist Radiol Surg 6(4):523–537
Gill S, Abolmaesumi P, Fichtinger G, Boisvert J, Pichora D, Borshneck D, Mousavi P (2012) Biomechanically constrained groupwise ultrasound to ct registration of the lumbar spine. Med Image Anal 16(3):662–674
Hacihaliloglu I, Rasoulian A, Rohling RN, Abolmaesumi P (2014) Local phase tensor features for 3-d ultrasound to statistical shape+ pose spine model registration. IEEE Trans Med Imaging 33(11):2167–2179
Balakrishnan G, Zhao A, Sabuncu MR, Guttag J, Dalca AV (2018) An unsupervised learning model for deformable medical image registration. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 9252–9260
Weistrand O, Svensson S (2015) The anaconda algorithm for deformable image registration in radiotherapy. Med Phys 42(1):40–53
Zhao B, Christensen GE, Hyun Song J, Pan Y, Gerard SE, Reinhardt JM, Du K, Patton T, Bayouth JM, Hugo GD (2016) Tissue-volume preserving deformable image registration for 4dct pulmonary images. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp 41–49
Maes F, Loeckx D, Vandermeulen D, Suetens P (2015) Image registration using mutual information. In: Paragios N, Duncan J, Ayache N (eds) Handbook of biomedical imaging. Springer, Boston, MA
Roche A, Malandain G, Ayache N, Pennec X (1998) Multimodal image registration by maximization of the correlation ratio. PhD thesis, INRIA
Roche A, Pennec X, Rudolph M, Auer D, Malandain G, Ourselin S, Auer LM, Ayache N (2000) Generalized correlation ratio for rigid registration of 3d ultrasound with mr images. In: International conference on medical image computing and computer-assisted intervention. Springer, pp 567–577
Rivaz H, Collins DL (2015) Deformable registration of preoperative MR, pre-resection ultrasound, and post-resection ultrasound images of neurosurgery. Int J Comput Assist Radiol Surg 10(7):1017–1028
Masoumi N, Xiao Y, Rivaz H (2017) MARCEL (inter-modality affine registration with correlation ratio): an application for brain shift correction in ultrasound-guided brain tumor resection. In: International MICCAI Brainlesion Workshop. Springer, pp 55–63
Rivaz H, Chen SJ-S, Collins DL (2015) Automatic deformable MR-ultrasound registration for image-guided neurosurgery. IEEE Trans Med Imaging 34(2):366–380
Hansen N, Ostermeier A (1996) Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation. In: Proceedings of the IEEE international conference on evolutionary computation. IEEE, pp 312–317
Xiao Y, Fortin M, Unsgård G, Rivaz H, Reinertsen I (2017) Retrospective evaluation of cerebral tumors (RESECT): a clinical database of pre-operative MRI and intra-operative ultrasound in low-grade glioma surgeries. Med Phys 44:3875–3882
Machado I, Toews M, Luo J, Unadkat P, Essayed W, George E, Teodoro P, Carvalho H, Martins J, Golland P, Pieper S (2018) Non-rigid registration of 3D ultrasound for neurosurgery using automatic feature detection and matching. Int J Comput Assist Radiol Surg 13:1525–1538
Mercier L, Del Maestro RF, Petrecca K, Araujo D, Haegelen C, Collins DL (2012) Online database of clinical MR and ultrasound images of brain tumors. Med Phys 39(6 Part 1):3253–3261
Klein S, Staring M, Pluim JP (2007) Evaluation of optimization methods for nonrigid medical image registration using mutual information and B-splines. IEEE Trans Image Process 16(12):2879–2890
Winter S, Brendel B, Pechlivanis I, Schmieder K, Igel C (2008) Registration of CT and intraoperative 3-D ultrasound images of the spine using evolutionary and gradient-based methods. IEEE Trans Evol Comput 12(3):284–296
Gong RH, Abolmaesumi P (2008) 2D/3D registration with the CMA-ES method. In: Medical imaging 2008: visualization, image-guided procedures, and modeling. International Society for Optics and Photonics, vol 6918, p 69181M
Otake Y, Armand M, Armiger RS, Kutzer MD, Basafa E, Kazanzides P, Taylor RH (2012) Intraoperative image-based multiview 2D/3D registration for image-guided orthopaedic surgery: incorporation of fiducial-based C-arm tracking and GPU acceleration. IEEE Trans Med Imaging 31(4):948–962
Reinhard E, Heidrich W, Debevec P, Pattanaik S, Ward G, Myszkowski K (2010) High dynamic range imaging: acquisition, display, and image-based lighting. Morgan Kaufmann, Burlington
Fischer B, Modersitzki J (2008) Ill-posed medicine–an introduction to image registration. Inverse Probl 24(3):034008
Hansen N, Ostermeier A (2001) Completely derandomized self-adaptation in evolution strategies. Evol Comput 9(2):159–195
CMA-ES in MATLAB. Yarpiz
Wein W, Ladikos A, Fuerst B, Shah A, Sharma K, Navab N (2013) Global registration of ultrasound to MRI using the LC\(^2\) metric for enabling neurosurgical guidance. In: International conference on medical image computing and computer-assisted intervention. Springer, pp 34–41
Heinrich MP, Jenkinson M, Papież BW, Brady M, Schnabel JA (2013) Towards realtime multimodal fusion for image-guided interventions using self-similarities. In: International conference on medical image computing and computer-assisted intervention. Springer, pp 187–194
Daga P, Winston G, Modat M, White M, Mancini L, Cardoso MJ, Symms M, Stretton J, McEvoy AW, Thornton J, Micallef C (2012) Accurate localization of optic radiation during neurosurgery in an interventional MRI suite. IEEE Trans Med Imaging 31(4):882–891
Fitzpatrick JM (2009) Fiducial registration error and target registration error are uncorrelated. In: Medical imaging 2009: visualization, image-guided procedures, and modeling. International Society for Optics and Photonics, vol 7261, p 726102
Zhong X, Bayer S, Ravikumar N, Strobel N, Birkhold A, Kowarschik M, Fahrig R, Maier A (2018) Resolve intraoperative brain shift as imitation game. In: Simulation, image processing, and ultrasound systems for assisted diagnosis and navigation. Springer, pp 129–137
Hong J, Park H (2018) Non-linear approach for MRI to intra-operative US registration using structural skeleton. In: Simulation, image processing, and ultrasound systems for assisted diagnosis and navigation. Springer, pp 138–145
Wein W (2018) Brain-shift correction with image-based registration and landmark accuracy evaluation. In: Simulation, image processing, and ultrasound systems for assisted diagnosis and navigation. Springer, pp 146–151
Sun L, Zhang S (2018) Deformable MRI-ultrasound registration using 3D convolutional neural network. In: Simulation, image processing, and ultrasound systems for assisted diagnosis and navigation. Springer, pp 152–158
Heinrich MP (2018) Intra-operative ultrasound to MRI fusion with a public multimodal discrete registration tool. In: Simulation, image processing, and ultrasound systems for assisted diagnosis and navigation. Springer, pp 159–164
Machado I, Toews M, Luo J, Unadkat P, Essayed W, George E, Teodoro P, Carvalho H, Martins J, Golland P (2018) Deformable MRI-ultrasound registration via attribute matching and mutual-saliency weighting for image-guided neurosurgery. In: Simulation, image processing, and ultrasound systems for assisted diagnosis and navigation. Springer, pp 165–171
Drobny D, Vercauteren T, Ourselin S, Modat M (2018) Registration of MRI and iUS data to compensate brain shift using a symmetric block-matching based approach. In: Simulation, image processing, and ultrasound systems for assisted diagnosis and navigation. Springer, pp 172–178
Shams R, Boucher M-A, Kadoury S (2018) Intra-operative brain shift correction with weighted locally linear correlations of 3DUS and MRI. In: Simulation, image processing, and ultrasound systems for assisted diagnosis and navigation. Springer, pp 179–184
Gibbons JD, Chakraborti S (2011) Nonparametric statistical inference. In: Lovric M (ed) International encyclopedia of statistical science. Springer, Berlin, Heidelberg
Avants BB, Tustison NJ, Song G, Cook PA, Klein A, Gee JC (2011) A reproducible evaluation of ANTs similarity metric performance in brain image registration. Neuroimage 54(3):2033–2044
Modat M, Cardoso MJ, Daga P, Cash D, Fox NC, Ourselin S (2012) Inverse-consistent symmetric free form deformation. In: International workshop on biomedical image registration. Springer, pp 79–88
Xiao Y, Eikenes L, Reinertsen I, Rivaz H (2018) Nonlinear deformation of tractography in ultrasound-guided low-grade gliomas resection. Int J Comput Assist Radiol Surg 13(3):457–467
Acknowledgements
This work was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), Grant RGPIN-2015-04136.
Ethics declarations
Conflict of interest
The authors declare that there is no conflict of interest.
Ethical standard
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.
Informed consent
Informed consent was obtained from all participants included in the study.
Cite this article
Masoumi, N., Xiao, Y. & Rivaz, H. ARENA: Inter-modality affine registration using evolutionary strategy. Int J CARS 14, 441–450 (2019). https://doi.org/10.1007/s11548-018-1897-1