1 Introduction

Deformable multimodal image registration has become essential for many procedures in image-guided therapies, e.g., preoperative planning, intervention, and diagnosis. Owing to their substantial improvement in computational efficiency over traditional iterative approaches, learning-based registration methods are becoming increasingly prominent in time-sensitive applications.

Related Work. Many learning-based registration approaches adopt fully supervised or semi-supervised strategies. Their networks are trained with ground-truth deformation fields or segmentation masks [5, 12, 13, 16, 19] and may struggle with limited or imperfect data labeling. A number of unsupervised registration approaches have been proposed to overcome this problem by training on unlabeled data to minimize traditional similarity metrics, e.g., mean squared intensity differences [4, 11, 15, 17, 21, 26]. However, the performance of these methods is inherently limited by the choice of similarity metric. Given the limited selection of multimodal similarity metrics, unsupervised registration approaches may have difficulty outperforming traditional multimodal registration methods, as both essentially optimize the same cost functions. A recent trend for multimodal image registration takes advantage of latent feature disentanglement [18] and image-to-image translation [6, 20, 23]. Specifically, translation-based approaches use Generative Adversarial Networks (GANs) to translate images from one modality into the other, and are thus able to convert the difficult multimodal registration problem into a simpler unimodal task. However, being a challenging problem in itself, image translation may inevitably produce artificial anatomical features that further interfere with the registration process.

In this work, we propose a novel translation-based fully unsupervised multimodal image registration approach. In the context of Computed Tomography (CT) image to Magnetic Resonance (MR) image registration, previous translation-based approaches would translate a CT image into an MR-like image (tMR), and use tMR-to-MR registration to estimate the final deformation field \(\phi \). In our approach, the network estimates two deformation fields, namely \(\phi _\mathrm {s}\) of tMR-to-MR and \(\phi _\mathrm {o}\) of CT-to-MR, in a dual-stream fashion. The addition of the original \(\phi _\mathrm {o}\) enables the network to implicitly regularize \(\phi _\mathrm {s}\) to mitigate certain image translation problems, e.g., artificial features. The network further automatically learns how to fuse \(\phi _\mathrm {s}\) and \(\phi _\mathrm {o}\) towards achieving the best registration accuracy.

Contributions and advantages of our work can be summarized as follows:

  1. Our method leverages the deformation fields estimated from both the original multimodal stream and the synthetic unimodal stream to overcome the shortcomings of translation-based registration;

  2. We improve the fidelity of organ boundaries in the translated MR by adding two extra constraints to the image-to-image translation model Cycle-GAN.

We evaluate our method on two clinically acquired datasets. It outperforms state-of-the-art traditional, unsupervised and translation-based registration approaches.

2 Methods

In this work, we propose a general learning framework for robustly registering CT images to MR images in a fully unsupervised manner.

First, given a moving CT image and a fixed MR image, our improved Cycle-GAN module translates the CT image into an MR-like image. Then, our dual-stream subnetworks, UNet_o and UNet_s, estimate two deformation fields \({\phi _\mathrm {o}}\) and \({\phi _\mathrm {s}}\), respectively, and the final deformation field is fused by the proposed fusion module. Finally, the moving CT image is warped via a Spatial Transformation Network (STN) [14], while the entire registration network aims to maximize the similarity between the moved and fixed images. The pipeline of our method is shown in Fig. 1.
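
To make the pipeline concrete, the following minimal sketch shows one forward pass under our assumptions about module interfaces. It is written in PyTorch-style Python (the actual implementation uses Keras/TensorFlow), and the names g_mr, unet_o, unet_s, fusion and warp are hypothetical placeholders for the pre-trained generator, the two subnetworks, the fusion module, and the STN.

```python
# High-level sketch of one forward pass of the proposed pipeline (PyTorch-style;
# module names are hypothetical placeholders, not the authors' code).
import torch

def register(rct, rmr, g_mr, unet_o, unet_s, fusion, warp):
    """rct, rmr: moving CT and fixed MR volumes of shape (B, 1, D, H, W)."""
    with torch.no_grad():
        tmr = g_mr(rct)                           # CT -> MR-like image via the pre-trained generator
    phi_o = unet_o(torch.cat([rct, rmr], dim=1))  # multimodal stream:  rCT vs. rMR -> (B, 3, D, H, W)
    phi_s = unet_s(torch.cat([tmr, rmr], dim=1))  # unimodal stream:    tMR vs. rMR -> (B, 3, D, H, W)
    phi_os = fusion(phi_o, phi_s)                 # fuse the two deformation fields
    moved = warp(rct, phi_os)                     # STN warps the moving CT with the fused field
    return moved, phi_os
```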

Fig. 1. Illustration of the proposed method. The entire unsupervised network is mainly guided by the image similarity between \(\mathrm{{rCT}} \circ {\phi _\mathrm {os}}\) and \(\mathrm{{rMR}}\).

2.1 Image-to-Image Translation with Unpaired Data

The CT-to-MR translation step consists of an improved Cycle-GAN with additional structure and identity constraints. As a state-of-the-art image-to-image translation model, Cycle-GAN [28] can be trained without pairwise aligned CT and MR datasets from the same patient; thus, Cycle-GAN is widely used in medical image translation [1, 9, 25].

Fig. 2. Schematic illustration of Cycle-GAN with strict constraints. (a) The workflow of the forward and backward translation; (b) The workflow of the identity loss.

Our Cycle-GAN model is illustrated in Fig. 2. The model consists of two generators, \({\mathrm{{G}}_{\mathrm{{MR}}}}\) and \({\mathrm{{G}}_{\mathrm{{CT}}}}\), which provide CT-to-MR and MR-to-CT translation, respectively. It also has two discriminators, \({\mathrm{{D}}_{\mathrm{{CT}}}}\) and \({\mathrm{{D}}_{\mathrm{{MR}}}}\): \({\mathrm{{D}}_{\mathrm{{CT}}}}\) distinguishes between translated CT (tCT) and real CT (rCT), and \({\mathrm{{D}}_{\mathrm{{MR}}}}\) between translated MR (tMR) and real MR (rMR). The training loss of the original Cycle-GAN comprises only two types of terms: the adversarial losses given by the two discriminators (\( {{\mathcal {L}}_{{D_{CT}}}}\) and \({{\mathcal {L}}_{{D_{MR}}}}\)) and the cycle-consistency loss \({{\mathcal {L}}_{{cyc}}}\), which prevents the generators from producing images unrelated to their inputs (refer to [28] for details).
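
For reference, a minimal sketch of the standard cycle-consistency term of Cycle-GAN [28] is given below; it assumes generator callables g_ct and g_mr and is not taken from the paper's implementation.

```python
# Minimal sketch of the standard cycle-consistency term of Cycle-GAN [28]
# (PyTorch-style; g_ct and g_mr are assumed generator callables).
import torch.nn.functional as F

def cycle_consistency_loss(g_ct, g_mr, i_rct, i_rmr):
    # rCT -> tMR -> reconstructed CT should match the input, and likewise for rMR.
    rec_ct = g_ct(g_mr(i_rct))
    rec_mr = g_mr(g_ct(i_rmr))
    return F.l1_loss(rec_ct, i_rct) + F.l1_loss(rec_mr, i_rmr)
```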

However, training a Cycle-GAN on medical images is difficult, since the cycle-consistency loss alone is not enough to enforce structural similarity between translated and real images (as shown in the red box in Fig. 3(b)). Therefore, we introduce two additional losses, a structure-consistency loss \({\mathcal {L}}_{MIND}\) and an identity loss \({\mathcal {L}}_{identity}\), to constrain the training of Cycle-GAN.

MIND (Modality Independent Neighbourhood Descriptor) [8] is a feature that describes the local structure around each voxel. We therefore minimize the difference in MIND features between the translated images \(G_{CT} (I_{rMR})\) or \(G_{MR} (I_{rCT})\) and the real images \(I_{rMR}\) or \(I_{rCT}\) to enforce structural similarity. We define \({\mathcal {L}}_{MIND}\) as follows:

$$\begin{aligned} \mathcal {L}_{MIND}({G_{CT}},{G_{MR}})&= \frac{1}{{N_{MR}}|R|}\sum _x \Vert M({G_{CT}}({I_{rMR}})) - M({I_{rMR}})\Vert _1 \\&+ \frac{1}{{N_{CT}}|R|}\sum _x \Vert M({G_{MR}}({I_{rCT}})) - M({I_{rCT}})\Vert _1 \end{aligned}$$
(1)

where M represents MIND features, \(N_{MR}\) and \(N_{CT}\) denote the number of voxels in \(I_{rMR}\) and \(I_{rCT}\), and R is a non-local region around voxel x.
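
The sketch below illustrates how a structure-consistency term of this form could be computed. The helper mind_features is a deliberately simplified self-similarity descriptor over a 6-neighbourhood; the full MIND descriptor of [8] uses Gaussian-weighted patches and a larger search region, so treat this as an approximation rather than the exact feature.

```python
# Illustrative sketch of the structure-consistency term in Eq. (1). The helper
# mind_features is a simplified self-similarity descriptor over a 6-neighbourhood;
# the full MIND descriptor of [8] uses Gaussian-weighted patches and a larger
# search region, so this is an approximation, not the exact feature.
import torch
import torch.nn.functional as F

def mind_features(x, eps=1e-6):
    """x: (B, 1, D, H, W). Returns a (B, 6, D, H, W) self-similarity descriptor."""
    shifts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    dists = []
    for dz, dy, dx in shifts:
        shifted = torch.roll(x, shifts=(dz, dy, dx), dims=(2, 3, 4))
        # mean squared difference over a 3x3x3 patch, a stand-in for the patch distance
        dists.append(F.avg_pool3d((x - shifted) ** 2, kernel_size=3, stride=1, padding=1))
    d = torch.cat(dists, dim=1)
    var = d.mean(dim=1, keepdim=True) + eps       # local variance estimate
    return torch.exp(-d / var)

def mind_loss(g_ct, g_mr, i_rct, i_rmr):
    """L1 distance between descriptors of translated and real images, as in Eq. (1)."""
    loss_mr = F.l1_loss(mind_features(g_ct(i_rmr)), mind_features(i_rmr))
    loss_ct = F.l1_loss(mind_features(g_mr(i_rct)), mind_features(i_rct))
    return loss_mr + loss_ct
```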

Fig. 3. CT-to-MR translation examples of the original Cycle-GAN and the proposed Cycle-GAN tested on (a) the pig ex-vivo kidney dataset and (b) the abdomen dataset.

The identity loss (as shown in Fig. 2(b)) is included to prevent images already in the expected domain from being incorrectly translated to the other domain. We define it as:

$$\begin{aligned} \mathcal {L}_{identity}=\Vert G_{MR}(I_{rMR})-I_{rMR}\Vert _{1}+\Vert G_{CT}(I_{rCT})-I_{rCT} \Vert _{1} \end{aligned}$$
(2)

Finally, the total loss \(\mathcal {L}\) of our proposed Cycle-GAN is defined as:

$$\begin{aligned} \mathcal {L}=\mathcal {L}_{D_{MR}}+\mathcal {L}_{D_{CT}}+\lambda _{cyc} \mathcal {L}_{cyc}+\lambda _{identity} \mathcal {L}_{identity}+\lambda _{MIND} \mathcal {L}_{MIND} \end{aligned}$$
(3)

where \(\lambda _{cyc}\), \(\lambda _{identity}\) and \(\lambda _{MIND}\) denote the relative importance of each term.
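
A compact sketch of how the identity term (Eq. 2) and the total translation objective (Eq. 3) could be assembled is shown below; the adversarial losses are assumed to be computed elsewhere by the standard Cycle-GAN discriminators, and the default weights follow the values reported in Sect. 3.1.

```python
# Sketch of the identity term (Eq. 2) and the total translation objective (Eq. 3);
# the adversarial terms l_d_mr and l_d_ct are assumed to come from the standard
# Cycle-GAN discriminators, and the weights follow Sect. 3.1.
import torch.nn.functional as F

def identity_loss(g_ct, g_mr, i_rct, i_rmr):
    # An image already in the target domain should pass through its generator unchanged.
    return F.l1_loss(g_mr(i_rmr), i_rmr) + F.l1_loss(g_ct(i_rct), i_rct)

def total_translation_loss(l_d_mr, l_d_ct, l_cyc, l_identity, l_mind,
                           lam_cyc=10.0, lam_identity=5.0, lam_mind=5.0):
    return l_d_mr + l_d_ct + lam_cyc * l_cyc + lam_identity * l_identity + lam_mind * l_mind
```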

2.2 Dual-Stream Multimodal Image Registration Network

As shown in Fig. 3, although our improved Cycle-GAN translates CT images into MR-like images more faithfully, mapping “simple” CT images to “complex” MR images remains challenging. Most image-to-image translation methods inevitably generate unrealistic soft-tissue details, which leads to mismatch problems. Therefore, registration methods that simply convert the multimodal problem into a unimodal one via image translation are not reliable.

To address this problem, we propose a dual-stream network that fully exploits the information in the moving, fixed, and translated images, as shown in Fig. 1. In particular, effective similarity metrics can be used to train our multimodal registration model without any ground-truth deformations.

Network Details. As shown in Fig. 1, our dual-stream network comprises four parts: the multimodal stream subnetwork, the unimodal stream subnetwork, the deformation field fusion module, and the spatial transformation network.

In the multimodal stream subnetwork (UNet_o), the original CT (rCT) and MR (rMR) serve as the moving and fixed images, which allows the model to propagate original image information to counteract mismatch problems in the translated MR (tMR).

Through image translation, we obtain the translated MR (tMR), whose appearance is similar to that of the fixed MR (rMR). Then, in the unimodal stream subnetwork (UNet_s), tMR and rMR are used as the moving and fixed images, respectively. This stream effectively propagates more texture information and constrains the final deformation field to suppress unrealistic voxel drifts from the multimodal stream.

During training, the two streams constrain each other while cooperating to optimize the entire network. Thus, our novel dual-stream design allows us to benefit from both the original image information and the homogeneous structural information in the translated images.

Fig. 4. Detailed architecture of the UNet-based subnetwork. The encoder uses convolutions with a stride of 2 to reduce the spatial resolution, while the decoder uses 3D upsampling layers to restore it.

Specifically, UNet_o and UNet_s adopt the same UNet architecture used in VoxelMorph [4] (shown in Fig. 4). The only difference is that UNet_o receives multimodal inputs, whereas UNet_s receives unimodal inputs. Each UNet takes as input a single 2-channel 3D image formed by concatenating \(I_m\) and \(I_f\), and outputs a 3-channel deformation field.
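
For illustration, a compact VoxelMorph-style UNet is sketched below in PyTorch (the paper's networks are implemented in Keras/TensorFlow); the channel counts are assumptions, but the interface matches the description: a 2-channel moving/fixed volume in, a 3-channel deformation field out.

```python
# Compact VoxelMorph-style 3D UNet sketch (PyTorch; the paper's networks are
# implemented in Keras/TensorFlow and the channel counts here are assumptions).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.LeakyReLU(0.2),
    )

class RegUNet(nn.Module):
    """Takes a 2-channel (moving, fixed) volume and predicts a 3-channel flow field."""
    def __init__(self, enc=(16, 32, 32, 32), dec=(32, 32, 32, 16)):
        super().__init__()
        self.encoders = nn.ModuleList()
        ch = 2
        for c in enc:
            self.encoders.append(conv_block(ch, c, stride=2))   # halve the resolution
            ch = c
        skip_ch = list(enc[:-1])[::-1] + [2]                    # skip channels, coarse to fine
        self.decoders = nn.ModuleList()
        for c, s in zip(dec, skip_ch):
            self.decoders.append(conv_block(ch + s, c))
            ch = c
        self.up = nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False)
        self.flow = nn.Conv3d(ch, 3, kernel_size=3, padding=1)  # 3-channel deformation field

    def forward(self, x):
        feats = [x]
        for encoder in self.encoders:
            x = encoder(x)
            feats.append(x)
        skips = feats[:-1][::-1]                                # drop deepest feature, reverse order
        for decoder, skip in zip(self.decoders, skips):
            x = decoder(torch.cat([self.up(x), skip], dim=1))
        return self.flow(x)
```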

After the uni- and multimodal stream networks, we obtain two deformation fields, \({\phi _\mathrm {o}}\) (for rCT and rMR) and \({\phi _\mathrm {s}}\) (for tMR and rMR). We stack \({\phi _\mathrm {o}}\) and \({\phi _\mathrm {s}}\) and apply a 3D convolution with a kernel size of \(3\,\times \,3\,\times \,3\) to estimate the final deformation field \({\phi _\mathrm {os}}\), which is a 3D volume with the same shape as \({\phi _\mathrm {o}}\) and \({\phi _\mathrm {s}}\).
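
A minimal reading of this fusion step is sketched below: the two 3-channel fields are concatenated and a single \(3\times 3\times 3\) convolution predicts the fused field. Any additional layers or activations would be implementation details not specified in the text.

```python
# Minimal sketch of the deformation field fusion module (PyTorch-style): the two
# 3-channel fields are stacked and a single 3x3x3 convolution outputs the fused field.
import torch
import torch.nn as nn

class FieldFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # 6 input channels (phi_o stacked with phi_s) -> 3 output channels (phi_os)
        self.conv = nn.Conv3d(6, 3, kernel_size=3, padding=1)

    def forward(self, phi_o, phi_s):
        return self.conv(torch.cat([phi_o, phi_s], dim=1))
```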

To evaluate the dissimilarity between the moved and fixed images, we integrate a spatial transformation network (STN) [14] that warps the moving image using \({\phi _\mathrm {os}}\). The loss function consists of two components, as shown in Eq. (4).

$$\begin{aligned} \mathcal {L}_{total}(I_{rMR}, I_{rCT}, \phi _\mathrm {os})=\mathcal {L}_{sim}(I_{rMR}, I_{rCT} \circ \phi _\mathrm {os})+\lambda \mathcal {L}_{smooth}(\phi _\mathrm {os}) \end{aligned}$$
(4)

where \(\lambda \) is a regularization weight. The first term, \(\mathcal {L}_{sim}\), is a similarity loss that penalizes differences in appearance between the fixed and moved images; we adopt SSIM [22] in our experiments. As suggested by [4], the deformation regularization \(\mathcal {L}_{smooth}\) is an L2-norm of the gradients of the final deformation field \(\phi _\mathrm {os}\).
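
The sketch below shows one way the STN-style warp and the objective of Eq. (4) could be implemented; the voxel-unit displacement convention, the use of 1 - SSIM as \(\mathcal {L}_{sim}\), and the external ssim callable are assumptions of this sketch rather than details taken from the paper.

```python
# Sketch of the STN-style warp and the objective of Eq. (4) (PyTorch-style).
# The displacement is assumed to be in voxel units with channels ordered (z, y, x),
# and ssim is an external callable returning a similarity score in [0, 1].
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """Warp a (B, C, D, H, W) volume with a (B, 3, D, H, W) displacement field."""
    b, _, d, h, w = flow.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(d, device=flow.device),
        torch.arange(h, device=flow.device),
        torch.arange(w, device=flow.device),
        indexing='ij')
    loc = torch.stack((zz, yy, xx), dim=0).float().unsqueeze(0) + flow   # sampling locations
    norm = lambda v, size: 2.0 * v / (size - 1) - 1.0                    # map to [-1, 1]
    grid = torch.stack((norm(loc[:, 2], w), norm(loc[:, 1], h), norm(loc[:, 0], d)), dim=-1)
    return F.grid_sample(moving, grid, mode='bilinear', align_corners=True)

def smoothness_loss(flow):
    """L2 norm of the spatial gradients of the deformation field."""
    dz = flow[:, :, 1:] - flow[:, :, :-1]
    dy = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]
    return (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()

def registration_loss(rmr, rct, phi_os, ssim, lam=1.0):
    moved = warp(rct, phi_os)
    return (1.0 - ssim(moved, rmr)) + lam * smoothness_loss(phi_os)
```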

3 Experiments and Results

Dataset and Preprocessing. We focus on the application of abdominal CT-to-MR registration. We evaluated our method on two proprietary datasets, since there is no designated public repository.

  1) Pig Ex-vivo Kidney CT-MR Dataset. This dataset contains 18 pairs of CT and MRI kidney scans from pigs. All kidneys are manually segmented by experts. After preprocessing the data, e.g., resampling and affine spatial normalization, we cropped the data to \(144\times 80\times 256\) with 1 mm isotropic voxels and arbitrarily divided it into two groups for training (15 cases) and testing (3 cases).

  2) Abdomen (ABD) CT-MR Dataset. This 50-patient dataset of CT-MR scans was collected from a local hospital and annotated with anatomical landmarks. All data were preprocessed into \(176\times 176\times 128\) with the same resolution (\({1\,\mathrm{mm}}^{3}\)) and were randomly divided into two groups for training (45 cases) and testing (5 cases).

Implementation. We trained our model with the following settings: (1) the Cycle-GAN network for CT-MR translation is based on an existing implementation [27] with the changes discussed in Sect. 2.1; (2) the uni- and multimodal stream registration networks were implemented in Keras with the TensorFlow backend and trained on an NVIDIA Titan X (Pascal) GPU.

3.1 Results for CT-to-MR Translation

We extracted 1792 and 5248 slices from the transverse planes of the pig kidney and ABD datasets, respectively, to train the image translation network. The parameters \(\lambda _{cyc}\), \(\lambda _{identity}\) and \(\lambda _{MIND}\) were set to 10, 5, and 5 for training.

Table 1. Quantitative results for image translation.

Since our registration method operates on 3D volumes, we apply the pre-trained CT-to-MR generator to translate moving CT images into MR-like images slice-by-slice and concatenate the 2D slices into 3D volumes. The qualitative results are visualized in Fig. 3. In addition, to quantitatively evaluate the translation performance, we apply our registration method to obtain aligned CT-MR pairs and use SSIM [22] and PSNR [10] to judge the quality of the translated MR (Table 1). Our improved Cycle-GAN produces better MR-like images than the original Cycle-GAN on both datasets.
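
A small sketch of this quantitative check is given below, assuming scikit-image metrics and intensities normalised to [0, 1] (the exact normalisation used in our experiments is not spelled out here).

```python
# Sketch of the translation-quality evaluation, assuming scikit-image metrics and
# intensities normalised to [0, 1] (the exact normalisation is an assumption).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def translation_quality(tmr: np.ndarray, rmr: np.ndarray, data_range: float = 1.0):
    """Compare a translated MR volume against the aligned real MR volume."""
    psnr = peak_signal_noise_ratio(rmr, tmr, data_range=data_range)
    ssim = structural_similarity(rmr, tmr, data_range=data_range)
    return ssim, psnr
```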

3.2 Registration Results

Affine registration is used as the baseline. As a traditional method, only mutual information (MI)-based SyN [2] is compared, since MI is the only similarity metric available in ANTs [3] for multimodal registration. In addition to SyN, we implemented the following learning-based methods: 1) VM_MIND and VM_SSIM, which extend VoxelMorph with the similarity metrics MIND [8] and SSIM [22], respectively; 2) M2U, a typical translation-based registration method that generates tMR from CT and converts the multimodal problem into tMR-to-MR registration. Note that the parameters of all methods were tuned to give their best results on both datasets.

Two examples of the registration results are visualized in Fig. 5, where the red and yellow contours represent the ground-truth and registered organ boundaries, respectively. As shown in Fig. 5, the organ boundaries aligned by the traditional SyN method show a considerable amount of disagreement. Among all learning-based methods, ours has the most visually appealing boundary alignment in both cases. VM_SSIM performs significantly worse for the kidney, while VM_MIND achieves accurate registration for the kidney but a significantly worse result for the ABD case. Meanwhile, M2U suffers from artificial features introduced by the image translation, which leads to inaccurate registration.

Fig. 5. Visualization results of our model compared to other methods. Upper: Pig Kidney. Bottom: Abdomen (ABD). The red contours represent the ground-truth organ boundaries, while the yellow contours are the warped contours of the segmentation masks. (Color figure online)

The quantitative results are presented in Table 2. We compare the different methods by the Dice score [7] and the target registration error (TRE) [24]. We also report the average run-time of each method. As shown in Table 2, our method consistently outperformed the other methods and was able to register a pair of images in less than 2 s (when using a GPU).
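
For completeness, the sketch below shows one way to compute these two metrics, assuming binary segmentation masks, corresponding (N, 3) landmark arrays, and the 1 mm voxel spacing described in Sect. 3.

```python
# Sketch of the two evaluation metrics in Table 2, assuming binary masks,
# corresponding (N, 3) landmark arrays, and 1 mm isotropic voxels.
import numpy as np

def dice_score(warped_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice overlap between the warped segmentation mask and the ground truth."""
    a, b = warped_mask > 0, gt_mask > 0
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def tre(warped_lms: np.ndarray, fixed_lms: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Mean Euclidean distance (mm) between corresponding landmarks."""
    diff = (warped_lms - fixed_lms) * np.asarray(spacing)
    return float(np.linalg.norm(diff, axis=1).mean())
```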

Table 2. Quantitative results for Pig Kidney Dataset and Abdomen (ABD) Dataset.

3.3 The Effect of Each Deformation Field

To validate the effectiveness of the deformation field fusion, we compare \(\phi _\mathrm {s}\), \(\phi _\mathrm {o}\) and \(\phi _\mathrm {os}\) together with the corresponding warped images (Fig. 6). The qualitative results show that \(\phi _\mathrm {s}\) from the unimodal stream alleviates the voxel drift effect of the multimodal stream, while \(\phi _\mathrm {o}\) from the multimodal stream uses the original image textures to maintain fidelity and reduce the artificial features introduced in the generated tMR image. The fused deformation field \(\phi _\mathrm {os}\) produces better alignment than either stream alone, which demonstrates the effectiveness of the joint learning step.

Fig. 6. Visualizations of the deformation field fusion. (a) moving image; (h) fixed image; (b/d/f) deformation fields; (c/e/g) images warped by (b/d/f), with the corresponding average Dice scores (%) over all organs. The red contours represent the ground truth, while the yellow contours show the warped segmentation masks. (Color figure online)

4 Conclusion

We proposed a fully unsupervised uni- and multimodal stream network for CT-to-MR registration. Our method leverages both CT-translated MR and original CT images to achieve the best registration result. Moreover, the registration network can be effectively trained with computationally efficient similarity metrics, without any ground-truth deformations. We evaluated the method on two clinical datasets, and it outperformed state-of-the-art methods in terms of both accuracy and efficiency.