Introduction

Prostate cancer (PCa) is a leading cause of cancer-related death in males in the USA and Canada [1]. Accurate and early diagnosis of aggressive PCa is critical for adequate patient management. Transrectal ultrasound (TRUS) and Magnetic Resonance Imaging (MRI) are complementary imaging modalities for visualizing the anatomy of the prostate and characterizing the tissue for cancer presence. While MRI is the ideal imaging tool for PCa staging and characterization [2], TRUS is the most widely used modality due to its real-time nature, low cost and ubiquity. It is also the primary modality used for interventional applications such as biopsy and brachytherapy. In this paper, we present and compare two practical software tools that can be used for non-rigid registration of prostate TRUS and MRI data to enable joint use of these modalities.

The concept of MRI–TRUS fusion targeted prostate biopsy was first introduced in 2002 by Kaplan et al. [3]. MRI is typically acquired weeks prior to the biopsy, with the patient in a different position (supine vs. lateral decubitus) and often with an endorectal coil, resulting in substantial differences in prostate shape between the MR and TRUS volumes. Consolidating the two datasets therefore requires a non-trivial technique. Image registration can be used to bring these two modalities into alignment.

Over the last decade, MRI–TRUS fusion biopsy has evolved, and several solutions have been implemented in commercial products [4]. Strong evidence exists that targeted prostate biopsy, enabled in particular by such fusion systems, improves the accuracy of PCa sampling [4]. In a recent study, Puech et al. conclude that software-based image registration does not currently offer any advantages over cognitive registration done by visual re-identification of the biopsy targets between the two modalities [5]. In contrast, a study by Delongchamps et al. confirmed the utility of software registration but produced no evidence that deformable registration leads to any tangible improvements over rigid registration [6]. Most commercial MRI–TRUS fusion products implement linear registration only [4]. Further studies are needed to evaluate the overall clinical value of software registration as well as of specific registration methods.

Comparison studies of image registration algorithms for the purposes of targeted prostate biopsy are challenging. Commercial tools are typically constrained to the manufacturer-specific registration algorithms, which are often not described in sufficient detail, and do not allow exporting of the registration results. Numerous registration algorithms have been proposed in the literature for MRI–TRUS fusion [7–9], but very few academic papers are accompanied by a software implementation (the study by Moradi et al. [8] is the only study known to us that uses a publicly available registration tool) that could be easily used in a comparison study or considered for translation into a clinical research setting. Therefore, we believe open-source solutions that could readily be applied to MRI–TRUS fusion studies would greatly benefit the community.

Approaches to MRI–TRUS registration of prostate images can be categorized into intensity-based and segmentation-based methods. Efficient and accurate 3D non-rigid MRI–TRUS registration is inherently challenging because of the intermodality nature of the problem and the low signal-to-noise ratio of TRUS. To the best of our knowledge, the only fully intensity-based approach for MRI–TRUS fusion is the method by Sun et al. [9]. All other methods rely on TRUS segmentation [7, 8, 10]. Like all intensity-based approaches, the method proposed by Sun et al. [9] requires homologous anatomical features to appear in both images. The challenge with MRI–TRUS fusion is that, since the imaging physics are substantially different between the two modalities, parts of the anatomy may be visible in one image but not in the other.

MRI can be segmented in advance of the procedure without adding to the procedure time. TRUS images are typically segmented during the brachytherapy workflow. In the biopsy workflow, manual segmentation of the prostate gland is considered acceptable in commercial fusion tools. Therefore, we can bypass the difficulties associated with multimodal intensity-based registration using a method that relies on the availability of the prostate gland segmentation. However, especially for prostate interventions, even experts are prone to over- and under-segmentation of the anatomy, as discussed in [11]. This is also evident in the results presented in this paper. The discrepancy can be attributed to the poor visibility of the prostate boundary near the base and apex in TRUS. Therefore, a method that is robust to this potential variability, or that can handle missing data in regions where the prostate boundary is not clear, would be highly valuable, since mid-gland segmentation can be done robustly [11].

Contributions In this paper, we present two approaches to registration of prostate images for MRI–TRUS fusion. The first method described in “Registration of signed distance maps with B-spline regularization” section represents the deformation field interior to the prostate using B-splines. The second method is presented in “Biomechanically constrained surface registration” section and relies on biomechanical modeling to directly regularize the internal deformation field and explicitly accounts for missing surface data [12]. We validate and compare the results of these registration methods using the data collected for 11 PCa patients who underwent standard MRI and TRUS imaging as part of their clinical care. We make both approaches available as open-source tools to facilitate development and evaluation of registration methodologies and to support clinical research in image-guided prostate interventions.

Methods

The registration approaches we propose consider a clinical setup consisting of two stages:

  1. Pre-processing (planning) stage: The MRI exam of the patient is analyzed to identify the planned biopsy targets. The prostate gland can be contoured in MRI, and post-processing of the segmentation can be applied to recover a smooth surface.

  2. Intra-procedural stage: A volumetric sweep of the prostate gland with TRUS is obtained, followed by reconstruction of a volumetric image. The prostate is segmented on the volumetric image, and the segmentation is used to generate a smooth surface of the gland. The MRI and TRUS surfaces are then set as input to either of the registration methods described below to compute displacements that can be used for target position computation or fused MRI–TRUS display.

In this section, we describe image acquisition and the various processing steps in detail and discuss our approach to the retrospective evaluation of the registration techniques.

Image acquisition and pre-processing

The imaging data used in this evaluation were collected as part of a HIPAA-compliant prospective study that was approved by the institutional review board of the Brigham and Women’s Hospital (BWH). Clinical indication for both MRI and TRUS imaging was histologically confirmed PCa, with low dose rate radiation brachytherapy as a preferred treatment option. TRUS image acquisition was performed during brachytherapy prostate volume studies, with the goal of confirming suitability of the patient for the procedure (volume of the prostate gland is within the clinically acceptable range, and there is no interference of the pubic arch with the brachytherapy needle insertion plan). Per standard clinical protocol, no anesthesia was administered to the patient during either MRI or TRUS imaging.

Multiparametric MRI data were collected using the standard imaging protocols established at our institution [2]. All MR imaging exams were performed on a GE Signa HDx 3.0T system (GE Healthcare, Waukesha, WI) with the patient in a supine position using a combination of 8-channel abdominal array and endorectal coils (Medrad, Pittsburgh, PA). The imaging study included anatomical T2-weighted imaging (T2WI) (FRFSE sequence, TR/TE \(=\) 3500/102 ms over a 16 \(\hbox {cm}^2\) FOV, reconstructed voxel size \(0.3 \times 0.3 \times 3\) mm), which was the series used for the registration experiments presented in this work. The total time of the multiparametric MRI exam was about 45 min.

TRUS imaging was done in a separate session, with the patient in a lithotomy position. Per standard clinical setup, the TRUS probe (BK 8848) was attached to a motorized mover (Nucletron EndoCavity Rotational Mover (ECRM)) and mounted on a rigid stand with the enclosure for the TRUS probe (Nucletron OncoSelect stepper). Imaging was performed using the sagittal array of the probe rotated by the ECRM. The Camera Link and OEM research interfaces of the BK ProFocus US scanner (BK Medical) were used to collect radiofrequency (RF) TRUS data concurrently with the clinical image acquisition. A position tracking device equipped with an accelerometer, magnetometer and gyroscope (Phidget Spatial 3/3/3) was attached to the handle of the probe to track the sagittal array orientation during the motorized sweep. Synchronous collection of the RF and tracking data was performed using the Public Library for UltraSound research (PLUS) [13] on a workstation equipped with a Camera Link interface (Dalsa X64 CL Express). The total time of the TRUS image collection was less than 5 min.

The following pre-processing steps were applied to prepare the data before the registration procedure. We used PLUS to convert the RF TRUS data into B-mode images and to reconstruct 3D TRUS volumes from the tracked data using the gyroscope sensor tracking information. Volumetric TRUS images were reconstructed at 0.2 mm isotropic voxel size. TRUS and axial T2WI MRI volumes were brought into initial alignment by rigidly registering three fiducial points (left-most, right-most and anterior points identified on the mid-gland axial slice of the prostate) placed manually in the reconstructed volumes using 3D Slicer [14]. The prostate gland was contoured manually in both TRUS and T2WI volumes using the 3D Slicer Editor module. To simplify the segmentation procedure, TRUS volumes were resampled to the resolution of the T2WI dataset (3 mm slice thickness). The manually segmented masks were resampled back to the 0.2 mm isotropic spacing and smoothed by applying a recursive Gaussian image filter with \(\sigma =3\). The resulting masks were then used as input for the two registration tools we describe next. We make three of the datasets used in the evaluation publicly available (Footnote 1).
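
For illustration, a minimal sketch of the mask resampling and smoothing step is given below, written with SimpleITK in Python. The file names are hypothetical, and SimpleITK stands in for the 3D Slicer/ITK components actually used; only the parameters stated above (0.2 mm isotropic spacing, recursive Gaussian smoothing with \(\sigma =3\)) are carried over.

import SimpleITK as sitk

# Hypothetical input: a manually segmented TRUS prostate label map.
mask = sitk.ReadImage("trus_prostate_label.nrrd", sitk.sitkFloat32)

# Resample the mask back to 0.2 mm isotropic spacing.
new_spacing = (0.2, 0.2, 0.2)
new_size = [int(round(sz * spc / nspc))
            for sz, spc, nspc in zip(mask.GetSize(), mask.GetSpacing(), new_spacing)]
resampled = sitk.Resample(mask, new_size, sitk.Transform(), sitk.sitkLinear,
                          mask.GetOrigin(), new_spacing, mask.GetDirection(),
                          0.0, sitk.sitkFloat32)

# Smooth the binary mask with a recursive Gaussian filter (sigma = 3; whether
# sigma is interpreted in voxels or in mm is an assumption of this sketch).
smooth = sitk.SmoothingRecursiveGaussian(resampled, 3.0)
sitk.WriteImage(smooth, "trus_prostate_mask_smooth.nrrd")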

Registration of signed distance maps with B-spline regularization

We used the BRAINSFit [15] registration module of 3D Slicer, which we had earlier adapted to prostate MRI intensity-based hierarchical registration [16]. Over the last few years, this module has been used to support clinical trials of MRI-guided in-bore transperineal prostate biopsy at BWH [17]. To apply this registration approach to MRI–TRUS registration, we implemented additional pre-processing of the segmentations and modified the registration parameters as follows. First, the isotropic segmentation masks were cropped using a fixed-size \(({\approx }10\,\hbox {mm})\) margin around the bounding box of the segmentation to reduce the computation time of the subsequent steps. The Maurer signed distance transformation [18], as implemented in the Insight Toolkit (ITK), was applied to the smooth segmentations of the prostate gland in both MRI and TRUS. We chose the Maurer implementation of the distance transformation due to its improved (linear time) performance compared with other available implementations. The resulting distance maps were registered using the standard BRAINSFit module of 3D Slicer (v4.3.1) with affine and B-spline (isotropic grid of six control points) registration stages applied in sequence. We used the mean squared difference similarity metric with a fixed number of 10,000 samples. All of the processing was done either in 3D Slicer or using standard classes of ITK. This approach was developed by the team at BWH and will be further referred to as such in the text.
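
Purely for illustration, the sketch below reproduces the same sequence (signed Maurer distance maps registered with a mean squared difference metric, affine followed by B-spline with a six-control-point grid) using SimpleITK in Python. It is not the released BRAINSFit-based module, and the file names and optimizer settings are assumptions.

import SimpleITK as sitk

fixed_mask = sitk.ReadImage("trus_prostate_mask_smooth.nrrd", sitk.sitkFloat32)
moving_mask = sitk.ReadImage("mri_prostate_mask_smooth.nrrd", sitk.sitkFloat32)

# Signed Maurer distance maps of the (thresholded) smooth masks.
fixed = sitk.SignedMaurerDistanceMap(fixed_mask > 0.5, insideIsPositive=False,
                                     squaredDistance=False, useImageSpacing=True)
moving = sitk.SignedMaurerDistanceMap(moving_mask > 0.5, insideIsPositive=False,
                                      squaredDistance=False, useImageSpacing=True)

# Affine stage with a mean squared difference metric.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMeanSquares()
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.01)  # stand-in for the fixed 10,000 samples
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(fixed, moving,
                                                          sitk.AffineTransform(3)))
affine = reg.Execute(fixed, moving)

# B-spline stage with an isotropic control-point grid.
reg.SetMovingInitialTransform(affine)
reg.SetInitialTransform(sitk.BSplineTransformInitializer(fixed, [6, 6, 6]))
reg.SetOptimizerAsLBFGSB()
bspline = reg.Execute(fixed, moving)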

The registration tool implementing the approach above is available as a module within the SlicerProstate extension of the 3D Slicer software (Footnote 2).

Biomechanically constrained surface registration

Triangulated surfaces required by this algorithm were reconstructed from the smooth segmentation masks by first applying the marching cubes algorithm, followed by an edge collapse-based incremental decimation using ITK [19].
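
An equivalent surface extraction step is sketched below using VTK's Python bindings for readability; the released pipeline applies marching cubes followed by ITK's edge collapse-based incremental decimation, so vtkQuadricDecimation serves only as a stand-in here, and the file names are hypothetical.

import vtk

# Read the smoothed binary mask (hypothetical file name).
reader = vtk.vtkNrrdReader()
reader.SetFileName("mri_prostate_mask_smooth.nrrd")

# Extract the 0.5 iso-surface of the smoothed mask.
mc = vtk.vtkMarchingCubes()
mc.SetInputConnection(reader.GetOutputPort())
mc.SetValue(0, 0.5)

# Reduce the triangle count (stand-in for the ITK edge-collapse decimation).
dec = vtk.vtkQuadricDecimation()
dec.SetInputConnection(mc.GetOutputPort())
dec.SetTargetReduction(0.9)

writer = vtk.vtkPolyDataWriter()
writer.SetInputConnection(dec.GetOutputPort())
writer.SetFileName("mri_prostate_surface.vtk")
writer.Write()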

The following approach was developed independently by the team at the University of British Columbia (UBC) and will be further referred to as UBC. We recast the registration problem as probability density estimation, where the points on the source surface represent the centroids of a Gaussian mixture model (GMM) [12], and the target surface represents observations from that model. We use the following notation:

\(X_{N\times 3}\): Observations, i.e., prostate surface points on US
\(Y_{M\times 3}\): GMM centroids, i.e., prostate surface points on MRI
\(\varPhi _{M\times J},U_{J\times 3}\): FE model interpolation matrix and nodal displacements
\(K_{3J\times 3J}\): Stiffness matrix
\(P_{M\times N}\): Posterior probabilities of GMM components
\(\mathbf {x}_{3N \times 1},\mathbf {y}_{3M \times 1}\) and \(\mathbf {u}_{3J\times 1}\): Rasterized representations of \(X, Y\) and \(U\)
\(\mathrm {diag}{(\mathbf {v})}\): Diagonal matrix of any vector \(\mathbf {v}\)
\(I\): Identity matrix
\(\tilde{P} = \mathrm {kron}{(P,I_{3\times {3}})}\): Kronecker tensor product of a matrix \(P\) and \(I_{3\times {3}}\)
\(\mathbf {1}\): Column vector of all ones

Similar to the BWH method in “Registration of signed distance maps with B-spline regularization” section, we perform surface-based registration using an affine stage followed by a non-rigid stage. Henceforth, we refer to these as the UBC-aff and UBC methods, respectively. UBC-aff is exactly the “affine” solution detailed and derived by Myronenko and Song [12]. The non-rigid component of the registration is constrained by minimizing the volumetric strain computed using a finite element (FE) model. To create this model, a tetrahedral volumetric mesh is automatically generated from the triangulated MRI segmentation using TetGen [20]. In place of setting boundary conditions, we drive the surface of the model using implicit surface-to-surface forces, from source to target. These forces arise naturally from minimizing the negative log-likelihood function of the GMM, with an added biomechanical regularizer. The objective function to minimize is as follows:

$$\begin{aligned} E(U,\sigma ^2) =\,&\frac{1}{2\sigma ^2}\sum _{m,n=1}^{M,N}P(y_m|x_n)\left\| x_n - (y_m+v_m)\right\| ^2 \nonumber \\&+ \frac{3N_P}{2}\log (\sigma ^2)+ \frac{\beta }{2}\mathbf {u}^{T}K\mathbf {u}, \end{aligned}$$
(1)

where \(x_n\) is a point on the fixed observation surface (segmented TRUS), \(y_m\) is a point on the moving surface (segmented MRI), and \(v_m\) is the displacement of \(y_m\) induced by the FE model. Since the first term only involves points on the surface of the model, we use FE interpolation \(v_m={\varPhi _m}U\) to relate surface displacements to nodal displacements. \(P(\cdot )\) denotes the GMM probability density function, responsible for “softly” weighting correspondences between the two surfaces. In the second term, \(N_P=\sum _{m,n} P(y_m|x_n)\), and \(\sigma ^2\) is the variance of the Gaussian components. The derivation of the expression up to this point follows directly from that of Myronenko and Song [12]. The last term is the added regularizer, which represents the linearized strain energy of an FE model. Note that since the motion of all three coordinates is coupled, we use a rasterized form of the displacements: \(\mathbf {u} = [u_{1x}, u_{1y}, u_{1z}, \ldots , u_{Jx}, u_{Jy}, u_{Jz}]^T\). The stiffness matrix, \(K\), can be computed directly from the FEM’s tetrahedral structure and a constitutive material model. We currently assume a linear material, but the approach can be readily generalized. The free parameter, \(\beta \), controls the trade-off between the tightness of the surface-to-surface fit and regularization.

There are two unknowns in this model: the volumetric displacements, \(U\), and the variance of the correspondences, \(\sigma ^2\). Initially, the displacements are set to zero, and the Gaussian variance is estimated from the data as in [12]. These unknowns are optimized using an expectation maximization (EM) algorithm. In the expectation step, we compute how likely it is that an observation corresponds to a GMM centroid using

$$\begin{aligned} P(y_m+\varPhi _mU|x_n) = \frac{\exp {\left( -\frac{1}{2}\frac{\left\| x_n -(y_m+\varPhi _mU)\right\| ^2}{\sigma ^2}\right) }}{\sum _{j=1}^M\exp {\left( -\frac{1}{2}\frac{\left\| x_n -(y_j+\varPhi _jU)\right\| ^2}{\sigma ^2}\right) } + c}, \end{aligned}$$
(2)

where \(c=(2\pi \sigma ^2)^{D/2}\frac{w}{1-w}\frac{M}{N}\), \(D=3\) is the dimension of the points, and \(0 \le w < 1\) is the estimate of outliers/missing data [12]. It can be shown that minimizing Eq. 1 w.r.t. \(\mathbf {u}\) results in the following system of equations:

$$\begin{aligned} \left[ \tilde{\varPhi }^{T}\mathrm {diag}\left( \tilde{P}\mathbf {1}\right) \tilde{\varPhi } + \beta \sigma ^2K\right] \mathbf {u} = \tilde{\varPhi }^{T}\tilde{P}\mathbf {x} -\tilde{\varPhi }^{T}\mathrm {diag}\left( \tilde{P}\mathbf {1}\right) \mathbf {y}. \end{aligned}$$
(3)
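
For completeness, Eq. 3 follows from setting the gradient of Eq. 1 with respect to \(\mathbf {u}\) to zero (with the posteriors held fixed, using the rasterized notation above and \(\tilde{\varPhi } = \mathrm {kron}{(\varPhi ,I_{3\times 3})}\)):

$$\begin{aligned} \frac{\partial E}{\partial \mathbf {u}} = \frac{1}{\sigma ^2}\tilde{\varPhi }^{T}\left[ \mathrm {diag}\left( \tilde{P}\mathbf {1}\right) \left( \mathbf {y}+\tilde{\varPhi }\mathbf {u}\right) - \tilde{P}\mathbf {x}\right] + \beta K\mathbf {u} = 0,
\end{aligned}$$

and multiplying through by \(\sigma ^2\) and rearranging yields Eq. 3.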

The algorithm iterates between expectation (updating Eq. 2) and maximization steps (updating \(\sigma ^2,\mathbf {u}\)) until it converges to the solution. Updating \(\sigma ^2\) is exactly as in [12] and is excluded for brevity. The algorithm was implemented in Matlab (MathWorks, Natick, MA), with a mex-interface to TetGen.
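
To make the updates concrete, a minimal numpy sketch of a single EM iteration is given below. It is an illustration of Eqs. 2 and 3 only, not the released MATLAB/TetGen implementation; the \(\sigma ^2\) update shown is a standard CPD-style estimate, and the dense Kronecker products are used purely for readability.

import numpy as np

def em_iteration(X, Y, Phi, K, u, sigma2, w=0.1, beta=1.44):
    """One EM iteration of the GMM + FE surface registration (illustrative).

    X: (N, 3) TRUS surface points; Y: (M, 3) MRI surface points;
    Phi: (M, J) FE interpolation matrix; K: (3J, 3J) stiffness matrix
    (Young's modulus factored out, so beta plays the role of beta*E);
    u: (3J,) rasterized nodal displacements; sigma2: GMM variance.
    """
    N, M, D = X.shape[0], Y.shape[0], 3
    Yd = Y + Phi @ u.reshape(-1, 3)                            # displaced centroids

    # E-step (Eq. 2): posterior probabilities with a uniform outlier term.
    d2 = np.sum((X[None, :, :] - Yd[:, None, :]) ** 2, axis=2)  # (M, N)
    G = np.exp(-0.5 * d2 / sigma2)
    c = (2 * np.pi * sigma2) ** (D / 2) * w / (1 - w) * M / N
    P = G / (G.sum(axis=0, keepdims=True) + c)

    # M-step (Eq. 3): solve the linear system for the nodal displacements.
    Pt1 = P.sum(axis=1)                                         # row sums of P
    Phit = np.kron(Phi, np.eye(3))                              # \tilde{Phi}, (3M, 3J)
    dP = np.kron(np.diag(Pt1), np.eye(3))                       # diag(\tilde{P} 1)
    Pt = np.kron(P, np.eye(3))                                  # \tilde{P}, (3M, 3N)
    A = Phit.T @ dP @ Phit + beta * sigma2 * K
    b = Phit.T @ (Pt @ X.reshape(-1) - dP @ Y.reshape(-1))
    u_new = np.linalg.solve(A, b)

    # Variance update (CPD-style): weighted mean squared residual.
    Yd_new = Y + Phi @ u_new.reshape(-1, 3)
    d2_new = np.sum((X[None, :, :] - Yd_new[:, None, :]) ** 2, axis=2)
    sigma2_new = float((P * d2_new).sum() / (D * P.sum()))
    return u_new, sigma2_new, P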

There are a few free parameters in this registration scheme. The first is \(w\in [0,1)\), which controls a background uniform distribution in the GMM to account for noise/outliers. Due to the noise in TRUS, we use \(w=0.1\) for the experiments conducted in this paper, although we have found the results to be relatively insensitive to values within a reasonable range (e.g., \([0.05, 0.15]\)). The next two involve the constitutive law of the FE model, which affects the stiffness matrix \(K\). We assume a linear material with Young's modulus \(E=48\) kPa and Poisson's ratio \(\nu =0.49\). These values are derived from a study by Krouskop et al. [21]. For linear materials, the Young's modulus can be factored out of the stiffness matrix and combined with the last parameter, \(\beta \), the regularization weight. This controls the extent to which the FE model is used to limit deformation. Small values of \(\beta \) allow large deformations, leading to better surface-to-surface fitting but perhaps unrealistic deformations. This parameter should be tuned depending on the context, increased until an acceptable amount of deformation is observed. For our experiments, we use \(\beta =0.03\) (so \(\beta E = 1.44\)). Thus, for linear materials, there are three free parameters: \(w\), \(\nu \) and the product \(\beta E\).

For the purpose of evaluating the capabilities of the UBC method in registering partial data, partial surface datasets were created for each case by cropping the full surface 10 mm from each end using planes perpendicular to the main axis of the prostate gland. The choice of the data to be discarded was motivated by the practical difficulties in accurate segmentation of the prostate at the apex and base [11].
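
This cropping can be expressed compactly; the sketch below estimates the main axis by PCA (an assumption of this illustration, not necessarily the axis definition used in the experiments) and discards points within 10 mm of either end.

import numpy as np

def crop_surface_along_main_axis(points, margin=10.0):
    """Drop surface points within `margin` mm of the two ends of the gland
    along its main axis, estimated here as the first principal component."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]                    # projection onto the main axis
    keep = (proj > proj.min() + margin) & (proj < proj.max() - margin)
    return points[keep]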

The registration tool implementing the method above is provided as a set of MATLAB scripts and C++ libraries under a nonrestrictive open-source license (Footnote 3).

Evaluation setup

The two registration tools described above were applied to the MRI and TRUS datasets collected for the PCa patients. Identical parameters were used for each of the algorithms across the datasets used in the evaluation. Quantitative assessment was based on the observed computation time and Target Registration Error (TRE). Computation time was measured for each of the processing steps. The accuracy of registration was evaluated using the TRE measured between corresponding landmarks identified by an interventional radiologist specializing in abdominal image-guided interventions with over 10 years of experience in both MRI and ultrasound-guided procedures (K.T.). The landmarks were localized independently of the gland segmentation process. The landmarks used in the evaluation included anatomical landmarks that could be consistently identified in each patient (entry of the urethra at the base (coded as UB) and apex (coded as UA) of the prostate gland, and the verumontanum (coded as VM)) as well as patient-specific landmarks (calcifications or cysts). The landmarks were marked using a setup where both the MRI and the volume-reconstructed TRUS images were shown to the operator side by side in 3D Slicer to facilitate consistent identification.
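
The TRE for each landmark is simply the Euclidean distance between the landmark identified in TRUS and the corresponding MRI landmark mapped through the estimated transform, as in the trivial sketch below (the landmark arrays are placeholders).

import numpy as np

def target_registration_error(trus_points, mapped_mri_points):
    """Per-landmark TRE in mm; both inputs are (L, 3) arrays of landmark
    coordinates expressed in the same (TRUS) coordinate frame."""
    return np.linalg.norm(trus_points - mapped_mri_points, axis=1)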

Table 1 Volumes of the prostate gland segmentation in MRI \((V_\mathrm{MR})\) and US \((V_\mathrm{US})\)
Fig. 1 Example of the registration result for case 10 using the BWH method. The green outline corresponds to the smoothed surface of the segmented prostate gland in the US image (both rows). The top row shows views of the TRUS volume; the bottom row shows the registered MRI volume for the same case. Annotations show examples of the landmarks used in the evaluation: urethra entry at the base (red arrow) and apex (yellow arrow)

Normality testing was performed using the Shapiro–Wilk test, and statistical comparisons were done using a paired t test. Statistical analysis and plotting were performed using R version 3.0.1 (Footnote 4). Registration experiments were performed on a MacBook Pro laptop (early-2011 model, 2.3 GHz Intel Core i7, 8 GB RAM, SSD, OS X 10.9.5). C++ code was compiled using the XCode 6.1 clang-600.0.54 compiler in Release mode. MATLAB version 2013b was used for the UBC registration tool.
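
An equivalent analysis can be reproduced in Python with scipy, as sketched below; the TRE vectors here are placeholders, not the study's data (the study itself used R).

import numpy as np
from scipy import stats

tre_bwh = np.array([3.1, 2.8, 4.0, 3.5, 2.2])  # placeholder per-landmark TREs (mm)
tre_ubc = np.array([3.4, 2.6, 4.3, 3.2, 2.5])

print(stats.shapiro(tre_bwh))                   # Shapiro-Wilk normality test
print(stats.shapiro(tre_ubc))
print(stats.ttest_rel(tre_bwh, tre_ubc))        # paired t test between methods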

Results

Evaluation was conducted using imaging data collected for 11 patients. The volumes of the segmented prostate gland for the cases used in the evaluation are shown in Table 1. The volume of the gland segmented in TRUS was typically smaller than that in MRI, with the difference exceeding 20 % in four of the 11 cases.

Computation time was as follows. BWH registration pre-processing took on average 35 s (range 31–51 s), while registration (including resampling) took 40 s (range 32–56 s). Pre-processing for the UBC method was comparable and on average took 40 s (range 23–64 s). The average registration time for the UBC method was 93 s (range 33–248 s) when the full surface data were used, and 60 s (range 19–137 s) when partial data were used. No statistically significant correlation was observed between the volume of the prostate gland segmentation and the registration time. A representative example of a registration result is shown in Fig. 1. Visualization of the displacement fields obtained with both methods for the same case is shown in Fig. 2.

Fig. 2 Visualization of the deformation field for case 10 using both the BWH (top row) and UBC (bottom row) methods. The green outline corresponds to the MR surface before registration, and the purple outline is the intersection of the TRUS prostate surface with the image plane. Note that for the UBC method the deformation is restricted to the interior of the gland segmentation, while the BWH method produces a continuous, smooth deformation field that extends beyond the prostate segmentation

A total of 48 landmarks across all cases were identified for the purposes of TRE assessment. In the majority of the cases (six out of 11), the landmarks corresponding to the UA and/or UB anatomical locations were outside the gland segmentation (see also Fig. 4, which shows landmarks located outside the gland segmentation). Only landmarks that were inside the gland in both the MRI and TRUS segmented volumes (a total of 37) were considered in the quantitative evaluation, to ensure that the same set of landmarks was used in evaluating both methods. Among those landmarks, the mean initial TRE was 7.8 mm (range 1.7–15.3 mm). A detailed summary of the TRE statistics is shown in Table 2.

Table 2 Summary statistics (in mm) of the initial Target Registration Error (TRE) (Init.), TRE following affine registration using the BWH (BWH-aff) and UBC (UBC-aff) methods, and TRE following deformable registration using the BWH (BWH-bspline) and UBC methods with full (UBC-full) and partial (UBC-part) surface information. A significant reduction in TRE was observed as a result of affine registration, and the deformable registration component did not produce further improvements

There was insufficient evidence to reject the hypothesis of normality of the observed errors based on the Shapiro–Wilk test \((p>0.05)\). Both the UBC and BWH methods led to significantly smaller TREs as a result of the affine registration step \((p<0.0001)\), leading to a mean residual TRE of about 3.5 mm (range 0.1–7.3 mm) (the UBC-aff step refers to the “affine” version of [12]). The deformable component of the registration did not result in a statistically significant improvement of the mean TRE. Comparison of the registration results obtained using the BWH and full surface UBC methods did show a statistically significant difference between them \((p<0.05)\); however, the difference between the means was only 0.3 mm. A detailed summary of the UBC and BWH registration results for each landmark is shown in Fig. 4. No significant difference was observed between the TREs obtained with the UBC method using full versus partial surface data. At the same time, we observed that the results can be noticeably different visually, as illustrated in Fig. 3.

Fig. 3 Example of the surface registration result using the UBC method. The surface of the prostate gland in TRUS used for registration is shown as a wireframe, and the registered surface is colored by the displacement magnitude. The registration result that used full surface information is in the left panel, whereas the partial surface registration result is in the right panel. The differences are most apparent at the apex (yellow arrow) and the base (red arrow) of the gland

Fig. 4 Summary of the TREs for the datasets used in the evaluation. Each point corresponds to a single landmark (“UA” is the urethra at the apex, “UB” the urethra at the base, “VM” the verumontanum, and “Other” corresponds to case-specific landmarks identified for calcifications or cysts), with the BWH method TRE plotted on the vertical axis and the UBC method TRE on the horizontal axis (using full surface data in the top panel and partial data in the bottom panel). Red points correspond to the landmarks that were marked outside the gland segmentation. Note that the UBC TREs for the landmarks located outside the gland include only the affine registration component, since the deformation can only be estimated inside the tetrahedral mesh. For this reason, only those landmarks that were located inside the gland in both MRI and TRUS were considered in the quantitative evaluation

Discussion

In this paper, we presented two software tools that we believe are practical for MRI–TRUS fusion in clinical research concerned with prostate interventional applications. Both approaches rely on the availability of prostate gland segmentations, but they are quite different in methodology and capabilities. The BWH approach is implemented from easily accessible “off-the-shelf” components of 3D Slicer and ITK. The UBC approach has the benefit of utilizing a biomechanical model, which has the potential to lead to a more realistic displacement field and can handle partial surface information; however, its implementation required significantly more custom code development.

To the best of our knowledge, only two of the currently available commercial tools, Urostation (Koelis) and Artemis (Eigen), support elastic registration [4]. While both of these operate on the segmented prostate gland, neither uses a distance map representation or a biomechanical model for registration, and neither is capable of handling partial surface data. Numerous MRI–TRUS approaches have been presented in the academic literature, but most are not accompanied by reliable implementations for testing. We are aware of only one publication that has an open-source implementation [8]. A major innovation of our work is in streamlining the translation of MRI–TRUS fusion capability into clinical research workflows. Possibly the closest work to ours in terms of developing an open-source translational system for prostate interventions is by Shah et al. [22]. Our work is complementary in that while Shah et al. investigate system integration, we focus solely on software registration tools.

Once the image data are collected and the prostate gland is segmented, all the processing steps for both methods can be completed without user interaction in under 5 min. The mean error we observed is on the order of 3 mm, which is the slice thickness of our MRI data. We also note that we did not attempt to quantify the error in localization of the anatomical landmarks, as we did not have the resources to conduct a multi-reader study. Such a study would require clinical experts who are familiar with both the MRI and TRUS appearance of the prostate. This expertise is rare at our institution, since clinical reads of prostate MRI are done by the radiology department, while most of the TRUS-guided prostate procedures are done by either the radiation oncology or the urology department. Overall, we believe our tools are suitable for prospective evaluation in the context of clinical research prostate biopsy applications.

Our comparison did not reveal differences in TRE between the two approaches that are of practical value. Our evaluation was complicated by possible inconsistencies in the segmentation of the prostate gland and by difficulties in the placement of some of the anatomical landmarks, which resulted in UA/UB points being located outside the prostate gland. Accurate and consistent segmentation of the prostate is challenging in TRUS, especially at the apex and base of the gland [11, 23]. We note that differences between the prostate volumes estimated from 3D TRUS and MR have been reported previously in a number of studies. The average TRUS/MR volume ratio we observed was 0.87, which is similar to Smith et al. [11], who reported an average ratio of 0.9. While we cannot determine the sources of this variability with absolute certainty, several factors could have contributed to the difference. First, TRUS images have poor contrast at the apex and base, potentially leading to under-segmentation of these areas. Second, the actual physical volume of the gland could be affected to different degrees by compression of the prostate by the endorectal MR coil and by the ultrasound probe. Heijmink et al. [24] observed an average reduction of 17 % in prostate volume due to the use of an endorectal MR coil.

We adopted 3D Slicer for implementing the BWH approach presented here. 3D Slicer includes a variety of registration tools and integrates ITK, thus enabling reuse and sharing of existing technology. The Extensions framework of 3D Slicer allows new functionality to be contributed without changing the core of the application; thus, various MRI–TRUS-specific registration algorithms can be contributed by interested groups. The PLUS toolkit [13] and OpenIGTLink [25] are tightly integrated with 3D Slicer, so collection of tracking and intra-procedural imaging data can be implemented for a variety of devices. This is supportive of our longer-term goal of providing an open-source solution in 3D Slicer for MRI–TRUS-guided prostate interventions. We make the registration tools available under a BSD-style open-source license, permitting unrestricted academic and commercial use.

Our work has several limitations. Image acquisition was done during prostate brachytherapy volume studies using a motorized probe mover; more complex approaches based on electromagnetic or optical tracking would be required for freehand TRUS volumetric reconstruction. The use of data from prostate volume studies could have introduced a selection bias toward smaller prostate volumes, and further evaluation on a larger biopsy cohort is warranted. We did not assess the consistency of landmark identification and prostate gland segmentation, and we did not evaluate the sensitivity of the registration tools to variability in the segmentation or to the extent of missing data.

Conclusions

We presented open-source tools that can be used as components of a system for MRI–TRUS fusion-guided prostate interventions. The tools implement novel registration approaches and produce acceptable registration results; our aim is to reduce the barriers to the development and deployment of research solutions for image-guided prostate interventions and to facilitate comparison with similar tools.