
1 Introduction and Background

In image-guided neurosurgery (IGNS), the head of the patient is rigidly attached to the operating room table and fixed relative to the frame of reference of a tracking system. The preoperative images can be registered with the patient's anatomy using different methods, including the selection of predefined anatomical landmarks on the patient using a tracked pointer and skin surface matching. This registration procedure is performed before sterile draping of the patient, since the relevant features are typically not accessible after draping is completed. After this initial patient registration, however, there can be a significant loss of navigation accuracy due in part to draping, attachment of skin retractors, and the duration of surgery, as reported in [1]. In addition, registration accuracy is also affected by 'brain shift', which can be caused by a number of factors including CSF drainage, swelling, and resection. Brain shift at the cortical surface can range from almost no detectable shift up to 50 mm [2].

Several solutions have been proposed to improve patient-to-image registration during surgery, including intraoperative MRI, intraoperative ultrasound with automatic registration to preoperative data [3], and computer-vision-based techniques that register the surface of the operating field with preoperative data [4, 5].

In this paper, we propose a method that allows the surgeon to correct the patient registration manually without diverting his attention from the surgical field and without introducing additional equipment into the operating room (OR). To do so, we rely on a surgical microscope, the navigation system, and the tracked navigation pointer, all of which are already present in the OR for IGNS. The tracked surgical microscope is used to produce an augmented reality (AR) image. It provides the surgeon with a single image that contains corresponding features from the patient and the preoperative data, allowing him to visualize the discrepancy. The surgeon uses the tracked navigation pointer to trace corresponding features in both images. These traces can then be used to establish a correction matrix for the patient registration.

The main contribution of this paper lies in its innovative use of AR to allow the surgeon to specify corresponding features on the patient and in the preoperative data directly within his field of view, without having to rely on the help of a technician. To our knowledge, this is the first time AR has been used in this way to improve patient registration in IGNS.

The advantages of the proposed registration paradigm are threefold:

  1. The surgeon can correct the registration at any moment during the surgery without having to remove his attention from the surgical field and without the intervention of a technician.

  2. The method is robust because it is based on the surgeon's extensive knowledge of the anatomy and of the specificities of the patient on the operating room table.

  3. In the future, this method could be used to provide a starting point for automated methods that might further refine the patient-to-image registration.

In neurosurgery, the features most likely to be visible both in renderings of preoperative scans and in live video of the operating field are sulci and blood vessels. Although the method presented in this paper can apply to both types of features, we focus our attention on blood vessels.

2 Materials and Methods

In this section we first describe the surgical context in which our AR-based registration method can be used. Then we give an overview of the system that is used to produce AR images before describing the registration method itself.

2.1 Surgical Context

Figure 1 illustrates the surgical context in which our method is used. The patient is rigidly attached to the operating table by way of a Mayfield® clamp for example, and a reference tool acts as the origin of the IGNS system’s frame of reference. The patient’s preoperative imaging data is registered to this coordinate system, typically using a patient-to-image landmark registration that yields transform P.

Fig. 1.

(a) Surgeon using a tracked surgical pointer to trace features of the anatomy with the help of an AR view displayed within the microscope oculars or on the navigation system. The AR view is obtained by combining live video images from the tracked microscope and a 3D rendering of preoperative images registered to the reference of the tracking system. (b) Transformation model used to render preoperative images from the point of view of the microscope. P: Initial patient-to-IGNS-system registration transform; M: Transform between the IGNS system reference and the tracker tool attached to the microscope; E: Extrinsic calibration transform that maps the tracker tool to the optical center of the microscope; I: Intrinsic calibration transform that projects 3D points in microscope space to the image plane.

An AR view is obtained by merging live video images captured from the microscope (the real image) and a 3D volume rendering of preoperative patient data computed from the point of view of the microscope (the virtual image). Before rendering, the patient data is transformed to the space of the microscope's optics by concatenating the following transforms: (1) P, the patient registration, (2) M, the microscope transform obtained directly from the tracking system, and (3) E, the extrinsic calibration transform discussed below. Once the data is in microscope space, it can be rendered using a standard direct volume rendering technique and a perspective projection model I whose parameters are estimated during a preoperative calibration procedure, also described below.
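For concreteness, a minimal sketch of this transform chain in Python/NumPy is shown below. The 4×4 homogeneous matrices and the frame conventions (each transform mapping its source frame into its target frame, as labelled in Fig. 1b) are assumptions made for illustration; in particular, depending on how the tracking system reports M, its inverse may be required.

```python
import numpy as np

def to_microscope_space(points_patient, P, M, E):
    """Map 3D points from preoperative image space to microscope camera space.

    points_patient: (N, 3) array in preoperative image space.
    P: 4x4 patient-to-IGNS-reference registration.
    M: 4x4 IGNS-reference-to-microscope-tool transform (from the tracker).
    E: 4x4 tool-to-optical-center extrinsic calibration.
    """
    T = E @ M @ P  # full chain: image space -> optical center
    pts_h = np.c_[points_patient, np.ones(len(points_patient))]
    return (T @ pts_h.T).T[:, :3]

def project(points_cam, K):
    """Perspective projection model I with a 3x3 intrinsic matrix K."""
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide by depth
```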

2.2 Microscope Calibration

The preoperative microscope calibration procedure enables the estimation of the projection model I of the microscope's optics, as well as of the rigid transform E between the tracker tool attached to the microscope and the optical center of the microscope. The calibration procedure consists of capturing a series of microscope images of a checkerboard pattern printed on a flat board, to which we rigidly attached a tracker tool compatible with the IGNS system. For every image, we record the transform of the tool attached to the board. The parameters of the optical model of the microscope are estimated using OpenCV's implementation of the method presented by Zhang [6]. The extrinsic transform E is obtained by combining the microscope poses computed by Zhang's method with the tracker tool transforms recorded from the IGNS system. An optimization procedure borrowed from the field of robotics allows for the simultaneous computation of (1) the transform from the tracker tool to the board and (2) the extrinsic calibration matrix E. For more details about the optimization procedure, we refer the reader to [7].
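A minimal sketch of the intrinsic part of this procedure with OpenCV is given below. The checkerboard geometry (9 × 6 inner corners, 5 mm squares) is a hypothetical choice, not the pattern used in our procedure; only the use of Zhang's method via `cv2.calibrateCamera` reflects the text above.

```python
import cv2
import numpy as np

def calibrate_microscope(images, pattern=(9, 6), square_mm=5.0):
    """Estimate the microscope's intrinsic model I from checkerboard views.

    `images` is a list of grayscale frames; the pattern size and square
    size are hypothetical values for illustration.
    """
    board = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    board[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
    board *= square_mm

    obj_pts, img_pts = [], []
    for img in images:
        found, corners = cv2.findChessboardCorners(img, pattern)
        if found:
            corners = cv2.cornerSubPix(
                img, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_pts.append(board)
            img_pts.append(corners)

    # Zhang's method: intrinsics K, distortion, and one board pose per image;
    # the per-image poses (rvecs, tvecs) feed the estimation of E.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, images[0].shape[::-1], None, None)
    return K, dist, rvecs, tvecs
```

The per-image board poses, together with the recorded tool transforms, then enter the simultaneous optimization of [7], which resembles a classical hand-eye calibration.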

2.3 Merging Real and Virtual Images

The process of merging real and virtual images to produce the final AR view is illustrated in Fig. 2.

Fig. 2.

(a) Phantom used to illustrate the method. (b) Virtual image rendered from the point of view of the microscope. (c) Real image captured from the microscope. (d) Mask that is used to determine the per-pixel opacity of the real image. (e) Resulting AR view obtained by combining the masked real image and the virtual image. (f) Close-up on a vessel showing the alignment of real and virtual images (diameter of circle is ~12 mm).

The AR view in this example is produced with the 3D nylon-printed patient phantom shown in Fig. 2a. Parameters of the tracked surgical microscope, obtained by way of the calibration procedure outlined in the previous section, are used to produce a 3D rendering of preoperative patient data from the point of view of the microscope (Fig. 2b). After capturing an image from a USB digital camera (FireFly MV, Point Grey, Richmond, BC, Canada) attached to one of the optical ports of the microscope (Fig. 2c), we compute a mask (Fig. 2d) that is used to alpha-blend the real and virtual images into the final AR view (Fig. 2e). The mask is created by computing the pixel-wise maximum opacity between a blurred circular transparent region and a Sobel-filtered version of the real image. The center of the circular region is updated in real time to follow the projection of the tip of the tracked surgical pointer on the microscope image, allowing the surgeon to control the area of the real image that is transparent. The Sobel filter is used to extract edges in the real image to maintain occlusion cues and create the perception that the elements of the virtual image are located below the surface rather than floating above it, a problem often reported with augmented reality images [8].
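The mask construction can be sketched as follows with OpenCV and NumPy; the disc radius, blur kernel size, and edge normalization are hypothetical parameters, not values from our system.

```python
import cv2
import numpy as np

def opacity_mask(real_gray, center_px, radius=120, blur_ksize=31):
    """Per-pixel opacity of the real image for alpha blending.

    Pixel-wise maximum of (1) a mask that is opaque everywhere except a
    blurred transparent disc around the projected pointer tip, and (2) the
    Sobel edge magnitude of the real image, which keeps edges visible
    inside the disc as occlusion cues.
    """
    h, w = real_gray.shape
    disc = np.ones((h, w), np.float32)
    cv2.circle(disc, center_px, radius, 0.0, -1)           # transparent disc
    disc = cv2.GaussianBlur(disc, (blur_ksize, blur_ksize), 0)

    gx = cv2.Sobel(real_gray.astype(np.float32), cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(real_gray.astype(np.float32), cv2.CV_32F, 0, 1)
    edges = cv2.magnitude(gx, gy)
    edges = np.clip(edges / (edges.max() + 1e-6), 0.0, 1.0)

    return np.maximum(disc, edges)

# AR view: per-pixel blend of the real and virtual images.
# ar = mask[..., None] * real_rgb + (1.0 - mask[..., None]) * virtual_rgb
```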

2.4 Curve Tracing and Registration

Once we have produced an AR view of the surgical field in the OR, we can apply our method to correct for the misalignment between real and virtual images discussed above. The system allows the surgeon to use the tracked surgical pointer of the IGNS system to trace one or more corresponding piecewise linear curves in the real and virtual parts of the AR images as illustrated in Fig. 3.

Fig. 3.

(a) MR + CTA-based phantom with simulated craniotomies exposing the cortex and superficial blood vessels. (b) Using the tracked surgical pointer of the IGNS system, the surgeon can trace piecewise linear curves (orange curves) along the surface of the vessels. (c) The area around the surgical pointer becomes transparent, revealing corresponding misregistered vessels in the CTA, which can be traced in a similar way. (d) After both real and virtual images have been traced, the curves can be registered using the iterative closest point algorithm. Applying the resulting transform to the CTA aligns it with the real image (Color figure online).

Curves on the real image are traced by simply moving the surgical pointer along the surface of the tissues of interest and capturing the 3D position of the tip of the pointer. The surgeon triggers the acquisition of the curve's control points by pressing a USB foot pedal connected to the navigation system. The use of a foot pedal avoids bringing a new piece of equipment into the sterile field.

Capturing the corresponding curve in the virtual image is slightly more complicated. For example, misregistration of the patient might cause the features of interest to lie below the surface of the patient's tissues, where they cannot be reached with the tip of the pointer.

To determine the exact position of the point to capture, we use the concept of 3D picking. When the user presses the foot pedal, a ray is traced from the origin of the virtual camera through the pointer tip, and the first vessel along the line of sight is found. The vessel is identified by finding the first voxel along the ray with an intensity higher than a predefined threshold. The point picked on the vessel becomes the virtual coordinate of the next curve control point. This method allows tracing of elements of the virtual image without having to touch the tissues with the tip of the pointer.
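A minimal sketch of this picking step is given below, assuming the CTA volume is an axis-aligned intensity array whose index order matches the (x, y, z) world axes and whose origin is at zero; the step size and maximum depth are illustrative parameters.

```python
import numpy as np

def pick_vessel(cam_origin, tip, volume, spacing_mm, threshold,
                step_mm=0.5, max_depth_mm=300.0):
    """Return the first point along the viewing ray whose voxel intensity
    exceeds `threshold`, i.e. the picked point on the vessel surface."""
    direction = tip - cam_origin
    direction = direction / np.linalg.norm(direction)  # ray through the tip

    for t in np.arange(0.0, max_depth_mm, step_mm):
        p = cam_origin + t * direction
        idx = np.round(p / spacing_mm).astype(int)     # nearest voxel index
        if np.any(idx < 0) or np.any(idx >= volume.shape):
            continue  # outside the volume
        if volume[tuple(idx)] > threshold:
            return p  # first vessel voxel along the line of sight
    return None  # no vessel found along this ray
```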

Once corresponding curves have been traced on the real and virtual images, they are used to compute a correction of the initial patient registration using the iterative closest point (ICP) algorithm [9]. The computed registration transform is thus rigid. Furthermore, since the control points of one curve are matched not to the closest control points of the other curve but rather to the closest locations along it, the numbers of points in the two datasets do not need to match. In this work, we use the open-source implementation of the ICP algorithm provided in the Visualization Toolkit (VTK) software package.
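A minimal sketch of this step with VTK's Python bindings is shown below; the direction of the fit (virtual curve as source, real curve as target) and the iteration cap are assumptions for illustration. Representing each trace as a polyline lets VTK match points to the closest location along the other curve rather than to its control points.

```python
import vtk

def curve_to_polydata(points):
    """Build a polyline vtkPolyData from a list of (x, y, z) control points."""
    vtk_points = vtk.vtkPoints()
    polyline = vtk.vtkCellArray()
    polyline.InsertNextCell(len(points))
    for i, p in enumerate(points):
        vtk_points.InsertNextPoint(p)
        polyline.InsertCellPoint(i)
    poly = vtk.vtkPolyData()
    poly.SetPoints(vtk_points)
    poly.SetLines(polyline)  # line cells enable closest-point-on-curve matching
    return poly

def icp_correction(virtual_curve, real_curve):
    """Rigid ICP mapping the virtual (misregistered) curve onto the real one."""
    icp = vtk.vtkIterativeClosestPointTransform()
    icp.SetSource(curve_to_polydata(virtual_curve))
    icp.SetTarget(curve_to_polydata(real_curve))
    icp.GetLandmarkTransform().SetModeToRigidBody()  # rigid correction only
    icp.SetMaximumNumberOfIterations(100)
    icp.Update()
    return icp  # vtkTransform holding the registration correction
```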

3 Experiment

We validate our method with a simple user study in the laboratory. The goal of the study is to show that registration accuracy can be improved with our method in a controlled lab environment. We test our method using a 3D-printed phantom based on MRI and CT DSA imaging of a patient operated on for the ablation of an AVM at the Montreal Neurological Hospital (Fig. 2a). The phantom represents the whole head of the patient and has simulated craniotomies that expose the cortex and superficial blood vessels. It was designed with 8 conical recesses around the simulated craniotomies that are used as landmarks. The positions of the apexes of the recesses are known, which allows for a very accurate landmark registration of the phantom with preoperative data. For more details about the fabrication of the phantom, we refer the reader to [10].

Prior to the experiment, we registered the phantom to its CT data by capturing the world-space positions of the phantom's built-in landmarks with the tracked pointer. The registration transform was then computed using Horn's method [11]. We obtained a fiducial registration error (FRE) of 1.12 mm. The tracked microscope was also calibrated according to the method described above; a cross-validation yielded a reprojection error of 0.37 mm for the camera calibration. After completing these two initial steps, we are able to produce an accurate AR view of the simulated craniotomy of the phantom.
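For reference, a minimal sketch of this rigid landmark registration and of the FRE computation is given below. It uses the SVD-based Kabsch/Umeyama solution rather than Horn's quaternion formulation [11]; both solve the same least-squares problem and yield the same rigid transform.

```python
import numpy as np

def rigid_landmark_fit(src, dst):
    """Least-squares rigid transform mapping src landmarks onto dst (N x 3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # rotation, reflection-safe
    t = dst_c - R @ src_c
    return R, t

def fre(src, dst, R, t):
    """Fiducial registration error: RMS residual distance over landmarks."""
    residuals = dst - (src @ R.T + t)
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```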

In the user study, every subject attempts to correct a simulated patient misregistration 5 times using our method. Each subject is initially trained and asked to explore the AR view to find vessels that are visible in both the real image and the rendered image. The subject is then asked to trace the surface of those vessels in the real image.

For each of the trials, we apply an artificial offset transform to the patient's preoperative data, simulating the loss of navigation accuracy that can result from the initial phases of the surgery. The subject then needs to correct for this offset by tracing blood vessels on the virtual part of the image. The offset transform is composed of a translation and a rotation. The translation is obtained by choosing a random direction in the plane perpendicular to the optical axis of the microscope. The rotation is defined around the same axis and the sign of the angle is chosen randomly. The amplitudes of the rotation and translation for each of the trials are listed in Table 1. One of the hypotheses we pose in this study is that our method may improve the registration only for shifts larger than a certain threshold. For this reason, the amplitude of the artificial shift decreases with every trial. The maximum amplitudes are motivated by practical considerations. In the OR, shifts larger than 15 mm can happen and have been reported in the literature. However, random shifts of larger amplitude can cause the features of the virtual image to fall outside the field of view of the microscope. Should this happen in the OR, the surgeon could reposition the microscope and would still be able to use our method; however, for the purpose of our analysis it was not possible to move the microscope during this experiment.
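The construction of such an offset can be sketched as follows; the per-trial amplitudes are inputs taken from Table 1, and the Rodrigues-formula implementation is an illustrative choice.

```python
import numpy as np

def random_offset(optical_axis, translation_mm, rotation_deg, rng):
    """4x4 offset: random in-plane translation plus a rotation of random
    sign about the microscope's optical axis (amplitudes from Table 1)."""
    z = optical_axis / np.linalg.norm(optical_axis)

    # Random unit vector in the plane perpendicular to the optical axis.
    v = rng.standard_normal(3)
    v -= v.dot(z) * z
    v /= np.linalg.norm(v)

    # Rotation about the optical axis with randomly chosen sign (Rodrigues).
    angle = np.deg2rad(rotation_deg) * rng.choice([-1.0, 1.0])
    K = np.array([[0, -z[2], z[1]],
                  [z[2], 0, -z[0]],
                  [-z[1], z[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = translation_mm * v
    return T

# e.g., with hypothetical trial amplitudes of 15 mm and 5 degrees:
# T = random_offset(axis, 15.0, 5.0, np.random.default_rng())
```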

Table 1. Amplitudes of the translation and rotation offsets for each of the trials of the study.

4 Results

We ran the user study described above with 5 subjects, all of whom are medical imaging experts. We used the set of 8 landmark points embedded in the phantom to measure the accuracy of the registration correction obtained with our method. We compute d_off, the distance between the original landmark positions and their positions after imposing the artificial offset (red crosses in Fig. 4a), and d_cor, the distance between the original landmark positions and their positions after applying the correction computed using the proposed method (green crosses in Fig. 4a). For each of the trials, we compute the root mean square (RMS) of d_off and d_cor over the 8 points. Figure 4b shows a plot of the resulting RMS(d_cor) as a function of RMS(d_off), where each point represents one trial of one of the subjects. This plot indicates how the registration accuracy of our method varies with the original offset. We also computed the mean RMS corrected distance over all trials and all subjects and obtained 4.06 ± 0.91 mm.
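Both accuracy metrics reduce to an RMS distance over corresponding landmark sets, sketched below; the array names are hypothetical.

```python
import numpy as np

def rms_distance(a, b):
    """RMS Euclidean distance between corresponding landmarks (N x 3)."""
    return np.sqrt(((a - b) ** 2).sum(axis=1).mean())

# d_off: original landmarks vs. landmarks under the artificial offset.
# d_cor: original landmarks vs. landmarks after the traced-curve correction.
# rms_d_off = rms_distance(landmarks, offset_landmarks)
# rms_d_cor = rms_distance(landmarks, corrected_landmarks)
```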

Fig. 4.

(a) Illustration of the landmarks used to compute the RMS offset distance and the RMS corrected distance. Blue crosses show the positions of the original landmarks, red crosses show the positions of the landmarks after applying the artificial offset, and green crosses show the positions of the landmarks after applying our method. (b) Corrected RMS distance as a function of the RMS offset distance for each trial (red squares) and corresponding linear fit (black dashed line) (Color figure online).

5 Discussion

Results of this preliminary study show that the initial offset distance has little influence on the accuracy of the resulting registration after the proposed manual correction. This suggests that our technique could be used to correct for arbitrarily large misalignments between the patient and the preoperative data, such as when the navigation setup is accidentally displaced during the operation.

In [1], Stieglitz et al. reviewed the literature on the accuracy of patient registration. They report errors ranging between 2.7 and 6.2 mm, with a median of 4.0 mm. The mean registration error obtained with our method (4.06 ± 0.91 mm) is thus comparable to the outcome of standard initial registration methods.

In this study, each subject was asked to perform the task only 5 times. A greater number of trials per subject would be desirable, but in practice, since only 1 phantom was available, subjects tended to produce the same trace for every trial; 5 trials per subject were therefore sufficient to account for the variability of the traces that can be obtained. In a future study, we will perform each trial with a different phantom.

One of the findings of our study is that the curves traced by the subjects are very noisy. This might be due to the relatively primitive tracing tools available so far in our system. If the tools are refined, for example by allowing Bézier curves or by using computer vision methods to automatically snap the curves to image features, it may be possible to significantly improve the accuracy of the registration.

6 Conclusion and Future Work

We have presented a simple system that allows surgeons to correct the loss of navigation accuracy during an operation by leveraging their knowledge of the anatomy and taking advantage of a set of tools already present in the OR (tracked surgical pointer, surgical microscope, and navigation system). Through a user study in the lab, we have shown that our technique can produce registration accuracies comparable to state-of-the-art methods used for the initial registration of the patient before surgical draping. The next step is to bring our system to the OR, where its accuracy could be compared to that of other registration correction methods.

One of the main advantages of our method is its robustness, which stems from its reliance on the surgeon's knowledge of the anatomy and from its inherently manual nature. In the future, we would like to study how this robust method can be used to constrain other, more automatic methods such as ultrasound-based automatic registration. It would be particularly interesting to use the curves traced with our method to regularize the computation of a non-linear registration between preoperative MR scans and intraoperative ultrasound, and thereby correct for brain shift.