Introduction

Augmented reality (AR) visualization is increasingly studied and used in medical and clinical applications. Here we focus on its use within the domain of image-guided surgery (IGS), and specifically image-guided neurosurgery (IGNS) for vascular malformations. In neurovascular surgery, the surgeon uses a neurosurgical microscope, which provides a magnified view of the region of interest; however, the microscope view provides no information below the visible surface. Therefore, the burden falls on the surgeon to map pre-operative images from the neuronavigation system onto the patient on the operating room (OR) table in order to understand the topology and locations of vessels below the visible surface of the brain. This type of spatial mapping is not trivial: it is time consuming, may disrupt the surgical workflow, and is prone to error owing to variability between patient anatomies; it may be further complicated by vascular anomalies. Furthermore, it is made more difficult by the frequent repositioning of the microscope during surgery [1]. Augmented reality has been proposed as a solution to some of the shortcomings of traditional IGNS systems. In AR neurosurgery, virtual objects (e.g. pre-operative images of vessels) are merged with the real world (e.g. the surgical field of view). The augmented reality view allows surgeons to perceive the anatomy under the visible surface. In neurovascular surgery in particular, AR may allow the surgeon to better understand the topology and angioarchitecture of the vessels, their location, and the type of vessels (arteries or veins) lying below the cortical surface of the patient. This type of visualization can aid in clinical decision-making, may reduce surgical time, and may increase surgical precision.

In this work, we describe an augmented reality IGS system that we developed for neurovascular surgery. The main contribution of this work is its focus on exploring and testing AR visualization techniques that are not only understandable in terms of depth perception but also useful to neurosurgeons operating within the OR. Although we describe each of the AR system components (which are needed to create an AR view), we focus on how to combine the real and the virtual to create a useful augmented reality scene. We present the initial results of using our system in the OR at the Montreal Neurological Institute & Hospital (MNI/H) and, based on our experiences, give an analysis of the feasibility of AR technology for neurovascular surgery. Furthermore, we describe possible avenues of future work that should ensure the successful translation of such systems from laboratory to clinic.

Background

In this section, we describe the neurovascular disorders to which we applied our augmented reality system, and review related work in the field.

Neurovascular surgery

In neurovascular surgery, surgeons treat different vascular diseases and vessel anomalies of the brain and spinal cord. In this paper, we focus on three types of neurovascular disorders: aneurysms, arteriovenous malformations (AVMs), and arteriovenous fistulae (AVFs). Neuronavigation in neurovascular surgery is used to localize vessels of interest and to use the pre-operative images that are mapped to the patient to plan the craniotomy, resection corridor, and treatment.

Aneurysms

An aneurysm is a balloon-like bulge due to weakness in the wall of a blood vessel. As an aneurysm grows in size, risk of rupture, stroke, and death increases. Treatment in the form of embolization or surgery (blocking or closing off the vessels) is recommended to prevent this rupture. Surgical treatment of an aneurysm involves performing a craniotomy and clipping the aneurysm base or neck to exclude it from the circulation.

Arteriovenous malformations (AVMs)

AVMs are abnormal collections of blood vessels in the brain. The central part of the AVM, the nidus, is made up of abnormal vessels that are hybrids between true arteries and veins. AVMs are fed by one or more feeding arteries (feeders) and are drained by one or more major draining veins (drainers). These feeding and draining vessels often have weakened walls and therefore may leak or rupture. They may also be unusually winding or large. In many cases, AVM treatment is recommended in order to protect against hemorrhage, which may lead to stroke, permanent disability or death. Clinical research indicates that the risk of AVM hemorrhage is between 2 and 4 % every year [2]. Treatments for AVMs include: (1) radiation, (2) embolization and/or (3) surgery.

Neurosurgery for AVMs involves identifying the margins of the malformation and tying off or clipping the feeder vessels, obliterating the draining veins and removing or obliterating the nidus, in this order. Therefore, a detailed understanding of the arterial inflow from feeders and venous drainage from drainers is important for clinical evaluation and management of AVMs [3]. The use of techniques that aid in the characterization of the pattern and distribution of feeding arteries by quantification of the relative blood flow is necessary as it is not always easy to identify whether a vessel is a feeding artery or an arterialized draining vein [4].

Arteriovenous fistulae (AVFs)

AVFs are abnormal vessel connections that occur when one or more arteries are directly connected to a vein or sinus, bypassing the capillary network. AVFs are similar in pathology to AVMs; however, they are found not within the brain or spinal cord tissue itself but within the dura mater or arachnoid space. The abnormal connection between vessels is problematic as there is a transfer of high-pressure arterial blood into the veins or venous sinuses, causing an increase in venous pressure and swelling in the brain or spinal cord.

Similar to AVMs, AVFs can be treated by embolization and/or surgery. Neurosurgery for AVFs involves physically disconnecting the fistula within the dura and obliterating the draining vein using a cauterizer.

AR in neurosurgery

Neurosurgery was one of the first applications of image-guided surgery systems and is currently the most common application of AR surgical systems [5]. AR has been used for different types of neurosurgery, including transsphenoidal surgery (where, for example, pituitary tumours are removed through the nose and the sphenoid bone) [6], microscope-assisted IGNS [7–9], endoscopic neurosurgery [10, 11], otolaryngology and ENT (ear, nose, and throat) surgery [1, 12], and craniotomy planning [13]. One of the first AR systems was proposed by Gleason et al. [13], in which an augmented reality view of the patient was created by combining three-dimensional segmented virtual objects from pre-operative patient scans (e.g. tumours, ventricles, and the brain surface) with live video images of the patient. The system was proposed to guide surgeons to locate particular areas of interest and to help a surgeon plan a resection corridor to a lesion. To avoid looking away from the surgical scene for image guidance, Edwards et al. [1, 14] developed a microscope-assisted guided intervention (MAGI) neuronavigation system that allowed for stereo projection of virtual images into a neurosurgical microscope for ENT and neurosurgery. In a related effort, Birkfellner et al. [15, 16] developed the Varioscope AR, a custom-built head-mounted operating microscope in which virtual objects are projected onto the focal plane of the main lens via two miniature VGA displays. In the work of Paul et al., virtual models including tumours and sulci were merged with the image from the microscope on the neuronavigation system or were projected into the ocular of the microscope. Numerous other AR systems have been described in the literature for use in neurosurgery; for a detailed review of augmented reality visualization in image-guided surgery, the reader is referred to [5].

Our work has focused on the visualization of the virtual and real worlds in neurovascular surgery and on the evaluation of these visualizations in real clinical cases. We focus on this because the spatial understanding and depth perception of 3D virtual objects within image-guided surgery scenes need further investigation [10]. The only other work in AR IGNS that we are aware of that considered visualization and depth perception is the MAGI (microscope-assisted guided intervention) system developed by Edwards et al. [1, 14]. In their system, stereoscopic projection of virtual images into a neurosurgical microscope was used to create a visualization in which the virtual object appeared at the correct depth within the real scene. They determined that subjects predicted the correct depth of objects with an accuracy of about 1–2 mm, with users tending to see objects as deeper than they actually were. In comparison, our work examined monoscopic depth cues (e.g. fog and edges) that could give relative depth perception of the cerebral vasculature, and we developed visualization techniques to combine virtual vessels with images of the real scene in such a way that the vessels appear to lie below the visible surface of the patient. We are also one of the few groups to have described the use of an augmented reality system in real clinical cases for neurosurgery.

To the best of our knowledge, only two other publications have looked at using AR for neurovascular surgery. Cabrilo et al. presented two studies in neurovascular surgery that used the Multivision augmented reality function of the Zeiss OPMI Pentero neurosurgical microscope: one examined the use of AR for AVM surgery [8] and the other AR for aneurysm surgery [7]. In the aneurysm study, 28 patients with 39 aneurysms underwent AR-guided surgery. The AR view was created by injecting segmented virtual models of the patients' vessels, aneurysms, aneurysm necks, skulls, and heads into the eyepiece of the neurosurgical microscope. The AR visualization was thought to enhance the minimal invasiveness of the procedure by enabling a more tailored surgical approach and optimal clipping. Furthermore, based on the surgeons' feedback, the authors found that the AR view was useful for positioning the surgical clip in 33 cases (92.3 %) and had a major impact in five surgeries (16.7 %).

In the AVM study by Cabrilo et al. [8], AR was used for five patients who were undergoing surgery for AVMs. Segmented models of the patient's skull, AVM nidi, and feeding and draining vessels were injected into the oculars of the neurosurgical microscope. Based on surgeon feedback, the authors concluded that although AR was useful for performing tailored craniotomies, guiding dissection, and localizing draining veins, it did not provide useful information about the feeding arteries due to the complexity of AVM angioarchitecture. The authors suggested that adding hemodynamic information to the AR view could make AR more useful in this type of surgery.

System description

Our neuronavigation system, AR IBIS (Augmented Reality Intra-operative Brain Imaging System), is made up of three major components: a workstation, an optical tracking system, and a camera (Fig. 1). The Linux x86 workstation runs Ubuntu 12.04 (64-bit) with a quad-core Intel Core i7-3820 @ 3.6 GHz and 32 GB RAM. The graphics card is a GeForce GTX 670, and the video capture card is a Conexant cx23800. The custom-built neuronavigation and visualization software, IBIS, is written in C++ and uses the Visualization Toolkit (VTK), the Qt user interface framework, and the Insight Segmentation and Registration Toolkit (ITK). IBIS has previously been described as an intra-operative ultrasound neuronavigation system used in brain tumour resections to account for brain shift [17]. Tracking is done using a Polaris N4 infrared optical system (Northern Digital, Waterloo, Canada). Video capture of the surgical scene is done using a Sony HDR XR150 outfitted with passive reflective spheres (Traxtal Technologies Inc., Toronto, Canada). The camera transmits live images to the workstation over an S-video cable at 30 frames/s.

Fig. 1

Left the workstation and infrared tracker in the operating room theatre. Centre the video camera is used to capture images of the patient for AR visualization. Right a close-up of the camera

In order to set up an IGNS system capable of AR visualization, tracking and calibration of a video capture device are needed in addition to tracking of surgical tools and patient-to-image registration.

Tracking

Tracking in IGS localizes objects by determining their position and orientation in space. In our system, the Polaris camera (Northern Digital, Waterloo, Canada) uses stereo triangulation to locate passive reflective spheres on a set of tracked tools, including the video camera. Recent work has shown that tracked tools used in IGNS interventions should be calibrated as close to the reference tool and the front of the camera's digitizing volume as possible to minimize error contributions to both tool calibration and tracking accuracy [18]. This is done whenever AR IBIS is brought into the OR.

Creating the AR view

In order to create an AR view, two things are needed: (1) tracking of the camera's position in the navigation system space (as mentioned above) and (2) calibration of the optical device or camera.

Camera calibration

Calibration is done in two parts: extrinsic calibration and intrinsic calibration. The extrinsic camera calibration determines the transform from the passive reflective spheres attached to the camera (3D world coordinates) to the optical centre of the camera (3D camera coordinates). The intrinsic calibration of the camera estimates the projection matrix of the camera, giving the mapping between 3D camera coordinates and 2D image coordinates. Intrinsic calibration is done using the OpenCV implementation of Zhang's method [19], in which a planar calibration grid is positioned at different orientations and the grid corners are detected.
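As an illustrative sketch only (not the IBIS implementation itself), the intrinsic step could be coded with OpenCV as follows; the grid size, square size, number of poses, and file names are assumptions for the example:

```cpp
// Illustrative intrinsic calibration with Zhang's method via OpenCV.
// Assumes ~20 images of a planar checkerboard at different orientations;
// board size, square size, and file names are placeholders.
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    const cv::Size boardSize(9, 6);  // inner corners of the calibration grid
    const float squareSize = 25.0f;  // size of one grid square (mm)

    // The planar grid defines the Z = 0 plane of its own coordinate system.
    std::vector<cv::Point3f> grid;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            grid.emplace_back(x * squareSize, y * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints;  // 3D grid points per pose
    std::vector<std::vector<cv::Point2f>> imagePoints;   // detected 2D corners
    cv::Size imageSize;

    for (int i = 0; i < 20; ++i) {
        cv::Mat img = cv::imread("grid" + std::to_string(i) + ".png",
                                 cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();

        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(img, boardSize, corners)) continue;
        cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS +
                                          cv::TermCriteria::COUNT, 30, 0.01));
        objectPoints.push_back(grid);
        imagePoints.push_back(corners);
    }
    if (objectPoints.empty()) return 1;

    // Zhang's method: estimates the 3x3 camera (projection) matrix and
    // the lens distortion coefficients from the planar grid poses.
    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);
    std::cout << "RMS reprojection error (pixels): " << rms << "\n"
              << cameraMatrix << std::endl;
    return 0;
}
```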

For extrinsic calibration, we obtain homologous 3D world coordinates (based on tracking) and 2D image coordinates (based on a computer vision technique). This is done by attaching a visual marker to a calibrated pointer and manually moving the pointer with the marker within the view of both the optical tracking system and the camera. The marker is placed such that its centre, which is determined in 2D image space, coincides with the tip of the pointer, which is tracked in the frame of reference of the navigation system. Therefore, for each pose of the pointer, we pair the 3D world coordinates with the corresponding 2D image space coordinates. To ensure coverage of the area of interest, approximately 500 points are captured in a volume 15–40 cm from the camera (the distance within which a patient's head would be from the camera). Given this set of homologous points, Levenberg–Marquardt optimization (implemented using OpenCV's solvePnP function) is used to determine the pose of the camera that minimizes the sum of squared distances (SSD) between the measured 2D image coordinates and the projections of the corresponding 3D world coordinates into image space. Experimental results obtained using the tracked video camera show a reprojection error of 2.00 mm at 16.61 cm from the camera and 4.45 mm at 40.36 cm.
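A corresponding sketch of the extrinsic step is shown below, again illustrative rather than the actual IBIS code: the homologous point pairs are assumed to be already collected, and the camera matrix and distortion coefficients come from the intrinsic calibration above:

```cpp
// Illustrative extrinsic calibration: recover the tracker-to-camera pose
// from homologous 3D (tracker space) / 2D (image space) point pairs.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    std::vector<cv::Point3f> worldPts;  // pointer-tip positions, tracker space
    std::vector<cv::Point2f> imagePts;  // matching marker centres, image space
    // ... fill worldPts / imagePts from the acquisition described above ...

    cv::Mat cameraMatrix, distCoeffs;   // from the intrinsic calibration
    if (worldPts.size() < 6 || cameraMatrix.empty()) return 1;

    // SOLVEPNP_ITERATIVE refines the pose with Levenberg-Marquardt,
    // minimizing the sum of squared reprojection distances.
    cv::Mat rvec, tvec;  // world-to-camera rotation (Rodrigues) and translation
    cv::solvePnP(worldPts, imagePts, cameraMatrix, distCoeffs,
                 rvec, tvec, false, cv::SOLVEPNP_ITERATIVE);

    // Reprojection error: project the 3D points with the recovered pose
    // and compare against the measured 2D marker centres.
    std::vector<cv::Point2f> projected;
    cv::projectPoints(worldPts, rvec, tvec, cameraMatrix, distCoeffs, projected);
    double sse = 0.0;
    for (std::size_t i = 0; i < projected.size(); ++i) {
        const double dx = imagePts[i].x - projected[i].x;
        const double dy = imagePts[i].y - projected[i].y;
        sse += dx * dx + dy * dy;
    }
    std::cout << "RMS reprojection error (pixels): "
              << std::sqrt(sse / projected.size()) << std::endl;
    return 0;
}
```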

Patient-to-image registration

During surgery, neuronavigation systems provide guidance through an environment where both the patient and surgical tools are tracked, as described above. Surgical navigation systems relate the real-world coordinates of a patient to those of the pre-operative images using a rigid body transformation, i.e. a patient-to-image registration. This allows a surgeon to point to a specific location on the anatomy of the patient and see the corresponding anatomy in the pre-operative images on the navigation system. The current landmark registration protocol that we use at the MNI/H in frameless stereotactic IGNS procedures involves choosing nine corresponding landmark pairs on a patient's pre-operative images and on their anatomy in the operating room. The landmarks are: (i) bridge of the nose (BN), (ii) right medial canthus (RMC), (iii) right lateral canthus (RLC), (iv) right tragus valley (RTV), (v) right tragus (RT), (vi) left medial canthus (LMC), (vii) left lateral canthus (LLC), (viii) left tragus valley (LTV), (ix) left tragus (LT). Due to different factors related to image acquisition and to the technician or surgeon choosing the landmarks, the accuracy of this registration technique has been reported to vary between 1 and 7 mm [20–22], depending on the neuronavigation system used.
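For illustration, the rigid landmark registration and its fiducial registration error (FRE) could be computed with VTK (on which IBIS is built) roughly as follows; the point containers are placeholders for the nine landmark pairs:

```cpp
// Illustrative rigid landmark (patient-to-image) registration with VTK,
// plus the RMS fiducial registration error (FRE). The two point sets are
// placeholders for the nine homologous anatomical landmarks.
#include <vtkLandmarkTransform.h>
#include <vtkPoints.h>
#include <vtkSmartPointer.h>
#include <cmath>
#include <iostream>

int main() {
    auto imagePts   = vtkSmartPointer<vtkPoints>::New();  // landmarks, image space
    auto patientPts = vtkSmartPointer<vtkPoints>::New();  // same landmarks, tracker space
    // ... InsertNextPoint(x, y, z) for each of the 9 pairs (BN, RMC, ...) ...

    auto reg = vtkSmartPointer<vtkLandmarkTransform>::New();
    reg->SetSourceLandmarks(imagePts);
    reg->SetTargetLandmarks(patientPts);
    reg->SetModeToRigidBody();  // rotation + translation only
    reg->Update();

    // FRE: RMS residual between transformed image landmarks and patient landmarks.
    double sse = 0.0;
    const vtkIdType n = imagePts->GetNumberOfPoints();
    for (vtkIdType i = 0; i < n; ++i) {
        double p[3], q[3], t[3];
        imagePts->GetPoint(i, p);
        patientPts->GetPoint(i, q);
        reg->TransformPoint(p, t);
        for (int k = 0; k < 3; ++k) sse += (t[k] - q[k]) * (t[k] - q[k]);
    }
    if (n > 0)
        std::cout << "FRE (RMS, mm): " << std::sqrt(sse / n) << std::endl;
    return 0;
}
```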

AR IBIS for neurovascular surgery

In the following section, the system is described in terms of augmented reality visualization using the DVV (Data, Visualization Processing, View) taxonomy [23] for describing mixed reality image-guided surgery systems.

Data

Typically, for neurovascular IGNS, computed tomography angiography (CTA) images are used. In CTA, a radiopaque contrast dye (bolus) is injected into a large blood vessel and computer-processed X-rays are used to produce images of arteries and veins in the body. For navigation, these images are then processed in order to show only the cerebral vasculature of the patient (not bone or tissue). This is done by acquiring a CT volume prior to bolus injection and using the image as a mask to subtract everything but the vessels from the images, resulting in digitally subtracted CTA volumes (DS-CTA) (see Fig. 2). Depending on the type of surgery, either a combined venous-arterial phase or two or more separate phases will be processed and visualized intra-operatively for neuronavigation.
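Conceptually, the subtraction step reduces to a voxel-wise difference between the contrast-enhanced and pre-contrast volumes, assuming the two have already been aligned. A sketch with ITK (which IBIS links against) might look like this; file names are placeholders:

```cpp
// Conceptual digital subtraction with ITK: the pre-contrast CT volume is
// subtracted voxel-wise from the contrast-enhanced CTA, leaving (ideally)
// only the enhancing vessels. Assumes the two volumes are already aligned.
#include <itkImage.h>
#include <itkImageFileReader.h>
#include <itkImageFileWriter.h>
#include <itkSubtractImageFilter.h>

int main() {
    using ImageType = itk::Image<short, 3>;

    auto ctaReader = itk::ImageFileReader<ImageType>::New();
    ctaReader->SetFileName("cta_bolus.nii");        // contrast-enhanced CTA
    auto maskReader = itk::ImageFileReader<ImageType>::New();
    maskReader->SetFileName("ct_precontrast.nii");  // pre-bolus mask volume

    auto subtract =
        itk::SubtractImageFilter<ImageType, ImageType, ImageType>::New();
    subtract->SetInput1(ctaReader->GetOutput());    // CTA ...
    subtract->SetInput2(maskReader->GetOutput());   // ... minus mask CT

    auto writer = itk::ImageFileWriter<ImageType>::New();
    writer->SetFileName("ds_cta.nii");              // digitally subtracted CTA
    writer->SetInput(subtract->GetOutput());
    writer->Update();  // executes the whole pipeline
    return 0;
}
```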

Fig. 2

Coronal (a), axial (b) and sagittal (c) slices of a DS-CTA of an 18-year-old male with a left frontal AVM (indicated with the cross hair). The DS-CTA vessels, volume-rendered from a slightly rotated left lateral view, are shown in d

Visualization processing

Visualization processing occurs on two different levels in our system: processing of the CTA data (virtual object) and processing of the live camera image (real world). To combine the virtual CTA volumes with the live camera image, we extract edges from the live camera images and use transparency to show the virtual volume-rendered vessels below the brain surface of the patient (Fig. 3). In the following section, we describe the processing involved in creating the augmented reality scene.

Fig. 3

To create the AR view, the virtual volume-rendered vessels are combined with a live camera image. Edges are extracted from the camera image in the area of the surgical target and transparency is used in this area to show the vessels of interest below the surface. Here the AR view is shown on a 3D nylon-printed phantom [24]

Live camera image processing

Studies have shown that using alpha blending between virtual and real-world objects causes problems for viewers in terms of understanding the scene spatially, and in particular in understanding the depth of the virtual object in the scene (e.g. [25, 26]). Salient features, such as edges and contours, extracted from the camera image have been used to provide better spatial and depth understanding in AR visualizations (e.g. [27]). In our work, to show the virtual vessels, the camera image opacity is modulated such that in the area of the surgical target (i.e. the particular vessels of interest) more of the virtual vessels is shown. In areas farther from the surgical target, more of the live camera image of the surgical field is shown.

As well as modulating opacity, edges are extracted from the live camera image. To do this, the live camera image is first blurred using a Gaussian smoothing function, and then edges are extracted using a Sobel filter. This two-step method is implemented using a two-pass GLSL (OpenGL Shading Language) fragment shader.

In the first step, a discrete \(5\times 5\) Gaussian filter with \(\sigma =1.0\) pixel is used to reduce detail and noise from the original video images, which contain many specular highlights caused by the bright OR lights:

$$\begin{aligned} G(x,y) = \frac{1}{273} \begin{bmatrix} 1 &{} 4 &{} 7 &{} 4 &{} 1 \\ 4 &{} 16 &{} 26 &{} 16 &{} 4 \\ 7 &{} 26 &{} 41 &{} 26 &{} 7 \\ 4 &{} 16 &{} 26 &{} 16 &{} 4 \\ 1 &{} 4 &{} 7 &{} 4 &{} 1 \end{bmatrix} \times I_i \end{aligned}$$

Next, the edges are extracted from the image using a Sobel filter, which computes an approximation of the gradient of the image intensity function:

$$\begin{aligned} G_x = \begin{bmatrix} 1 &{} 0 &{} -1 \\ 2 &{} 0 &{} -2 \\ 1 &{} 0 &{} -1 \end{bmatrix} \times I_i, \quad G_y = \begin{bmatrix} 1 &{} 2 &{} 1 \\ 0 &{} 0 &{} 0 \\ -1 &{} -2 &{} -1 \end{bmatrix} \times I_i \end{aligned}$$
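The same two filtering passes can be reproduced on the CPU with OpenCV; the following is an equivalent sketch of the steps described above, not the shader code used in AR IBIS (the input file name is a placeholder):

```cpp
// CPU equivalent of the two-pass shader: 5x5 Gaussian blur (sigma = 1.0)
// followed by Sobel gradients, combined into an edge-magnitude image.
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    cv::Mat frame = cv::imread("camera_frame.png", cv::IMREAD_GRAYSCALE);
    if (frame.empty()) return 1;

    // Pass 1: Gaussian smoothing to suppress noise and the specular
    // highlights caused by the bright OR lights.
    cv::Mat blurred;
    cv::GaussianBlur(frame, blurred, cv::Size(5, 5), 1.0);

    // Pass 2: Sobel approximation of the image intensity gradient.
    cv::Mat gx, gy;
    cv::Sobel(blurred, gx, CV_32F, 1, 0, 3);  // G_x
    cv::Sobel(blurred, gy, CV_32F, 0, 1, 3);  // G_y

    // Edge strength = gradient magnitude, rescaled for display.
    cv::Mat mag, edges;
    cv::magnitude(gx, gy, mag);
    cv::normalize(mag, mag, 0, 255, cv::NORM_MINMAX);
    mag.convertTo(edges, CV_8U);
    cv::imwrite("edges.png", edges);
    return 0;
}
```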

Given a centre point \(\vec{c}\), under which the surgical target should be seen, a factor \(f\) is used to fade out the transparency \(\alpha\) from radius \(r_1\) to radius \(r_2\) around \(\vec{c}\), such that

$$\begin{aligned} f = {\left\{ \begin{array}{l@{\quad }l} 0.0, &{} d < r_1 \\ \exp \left( -\dfrac{\left( \frac{d-r_1}{r_2-r_1} \right)^{2}}{0.25} \right), &{} r_1 \le d \le r_2 \\ 1.0, &{} d > r_2 \end{array}\right. } \end{aligned}$$

where \(d=\Vert \vec{p}-\vec{c}\Vert \) and \(\vec{p}\) is the image space coordinate of the pixel considered around point \(\vec{c}\) (Fig. 3).
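For concreteness, a minimal C++ sketch of this fade function is given below; it is a CPU-side illustration of the piecewise definition above (in AR IBIS the computation runs in the fragment shader), assuming the middle case applies for \(r_1 \le d \le r_2\):

```cpp
// Minimal CPU-side sketch of the radial fade factor f defined above; in the
// actual system this runs in a GLSL fragment shader. Illustration only.
#include <cmath>

// d  : distance of the current pixel p from the centre c, d = ||p - c||
// r1 : inner radius (camera image fully transparent, vessels visible)
// r2 : outer radius (camera image fully opaque)
double fadeFactor(double d, double r1, double r2) {
    if (d < r1) return 0.0;                 // inside the window: transparent
    if (d > r2) return 1.0;                 // outside: opaque camera image
    const double u = (d - r1) / (r2 - r1);  // normalized distance in [0, 1]
    return std::exp(-(u * u) / 0.25);       // Gaussian falloff between r1 and r2
}
```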

Virtual vessel volume processing

In our system, volume rendering is used to generate virtual images of the CTA vessels. We render either a combined arterial and venous phase DS-CTA or two or more separate vascular volumes. A transfer function, which defines the RGBA value for each voxel, is used to colour code the volumes. Furthermore, based on our previous results on how to best perceptually display volumetric angiography data [28], a number of visualization processing techniques using depth-enhancing cues are available to display the volumetric vessel data. These include edges, aerial perspective (or fog), and colour coding (Fig. 4). Depicting edges of the vessels may aid in relative depth perception as it allows for a better understanding of local occlusions. To depict edges, a transfer function that maps low values of the DS-CTA (vessel edge voxels) to black is used. Aerial perspective, where distant parts of a volume are shifted towards the background colour, has been shown to give a good understanding of relative depth and is implemented with an OpenGL fragment shader as described in [29]. The transfer function is also used to colour code the volume in a particular way by mapping certain voxel values to certain colours. Colour coding can also be achieved when we use more than one volume (i.e. one arterial phase and one venous phase) and map different volumes to different colours.
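As a rough illustration of such a transfer function (the intensity thresholds, colours, and opacities below are placeholders, not the values used clinically), a VTK volume property could be configured as follows:

```cpp
// Illustrative VTK transfer functions for the volume-rendered DS-CTA:
// low intensities (vessel edges) map to black, brighter lumen voxels to a
// vessel colour, with opacity ramping up with intensity.
#include <vtkColorTransferFunction.h>
#include <vtkPiecewiseFunction.h>
#include <vtkSmartPointer.h>
#include <vtkVolumeProperty.h>

int main() {
    // Colour: faint (edge) voxels -> black, bright vessel voxels -> red.
    auto colour = vtkSmartPointer<vtkColorTransferFunction>::New();
    colour->AddRGBPoint(50.0, 0.0, 0.0, 0.0);
    colour->AddRGBPoint(150.0, 0.8, 0.1, 0.1);

    // Opacity: subtracted background invisible, lumen nearly opaque.
    auto opacity = vtkSmartPointer<vtkPiecewiseFunction>::New();
    opacity->AddPoint(0.0, 0.0);
    opacity->AddPoint(50.0, 0.2);
    opacity->AddPoint(150.0, 0.9);

    auto property = vtkSmartPointer<vtkVolumeProperty>::New();
    property->SetColor(colour);
    property->SetScalarOpacity(opacity);
    property->ShadeOn();  // simple lighting as an additional depth cue
    return 0;             // attach to a vtkVolume / volume mapper to render
}
```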

Fig. 4

Different visualization techniques for combining the live camera image with the virtual vessels (green/rainbow) are shown. The use of simple alpha blending (top left) between the real and virtual worlds is problematic and does not provide spatial information. Therefore, more sophisticated techniques are applied, such as modulating transparency in the area of interest, extracting edges (from the virtual vessels and/or the camera image), and using fog

In Fig. 5, we show a colour coding based on the flow of the bolus (a radio-opaque contrast substance used in the CTA) through the vessels. Vessels that light up earlier (typically arteries) are colour-coded red, passing through orange, yellow, and green to blue for veins. Note that colour coding is related to flow speed and only approximately identifies veins and arteries; segmentation errors are possible in vascular anomalies where the bolus flows quickly into the venous side. The bolus timing information is obtained from a 4D DS-CTA (i.e. a series of 3D volumes over time). By combining the 3D volumes, which are already normalized over the time sequence, we get an indication of the blood flow through the vessels.
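A simple way to derive such a colour coding, sketched below under the assumption that the 4D DS-CTA is available as a time series of normalized volumes sharing one voxel grid, is to record the phase of peak enhancement per voxel and map early peaks (arteries) toward red and late peaks (veins) toward blue; names and data layout are illustrative:

```cpp
// Illustrative derivation of a bolus-arrival colour coding from a 4D DS-CTA:
// for each voxel, find the time frame of peak enhancement; early peaks
// (arteries) map toward red, late peaks (veins) toward blue via a rainbow
// transfer function.
#include <cstddef>
#include <vector>

// volumes[t][v]: intensity of voxel v in time frame t (volumes are assumed
// to be normalized over the time sequence, as described in the text).
std::vector<float> bolusArrivalMap(const std::vector<std::vector<float>>& volumes) {
    const std::size_t nT = volumes.size();
    const std::size_t nV = nT ? volumes[0].size() : 0;
    std::vector<float> arrival(nV, 0.0f);  // 0 = earliest peak, 1 = latest

    for (std::size_t v = 0; v < nV; ++v) {
        std::size_t peakT = 0;
        float peak = volumes[0][v];
        for (std::size_t t = 1; t < nT; ++t)  // time frame of peak enhancement
            if (volumes[t][v] > peak) { peak = volumes[t][v]; peakT = t; }
        arrival[v] = (nT > 1) ? static_cast<float>(peakT) / (nT - 1) : 0.0f;
    }
    // arrival now drives the rainbow colour map:
    // 0 -> red (arteries), 0.5 -> green, 1 -> blue (veins).
    return arrival;
}
```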

Fig. 5

Colour coding of a vascular DS-CTA volume based on bolus flow

View

The view component of an AR IGS system deals with the interaction tools (the interface and how the user can interact with the system), the display, and the perception location. The perception location is where the end-user must look in order to take advantage of the AR view; in AR IBIS, this is the display of the neuronavigation system, a flat screen monitor. A mouse is used to interact with the neuronavigation system. There are many ways in which the user can interact with the system to create new views and renderings of the data. The most relevant to the AR visualization pertain to manipulating either the virtual vessels or the live camera image. In terms of the virtual vessels, it is possible to change the transparency of the rendered vessels, use a transfer function to change the colour of the vessels, depict more or fewer edges, and turn fog on and off. The changes that can be made to the camera image are changing the transparency and the size of the circular window that shows the vessels below the surface of the camera image, turning edge detection on and off, and increasing the size of the Gaussian blur to increase or reduce the number of edges that are detected.

AR in the OR

An initial assessment of using AR IBIS in neurovascular surgery was presented in [30]. The system was initially used in three surgical cases performed by two surgeons: an aneurysm, an AVM, and an AV fistula (Fig. 6). The first use of the system was in a craniotomy to clip two aneurysms in a 52-year-old female (Fig. 6, 1a). Volume-rendered vessels (Fig. 6, 1b) were merged with camera images of the dura (Fig. 6, 1c), creating the AR scene (Fig. 6, 1d). Edge extraction did not work due to the specular highlights from the OR lights and the bloody dura images; the visualization techniques were improved in the following cases. In the second case, surgery was done to remove an AVM in a 41-year-old female (Fig. 6, 2a). Volume-rendered vessels were colour-coded in purple (to add contrast), as in Fig. 6, 2b, and merged with images of the dura (Fig. 6, 2c) to create an AR view (Fig. 6, 2d). The surgeon placed a marker on the AVM to indicate its location and aid in planning the resection corridor. In the third case, a craniotomy for a Borden type II dural AVF was done (Fig. 6, 3a). Two DS-CTA volumes, arterial and venous phase, were rendered (Fig. 6, 3b) and merged with images of the dura (Fig. 6, 3c), creating an AR view (Fig. 6, 3d). During these first cases, the system was used prior to opening the dura, as well as during resection.

Fig. 6

Images from the first three surgical cases of our AR system: an aneurysm (1-top), an AVM (2-middle), and an AVF (3-bottom). The malformations are depicted on the X-ray angiography in the first column (a). In each of the cases, volume-rendered vessels (column b) were combined with images taken from the intra-operative camera view (column c) to create an AR view on the workstation of the neuronavigation system (column d). The AR views depicted here are of the top of the dura prior to bringing in the surgical microscope

Based on the use of the system in the OR, comments from the neurosurgeons during surgery, and discussions post-operatively, three uses for AR became evident: (i) tailoring the craniotomy, (ii) localizing the anatomy of interest (i.e. the aneurysm, feeding arteries/draining veins, or the fistula), and (iii) planning the resection corridor. In Fig. 7, we show where AR has been deemed useful by our surgeons within the typical workflow of neurovascular image-guided surgery. Prior to the craniotomy, by projecting the virtual vessels onto the patient's skin, the surgeon can best determine the shape and extent of the craniotomy in order to make it as large as needed to allow appropriate and direct access to the region and anatomy of interest. In terms of intra-operative localization of vessels, markers (e.g. coloured points) placed on particular vessels can indicate to the surgeon where a vessel is in relation to the area they need to resect, and thus help them plan a resection corridor that accounts for the anatomy below the surface that is not readily visible. Lastly, visualization of the vessels, aneurysms, and malformations may aid in treatment planning. This includes: (1) determining the risk of treatment; (2) deciding which vessels are to be exposed and clipped first and the size of the hemoclip to use on such a vessel; and (3) selecting the appropriate clip for difficult aneurysms. In this work, we focus on the first two uses of AR, and in the following section we describe how AR was used in a neurosurgical case for an AVM that has not been previously described.

Fig. 7

The typical workflow of a neurovascular surgery case involves planning and performing the craniotomy, planning and resecting to the anatomy and vessels of interest, and clipping and obliterating vessels. Neuronavigation is typically used for localization and guidance. AR offers a visualization that can be useful in each of the surgical steps involved in planning the extent of the craniotomy, the resection corridor and the treatment. (Note that the images come from different surgical cases)

Case study

The system was trialled in the second-stage resection of a left frontal AVM (Spetzler and Martin classification 3B) in an 18-year-old male. A few weeks earlier, the patient had undergone a stage 1 skeletonization and incomplete resection of a 3.5-cm AVM located in the left supplementary motor region, and he required intra-operative AR visualization to aid in locating two deep feeding arteries. It was important to localize these vessels early in the resection in order to expedite the surgery and potentially minimize blood loss.

Ethics

Ethics approval for using AR IBIS in the OR was received from the Research and Ethics Board at the MNI/H. Informed consent was obtained from the patient prior to surgery. Whenever AR IBIS is brought into the OR, it is used in parallel with the commercial Medtronic StealthStation S7 (Medtronic, Louisville, CO, USA).

Methods

The StealthStation and IBIS are brought in at the same time and are used in parallel in the OR. Patient-to-image registration is done simultaneously on both systems, and both systems track the same surgical tools. This ensures that there are no additional steps in the typical workflow of setting up the neuronavigation system. During this case, patient-to-image registration was done by the resident who chose the landmarks on the patient, which had been previously selected on the CT by the neuronavigation team. All imaging/visualizations used by the surgeon were recorded. We also recorded any comments and feedback concerning the AR system, and post-operatively the surgeon filled out a questionnaire about the usefulness of the system.

Data and processing

The patient had a 3D rotational X-ray angiography and a 4D-CTA. The vascular data were extracted from a 4D DS-CTA that was converted into a single colour-coded volume that showed blood flow as described in “Virtual vessel volume processing” section. Prior to surgery, the surgeon asked us to place markers on vessels that he identified as the deep feeders (Fig. 10). Fog was used to give a perception of depth and also to remove clutter in the image by reducing the number of vessels far from the surface that were visualized.

System use

We detail the use of AR IBIS, the StealthStation, and the PACS system viewer (InteleViewer, Intelerad Medical Systems Inc., Montreal, Canada).

Use of commercial system

The commercial Medtronic system was used four times during surgery. It was first used to plan the extension of the craniotomy from the first surgery. Second, it was used prior to the opening of the dura. Using the neuronavigation system at this time allowed the surgeon to plan how much more of the patient’s dura would have to be opened in order to access the remainder of the AVM. Next the surgeon used the neuronavigation system to visualize the cortex to plan the corridor of resection. At this point, the surgeon asked to have the reconstructed vessels on the navigation system rotated and positioned in order to better see the feeding arteries. Lastly, after some resection, the surgeon used the system to locate deep feeding arteries. By using the pointer and looking at its representation on the neuronavigation system, the surgeon was able to get a better idea of the location of the feeding vessels. What was not visualized on the commercial system was the position of the deep feeding arteries in relation to the visible cortex and vessels of the patient.

Use of imaging on the PACS system

Often during neurovascular surgery, surgeons will refer to X-ray angiography data on the PACS system. X-ray angiography, which has a higher resolution, depicts smaller arteries and veins and gives a better indication of blood flow through the vessels than does DS-CTA; however, DS-CTA is used for navigation. During this case, the surgeon referred to the pre-operative X-ray angiography images three times in order to determine where one of the feeding arteries was located with respect to the large draining vein. Although the feeders were readily visible on the X-ray angiography, mapping that information back to the patient was challenging.

Use of AR IBIS

Augmented reality was used three times during the surgery. First, the system was used after patient-to-image registration and prior to draping and sterilization. At this time, the surgeon looked at the extent of the malformation and the location of the deep feeders and the draining vein under the skull. The colour scheme was also discussed: the surgeon noted that the large draining vein appeared in an arterial colour rather than a venous one, because it fills with blood during the arterial phase rather than the venous phase, a common phenomenon in AVMs and AVFs.

The second use of AR was on the exposed cortex, prior to bringing in the microscope. During this time, the surgeon noted that the virtual vessel overlay was registered to the real scene with an accuracy of about 1 mm. Furthermore, a number of visualization parameters were changed based on comments from the surgeon: (1) the merging of virtual and real was adjusted to show more of the surface rather than more vessels (i.e. a smaller window showing virtual objects) and (2) fog was used so that the selected markers on the feeding arteries could be seen. Based on the virtual information, the surgeon placed a micropad on the brain surface above the virtual marker of a deep feeding artery to help with the resection approach and vessel localization (Fig. 8). AR was used again, after some resection, to help localize the deep feeding arteries. The AR view, which aligned the surgical field and the vascular images and markers in one view, aided the surgeon in planning the resection corridor to the deep feeding arteries.

Fig. 8

Left a screenshot from IBIS during navigation; the blue sphere represents the surgical pointer, and the pink spheres are markers that were placed on deep feeding arteries. Right in the top image, the vessels are rendered from the point of view of the camera; the bottom image shows the augmented reality view

Results and discussion

We describe the results of using the system for this surgical case and draw on our experiences to discuss the use of AR technology and the different components for neurovascular surgery.

System

Patient-to-image fiducial registration error for this surgical case was 3.44 mm root-mean-square (RMS). Calibration and reprojection error, which describes the mismatch between virtual images and the real world in the focal plane, was 2.02 mm. Based on comments from the surgeon, the overall AR misalignment was approximately 1–2 mm.

Note that the misregistration noted may include the phenomenon of brain shift. We have found, however, that even after significant amounts of resection, AR visualization is useful for localization and guidance. In our experience, surgeons will account for brain shift by doing the necessary transformations in their heads in order to continue to use guidance from IGS systems. Cabrilo et al. [7] also found that brain shift did not significantly affect their AR visualization; they noted that the injected image always led to the aneurysm, even though a slight translation of the virtual image was often appreciated during clipping. Although our current system does not account for brain shift, it would be possible to do so by using the intra-operative ultrasound platform of IBIS together with the AR component [17].

The current set-up of AR IBIS uses an external camera to capture images of the scene and renders the AR view on the workstation. Although numerous researchers [7, 8, 14, 16, 31] have used the surgical microscope so that images are projected into the field of view of the surgeon, our perception location is the neuronavigation workstation monitor. This offers advantages over the microscope projection alone, the most important of which is that we are able to manipulate both the live image and the vessels in a more sophisticated way to improve depth and spatial understanding of the scenes. We are working on a version of the system that will enable injection of information into the microscope, but we will continue to provide more sophisticated renderings on the neuronavigation system by combining the microscope view with the volume-rendered vessels.

Visualization

The visualization of the vessels allowed the surgeon to get an idea of the blood flow through the vasculature. At the same time, the surgeon noticed that the large draining vein was visualized as an artery, i.e. red not blue (Fig. 9). This is because although the 4D DS-CTA allows for the visualization of the bolus through the vasculature of the brain, early draining AVM veins enhance in the arterial phase and thus become colour-coded to look like arteries. This parallels what the surgeon sees in the OR: draining veins appear red because they are arterialized veins that contain arterial blood due to the quick shunting of blood through the AVM fistulae. This mimicking of arteries makes it difficult for the surgeon to know for certain intra-operatively whether a vessel is a feeding artery or a draining vein. For navigation and intra-operative identification purposes, it is better to colour the draining veins blue, rather than allow them to be visualized as the arterialized veins that they are. For this reason, we are currently looking at using a priori knowledge in our renderings, rather than 4D DS-CTA alone, in order to label the volumes based on surgeon input. Such a colour scheme should reduce confusion between arteries and veins, leading to safer resections (Fig. 10).

Fig. 9

Based on the virtual information, the surgeon placed a micropad on the brain surface above the virtual marker of a deep feeding artery to help with the resection approach and vessel localization

Fig. 10

Vessels overlaid on the patient's skin prior to draping (left); the AR view is used at this step to help tailor the extent of the craniotomy. On the right, vessels are overlaid on the cortex; here the AR view is used to determine the optimal resection corridor. The blue arrows point to the pink markers that indicate the location of deep feeding arteries. The orange arrow indicates the major arterialized vein, shown as red rather than blue

As per the surgeon's request, sphere markers were placed on a number of manually identified deep feeding arteries (Fig. 9). The markers were useful for localizing the vessels in plane but not in depth: the surgeon knew where to resect, but from the AR view alone it was not possible to determine how deep below the visible surface the deep feeding arteries were located. Furthermore, one of the vessels was, in most views, below the large draining vein, and thus we used a transfer function and fog to make the vessels somewhat transparent in order to show the marker. This caused the marker to appear as if it were on top of the large draining vein rather than below it. More sophisticated techniques that give a better perception of markers on the vasculature in 3D space are needed and are a current focus of our work.

The use of such simple markers to indicate anatomy of interest suggests that non-photorealistic (NPR) techniques that use simple lines or points to represent areas of interest may be sufficient for AR guidance. Although in principle NPR techniques allow surgeons to localize particular vessels and areas of interest to tailor craniotomies and plan resection corridors, they do not aid in the overall understanding of the angioarchitecture of the malformation, which has been determined to be a positive effect of AR in AVM surgery [8]. In AR IBIS, we have focused on developing visualization techniques that would allow depth perception of vessels. Based on comments from the surgeons and preliminary studies [29], we have found that using particular visualization techniques can offer better spatial and depth understanding of the vasculature. However, although our methods allow for good relative depth perception, new techniques need to be developed for absolute depth perception. Both neurosurgeons we have worked with have commented that although our system helps them to determine a resection corridor, more information on the exact depth of a vessel would be most helpful.

Usefulness

Based on comments from the surgeon intra- and post-operatively and on a post-operative questionnaire that he filled out, we assessed the usefulness of the AR view for planning the craniotomy and the resection corridor and for identifying feeding and draining vessels during the different stages of surgery. Unlike in the work by Cabrilo et al. [8], the surgeon found our system to be useful in the identification of feeders. This dissimilarity may be due to the fact that we marked the feeders, allowing the surgeon to easily identify their location even within the complexity of the AVM angioarchitecture. However, because the draining vein was already visible on the cortex of the patient, the AR view did not add much in terms of localizing the large draining vein. Furthermore, because the draining vein was coloured as an artery and not a vein, the identification colour was incorrect and the surgeon gave a low score for drainer identification. The surgeon further commented that flow visualization techniques can be useful for identifying arteries and veins when there is no abnormal AV shunting (e.g. in cases of aneurysms and vascular tumours), but that it must be realized that these techniques can offer false information when abnormal AV shunting occurs. Therefore, new visualization techniques that use a priori knowledge to identify important vessels and their direction of flow need to be developed to address this problem.

Conclusions

We have developed an AR system and tested it in a number of neurovascular surgery cases. Based on our evaluations, AR seems a feasible and viable technology that can improve on traditional IGNS systems. In recording the different ways in which the Medtronic, PACS, and IBIS systems were used, we saw the shortcomings of traditional neuronavigation systems and the benefits of using AR. By bringing pre-operative patient information and the intra-operative field of view together into one view, AR can aid in the localization of pertinent anatomy and the planning of surgical resections.

Our first results suggest a number of avenues for future work. New visualization techniques that depict absolute depth so that the surgeon knows exactly how far below the surface a particular vessel lies are needed. Visualization of feeders and drainers based on manual segmentations and not blood flow alone would aid in the intra-operative identification of vessels. Using the surgical microscope at later points in surgery to capture live images of the brain, so that there is no disruption of workflow to move the microscope out of the way and bring the camera in, should also be explored. Overall, with the further development of robust visualization techniques and rigorous evaluation of their usefulness in the OR, AR should become regularly used in the OR.