Abstract
In minimally invasive surgery and in endoscopic procedures, the surgeon operates without a direct view of the patient's anatomy. In image-guided surgery, solutions based on wearable augmented reality (AR) are among the most promising. The authors describe the characteristics that an ideal head-mounted display (HMD) must have to guarantee safety and accuracy in AR-guided neurosurgical interventions, and design the ideal virtual content for guiding crucial tasks in neuroendoscopic surgery. The selected sequence of AR content for obtaining effective guidance during surgery is tested in a Microsoft HoloLens-based app.
Keywords
- Minimally invasive surgery
- Augmented reality and visualization
- Computer assisted intervention
- Neuroendoscopy
1 Introduction
During the last 15 years, neuronavigation has become an essential neurosurgical tool for pursuing minimal invasiveness and maximal safety [1]. Unfortunately, ergonomics of such devices are still not optimal [2]. The neurosurgeon has to look away from the surgical field at a dedicated workstation screen. Then, the operator is required to mentally transfer the information from the “virtual” environment of the navigation system to the real surgical field. The virtual environment includes virtual surgical instruments and patient-specific virtual anatomy details (generally obtained from pre-operative 3D images).
Intraventricular endoscopy is a routine technique for the treatment of cerebrospinal fluid (CSF) dynamic disorders such as hydrocephalus, in which membranes are fenestrated in order to restore physiological CSF flow. Endoscopic interventions are also the mainstay for the treatment of paraventricular cysts that may cause a relevant mass effect: in this case an endoscopic fenestration may be required to re-establish regular CSF spaces, or when paraventricular tumors need biopsy, possibly accompanied by hydrocephalus treatment [3]. In this context, it is important to mention that the endoscope is introduced through a single burr hole as entry point. The choice of this entry point impacts the safety and efficacy of the procedure; thus, endoscopic procedures are frequently combined with navigation systems to achieve these goals. Neurosurgical navigation enables, through registration of the patient's anatomy, the localization of instruments, endoscopes and microscopes in spatial relation to the patient's anatomy [1, 2, 4,5,6,7,8,9,10,11].
In commercial neuronavigation systems, the navigation information, including augmented views from external or endoscopic cameras, is normally presented on a stand-alone monitor. This means that the operating surgeon must turn away from the surgical field to perceive navigation information [10, 12,13,14].
To allow uninterrupted concentration on the area of intervention, wearable AR devices are starting to be tested for use in the operating room [11, 15].
The purpose of this paper is twofold: to lay down the technical specifications that an ideal head-mounted display (HMD) should meet to guarantee safety and accuracy in AR-guided neurosurgical interventions, and to design the most suitable AR visualization modality for guiding a crucial task in such surgery.
2 Materials and Methods
2.1 Design of the HMD
The design of the HMD started from a deep analysis of currently available HMD technologies.
Existing wearable AR displays can adopt either optical see-through (OST) or video see-through (VST) approaches. Typically, in OST visors the user's direct view of the real world is augmented by projecting the virtual content onto a beam combiner and then into the user's line of sight. The user sees the virtual information displayed on a semi-transparent surface of projection (SSP). By contrast, in VST visors the virtual content is merged with camera images captured by two external cameras rigidly fixed on the visor [16].
Both approaches have benefits and drawbacks depending on the task they are designed for. In image-guided surgery, the AR content may be simply informative (e.g., textual or numerical values relevant to what is under observation, such as patient data from the anesthesia monitor), or it may consist of three-dimensional virtual objects inserted into the real environment at spatially defined positions. In the latter case, the virtual content provides a patient-specific representation of the hidden anatomy (obtained from diagnostic images such as CT, MRI or 3D ultrasound) so as to guide the surgeon's hand during precision tasks such as tissue incision or vessel isolation. Generally, the VST paradigm yields an accurate and robust alignment between virtual and real content at the expense of a less realistic and authentic perception of the "real world", which is affected by the intrinsic features of the camera and display. With OST there is an inevitable lag between real and virtual information, and an accurate alignment between the real scene and the virtual content cannot be achieved without a specific, and often error-prone, eye-to-display calibration routine. Nonetheless, the main benefit of OST visors is an unobstructed view of the real world. This is why, depending on the surgical task to be aided, a system providing both see-through mechanisms, together with a switching mechanism allowing a transition between the two modalities, could represent a disruptive asset for AR-based neuronavigators.
Designing an AR HMD that addresses human-factors issues, towards optimal ergonomics and usability in surgery, means targeting at least the following:
- To develop a new hybrid video/optical see-through AR HMD that allows both see-through modalities.
- To develop a mechanism that manages the transition between the occluded and non-occluded view. The occluded view is used for the video see-through (VST) modality, whereas the non-occluded view is necessary for implementing the optical see-through (OST) modality.
- To integrate a real-time eye pose estimation routine (i.e., OST-to-eye calibration) whose goal is to achieve a geometrically consistent augmentation of reality.
- To design and develop a software framework capable of managing several video or optical see-through-based surgical navigation applications. The framework will have to be user-friendly, ergonomic and highly configurable, so as to suit many types of potential applications.
This hardware development phase is currently ongoing within a European H2020 project coordinated by the authors, VOSTARS, whose aim is to design, develop and validate a wearable AR microdisplay-based system to act as a surgical navigator in the operating theatre [17, 18]. The new AR-based HMD is intended to substantially advance the paradigms through which wearable AR HMD systems are commonly implemented.
2.2 Design of the Virtual Content, Presentation and Interaction Modality
The definition of the virtual content that is intended to augment the surgical experience starts from the decomposition of the addressed intervention into surgical tasks [19].
A major issue in the design of AR-based surgical navigation systems is the need to provide consistent visual cues for correct perception of depth and spatial relations in the augmented scene [20, 21]. In fact, as shown by previous studies [11, 22,23,24,25,26], the visualization of virtual content in AR applications is effective in aiding the surgeon during a specific medical procedure only if it is strongly related to the task. For example, the superimposition of a semi-transparent virtual anatomy, albeit visually appealing, can be rather confusing for the surgeon. This is due to the surgeon's limited perception of the relative distances between real and virtual elements within the AR scene, and it may be aggravated by unnatural occlusions between real and virtual structures. Furthermore, overly detailed and complex virtual content may confuse the surgeon instead of being of assistance.
Building on previous work [11], the AR content was conceived together with a surgical team to aid the surgeon in planning the optimal trajectory for accessing the surgical target. The tasks selected for guidance in the OST modality are: craniotomy, targeting of the endoscope entry point, and trajectory alignment.
The defined virtual content comprises:
- A viewfinder to clearly show the ideal entry point on the patient's skull. This entry point also defines a proper area for the craniotomy.
- The trajectory to be followed by the endoscope.
- The virtual frustum of the endoscope, which helps the surgeon assess the field of view covered by the endoscope in a given position.
- The targeted lesion and some anatomical landmarks (ventricles) (Fig. 1).
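For illustration, the geometric role of the virtual frustum can be sketched as a simple inside-cone test (a hypothetical Python sketch, not the actual Unity implementation; all names and numerical values are illustrative):

```python
import numpy as np

def in_frustum(tip, axis, half_angle_deg, near, far, point):
    """Return True if `point` lies inside a conical approximation of the
    endoscope's viewing frustum, given the tip position and viewing axis."""
    a = np.asarray(axis, dtype=float)
    a /= np.linalg.norm(a)                      # unit viewing direction
    v = np.asarray(point, dtype=float) - np.asarray(tip, dtype=float)
    d = float(np.dot(v, a))                     # depth along the axis
    if not (near <= d <= far):
        return False                            # outside the depth range
    radial = np.linalg.norm(v - d * a)          # distance from the axis
    return radial <= d * np.tan(np.radians(half_angle_deg))

# A lesion 50 mm ahead of the tip, inside a 30-degree half-angle cone.
print(in_frustum((0, 0, 0), (0, 0, 1), 30, 5, 100, (0, 0, 50)))  # True
```

In the app this test is conveyed visually: rendering the frustum lets the surgeon judge at a glance whether the target falls within the endoscope's field of view.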
Considering that in the operating room the surgeon must never violate the sterility of the surgical field, manual gesture interaction will be used to let the user interact with the AR application without any physical interface. Moreover, considering the need to keep the hands within the surgical field, voice commands will be added to provide a hands-free interaction modality.
2.3 Evaluation Study
To assess the most effective AR visualization modality ahead of the development of a fully functional hybrid OST/VST HMD, we developed a Microsoft HoloLens-based app. The HoloLens was chosen as the testing tool for assessing the ergonomics of the AR visualization modality.
The HoloLens is a stand-alone OST HMD providing unique features such as a high-resolution display, the ability to spatially map the environment, a hand-gesture interface, easy interaction through straight gaze-to-target cursor management, and a voice recognition control mechanism [27]; it has no physical tethering constraints that could limit the movements/gestures of the user during the simulation of the surgical tasks. The MixedRealityToolkit, a freely available collection of scripts and components, allows easy and fast development of AR applications.
Tests also required the fabrication of a physical simulator (i.e., a patient-specific head mannequin) similar to that used in [11]; based on the 3D model of this mannequin, an expert surgeon planned the best entry point on the skull cap and the optimal endoscope trajectory for the simulated surgical case.
Physical Simulator Development
The phantom was built starting from a high-resolution magnetic resonance imaging (MRI) study suitable for neuronavigation. The image sequence data set was used for volumetric reconstruction combined with thin-sliced axial T2-weighted images. ITK-Snap 5.1 with a custom modified plugin was used to segment the ventricles and the skull [28]. A simplified lesion model was added close to the ventricular area to simulate the endoscopic target. The skull model, with the simulated lesion, was 3D printed in acrylonitrile butadiene styrene (ABS) with a Dimension Elite 3D printer. A silicone mixture was used to manufacture the scalp and improve the realism of the simulation.
A physical support for a registration target (a Vuforia [29] Image Target, as described in the following section) was rigidly anchored to the synthetic bone replica to allow the registration of the virtual content to the real scene.
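The role of the rigid anchor can be illustrated with homogeneous transforms: since the offset between the image target and the bone replica is fixed, the pose of the virtual anatomy follows directly from the tracked target pose. A hypothetical Python sketch with illustrative values (the actual registration is handled by the Vuforia/Unity pipeline):

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Tracked pose of the image target in world coordinates (illustrative values).
T_world_target = pose(np.eye(3), [0.10, 0.00, 0.30])
# Fixed target-to-anatomy offset, known from the rigid anchoring.
T_target_anatomy = pose(np.eye(3), [0.00, -0.05, 0.00])

# Registration: the pose of the virtual anatomy in world coordinates.
T_world_anatomy = T_world_target @ T_target_anatomy
```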
AR App Development
Unity3D (5.6.1f) was used to create the application. The MixedRealityToolkit (2017.1.2) script collection was used to interact with the virtual content by means of cursor management through gaze-to-target interaction, gestures ("air-tap"), and voice. As mentioned above, the virtual environment includes the targeted lesion and ventricle models and the preoperative plan (viewfinder and trajectory). A virtual cursor was added to the virtual scene to indicate the straight gaze direction, estimated from the position and orientation of the user's head (the final hybrid OST/VST HMD will feature eye tracking, so the virtual cursor position will be fully controllable with eye movements).
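The gaze-to-target cursor logic can be pictured as a ray cast from the head pose: the cursor is placed at the first intersection of the gaze ray with a virtual object, here approximated by a bounding sphere. This is a hypothetical Python sketch; the app itself relies on the MixedRealityToolkit's gaze handling:

```python
import numpy as np

def gaze_hit(origin, forward, center, radius):
    """Nearest intersection of the gaze ray with a spherical proxy of a
    virtual object; returns None when the gaze misses the object."""
    f = np.asarray(forward, dtype=float)
    f /= np.linalg.norm(f)                       # unit gaze direction
    oc = np.asarray(origin, dtype=float) - np.asarray(center, dtype=float)
    b = float(np.dot(oc, f))
    disc = b * b - (float(np.dot(oc, oc)) - radius * radius)
    if disc < 0:
        return None                              # ray misses the sphere
    t = -b - np.sqrt(disc)                       # nearest intersection distance
    if t < 0:
        return None                              # object is behind the user
    return np.asarray(origin, dtype=float) + t * f
```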
The detection and tracking functionalities offered by the Vuforia SDK were used for registration purposes. In particular, two Vuforia Image Targets were used to track in real-time the physical simulator and the endoscope.
3 Results
Several tests were conducted to evaluate the most ergonomic AR visualization sequence as a function of the task to be accomplished. In this phase, both experienced and young surgeons were asked to perform the percutaneous task wearing the HoloLens with the AR app running (Fig. 2).
During the test they were asked to execute the craniotomy and to reach the target with the endoscope while maintaining the ideal trajectory. The testing phase was essential to define the exact sequence in which to visualize the AR content.
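The requirement of maintaining the ideal trajectory can be quantified as the angular deviation between the tracked endoscope axis and the planned trajectory (a hypothetical sketch; the app conveys this alignment visually rather than numerically):

```python
import numpy as np

def alignment_error_deg(endoscope_axis, planned_trajectory):
    """Angular deviation, in degrees, between the tracked endoscope axis
    and the planned trajectory; 0 means perfect alignment while pivoting."""
    a = np.asarray(endoscope_axis, dtype=float)
    b = np.asarray(planned_trajectory, dtype=float)
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip to guard against rounding slightly outside arccos's domain.
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
```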
The users were able to interact with the application via voice commands or hand gestures so as to tailor the augmented experience to their own needs.
The testing phase confirmed that the visualized virtual elements are useful for accomplishing the surgical task. It also underlined that correct sequencing is of the utmost importance for a fruitful augmented experience:
- Firstly, the surgeon can choose to visualize the target anatomy just for a rehearsal.
- Only the viewfinder is visualized to guide the craniotomy.
- Once the surgical access is prepared, the ideal trajectory is shown and the surgeon, pivoting on the access point, can align the endoscope to it.
- While entering the anatomy, the endoscope's virtual frustum and the target anatomy are also displayed to improve the surgeon's spatial awareness during the surgical task.
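The sequencing above amounts to a simple mapping from surgical phase to the set of visible virtual elements, sketched below (hypothetical phase and element names; in the app the content is switched via voice commands and gestures):

```python
# Phase-to-content mapping reflecting the sequence found during the tests.
PHASE_CONTENT = {
    "rehearsal":  {"target_anatomy"},
    "craniotomy": {"viewfinder"},
    "alignment":  {"trajectory"},
    "insertion":  {"trajectory", "frustum", "target_anatomy"},
}

def visible_content(phase):
    """Virtual elements to display in a given surgical phase."""
    return PHASE_CONTENT.get(phase, set())
```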
4 Conclusion
Clinical navigation systems are nowadays routinely used in a variety of surgical disciplines to assist surgeons with minimally invasive and open interventions for supporting spatial orientation and targeting [30,31,32,33,34,35,36]. In surgical navigators, AR-based techniques are often used for identifying the precise location of target lesions or body regions to improve the safety and accuracy of the interventions [15, 37,38,39,40].
There is a growing interest in the use of AR systems as new surgical navigation systems. The introduction of AR in neurosurgery, both for training purposes [13, 41,42,43,44] and as a surgical navigation tool, can lead to positive and encouraging results in terms of increased accuracy and reduced trauma to the patient.
Wearable AR systems based on HMDs allow the surgeon an ergonomic viewpoint of the surgical field and of the patient's anatomy, and reduce problems related to eye-hand coordination [23].
When conceiving an innovative navigation paradigm in terms of hardware and software, the way virtual content is presented to the user strongly impacts the usability of the wearable device in terms of ergonomics, effectiveness of the navigation experience, and confidence in the device.
References
Inoue, D., Cho, B., Mori, M., Kikkawa, Y., Amano, T., Nakamizo, A., Yoshimoto, K., Mizoguchi, M., Tomikawa, M., Hong, J., Hashizume, M., Sasaki, T.: Preliminary study on the clinical application of augmented reality neuronavigation. J. Neurol. Surg. A Cent. Eur. Neurosurg. 74, 71–76 (2013)
Kockro, R.A., Tsai, Y.T., Ng, I., Hwang, P., Zhu, C., Agusanto, K., Hong, L.X., Serra, L.: Dex-ray: augmented reality neurosurgical navigation with a handheld video probe. Neurosurgery 65, 795–807 (2009). discussion 807-798
Schulz, M., Bohner, G., Knaus, H., Haberl, H., Thomale, U.-W.: Navigated endoscopic surgery for multiloculated hydrocephalus in children. J. Neurosurg. Pediatr. 5, 434–442 (2010)
King, A.P., Edwards, P.J., Maurer Jr., C.R., de Cunha, D.A., Hawkes, D.J., Hill, D.L., Gaston, R.P., Fenlon, M.R., Strong, A.J., Chandler, C.L., Richards, A., Gleeson, M.J.: A system for microscope-assisted guided interventions. Stereotact. Funct. Neurosurg. 72, 107–111 (1999)
Edwards, P.J., King, A.P., Maurer Jr., C.R., de Cunha, D.A., Hawkes, D.J., Hill, D.L., Gaston, R.P., Fenlon, M.R., Jusczyzck, A., Strong, A.J., Chandler, C.L., Gleeson, M.J.: Design and evaluation of a system for microscope-assisted guided interventions (MAGI). IEEE Trans. Med. Imaging 19, 1082–1093 (2000)
Stadie, A.T., Reisch, R., Kockro, R.A., Fischer, G., Schwandt, E., Boor, S., Stoeter, P.: Minimally invasive cerebral cavernoma surgery using keyhole approaches - solutions for technique-related limitations. Minim. Invasive Neurosurg. 52, 9–16 (2009)
Cabrilo, I., Bijlenga, P., Schaller, K.: Augmented reality in the surgery of cerebral arteriovenous malformations: technique assessment and considerations. Acta Neurochir. (Wien) 156, 1769–1774 (2014)
Deng, W.W., Li, F., Wang, M.N., Song, Z.J.: Easy-to-Use augmented reality neuronavigation using a wireless tablet PC. Stereot. Funct. Neuros. 92, 17–24 (2014)
Besharati Tabrizi, L., Mahvash, M.: Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique. J. Neurosurg. 123, 206–211 (2015)
Citardi, M.J., Agbetoba, A., Bigcas, J.L., Luong, A.: Augmented reality for endoscopic sinus surgery with surgical navigation: a cadaver study. Int. Forum Allergy Rhinol. 6, 523–528 (2016)
Cutolo, F., Meola, A., Carbone, M., Sinceri, S., Cagnazzo, F., Denaro, E., Esposito, N., Ferrari, M., Ferrari, V.: A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom. Comput. Assist. Surg. 22, 39–53 (2017)
Kawamata, T., Iseki, H., Shibasaki, T., Hori, T.: Endoscopic augmented reality navigation system for endonasal transsphenoidal surgery to treat pituitary tumors: technical note. Neurosurgery 50, 1393–1397 (2002)
Meola, A., Cutolo, F., Carbone, M., Cagnazzo, F., Ferrari, M., Ferrari, V.: Augmented reality in neurosurgery: a systematic review. Neurosurg. Rev. 40, 537–548 (2017)
Finger, T., Schaumann, A., Schulz, M., Thomale, U.W.: Augmented reality in intraventricular neuroendoscopy. Acta Neurochir. (Wien) 159, 1033–1041 (2017)
Cutolo, F.: Augmented reality in image-guided surgery. In: Lee, N. (ed.) Encyclopedia of Computer Graphics and Games, pp. 1–11. Springer, Cham (2017)
Rolland, J.P., Fuchs, H.: Optical versus video see-through head-mounted displays in medical visualization. Presence Teleoper. Virtual Environ. 9, 287–309 (2000)
Cutolo, F., Fontana, U., Carbone, M., D’Amato, R., Ferrari, V.: Hybrid video/optical see-through HMD. In: Adjunct Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), pp. 52–57 (2017)
Kersten-Oertel, M., Jannin, P., Collins, D.L.: The state of the art of visualization in mixed reality image guided surgery. Comput. Med. Imag. Grap. 37, 98–112 (2013)
Bichlmeier, C., Wimme, F., Heining, S.M., Navab, N.: Contextual anatomic mimesis hybrid in-situ visualization method for improving multi-sensory depth perception in medical augmented reality. In: 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, 2007, ISMAR 2007, pp. 129–138 (2007)
Kersten-Oertel, M., Chen, S.J.S., Collins, D.L.: An evaluation of depth enhancing perceptual cues for vascular volume visualization in neurosurgery. IEEE Trans. Vis. Comput. Graph. 20, 391–403 (2014)
Badiali, G., Ferrari, V., Cutolo, F., Freschi, C., Caramella, D., Bianchi, A., Marchetti, C.: Augmented reality as an aid in maxillofacial surgery: Validation of a wearable system allowing maxillary repositioning. J. Cranio. Maxill. Surg. 42, 1970–1976 (2014)
Cutolo, F., Parchi, P.D., Ferrari, V.: Video see through AR head-mounted display for medical procedures. In: IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2014, pp. 393–396. IEEE (2014)
Parrini, S., Cutolo, F., Freschi, C., Ferrari, M., Ferrari, V.: Augmented reality system for freehand guide of magnetic endovascular devices. In: 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 490–493. IEEE (2014)
Ferrari, V., Viglialoro, R.M., Nicoli, P., Cutolo, F., Condino, S., Carbone, M., Siesto, M., Ferrari, M.: Augmented reality visualization of deformable tubular structures for surgical simulation. Int. J. Med. Robot. Comput. Assist. Surg. 12(2), 231–240 (2015)
Cutolo, F., Badiali, G., Ferrari, V.: Human-PnP: ergonomic AR interaction paradigm for manual placement of rigid bodies. In: Linte, Cristian A., Yaniv, Z., Fallavollita, P. (eds.) AE-CAI 2015. LNCS, vol. 9365, pp. 50–60. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24601-7_6
Evans, G., Miller, J., Pena, M.I., MacAllister, A., Winer, E.: Evaluating the Microsoft HoloLens through an augmented reality assembly application. Degrad. Environ. Sens. Process. Display 2017, 10197 (2017)
Ferrari, V., Carbone, M., Cappelli, C., Boni, L., Melfi, F., Ferrari, M., Mosca, F., Pietrabissa, A.: Value of multidetector computed tomography image segmentation for preoperative planning in general surgery. Surg. Endosc. 26, 616–626 (2012)
Badiali, G., Roncari, A., Bianchi, A., Taddei, F., Marchetti, C., Schileo, E.: Navigation in orthognathic surgery: 3D accuracy. Facial Plast. Surg. FPS 31, 463–473 (2015)
Volonte, F., Pugin, F., Bucher, P., Sugimoto, M., Ratib, O., Morel, P.: Augmented reality and image overlay navigation with OsiriX in laparoscopic and robotic surgery: not only a matter of fashion. J Hepatobiliary Pancreat. Sci. 18, 506–509 (2011)
Zheng, G., Nolte, L.P.: Computer-assisted orthopedic surgery: current state and future perspective. Front. Surg. 2, 66 (2015)
Luebbers, H.T., Messmer, P., Obwegeser, J.A., Zwahlen, R.A., Kikinis, R., Graetz, K.W., Matthews, F.: Comparison of different registration methods for surgical navigation in cranio-maxillofacial surgery. J. Cranio-Maxillo-Facial Surg. 36, 109–116 (2008). Official publication of the European Association for Cranio-Maxillo-Facial Surgery
Condino, S., Calabro, E.M., Alberti, A., Parrini, S., Cioni, R., Berchiolli, R.N., Gesi, M., Ferrari, V., Ferrari, M.: Simultaneous tracking of catheters and guidewires: comparison to standard fluoroscopic guidance for arterial cannulation. Eur. J. Vasc. Endovasc. Surg. 47, 53–60 (2014). The official journal of the European Society for Vascular Surgery
Parrini, S., Zhang, L., Condino, S., Ferrari, V., Caramella, D., Ferrari, M.: Automatic carotid centerline extraction from three-dimensional ultrasound Doppler images. In: 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2014, pp. 5089–5092 (2014)
Condino, S., Ferrari, V., Freschi, C., Alberti, A., Berchiolli, R., Mosca, F., Ferrari, M.: Electromagnetic navigation platform for endovascular surgery: how to develop sensorized catheters and guidewires. Int. J. Med. Robot. + Comput. Assist. Surg. MRCAS 8, 300–310 (2012)
Ukimura, O., Gill, I.S.: Image-fusion, augmented reality, and predictive surgical navigation. Urol. Clin. North Am. 36, 115–123, vii (2009)
Lamata, P., Ali, W., Cano, A., Cornella, J., Declerck, J., Elle, O.J., Freudenthal, A., Furtado, H., Kalkofen, D., Naerum, E., Samset, E., Sánchez-Gonzalez, P., Sánchez-Margallo, F.M., Schmalstieg, D., Sette, M., Stüdeli, T., Sloten, J.V., Gómez, E.J.: Augmented Reality for Minimally Invasive Surgery: Overview and Some Recent Advances (2010)
Nicolau, S., Soler, L., Mutter, D., Marescaux, J.: Augmented reality in laparoscopic surgical oncology. Surg. Oncol. 20, 189–201 (2011)
Rankin, T.M., Slepian, M.J., Armstrong, D.G.: Augmented reality in surgery. In: Latifi, R., Rhee, P., Gruessner, W.G.R. (eds.) Technological Advances in Surgery, Trauma and Critical Care, pp. 59–71. Springer, New York (2015)
Ferrari, V., Viglialoro, R.M., Nicoli, P., Cutolo, F., Condino, S., Carbone, M., Siesto, M., Ferrari, M.: Augmented reality visualization of deformable tubular structures for surgical simulation. Int. J. Med. Rob. + Comput. Assist. Surg. MRCAS 12, 231–240 (2016)
Viglialoro, R., Ferrari, V., Carbone, M.C.M., Condino, S., Porcelli, F., Puccio, F.D., Ferrari, M., Mosca, F.: A physical patient specific simulator for cholecystectomy training. In: CARS Proceedings of the 25th International Congress and Exhibition, Pisa, Italy, June 27–30 (2012)
Francesconi, M., Freschi, C., Sinceri, S., Carbone, M., Cappelli, C., Morelli, L., Ferrari, V., Ferrari, M.: New training methods based on mixed reality for interventional ultrasound: design and validation. In: Engineering in Medicine and Biology Society (EMBC), 2015, 37th Annual International Conference of the IEEE, pp. 5098–5101. IEEE (2015)
Freschi, C., Parrini, S., Dinelli, N., Ferrari, M., Ferrari, V.: Hybrid simulation using mixed reality for interventional ultrasound imaging training. Int. J. Comput. Assist. Radiol. Surg. 10(7), 1109–1115 (2014)
Acknowledgments
Funded by the Horizon 2020 project VOSTARS, Project ID: 731974. Call: ICT-29-2016 - Photonics KET 2016.
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this paper
Carbone, M. et al. (2018). Proof of Concept: Wearable Augmented Reality Video See-Through Display for Neuro-Endoscopy. In: De Paolis, L., Bourdot, P. (eds) Augmented Reality, Virtual Reality, and Computer Graphics. AVR 2018. Lecture Notes in Computer Science(), vol 10851. Springer, Cham. https://doi.org/10.1007/978-3-319-95282-6_7
Print ISBN: 978-3-319-95281-9
Online ISBN: 978-3-319-95282-6