Abstract
Commercial availability of three-dimensional (3D) augmented reality (AR) devices has increased interest in using this novel technology for visualizing neuroimaging data. Here, a technical workflow and algorithm for importing 3D surface-based segmentations derived from magnetic resonance imaging data into a head-mounted AR device is presented and illustrated with selected examples: the pial cortical surface of the human brain, fMRI BOLD maps, reconstructed white matter tracts, and a brain network of functional connectivity.
Introduction
Augmented reality (AR) differs in its approach from virtual reality (VR): in the former, computer-generated objects supplement the physical, real-world environment [1,2,3,4,5,6,7,8,9], while in the latter, the entire real world is replaced by a computer-generated, virtual environment [10,11,12,13,14,15,16]. Both techniques have found initial applications in teaching and anatomical education [4, 10, 12] and have been explored for aiding surgical procedures [7, 14,15,16].
With the increasing complexity of neuroimaging data, there is a need to convey this information to the end user, who in many cases may not be an expert in AR visualization or in neuroimaging in general. A quick and easy approach for displaying the results and the most pertinent information, together with a simple means of interacting with these complex data, would therefore be of great advantage. With the advent of consumer-grade head-mounted AR devices, AR technology has become available to imaging experts without requiring expertise in AR algorithms or AR hardware.
Here, we report first experiences with a commercially available AR device (Hololens, Microsoft Inc.). The MS Hololens is a self-contained holographic computer in the shape of a wearable headset that allows the user to see, hear, and interact with computational objects projected into the environment, such as a living room, an office space, or, potentially, a surgical suite. The user can navigate this augmented scene by gestural interaction. There is no need to connect the Hololens to a PC; instead, all data required to display the computational objects are transferred to and stored on the Hololens directly. It uses high-definition lenses with small computer screens positioned in front of the eyes of the user, together with spatial sound technology, to create an immersive interactive holographic environment. We present an algorithm entirely based on freely available software to create AR objects from magnetic resonance imaging (MRI) data. Selected examples illustrate the application of the algorithm.
Materials and Methods
1. Augmented reality device
The AR device (Hololens, Microsoft Inc.) was acquired as part of a Developer’s edition. One of the device-specific apps, the 3D Viewer beta, allows importing 3D objects.
2. Image data processing pipeline
The following sections describe the conversion of the neuroimaging data into a format that is compatible with the AR device. A flow chart summarizing the pipeline is provided as well (Fig. 1).
MRI image data, both anatomical (structural and diffusion) and functional, was retrospectively collected from an IRB-approved human research study involving repetitive bladder voiding [17, 18] in DICOM format [19] (3T Ingenia, Philips).
(a) Anatomical image data—brain surface
High-resolution T1-weighted MRI images of the brain of one subject (axial, 1 mm isotropic resolution) were imported into Freesurfer (version 5.3.0), and separate pial surfaces of the left and the right hemispheres were obtained in the stereolithographic (STL) format. A 3D scene combining the surfaces of both hemispheres was created in Paraview (version 5.2, Kitware Inc.) and stored in the X3D format, which has previously been used for medical applications [20].
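The merging of the two hemisphere surfaces can also be done outside of Paraview. As a minimal, purely illustrative sketch (the published workflow used Paraview for this step; the function names below are assumptions), the following Python code parses two ASCII STL meshes and concatenates their facets into a single solid:

```python
# Illustrative sketch: merge two ASCII STL hemisphere meshes (e.g., as
# exported from FreeSurfer surfaces) into a single combined solid.
# Hypothetical helper names; binary STL would need a different reader.

def parse_ascii_stl(text):
    """Return a list of facets; each facet is (normal, [v1, v2, v3])."""
    facets, verts, normal = [], [], None
    for line in text.splitlines():
        tok = line.split()
        if not tok:
            continue
        if tok[0] == "facet" and tok[1] == "normal":
            normal = tuple(float(x) for x in tok[2:5])
            verts = []
        elif tok[0] == "vertex":
            verts.append(tuple(float(x) for x in tok[1:4]))
        elif tok[0] == "endfacet":
            facets.append((normal, verts))
    return facets

def write_ascii_stl(name, facets):
    """Serialize facets back into ASCII STL text."""
    lines = ["solid %s" % name]
    for normal, verts in facets:
        lines.append("  facet normal %g %g %g" % normal)
        lines.append("    outer loop")
        for v in verts:
            lines.append("      vertex %g %g %g" % v)
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append("endsolid %s" % name)
    return "\n".join(lines)

def merge_stl(left_text, right_text):
    """Concatenate the facets of two STL solids into one combined solid."""
    return write_ascii_stl("brain",
                           parse_ascii_stl(left_text) + parse_ascii_stl(right_text))
```

The same approach extends to any number of surfaces; in practice, Paraview additionally preserves per-surface coloring when assembling the scene.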
(b) fMRI BOLD activation maps
Task-based fMRI images were analyzed by a standard processing pipeline (AFNI software, version 16.3.18) [21]. Group analysis yielded BOLD activation maps in Talairach space (p < 0.05, NIfTI format [22]) identifying brain areas activated during the initiation of bladder voiding [18]. After conversion into the VTK format (www.vtk.org, Kitware Inc.) by ImageJ (version 1.50c4, 3D IO plugin [23]), the 3D BOLD maps were overlaid onto a mesh of the cortical surface created from the T1-weighted skull-stripped images. This 3D scene, including both the BOLD activation information and the cortical surface, was exported in the X3D format (using Paraview).
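The masking step underlying such an overlay can be sketched as follows. This is not the AFNI group analysis itself, only an illustration of keeping supra-threshold voxels for display; the function name and the nested-list voxel representation are assumptions for this example:

```python
# Illustrative sketch: zero out sub-threshold voxels of a statistical map so
# that only activated regions remain visible in the 3D overlay.
# stat_map is assumed to be nested lists indexed [slice][row][column].

def threshold_map(stat_map, threshold):
    """Keep voxel statistics at or above the threshold; zero the rest."""
    return [[[v if v >= threshold else 0.0 for v in row]
             for row in sl] for sl in stat_map]
```

In the actual pipeline, this thresholding is carried out during the AFNI group analysis (at p < 0.05) before the map is exported.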
(c) Diffusion tensor images—white matter tracts
White matter tracts were reconstructed in the VTK format from a diffusion tensor acquisition (axial, 15 directions, slice thickness 2.5 mm, in-plane resolution 1.75 mm; Diffusion Toolkit and TrackVis software, versions 0.6.3 and 0.6.0.1, respectively, www.trackvis.org/dtk [24], minimum tract length 21 mm). Due to size restrictions in the AR device software, the 3D scene (X3D format) created from this dataset was reduced by applying the ‘Decimate’ filter in Paraview (target reduction of 90%).
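Paraview's ‘Decimate’ filter performs a proper edge-collapse decimation; as a rough, self-contained stand-in illustrating the underlying idea of trading vertices for model size, the sketch below performs a naive vertex-clustering decimation (all names are hypothetical, and real meshes would be decimated with the filter, not with this toy):

```python
# Toy vertex-clustering decimation: snap vertices to a coarse grid of the
# given cell size, merge coincident vertices, and drop degenerate triangles.

def cluster_decimate(vertices, triangles, cell):
    """Return (new_vertices, new_triangles) after grid clustering."""
    key_of = {}      # grid key -> index into new_verts
    new_verts = []
    remap = []       # old vertex index -> new vertex index
    for (x, y, z) in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in key_of:
            key_of[key] = len(new_verts)
            new_verts.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(key_of[key])
    new_tris = []
    for (a, b, c) in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:   # drop collapsed triangles
            new_tris.append((a, b, c))
    return new_verts, new_tris
```

A larger cell size merges more vertices and therefore removes more triangles, which is the same size-versus-fidelity trade-off controlled by the filter's target reduction parameter.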
(d) Functional connectivity—brain networks
Functional connectivity (FC) during repetitive bladder voiding was determined by an algorithm described previously [25]. With the correlation coefficient between time courses as edge weights, individual FC networks were created and visualized in the anatomical space of one subject as a polydata dataset (using Paraview), where edges were color-coded by strength of activation of the vertices they connect. For improved visualization, edges with low connectivity strength (edge weights <0.8) were omitted in the final 3D scene (X3D) and a mesh of the cortical surface was included.
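The edge-weighting and thresholding steps can be sketched in a few lines. This is an illustration only, not the algorithm of [25]: Pearson correlations between hypothetical region time courses serve as edge weights, and edges below the 0.8 cutoff are dropped:

```python
# Illustrative sketch: build a functional connectivity network from region
# time courses, keeping only edges with correlation >= 0.8.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def strong_edges(timecourses, threshold=0.8):
    """timecourses: dict mapping region name -> time course.
    Returns (region_a, region_b, weight) edges at or above the threshold."""
    regions = list(timecourses)
    edges = []
    for i, a in enumerate(regions):
        for b in regions[i + 1:]:
            r = pearson(timecourses[a], timecourses[b])
            if r >= threshold:
                edges.append((a, b, r))
    return edges
```

The surviving edges would then be written out as polydata lines between the vertex coordinates of the connected regions for rendering in Paraview.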
All 3D scenes (X3D) were loaded into the Blender software (version 2.77, blender.org) and exported with standard settings in the FBX format. FBX files are readable by the 3D Viewer app of the AR device.
Results
The AR device succeeded in creating a feeling of actual physical presence of the computational object, which is difficult to convey in the simultaneously recorded photos (Fig. 2) or videos (supplementary materials 1 and 2).
1. Anatomical image data—cortical surface
The user was able to walk around the freely suspended pial cortical surfaces and inspect them from varying viewpoints. Entering the structure revealed (limited) interior features, as outer layers were automatically removed by the AR device (Fig. 3).
2. fMRI BOLD activation maps
The ability to visually inspect the location of activated brain regions in a large 3D display facilitated the understanding of the global fMRI BOLD activation pattern, even for users who had no prior introduction to the AR device or the software. No additional instructions were needed, and immediate acceptance was observed.
3. Diffusion tensor imaging dataset
Presenting the DTI dataset as a “floating” structure, where layers of tracts can be “peeled off” by simply stepping into the computer-generated object, greatly improved the understanding of the spatial relationship of different tract groups to each other (Fig. 3). In addition, the AR device allows simultaneous streaming of the view presented to the user to a nearby computer screen (Fig. 3). This feature allowed the imaging professional to guide the inexperienced user.
4. Functional connectivity—brain networks
As observed for the previous models, the ability to walk “into” the computational model allowed the user to selectively remove layers of edges, thereby gaining an intuitive understanding of the connectivity of selected brain areas. The network under investigation was found to consist of two large sub-networks: one in the frontal brain, and one connecting sensory regions with deeper brain structures, including the micturition center located in the pons (Fig. 2).
Discussion
The use of AR to aid surgical interventions or for educational purposes was reported as early as 20 years ago [1, 2, 4, 26, 27]. Recently, easy access to powerful, portable, and commercially available AR devices has renewed interest in this topic [28,29,30,31].
In this technical note, an algorithm and first impressions of a head-mounted AR device (Hololens, Microsoft Inc.) for displaying complex medical image data are presented. In all cases, the device successfully created the impression that the computational object existed in the real space in which it was displayed. Once the object was positioned, users with no experience of the AR device could interact with it by simple inspection, guided by simultaneous streaming of the AR view to a computer screen. This greatly aided the acceptance of this technology and may facilitate applications in clinical research and in patient education. AR provides a new venue in which the imaging expert can interact with the research subject or patient directly. The algorithm presented here for converting image data into 3D objects displayable by the AR device can easily be extended to other kinds of 3D data, potentially to provide guidance in surgical interventions.
Our algorithm is easy to implement, as it makes use of well-established and tested software and data transfer mechanisms. There is no need to write additional software, as its components work well together while providing sufficient flexibility to modify the appearance of the created computational objects. At the same time, this flexibility introduces some variability into the data manipulation. Collecting more feedback from different user groups will help establish further guidelines on how to streamline and standardize the processing and display of medical image data. It is anticipated that different user groups will have varying requirements, not all of which may be met by the algorithm presented here. Once such feedback is available, additional development of the AR environment can foster the application of wearable AR devices for visualizing medical image data.
One great potential of wearable AR devices such as the MS Hololens is the creation of unique collaborative experiences. For example, co-located users can see and interact with shared 3D virtual objects, or a user can annotate the live video view of a remote worker, enabling collaboration at a distance [30]. Pointing devices traditionally used with tablets or large computer screens, such as mouse pointers, can largely be replaced by gestures, which make interaction easier and faster, particularly for users with little training in or knowledge of the underlying computer system. Decision makers who are not computer experts can thus be integrated directly, owing to the simplicity of interaction once an AR scene is set up in the device. The Hololens also allows virtual annotations to be incorporated into a computer-created scene; this feature, together with view independence, has recently been shown to reduce task completion time [31]. The interaction with the computational objects comes naturally, and further improvement of the efficiency of gestures will aid in this process.
Limitations
The restriction on the size of the 3D model (i.e., the number of vertices) proved a serious limitation, and compromises (i.e., decimating the computational meshes) were necessary. Another shortcoming was the set of gestures and verbal commands needed to interact with the AR device. Visualization sessions were therefore separated into two parts: in the first part, the imaging expert positioned the virtual model in the real space, after which the AR device was handed over to the inexperienced user, who then viewed the object without any further interaction with the device.
Conclusion
First positive experiences using a commercially available head-mounted AR device are reported. An algorithm for visualizing complex three-dimensional objects derived from medical image data is presented. Potential applications include enhancing the understanding of the complexity contained in these objects and advanced education in neuroimaging research.
References
Azuma RT: A survey of augmented reality. Presence Teleop Virt 6(4):355–385, 1997
Milgram P, Kishino F: A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems E77-D(12):1321–1329, 1994
Abe Y, Sato S, Kato K et al.: A novel 3D guidance system using augmented reality for percutaneous vertebroplasty. J Neurosurg Spine 19(4):492–501, 2013
Blackwell M, Morgan F, DiGioia, 3rd AM: Augmented reality and its future in orthopaedics. Clin Orthop Relat Res 354:111–122, 1998
Kerner KF et al.: Augmented reality for teaching endotracheal intubation: MR imaging to create anatomically correct models. AMIA Annu Symp Proc p. 888, 2003
Nicolau S et al.: Augmented reality in laparoscopic surgical oncology. Surg Oncol 20(3):189–201, 2011
Fritz J et al.: Augmented reality visualization using image overlay technology for MR-guided interventions: cadaveric bone biopsy at 1.5 T. Invest Radiol 48(6):464–470, 2013
Volonte F et al.: Augmented reality to the rescue of the minimally invasive surgeon. The usefulness of the interposition of stereoscopic images in the Da Vinci robotic console. Int J Med Robot 9(3):e34–e38, 2013
Markman A et al.: Augmented reality three-dimensional object visualization and recognition with axially distributed sensing. Opt Lett 41(2):297–300, 2016
Chinnock C: Virtual reality in surgery and medicine. Hosp Technol Ser 13(18):1–48, 1994
Ota D et al.: Virtual reality in surgical education. Comput Biol Med 25(2):127–137, 1995
Olofsson J et al.: Advanced 3D-visualization, including virtual reality, distributed by PCs, in brain research, clinical radiology and education. Stud Health Technol Inform 50:357–358, 1998
Webb G et al.: Virtual reality and interactive 3D as effective tools for medical training. Stud Health Technol Inform 94:392–394, 2003
Farber M et al.: Virtual reality simulator for the training of lumbar punctures. Methods Inf Med 48(5):493–501, 2009
Clarke DB et al.: Virtual reality simulator: demonstrated use in neurosurgical oncology. Surg Innov 20(2):190–197, 2013
Mi SH et al.: A 3D virtual reality simulator for training of minimally invasive surgery. Conf Proc IEEE Eng Med Biol Soc 2014:349–352, 2014
Khavari R et al.: Functional magnetic resonance imaging with concurrent urodynamic testing identifies brain structures involved in micturition cycle in patients with multiple sclerosis. J Urol 197:438–444, 2016
Shy M et al.: Functional magnetic resonance imaging during urodynamic testing identifies brain structures initiating micturition. J Urol 192(4):1149–1154, 2014
Bidgood, Jr WD, Horii SC: Introduction to the ACR-NEMA DICOM standard. Radiographics 12(2):345–355, 1992
John NW et al.: MedX3D: standards enabled desktop medical 3D. Stud Health Technol Inform 132:189–194, 2008
Cox RW: AFNI: what a long strange trip it’s been. Neuroimage 62(2):743–747, 2012
Larobina M, Murino L: Medical image file formats. J Digit Imaging 27(2):200–206, 2014
Schneider CA, Rasband WS, Eliceiri KW: NIH image to ImageJ: 25 years of image analysis. Nat Methods 9(7):671–675, 2012
Xie S et al.: DiffusionKit: a light one-stop solution for diffusion MRI data analysis. J Neurosci Methods 273:107–119, 2016
Karmonik C et al.: Music listening modulates functional connectivity and information flow in the human brain. Brain Connect, 2016. doi: 10.1089/brain.2016.0428
Berlage T: Augmented-reality communication for diagnostic tasks in cardiology. IEEE Trans Inf Technol Biomed 2(3):169–173, 1998
Sato Y et al.: Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization. IEEE Trans Med Imaging 17(5):681–693, 1998
Kawamata T et al.: Endoscopic augmented reality navigation system for endonasal transsphenoidal surgery to treat pituitary tumors: technical note. Neurosurgery 50(6):1393–1397, 2002
Paul P, Fleig O, Jannin P: Augmented virtuality based on stereoscopic reconstruction in multimodal image-guided neurosurgery: methods and performance evaluation. IEEE Trans Med Imaging 24(11):1500–1511, 2005
Lukosch S, Billinghurst M, Alem L et al.: The effect of view independence in a collaborative AR system. Computer supported cooperative work. J Collab Comput 24(6):563–589, 2015
Lukosch S, Billinghurst M, Alem L et al.: Collaboration in augmented reality. Computer supported cooperative work. J Collab Comput 24(6):515–525, 2015
Electronic supplementary material
Supplementary material 1 (MP4 81.0 MB)
Supplementary material 2 (MP4 87.9 MB)
Karmonik, C., Boone, T.B. & Khavari, R. Workflow for Visualization of Neuroimaging Data with an Augmented Reality Device. J Digit Imaging 31, 26–31 (2018). https://doi.org/10.1007/s10278-017-9991-4