Introduction

Whenever we speak of Virtual Reality, we automatically think of stereoscopic goggles; however, goggles are merely the tip of the iceberg of this technology, since specific software is required in order to enjoy immersive experiences [16].

There is a wide range of development tools and programming languages focused on the development of mobile applications. The same is true of Virtual Reality, where there are different development environments orientated towards the implementation of Virtual Reality systems, such as Unreal and Unity. These environments were not created with Virtual Reality in mind, but rather were focused on the development of video games. However, since creating a video game and creating a Virtual Reality experience are very similar tasks, these tools (which are not the only ones) are used for both purposes [7-14].

Our work groups currently have a large number of 3-D anatomical models of the human body, all of excellent quality and with an exceptional degree of realism. We have generated a range of 3-D anatomical models of different parts of the human body based on radiological section images obtained through imaging techniques such as computed tomography and MRI. These images, in the standard Digital Imaging and Communications in Medicine (DICOM) format, were processed using commercially available medical image processing software for subsequent handling in computerised technological environments.

Different Virtual Reality applications are currently available on the market which allow users to move inside different parts of the human body using stereoscopic vision systems with 3-D goggles. Many of these developments use video recorded in 360°, which is then displayed with Virtual Reality goggles. These videos cannot be viewed in a normal way, since the images are distorted: a Virtual Reality display places the video images inside a sphere, with the camera located in the centre, moving in line with the movements users make with their head [8, 11, 12, 14]. The problem with 360° videos is that the user does not really get a three-dimensional vision of the bones with a sense of depth, so they cannot appreciate them as if they were real.

The aim of this study is to illustrate the teaching potential of applying Virtual Reality in the field of human anatomy, where it can be used as a tool for education in medicine.

The differences between our development and other procedures are the following: we provide the possibility of interaction in order to make decisions on which explanations to display within the virtual environment; the whole experience is in 3-D; and we provide interactive educational tools, where users have to assemble a skull from its different bones by moving their own hands and applying their knowledge. In the future, this ability to interact will make it possible to simulate surgical interventions, allowing the student or doctor to train in a virtual environment and carry out a simulated procedure without using a real body.

Material and methods

Our application is developed for most of the different goggles on the market: Samsung Gear VR, Oculus Rift or Cardboard. Cardboard is a Virtual Reality platform developed by Google that allows most current smartphones to be used to experience Virtual Reality without spending too much money, as these headsets are available on the market for less than five euros.

For the first time in history, we find ourselves in the perfect scenario for the proliferation of Virtual Reality systems in our society, since headsets can be acquired at a very low cost [16], making them widely accessible.

Our 3-D model of the cranium was developed using an Asteion computed tomography scanner (Toshiba Medical Systems) at the Complejo Hospitalario Universitario de Salamanca, following the protocol for the study of the cranium: one acquisition in anteroposterior projection and one in lateral position.

The DICOM raw data files provided by the equipment were used to reconstruct a volume, which was saved in the ANALYZE 7.5 format.

An algorithm known as marching cubes was then applied, obtaining a triangular mesh model of the surface of each of the cranial structures. Given the high resolution of the images, the mesh was simplified and smoothed, obtaining polygonal models which could then be edited more easily.
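
The simplification and smoothing were carried out with the medical image processing software mentioned above; purely as an illustration of the kind of operation involved, the following C# sketch applies one pass of Laplacian smoothing to a Unity mesh, moving each vertex towards the average of its neighbours (the class name and smoothing factor are illustrative choices of ours, not part of our actual pipeline):

    using System.Collections.Generic;
    using UnityEngine;

    // Illustrative sketch only: one pass of Laplacian smoothing over a Unity mesh.
    // The actual simplification and smoothing were done with commercial software.
    public class MeshSmoother : MonoBehaviour
    {
        [Range(0f, 1f)] public float factor = 0.5f; // how far each vertex moves towards its neighbour average

        void Start()
        {
            Mesh mesh = GetComponent<MeshFilter>().mesh;
            Vector3[] vertices = mesh.vertices;
            int[] triangles = mesh.triangles;

            // Collect the neighbours of each vertex from the triangle list.
            var neighbours = new HashSet<int>[vertices.Length];
            for (int i = 0; i < vertices.Length; i++) neighbours[i] = new HashSet<int>();
            for (int t = 0; t < triangles.Length; t += 3)
            {
                int a = triangles[t], b = triangles[t + 1], c = triangles[t + 2];
                neighbours[a].Add(b); neighbours[a].Add(c);
                neighbours[b].Add(a); neighbours[b].Add(c);
                neighbours[c].Add(a); neighbours[c].Add(b);
            }

            // Move each vertex towards the average position of its neighbours.
            var smoothed = new Vector3[vertices.Length];
            for (int i = 0; i < vertices.Length; i++)
            {
                if (neighbours[i].Count == 0) { smoothed[i] = vertices[i]; continue; }
                Vector3 average = Vector3.zero;
                foreach (int n in neighbours[i]) average += vertices[n];
                average /= neighbours[i].Count;
                smoothed[i] = Vector3.Lerp(vertices[i], average, factor);
            }

            mesh.vertices = smoothed;
            mesh.RecalculateNormals();
        }
    }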

The whole 3-D model of the cranium was generated by composing the images of each of the slices obtained by computed tomography.

Once the 3D model had been generated, we began programming the virtual experience. For this purpose, the Unity3D video game engine was used, which provides a programming environment with a series of tools and functions that help programmers design virtual experiences in both 2D and 3D.
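
As a minimal illustration of how such an experience is scripted in Unity3D, the following sketch orbits the camera around the skull model, as in the guided tour; the field names and speed are illustrative assumptions rather than extracts from our source code:

    using UnityEngine;

    // Illustrative sketch: orbit the camera around the skull during the guided tour.
    public class GuidedTourCamera : MonoBehaviour
    {
        public Transform skull;              // the 3D skull model to orbit around
        public float degreesPerSecond = 10f; // orbit speed

        void Update()
        {
            // Rotate the camera around the skull's vertical axis, keeping it aimed at the model.
            transform.RotateAround(skull.position, Vector3.up, degreesPerSecond * Time.deltaTime);
            transform.LookAt(skull);
        }
    }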

In addition to programming all the animations, camera movements and modifications to the materials of each bone structure, among other elements, it was also necessary to implement the interfaces that allow users or students to navigate through the different options offered by the system. The main menu (Fig. 1) offers users the following options:

  • Guided Tour: Guided tour through the exterior and interior of the human skull.

  • Evaluation test: The evaluation test places the user inside the skull and asks a series of questions, responding to each one by identifying the correct bone.

  • Vault assembly: Assembling the bones of the upper part of the skull using an interactive simulation.

  • Base assembly: Assembling the bones of the lower part of the skull.

Fig. 1 Main menu interface
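
In Unity3D, each of these options can be implemented as a separate scene that the menu loads on demand. A minimal sketch of this idea follows (the scene names are illustrative; each method would be wired to the corresponding menu button):

    using UnityEngine;
    using UnityEngine.SceneManagement;

    // Illustrative sketch: load a separate scene for each main-menu option.
    public class MainMenu : MonoBehaviour
    {
        public void StartGuidedTour()     { SceneManager.LoadScene("GuidedTour"); }
        public void StartEvaluationTest() { SceneManager.LoadScene("EvaluationTest"); }
        public void StartVaultAssembly()  { SceneManager.LoadScene("VaultAssembly"); }
        public void StartBaseAssembly()   { SceneManager.LoadScene("BaseAssembly"); }
    }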

Furthermore, the project we developed includes two different simulation sections in which the user or student must assemble the skull with their own hands. In the first section, they have to assemble the upper part of the skull, choosing its different bone structures and putting them in the right place. In the second, they must assemble the base of the skull, which makes up the different cranial fossae.

The user can also choose between two difficulty levels. In the easier one, the user will see green anchor points that show how each bone should fit into the skull; in the more complex one, the user will not have any indication or help.
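
A simple way to implement these two levels in Unity is to toggle the visibility of the anchor-point markers; the sketch below illustrates this idea (the names are ours, for illustration only):

    using UnityEngine;

    // Illustrative sketch: show or hide the green anchor points depending on difficulty.
    public class DifficultyManager : MonoBehaviour
    {
        public GameObject[] anchorPoints; // green markers showing where each bone fits

        public void SetDifficulty(bool easyMode)
        {
            // In the easier mode the anchor points are visible; otherwise they are hidden.
            foreach (GameObject anchor in anchorPoints)
                anchor.SetActive(easyMode);
        }
    }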

In order to design this interactive simulation experience, it is crucial to recognize the movement of the user’s hand, so we programmed an application that turns any Android device into a Virtual Reality controller. In this way, the user holds the cell phone in their hand and the Virtual Reality system recognizes its movements and reproduces them in real time in the virtual environment.
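
Conceptually, the controller application reads the phone’s orientation sensor and touch state and streams them to the headset over the local network. The following sketch illustrates the sending side (the address, port and packet layout are illustrative assumptions, not our actual protocol):

    using System.Net.Sockets;
    using UnityEngine;

    // Illustrative sketch of the controller app running on the Android phone:
    // read the gyroscope attitude and the touch state, and stream them over UDP.
    public class ControllerSender : MonoBehaviour
    {
        public string headsetAddress = "192.168.1.50"; // illustrative address of the headset phone
        public int port = 9050;                        // illustrative port

        UdpClient client;

        void Start()
        {
            Input.gyro.enabled = true; // turn on the orientation sensor
            client = new UdpClient();
        }

        void Update()
        {
            Quaternion q = Input.gyro.attitude;
            bool pressed = Input.touchCount > 0; // screen press = "grab" signal

            // Pack 4 floats (orientation) plus 1 byte (grab flag) into a small packet.
            byte[] packet = new byte[17];
            System.Buffer.BlockCopy(System.BitConverter.GetBytes(q.x), 0, packet, 0, 4);
            System.Buffer.BlockCopy(System.BitConverter.GetBytes(q.y), 0, packet, 4, 4);
            System.Buffer.BlockCopy(System.BitConverter.GetBytes(q.z), 0, packet, 8, 4);
            System.Buffer.BlockCopy(System.BitConverter.GetBytes(q.w), 0, packet, 12, 4);
            packet[16] = (byte)(pressed ? 1 : 0);

            client.Send(packet, packet.Length, headsetAddress, port);
        }

        void OnDestroy() { client.Close(); }
    }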

Recognizing the user’s movements can be done with different devices, such as Leap Motion, which uses infrared technology to detect the user’s hands and their movements. However, the objective of this project is that anybody can use it without spending a lot. That is why we turn any smartphone into a remote control (Fig. 2), since almost everyone has one, or even several. Therefore, with a simple stereoscopic headset, a smartphone, and a second smartphone, which can be an older model, the user has all the elements necessary to enjoy the Virtual Reality experience of the skull’s anatomical structures and the interactive simulation.

Fig. 2 Using the remote control for interaction
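
On the headset side, a matching sketch receives these packets and applies the phone’s orientation to the virtual hand (again, the packet layout and port are illustrative; a real application must also map the phone’s sensor frame into Unity’s coordinate frame):

    using System.Net;
    using System.Net.Sockets;
    using UnityEngine;

    // Illustrative sketch running on the headset: receive controller packets
    // and apply the phone's orientation to the virtual hand.
    public class ControllerReceiver : MonoBehaviour
    {
        public Transform virtualHand; // the hand model shown in the virtual environment
        public int port = 9050;       // must match the sender

        UdpClient listener;
        IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);

        public bool IsPressing { get; private set; } // exposed so the grab logic can query it

        void Start() { listener = new UdpClient(port); }

        void Update()
        {
            // Drain all packets that arrived since the last frame and keep the newest.
            while (listener.Available > 0)
            {
                byte[] packet = listener.Receive(ref remote);
                if (packet.Length < 17) continue;

                Quaternion q = new Quaternion(
                    System.BitConverter.ToSingle(packet, 0),
                    System.BitConverter.ToSingle(packet, 4),
                    System.BitConverter.ToSingle(packet, 8),
                    System.BitConverter.ToSingle(packet, 12));

                virtualHand.rotation = q;
                IsPressing = packet[16] == 1;
            }
        }

        void OnDestroy() { listener.Close(); }
    }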

Every time the user places a bone, an explanation of it is given. All the explanations are provided in both audio and text, so that people with hearing impairments can also enjoy this simulation experience.
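
In Unity, pairing each explanation’s audio with on-screen text is straightforward; the following illustrative sketch plays the narration and shows the same text as a subtitle (the component and field names are our own illustrative choices):

    using UnityEngine;
    using UnityEngine.UI;

    // Illustrative sketch: when a bone is placed, play its spoken explanation
    // and show the same explanation as on-screen text.
    public class BoneExplanation : MonoBehaviour
    {
        public AudioSource voice; // audio source for the narration
        public Text subtitle;     // UI text element for the written explanation

        public void Explain(AudioClip clip, string text)
        {
            voice.PlayOneShot(clip); // spoken explanation
            subtitle.text = text;    // written explanation for hearing-impaired users
        }
    }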

The C# programming language, an object-oriented language developed by Microsoft, was used to program all the scripts implemented to create the virtual and interactive experience of the simulation. This language is available for programming in Unity3D, and the resulting source code is well-organized and easy to understand.

Results

When entering the cranium through the large hole in the occipital bone (the foramen magnum), it can be seen that its base is staggered over three levels, called cranial fossae: the posterior cranial fossa, the middle cranial fossa and the anterior cranial fossa (Fig. 3).

These fossae are delimited by two boundary lines (Fig. 3), an anterior one and a posterior one. The first, anterior, line is the prolongation of the anterior clinoid apophysis of the lesser wings of the sphenoid through to the side of the cranium, starting in the prechiasmatic sulcus. The second, posterior, line extends from the upper edge of the petrous part of the temporal bone to the back of the sphenoid sella.

Fig. 3 Foramen magnum, with the boundaries of the cranial fossae marked by the virtual animations shown during the virtual experience

The Anterior Cranial Fossa: made up of the frontal, ethmoid (Fig. 4) and sphenoid bones, specifically the lesser wings and the body of this unpaired medial bone of the base of the cranium. It contains the frontal lobes and is the smallest fossa.

Fig. 4 View of the anterior cranial fossa, focused on the medial part of the ethmoid bone

The Middle Cranial Fossa: extends from the lesser wings of the sphenoid to the upper edge of the petrous part of the temporal bone, and contains the sella turcica (where the hypophysis is found). The following orifices are particularly visible in this fossa: the optic foramen (through which the optic nerve passes), the superior orbital fissure, the foramen rotundum, the foramen ovale, the foramen spinosum and the foramen lacerum. All of these orifices are correctly marked with different animations.

The Posterior Cranial Fossa: starts posterior to the superior edge of the petrous part of the temporal bone, the posterior clinoid apophyses and the quadrilateral plate of the sphenoid. It is formed in large part by the basilar, lateral and squamous parts of the occipital bone.

It is the largest and deepest of the three cranial fossae and houses the cerebellum and the brainstem (Fig. 5).

Fig. 5 Posterior cranial fossa with the cerebellum

In the interactive simulation we designed for this virtual experience, the user will be able to see the different bone structures of the skull separated and located around it.
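
One possible way to lay out these bones in Unity, shown purely as an illustration, is to distribute them evenly on a circle around the base of the skull:

    using UnityEngine;

    // Illustrative sketch: distribute the separated bones evenly on a circle
    // around the base of the skull, facing the user's camera.
    public class BoneLayout : MonoBehaviour
    {
        public Transform[] bones;   // the separated bone meshes
        public Transform centre;    // the base of the skull in the middle
        public float radius = 0.6f; // illustrative distance from the centre, in metres

        void Start()
        {
            for (int i = 0; i < bones.Length; i++)
            {
                float angle = i * Mathf.PI * 2f / bones.Length;
                Vector3 offset = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * radius;
                bones[i].position = centre.position + offset;
                bones[i].LookAt(Camera.main.transform); // turn each bone towards the camera
            }
        }
    }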

Since the 3D model was created by separating the meshes of the different bone structures, these could be placed facing the camera. To achieve the most realistic experience possible, it is necessary to apply a material to each bone with a color that reminds the user of the appearance of a real skull. It is also important to study the lighting of the environment, so that the different shapes and cavities can be properly seen.
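
As an illustration of these two aspects, the sketch below applies a bone-like color to every bone renderer and creates a directional light (the particular color and light values are illustrative assumptions):

    using UnityEngine;

    // Illustrative sketch: give every bone a material with a bone-like color
    // and add a directional light so the shapes and cavities are visible.
    public class SkullAppearance : MonoBehaviour
    {
        public Renderer[] boneRenderers;                          // one renderer per bone structure
        public Color boneColor = new Color(0.93f, 0.89f, 0.80f); // illustrative ivory tone

        void Start()
        {
            foreach (Renderer r in boneRenderers)
                r.material.color = boneColor;

            // Illustrative light setup so the cavities can be properly seen.
            Light keyLight = new GameObject("KeyLight").AddComponent<Light>();
            keyLight.type = LightType.Directional;
            keyLight.intensity = 1.1f;
            keyLight.transform.rotation = Quaternion.Euler(50f, -30f, 0f);
        }
    }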

The user has to go to the cranial vault, where they will see the bones of that part separated, with the base of the skull in the middle. They then grab a bone and place it where it belongs. For this, a smartphone with our application installed is used to detect the movements needed to place the bone properly.

To pick up a bone, the user simply places the virtual hand on top of the bone (Fig. 6) and presses the screen of the smartphone being used as a remote. To let go of the bone in the right place, they stop pressing the screen. The user experience is pleasant and very simple: users intuitively learn to manage the system in just a few seconds, something that was considered a high-priority element during the design.

Fig. 6 Interactive simulation experience
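
The logic behind this interaction can be summarised in a short sketch: while the remote’s screen is pressed and the virtual hand overlaps a bone, the bone follows the hand; on release close enough to its anchor point, it snaps into place. The sketch below illustrates this, reusing the IsPressing flag from the receiver sketch above (all distances and names are illustrative):

    using UnityEngine;

    // Illustrative sketch of the grab-and-place logic for the assembly sections.
    public class BoneGrabber : MonoBehaviour
    {
        public ControllerReceiver controller; // receiver sketch shown earlier
        public Transform hand;                // the virtual hand
        public Transform[] bones;             // the separated bones
        public Transform[] anchors;           // target pose for each bone (same order)
        public float grabRadius = 0.08f;      // illustrative distances, in metres
        public float snapRadius = 0.05f;

        Transform held;
        int heldIndex = -1;

        void Update()
        {
            if (controller.IsPressing && held == null)
            {
                // Try to pick up the bone the hand is touching.
                for (int i = 0; i < bones.Length; i++)
                {
                    if (Vector3.Distance(hand.position, bones[i].position) < grabRadius)
                    {
                        held = bones[i];
                        heldIndex = i;
                        break;
                    }
                }
            }
            else if (controller.IsPressing && held != null)
            {
                held.position = hand.position; // the bone follows the hand
            }
            else if (!controller.IsPressing && held != null)
            {
                // Released: snap into place if close enough to the right anchor.
                if (Vector3.Distance(held.position, anchors[heldIndex].position) < snapRadius)
                {
                    held.position = anchors[heldIndex].position;
                    held.rotation = anchors[heldIndex].rotation;
                }
                held = null;
                heldIndex = -1;
            }
        }
    }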

Conclusions

One of the most interesting aspects of the Virtual Reality technique is that the generated virtual environment can be manipulated and users can interact with it [1, 9, 12-15]. Consequently, we can not only visualise a virtual world (in our case a cranium and its constituent parts) but can also manipulate it in order to simulate a specific surgical intervention. The virtual cranium will behave like an actual cranium throughout the operation, whilst the user manipulates it in a highly realistic manner through the movement of his or her hands. The aim is to simulate an operation in the most realistic way possible.

To achieve this, we use remote control devices which transmit information on the movements made by the user to the goggles, which in turn interpret these data and display them so the user can see his or her own hands moving and carrying out the operation in the virtual environment.

Although several technical aspects remain to be resolved before our brain can experience complete presence (such as eliminating certain physical discomforts related to the use of these VR goggles), we firmly believe that this type of technological spatial vision resource will undoubtedly contribute to improving medical training processes. More recently, new hardware elements such as gloves have started to appear, which can be used to interpret the movements not only of the hand but also of each individual finger. This will undoubtedly improve the user experience and make the simulation even more authentic and realistic.

We believe that a carefully designed visual experience such as the one we have developed can help users (students) to have a sense of control over the environment –however fictitious– and facilitate learning and training processes in the medical sphere.

This undoubtedly represents the future of training in medicine. However, the system will not be limited to training alone, since it can also be used by surgeons to carry out pre-surgery studies and even run through a complete simulation of the intervention, making it possible to accurately and safely predict possible problems which may appear during surgery. We therefore conclude this work by indicating that the use of these stereoscopic vision technologies in medical training facilitates and helps improve training in clinical and surgical skills.