1 Introduction

Anatomy is considered one of the most essential courses for medical students to develop their clinical skills (Fig. 1). A solid understanding includes, for instance, how the human skeleton is built up, the spatial relationships between body parts, and the features of different organs. Traditionally, anatomy is taught either by dissecting human cadavers or from textbooks. Dissection is considered the most powerful means of presenting and learning the human body, since it allows direct observation of and interaction with real bodies. However, this method has several disadvantages. First of all, preserving human bodies and maintaining dissection rooms in accordance with European laws is extremely costly (Turney 2007). Besides, due to the limited space around a dissection table, only a small group of students can learn with one cadaver, and the limited number of available cadavers does not allow a large number of dissections, which limits the opportunities for students to receive sufficient anatomy education. Finally, these regulatory constraints and the scarcity of cadavers do not enable self-directed learning, which is identified as an essential skill for medical graduates (Ramnarayan and Hande 2005). Therefore, most teaching of anatomy is still performed with books and online materials, which are usually easy to reach and cheap. Another benefit is that the teaching materials in available books and other supporting resources can be traced through the given references or other online sources, which makes self-directed learning possible. This method thus makes it easy for teachers to organize courses and teaching materials, and for students to systematically acquire the required knowledge. However, most traditional textbooks focus on 2D images, which are not sufficient to replace human dissection. 3D perception, compared to 2D, provides a better view of the depth information of the visualized data, which is very important for understanding the spatial location and relationships of body parts and their composition (Tam et al. 2009; Drake et al. 2010). Another disadvantage is that books mostly show only a specific, and typically a perfect, example of the human body. Therefore, it is not possible to learn from real cases. Although books are sometimes enhanced with 3D content, for instance, existing 3D augmented reality books (Wu et al. 2013) can pop up a 3D virtual object on top of the presented material, the interactive 3D experience for students remains very limited.

Fig. 1

VeLight: A scene of the lower legs is shown, with a label for each object. Students can view the scene from different sides and distances by rotating and translating it with the 3D stylus of the zSpace system

Interactive 3D visualizations have made a significant impact in many research fields. Being intuitive and easy to use, these systems are becoming more and more popular. An overview of integrating visualization and interaction in scientific data exploration is given by Keefe (2010) and Keefe and Isenberg (2013). With the rapid growth of advanced VR and AR technologies, many interactive visualization techniques based on them have been developed (Bach et al. 2016; Kreylos et al. 2006). These technologies provide an immersive 3D view of and direct interaction with the presented data, which greatly supports researchers in understanding and further exploring their datasets. This trend has recently been confirmed by a great number of effective data exploration techniques in, for example, medical imaging (Johnson et al. 2016; Kim et al. 2017; Sotiropoulos et al. 2012; Jackson et al. 2013), astronomy (Yu et al. 2012, 2016; Fu et al. 2010), molecular biology, and experimental particle physics. Through expert evaluations and user studies on fundamental exploration tasks, all these works showed that, compared to traditional interaction techniques, interactive visualization based on VR and AR can give more efficient and effective results.

However, there are two research questions that have to be discussed before integrating VR/AR technologies into scientific exploration.

  1. Q1:

    What exploration tasks in the work practice can be improved by integrating VR/AR techniques?

  2. Q2:

    How can VR/AR techniques support students’ data exploration needs?

These questions are essential for designing the interaction methods in such fundamental studies, since a new interactive visualization system may affect the established learning workflow.

In this application paper, we focus on VR techniques since they provide simple interaction techniques for 3D manipulation of 3D spaces and objects. We developed an interactive visualization tool for computed tomography (CT) based anatomy teaching and training, with which students can study anatomy on a 3D VR display with intuitive interaction methods. The VR device that we use is the zSpace. It consists of a stereographic screen, head-tracked eyewear and a stylus, as shown in Fig. 2. With the help of the head-tracking system and the 3D glasses, the zSpace system determines the position of the user and provides correct 3D perception. Like the glasses, the stylus is also tracked in 6DOF, so the user can have full 6DOF interaction with virtual objects.
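The zSpace SDK handles this head-coupled projection internally; purely as an illustration of the underlying idea (and not zSpace API code), the following sketch shows how an asymmetric, off-axis projection frustum can be derived from a tracked eye position for a fixed screen. All names and parameters are our own assumptions.

    // Illustrative sketch only (not zSpace SDK code): computing an off-axis
    // projection frustum from a tracked eye position. The physical screen is
    // assumed to lie in the z = 0 plane, centered at the origin, with
    // half-width hw and half-height hh; the eye is at (ex, ey, ez), ez > 0.
    struct Frustum { float left, right, bottom, top, zNear, zFar; };

    Frustum offAxisFrustum(float ex, float ey, float ez,
                           float hw, float hh, float zNear, float zFar)
    {
        float s = zNear / ez;          // project screen edges onto the near plane
        Frustum f;
        f.left   = (-hw - ex) * s;
        f.right  = ( hw - ex) * s;
        f.bottom = (-hh - ey) * s;
        f.top    = ( hh - ey) * s;
        f.zNear  = zNear;
        f.zFar   = zFar;
        return f;
    }
    // For stereo, the function is evaluated twice, with the eye position
    // shifted by plus/minus half the interpupillary distance.

The resulting frustum changes every frame as the tracked glasses move, which is what keeps virtual objects registered to the physical screen.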

Fig. 2

The zSpace 200 device. Image from https://en.wikipedia.org/wiki/ZSpace_(company)

2 Related work

Our work relates to existing anatomy education methods and advanced interactive visualization approaches in medical training, teaching and data exploration, including VR/AR techniques in medical practice.

2.1 Anatomy education

Anatomy is considered one of the most critical subjects that medical students have to master. A deep understanding of anatomy is essential for safe clinical practice in surgery (Moxham and Plaisant 2007). Estai and Bunt (2016) review the range of teaching resources and strategies used in anatomy education. In general, there are three main teaching/training modes: cadaver dissection, lecture-based teaching with textbooks and medical images, and computer-based learning, which we further discuss below.

Dissection has been the anatomy teaching method for over 400 years (Azer and Eizenberg 2007). Because of its irreplaceable features, such as direct experience, deep learning, and the practice of manual skills (Azer and Eizenberg 2007; Fruhstorfer et al. 2011), dissection has been regarded as integral and indispensable in the process of training professional doctors (Korf et al. 2008; Netterstrøm 2008). From medical students’ perceptions of anatomy education, there is also no doubt that dissection is one of the most highly rated teaching resources (Böckers et al. 2010). However, as we discussed in the introduction, this method has several disadvantages. The main concern is that cadavers for dissection are costly and time-consuming (Turney 2007; Aziz et al. 2002). Therefore, part of anatomy education has shifted from dissection to textbooks, which addresses the cost issue and helps students prepare for dissection sessions. However, textbooks cannot completely replace human dissection; they merely serve as a supplementary tool because of their limited 2D materials. Another teaching method uses plastic models of human bodies. Although this solves the 2D perception issue, a model can only represent one example of the human body, whereas in medical practice many different models of body parts and many real cases need to be studied at different levels of detail (Korf et al. 2008). Thus, Fruhstorfer et al. (2011) conclude that plastinated prosections are an adequate resource for the early stages of undergraduate training, but the study of wet cadaveric material is still needed for a further learning experience. Estai and Bunt (2016) propose that the most effective way to motivate students in modern anatomy education is through multimodal teaching paradigms, combining multiple pedagogical resources that complement one another. Finally, 3D4Medical is a cross-platform anatomy education application that helps people understand anatomy through intuitive visualization and interactivity. Users can rotate, zoom, pan and select 3D models on different platforms. An interesting feature is that students can view a 3D model and share their views in real time in a multi-user AR environment. In the next section, we focus on VR/AR visualizations in medical education.

2.2 VR/AR visualizations in medical education

Computer-based learning is another popular method in medical training and teaching. There are two main reasons: first, the use of medical imaging in anatomy education makes it possible for students to learn from real cases instead of the given examples in textbooks; second, it increases students’ interest in studying anatomy (Pabst 2009; Reeves et al. 2004). Yet, there is no clear proof that computer-based learning is a better approach than traditional teaching methods (McNulty et al. 2003; Khot et al. 2013; Estai and Bunt 2016). Research (Tam et al. 2010) shows that computer-based learning serves better as a supplement to, rather than a replacement of, the traditional teaching methods.

Data visualization aims to help researchers understand data through visual representations. This is achieved by visualization methods that are intuitive, can easily enhance important structures, and improve the iterative, interactive procedure of exploration and analysis. Because of these advantages, most computer-based learning methods in medical education are based on 3D visualizations of anatomical structures. Data visualization also encompasses user interaction and analysis (Ware 2004). With the rapid development of VR/AR technologies, interactive visualizations based on these technologies have made a significant impact in many different disciplines. Many studies (McNulty et al. 2003; Rizzolo and Stewart 2006) have confirmed the usefulness of VR in medical education. Using a touch-based surface, Fyfe et al. (2013) present a virtual 3D dissection platform to explore anatomical body structures. In CINCH (Akers 2006), Akers et al. use bimanual input (a trackball and a tablet PC pen) to manipulate and explore 3D models.

Recently, Besançon et al. (2021) presented a survey of the state of the art in spatial interfaces for 3D visualization. Their work covers visualization tasks as well as interaction paradigms, such as tangible interaction, mid-air gestures, haptic interfaces and hybrid interaction paradigms. With advanced 3D VR head-mounted displays, for instance the Oculus Rift or HTC VIVE, users can see, analyze and explore complex data visualizations in an immersive 3D virtual environment. Ye et al. (2021) present an immersive analytics system to assist experts in exploring and analyzing movement trajectories. Their system combines 2D and 3D visualizations and provides users with a first-person view of trajectory data in badminton; users can also use the VR controller to select and analyze 3D trajectories. Similarly, VR (3D representations, intuitive 3D interactions) can offer benefits in analyzing and exploring other sports data, such as visualizations for soccer (Xie et al. 2021) and table tennis matches (Wang et al. 2021). Mathur (2015) introduces a VR application, using an Oculus Rift in conjunction with Razer Hydra hand controllers, for medical training and instruction purposes. Park et al. (2014) design and evaluate the efficacy of a low-cost VR system for training surgical skills of novice trainees, and their results show preliminary evidence of the efficacy of the VR system. Sousa et al. (2017) use a VR setup and an interactive surface to support radiologists in exploring 3D data. Using a hand-tracking system, such as the Leap Motion, Theart et al. (2017) propose gesture-based methods for microscopy data analysis in data scaling and rotation tasks. Khadka et al. (2018) propose a tangible multimodal interface for immersive visualizations: they use tangible props worn around the wrist to represent individual slices of data, and users can add and remove slices from the visualization by manipulating the props.

AR is another immersive 3D visualization technology. Different from VR, the distinguishing aspect of AR is that it combines virtual objects with the real environment, which shows great potential for use in the operating room. Various surgical fields, including craniomaxillofacial surgery, orthopaedics, spine surgery, neurosurgery, laparoscopic surgery and biopsy procedures, are exploring the potential of AR (Mischkowski et al. 2005). Kalavakonda et al. (2019) propose that AR could be used for aiding tumor resection in skull-base surgery. They use the Microsoft HoloLens to visualize computed tomography (CT) imaging superimposed in 3D on the patient. Meulstee et al. (2019) present an AR image-guided surgery (IGS) system and compare its accuracy to that of a conventional IGS system. Moreover, Glas et al. (2021) explore the possibility of translating a virtual surgical plan to the operating room; the results of their preliminary study show that the AR-based technique enables typical navigation tasks to be performed faster and more accurately compared to the traditional method. A remaining challenge is the manipulation of 3D data and cutting planes with axis-based constraints (Blum et al. 2012): in medical visualizations, users often need to manipulate or zoom 3D data, and this manipulation often needs to be constrained along a single direction. Another interesting approach for our work is the lightweight tangible 3D interface for the exploration of thin fiber structures presented by Jackson et al. (2013). In their work, they employ a depth-sensing camera to track small-scale gestures with a paper prop and then map the movement of the tracked prop to manipulations of 3D bioimaging datasets. Their approach to data manipulation interests us since the zSpace stylus, which we use to interact with virtual objects, has the same shape as their paper prop.

However, VR and AR head-mounted displays are usually costly, which limits their use in daily education. Furthermore, VR and AR visualization and interaction methods are very case-oriented; for instance, it is difficult to answer 2D text questions by writing within the purely 3D environment provided by these head-mounted displays. Thus, before integrating VR/AR technology into our work, we first discuss the two research questions, Q1 and Q2, proposed in the introduction.

3 System design

To assess the need for introducing advanced VR/AR technology into medical education, we conducted an initial interview with medical experts and teachers. We aimed to better understand the general requirements and the fundamental tasks in medical training and teaching, based on which we could discuss our research questions (Q1 and Q2) and design our system.

3.1 Participants

We invited three medical experts to the interview. All of them are currently working at the local University Medical Center. Two of them had more than 10 years of clinical experience in orthopedics and the other had 15 years of research experience in medical informatics. In the interview, the medical experts were first asked to present the general tasks and requirements, including their workflow in medical education, the teaching materials as well as the question types. After that, we discussed their tasks and approaches.

3.2 General requirements and tasks

Table 1 General requirements and tasks as specified by domain experts

Based on the interview, we list the most fundamental tasks required in medical training and teaching in Table 1. We now explain some of these tasks in more detail.

Visualization Data visualizations in textbooks are usually provided as static images. This gives only a very limited view of a perfect example of the human body. Thus, we need our system to give a 3D perception of the data, which can easily reveal relevant structures and improve the iterative, interactive procedure of analysis. Our system should support users in observing both volumetric scans (CT, MRI, PET), typically provided as DICOM data (Mildenberger et al. 2002) (V2), and the inner structure of the body, which is usually a set of 3D meshes (V1 and V3). Moreover, volume rendering is typically used to render volumetric data, and a transfer function is used to assign RGB and alpha values to every voxel in the volume. Users can edit it to look at different anatomical structures, such as the skin, the skull or the vessels. Thus, the system should provide a colormap for the transfer function (V6) and a function to edit it (I6).
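To make the role of the transfer function concrete, the sketch below shows one common way such a colormap can be represented: a piecewise-linear interpolation between user-editable control points that map a normalized voxel intensity to an RGBA value. The names are illustrative and not taken from VeLight's implementation.

    // Minimal sketch of a 1D transfer function: piecewise-linear interpolation
    // of user-editable control points mapping a normalized voxel intensity to
    // an RGBA value. Names are illustrative only.
    #include <array>
    #include <vector>

    struct ControlPoint { float intensity; std::array<float, 4> rgba; };

    std::array<float, 4> lookup(const std::vector<ControlPoint>& points, float x)
    {
        // points is assumed to be non-empty and sorted by intensity
        if (x <= points.front().intensity) return points.front().rgba;
        if (x >= points.back().intensity)  return points.back().rgba;
        for (size_t i = 1; i < points.size(); ++i) {
            if (x <= points[i].intensity) {
                const ControlPoint& a = points[i - 1];
                const ControlPoint& b = points[i];
                float t = (x - a.intensity) / (b.intensity - a.intensity);
                std::array<float, 4> c;
                for (int k = 0; k < 4; ++k)
                    c[k] = (1.0f - t) * a.rgba[k] + t * b.rgba[k];
                return c;
            }
        }
        return points.back().rgba;
    }

Editing the colormap (I6) then amounts to adding, removing or dragging control points, after which the lookup table used by the volume renderer is rebuilt.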

Interaction Interactivity supports users in learning the required knowledge while they are “playing” with the data. Users receive immediate feedback, which improves their ability to understand the presented data. Therefore, our system should provide intuitive interactions for 3D data manipulation. For instance, users should be able to highlight a dedicated object with a single click on the object (I4), and pick up an object and put it at the right position and orientation in the scene (I3).

Modes As a tool for teachers and a training system for students, the system should have two modes: a self-assessment mode (M1) and an examination mode (M2). In both modes, teachers should be able to arrange questions (or tasks) and set specific configuration options for them. For instance, they can provide possible answers for multiple-choice tasks and specify the target object, and its correct position, for the object selection and placement tasks. Different from the examination mode, in the self-assessment mode students should receive more feedback on their actions and should have the possibility to retry the tasks.
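As a rough illustration of how the two modes could differ in handling a submitted answer (a hypothetical sketch under our own assumptions, not VeLight's actual logic):

    #include <fstream>
    #include <string>

    enum class Mode { SelfAssessment /* M1 */, Examination /* M2 */ };

    struct Submission { std::string studentId; int taskIndex; bool correct; };

    // Self-assessment: give immediate feedback and allow a retry when wrong.
    // Examination: only record the answer so teachers can extract scores later.
    bool allowRetry(Mode mode, const Submission& s, const std::string& logFile)
    {
        if (mode == Mode::Examination) {
            std::ofstream log(logFile, std::ios::app);
            log << s.studentId << ';' << s.taskIndex << ';' << s.correct << '\n';
            return false;
        }
        return !s.correct;
    }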

Course management As a teaching tool, the system should give teachers a large degree of freedom in designing their courses (C1) using the provided elements. The system should offer different question/task types, such as object selection (T3) and placement (T4). Teachers who use the system in their classes should have the freedom to design their courses as they want; pre-designed courses would limit the flexibility of the system. Moreover, this flexibility may come in handy when the system is potentially expanded to other uses, such as visualizing surgical procedures to patients. To provide teachers with this flexibility, the application should be able to read a specification file that describes which question types the program should present.

Fig. 3

The users can adjust the colormap of the transfer function to look at different anatomical structures

3.3 Discussion with domain experts

In the interview, the domain experts were impressed by the default demo provided by the zSpace system, which shows anatomical objects and supports users in exploring human body systems. They further expressed a potential need for such a novel interactive visualization tool in medical practice, not only for teaching and training but also for surgery planning. We discussed which exploration tasks in their daily practice or in teaching could potentially benefit from learning in a virtual environment (Q1). The medical experts commented that understanding and memorizing the spatial relationships of anatomical structures is strongly required in medical studies. However, the 2D representations of body structures given by the images in textbooks are unintuitive and require strong imagination from students to understand the spatial structure in 3D. On this point, 3D depth perception can indeed help them to understand the data structure. Supported by the tracking system, VR systems also provide the ability to look around the presented data and always show the correct perspective. In addition, because of the intuitive interaction methods, a large degree of freedom can be provided in designing tasks for the same purpose. For instance, to study the position of a dedicated object, we can either ask students to select the object, or let them pick up the object and place it at the right position and orientation in the body. Thus, we see potential value in integrating VR interactions for understanding spatial anatomical structures.

The other question is how VR techniques can support researchers’ data exploration tasks (Q2). More specifically, we want to make sure that the proposed 3D interaction techniques support users in studying and memorizing spatial structures. First of all, one benefit of the zSpace, and of most VR technologies, is that the 3D view and depth perception can greatly help users understand the data. Second, the zSpace stylus is held like a pen, and users can move their hand naturally to inspect an object from all directions. This is a very intuitive design that we believe most users can remember how to use without any difficulty. However, we do not want to limit the interaction choices. Instead, we want to provide more natural interaction techniques (such as the lightweight tangible 3D interface of Jackson et al. (2013)) in case these can assist users in better understanding their data. Another design choice is that, to avoid memory overload, we use only one button on the stylus.

4 VeLight

In the design of our system and the interaction techniques, we were guided by several complementary goals. We designed our system to:

  1. G1:

    support the focus on data exploration,

  2. G2:

    support the manipulation of the scene space as well as the selected object,

  3. G3:

    encapsulate all six degrees of freedom for 3D interaction,

  4. G4:

    provide both large-scale and precise manipulations,

  5. G5:

    be intuitive and require little learning time,

  6. G6:

    be flexible to apply different visualization methods on the target objects, and

  7. G7:

    be easily extensible with different task types, courses and teaching materials.

As an interactive visualization tool for medical education, we deliberately reserve a large area of the interface for presenting the visualized data (G1). This is a crucial design decision since a large visualization can show more details of the data, which helps users fully concentrate on data exploration. We also want to enable users to manipulate both the whole scene and the selected object (G2). For instance, students can view the whole body, including the 3D volumetric CT scan as well as the bones, through large-scale interaction. However, precise interaction is often needed (G4), especially in task T4, when an object has to be moved to the correct location. Users can also select and manipulate a dedicated object in the scene; specific examples are tasks I2-I4 in Table 1. Moreover, we want to enable users to control all degrees of freedom (G3) in an intuitive manner, so that our interaction technique can take advantage of the setup of the VR device. As discussed above, we want to keep the interaction technique simple and use only one button on the stylus, so that little learning time is required (G5); the button is used for holding an object. Furthermore, we want to provide a large degree of freedom in designing the courses (G7). As listed in Table 1, a variety of question types should be available for exploring anatomy, including object manipulation tasks and multiple-choice questions, and it should be possible to include all of them in the system. Finally, VeLight needs to load both volume and mesh data and visualize them on the zSpace screen; thus, we also strive to support different visualization methods on the target objects (G6). Users can adjust the colormap of the transfer function to obtain different visualization effects (see Fig. 3).

Fig. 4

The interface: the right part of the screen is the UI Area, the left part is the Scene Area. In this particular task, the user has to select the highlighted object in the volume rendering. The UI Area contains the question, the transfer function of the volume, the multiple choices and a submit button

Fig. 5

The user is moving the bone in the direction of its correct position. a The red outline indicates that the object is not yet considered to be correctly placed; b The user moves the bone to the correct position in the scene. The green outline indicates this to the user

4.1 User interface

Figure 4 presents the interface of VeLight. The left part of the interface shows the scene, including the volume data, the objects and the highlighted target objects. The scene is visualized using the stereo rendering support of the graphics library, and the correct projection is applied based on the spatial location of the user, which is tracked through the zSpace 3D glasses. The right side of the screen is dedicated to a 2D user interface (UI), which explains the task that the user has to perform and provides the necessary UI elements. For the preparation of teaching materials, the system provides various types of questions/tasks, such as multiple-choice questions, object selection and object placement. Teachers can design their courses, teaching and assessment forms: they can upload their own datasets, edit questions, and arrange the course/lecture structure. Students can attend the course and interact with the teaching materials (3D data), through which they can understand and remember the body parts, the features of different organs, etc. They can also do a self-assessment (M1) and receive immediate feedback. In the examination mode (M2), when students complete a task, the application records their answers, so that teachers can extract students’ scores from the system.

4.2 Interaction techniques

VeLight offers interaction with the data through the zSpace stylus. Here we apply the concept of an adaptive-length stylus in the interaction technique. The stylus is visualized as a ray emerging from the stylus tip in the scene. When the ray does not intersect an object, it is drawn with a predefined length and with a ball at its end. Otherwise, if the ray intersects an object, the length of the ray is adjusted according to the nearest intersection point, so that the ball is located exactly on the intersection point (see Fig. 5). This makes it easy to grab objects that are far away; when users want precise manipulation, they move the stylus close to the target object, so that the short ray allows the object to be picked up and manipulated with high precision. The stylus acts as a proxy for interacting with a 3D object. To further support users in interacting with 3D objects, we also provide different modes of 3D interaction. One possibility is to regard the stylus itself as the target object: when users move the stylus, the target object is manipulated accordingly.
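A minimal, self-contained sketch of this adaptive ray length follows (our own illustration; scene objects are approximated here by bounding spheres, which is not necessarily how VeLight computes intersections):

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Sphere { Vec3 center; float radius; };

    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

    // Distance along the normalized direction d to the first hit with sphere s,
    // or a negative value if the ray misses.
    static float raySphere(const Vec3& o, const Vec3& d, const Sphere& s)
    {
        Vec3 oc = sub(o, s.center);
        float b = dot(oc, d);
        float c = dot(oc, oc) - s.radius * s.radius;
        float disc = b * b - c;
        if (disc < 0.0f) return -1.0f;
        return -b - std::sqrt(disc);
    }

    // Length used to draw the stylus ray: the nearest hit distance if any
    // object is intersected, otherwise the predefined default length. The
    // ball at the ray end then sits exactly on the intersection point.
    float stylusRayLength(const std::vector<Sphere>& objects,
                          const Vec3& tip, const Vec3& dir, float defaultLength)
    {
        float best = defaultLength;
        for (const Sphere& s : objects) {
            float t = raySphere(tip, dir, s);
            if (t > 0.0f && t < best) best = t;
        }
        return best;
    }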

4.3 Course management

The specification file contains a list of tasks that the application presents to students. For each task, teachers specify the type of the task and the corresponding setup. The specification includes:

  • Whether a volume (such as a body) is shown,

  • Which objects (such as bones) should be shown,

  • Which objects should be highlighted,

  • Which objects should be enclosed by a sphere,

  • Whether labels of objects should be shown.

Furthermore, they have to set the specific configuration options for the questions, depending on the task type. For instance, possible answers and the correct answer should be given in multiple choice tasks; the target object should be specified in the object placement and object selection tasks.
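To make the structure of such a task entry concrete, a hypothetical in-memory representation could look as follows (field names are our own assumptions; the actual file format used by VeLight is not reproduced here):

    #include <string>
    #include <vector>

    enum class TaskType { MultipleChoice, ObjectSelection, ObjectPlacement };

    struct TaskSpec {
        TaskType type;
        bool showVolume = false;                 // show the CT volume (body)
        std::vector<std::string> visibleObjects; // e.g. individual bones (meshes)
        std::vector<std::string> highlighted;    // objects drawn with a highlight
        std::vector<std::string> enclosedBySphere;
        bool showLabels = false;

        // Type-specific configuration:
        std::vector<std::string> choices;        // multiple choice: possible answers
        int correctChoice = -1;                  // multiple choice: index of answer
        std::string targetObject;                // selection / placement tasks
    };

    using CourseSpec = std::vector<TaskSpec>;    // the file describes a list of tasks

A course can then be treated as an ordered list of such entries, presented to the students one task at a time.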

4.4 Implementations

We implemented our system in C++ and render the visualizations using OpenSceneGraph (OSG). OSG is an open-source, high-performance 3D graphics toolkit. We chose OSG as our graphics toolkit for two reasons: an integral part of VeLight is concerned with volumes, and OSG has a powerful plugin for volume rendering; in addition, OSG supports stereo rendering with quad buffers, which is important for the depth perception of VeLight on the zSpace system. In the design of VeLight, all objects (data in the scene and user interface components) are divided into two groups. The first group contains the objects that are part of the scene and react to grabbing and moving with the stylus. The second group contains the objects that do not move when the stylus manipulates the scene, such as the background, the user interface and the visualization of the stylus itself.
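A minimal sketch of this two-group organization in OSG, assuming a standard osgViewer setup (illustrative only, not VeLight's actual code):

    #include <osg/Group>
    #include <osg/MatrixTransform>
    #include <osg/DisplaySettings>
    #include <osgViewer/Viewer>

    int main()
    {
        // Quad-buffer stereo provides the depth perception on the zSpace screen.
        osg::DisplaySettings::instance()->setStereo(true);
        osg::DisplaySettings::instance()->setStereoMode(osg::DisplaySettings::QUAD_BUFFER);

        osg::ref_ptr<osg::Group> root = new osg::Group;

        // Group 1: scene content (volume and meshes) that reacts to the stylus;
        // grabbing/moving with the stylus updates this transform's matrix.
        osg::ref_ptr<osg::MatrixTransform> sceneGroup = new osg::MatrixTransform;

        // Group 2: static content (background, 2D UI, the stylus visualization).
        osg::ref_ptr<osg::Group> staticGroup = new osg::Group;

        root->addChild(sceneGroup.get());
        root->addChild(staticGroup.get());

        osgViewer::Viewer viewer;
        viewer.setSceneData(root.get());
        return viewer.run();
    }

Keeping the two groups under separate nodes means that a single matrix update on the scene group moves all grabbable content at once, while the UI and the stylus stay fixed relative to the screen.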

Fig. 6

User study tasks: repositioning an object at the correct position, with the correct location outlined (a) and without the outline (b)

5 User study and evaluation

To understand the effectiveness of the task design in VeLight, in particular with respect to the proposed interaction techniques, we conducted a preliminary user study on task I3 in Table 1. We selected this task since 3D manipulation is regarded as the most challenging and effective task in the list; it requires cognitive reasoning about the spatial position and orientation of the object in the human body. Thus, we believe that if students are able to position an anatomical structure at the right place, they also remember the spatial position and orientation of that structure in the body. Additionally, we asked for the participants’ initial reaction to the idea of anatomy education with VeLight. Furthermore, in order to learn about the system from the teachers’ perspective, we conducted a user evaluation with medical experts/radiologists, whom we asked to compare VeLight with traditional anatomy education methods.

However, a teaching and training system still needs to be validated by a systematic user study, covering ease of use, teachers’ and students’ feedback, affordable maintainability as well as learning results. A full user study on learning results with VeLight will be performed in the near future.

5.1 Participants

Eight students from different research backgrounds (four from the medical field) joined the user study. Six of them reported being right-handed; however, hand preference does not make a difference in our design since the stylus can be used with either hand. Seven medical experts/radiologists joined the user evaluation.

5.2 Apparatus

The zSpace setup consists of a stereographic screen, head-tracked eyewear and a stylus. The screen is a Full HD (\(1920 \times 1080\) pixels) stereo-enabled display with a refresh rate of 120 Hz. The display was positioned so that its center was about 30 degrees below the participants’ eye level (see Fig. 7). All participants reported that this gave them the best view for finishing the tasks.

5.3 Tasks

We tested two repositioning tasks (see Fig. 6). The first task was to place an object at its correct position, with the outline of the correct location shown to the participants. The second task was similar, but without the correct position outlined. The repositioning task is regarded as a low-level action that controls the position and orientation of a 3D object; however, it requires cognitive reasoning about the spatial position and orientation of the object in the human body.

Each task had four trials. At the start of each trial, the objects were repositioned to prevent learning effects. The participants were asked to reposition the object as quickly and precisely as possible. In both tasks, there was no indication of whether the object had been positioned correctly, so the participants controlled when to stop the current trial and start the next one. Finally, after finishing all trials, the participants were asked to fill in a questionnaire commenting on their experience with the new interaction techniques and their ideas about the prospects of VeLight.

In the user evaluation, we did not set any specific task; instead, we let our participants use the VeLight system to examine/study a prepared course on leg anatomy. We asked them to examine the data and questions for each task and to report any interesting aspects they noticed.

Fig. 7

A participant in the user study using our system

5.4 Results

We do not compare the results of the two tasks. Instead, we want to test whether continuous use of the interaction technique in VeLight improves users’ understanding. Although the task completion times and the position and rotation offsets were logged during the study, we did not notice anything noteworthy in the log data; the sample was too small to reveal any learning effects. However, from this preliminary study we did not expect to learn how long the participants spent on each trial or how precisely they performed the task. We aimed to get an initial idea of whether the VR interaction technique can improve users’ understanding and whether users accept the new studying method. Based on the results of the preliminary study, we can further discuss how to run a more detailed user study on learning effects in the future.

From the observations and discussion, we noticed that the participants were able to learn and work with VeLight easily and quickly: after a short introduction, students were capable of working with the system independently. The response to the 3D visualization and interaction was very positive, and the use of the stylus for interaction was found intuitive. When we asked whether VeLight could replace the currently used methods (e.g., textbooks with 2D visualizations) for learning anatomy, on a scale from 1 (fully disagree) to 5 (fully agree), half of the participants voted 4 and the other half voted 5. Note, however, that the aim of our work is not to replace the traditional methods with our new interactive system; we asked this question only to understand whether and to what extent the interactive system can serve as an additional education method for anatomy study. One participant commented: “With VeLight and its interaction methods, you can practice in 3D and see the connection between structures.” Another participant reported: “If the user was able to upload specific anatomical structures and play around at will, I think it may be more insightful than just looking at pictures or descriptions.” All participants believed that the 3D perception and the interaction with the scene gave them a better impression of how the scene is structured.

In the interview with the medical experts, we asked them to compare VeLight with traditional anatomy education methods. In general, we received very positive comments, and all participants reacted enthusiastically to the potential of VeLight. Most participants found the highlighted contour very helpful in determining the correct position. One medical expert mentioned: “Without the contour, when you have to make the complete bone structure with the separate bones, you have to think about the orientation and position, instead of ‘somewhere there’.” We consider this variant very challenging, as it requires users to have a clear idea of the spatial structure of the data. They also liked the task of grabbing and moving an object to the correct position. On the other hand, some participants commented that VeLight should be regarded as an additional tool rather than a replacement for the currently used methods.

6 Discussion

We now look back at the two research questions proposed in the introduction. The initial interview with the medical experts suggested that anatomy teaching and training may be improved by advanced VR/AR technologies, and our preliminary user study and evaluation clearly showed the potential of such a system for anatomy education. Moreover, VeLight provides 3D perception and 6DOF interaction with 3D medical datasets, facilitating a deeper understanding of anatomical data in an intuitive manner. Thus, we believe that VR/AR techniques have advantages for medical students’ exploration: 3D visualization, intuitive interactions, iterative analysis, as well as data and image augmentation facilitated by advanced visualization methods. We also agree with the participants who commented that VeLight should be used as an additional tool in anatomy education rather than a replacement. The reason is that textbooks, as written language, give the most precise description of the learning materials, and students can always find a specific definition and interpretation in them. By dissecting human cadavers, students get direct experience with actual cases, which is not possible in any indirect way or with virtual/artificial objects. Thus, we believe VeLight is a strong supplement to the current anatomy education system.

However, some improvements can be considered in further development. First of all, some participants complained about fatigue during the user study. One participant reported that at the end of the study his wrist got a bit tired of twisting and changing positions, which made the repositioning result less accurate. We also noticed that the rotation component was very sensitive for some participants, especially when the manipulated object is cylinder-shaped. While the grabbing, moving and rotating interaction is already very intuitive, we have further explored new interaction techniques, inspired by Jackson et al. (2013). We believe that a combination of interaction approaches would allow users to choose which technique to engage depending on the target dataset.

The user study presented in Sect. 5 focused only on the students’ interaction with the zSpace and their initial impression of the system. Although this gives an impression of the intuitiveness of VeLight, it does not provide insight into if and how the system improves learning results. To gain this insight, an advanced user study on learning results needs to be carried out. We will therefore need to conduct a controlled user study comparing the VeLight interaction with traditional interaction methods, for instance, mouse and keyboard input. Another interesting study would be to deploy VeLight in the classroom.

7 Conclusion

In this application paper, we present VeLight, a VR system for medical education. Its development was based on two research questions and an informal study and interviews with medical domain experts. We learned that advanced VR technologies can improve users’ understanding in tasks that require spatial attention; thus, we focused on anatomy training, which requires users to have a good understanding of anatomical structures and relationships. The zSpace, on which VeLight is based, provides an interactive environment for implementing VR applications, and with the development of VeLight we have opened this platform for use in medical teaching and training based on CT datasets. We conducted a preliminary user study and an evaluation to learn users’ initial impressions of the system and the interaction method, and in general we received positive comments on VeLight. In the future, we will further improve the user interface and include more medical materials. Furthermore, a formal user validation to determine the effect on learning anatomy still has to be performed.