1 Introduction

Anatomy education is fundamental in life science and health education. It has traditionally been believed that cadaver dissection is the optimal teaching and learning method in anatomy [1]. Cadaver dissection certainly provides tangible knowledge of the shape and size of organs, bones, and muscles. However, dissection offers only a subtractive, deconstructive perspective (i.e. skin to bone) on the body's structure. When students start with the full complexity of complete anatomical specimens, the experience can be visually confusing, and students may struggle to grasp the underlying basic aspects of anatomical form [2]. Consequently, many students have difficulty mentally visualizing the three-dimensional (3D) body from the inside out (i.e. bone to skin), as well as how individual body parts are positioned relative to the entire body.

Unfortunately, even with the availability of 3D interactive tools, including virtual reality applications [3, 4], the problem of visualizing movement remains unaddressed. These tools mainly focus on identifying anatomical components and on passive user navigation. Students must still mentally manipulate 3D objects using information learned from 2D representations [5]. In future careers such as orthopedic surgery, physical therapy, choreography, or animation, students need to know a muscle's action, how it interacts with other muscles, and which normal movements it facilitates. This level of complexity is not easily conveyed by 2D static representations [6]. In addition, existing educational programs do not provide a flexible learning environment in which a student may make a mistake and then learn from it.

Alternative learning materials that take constructivist approaches have been introduced in anatomy education: 3D printing [7, 8] and other physical simulation techniques [9, 10]. However, these alternatives also have limitations: it is difficult to create an anatomical model that can move, interactions are limited, and a single model can present only limited information.

2 Background

According to constructivist learning theories, learning is a personal construction resulting from an experiential process. It is a dynamic process, a personal experience that leads to the construction of knowledge with personal meaning [11]. With the advancement of new technologies, virtual reality systems enable users to interact directly with anatomical structures in a 3D environment. This raises a new question: does manipulating anatomical components in a virtual space support users' embodied learning and their ability to visualize the structure mentally?

Rapid prototyping techniques, including 3D imaging, 3D printing, and other physical simulation methods, have recently been highlighted in anatomy education [12, 13]. Over the last two decades, 3D printing has been successfully applied in different medical fields, including education. In anatomy, high-quality 3D-printed replicas of cadaveric material have recently been produced for teaching purposes. Clay modeling provides an interesting approach to learning anatomy and has also been examined in recent studies. These studies [14, 15] suggest that clay modeling can effectively supplement traditional lecture-based teaching and may be more engaging for students than preserved-specimen dissection.

Our goal is to develop a virtual reality learning environment that supports a constructivist approach and flexible learning. The environment allows a student to build and manipulate a skeletal system using a digital bone box, and to learn from any mistakes made along the way. Recent advances in hand-movement tracking for virtual reality have made this idea practical to implement. We investigated how a virtual reality environment with hand controllers benefits students' learning while studying canine anatomy. This paper presents Anatomy Builder VR, which lets the user explore canine anatomy through physical manipulation: recognizing bones, selecting bones, and putting bones together in the 3D orientation they would have in a live animal. We then describe a pilot study that examines how students learn canine limb models through physical manipulation with a traditional bone box versus the Anatomy Builder VR program.

3 Anatomy Builder VR

Anatomy Builder VR (Fig. 1) is an ongoing project that examines how a virtual reality system can support embodied learning in anatomy education. The backbone of the project is to pursue an alternative constructivist pedagogic model for learning canine anatomy. Direct manipulations in the program allow learners to interact with either individual bones or groups of bones, to determine their viewing orientation and to control the pace of the content manipulation.

Fig. 1. Anatomy Builder VR hardware (left), main VR environment (middle), assembled pelvic limb in Anatomy Builder VR (right)

The Anatomy Builder VR program is built on the HTC Vive virtual reality platform. The Vive is consumer-grade virtual reality hardware, developed primarily for video games. The platform comprises a high-definition head-mounted display (HMD), two motion controllers, and two infrared tracking stations. The tracking stations, placed at opposite ends of a room, allow room-scale virtual interactions. The project was developed in Unity3D, with all scripting in C#. Unity's development environment allows easy integration of the Vive through pre-built scripts and an API, which let us rapidly develop a functioning prototype and begin designing the user-specific interactions. Interaction with the virtual environment is done primarily with the Vive controllers, whose buttons can be programmed for various interactions.
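At its core, programming the controller buttons amounts to binding actions to trigger press and release events. The sketch below is illustrative only: the actual project is scripted in C# against the Unity/SteamVR API, and the class and callback names here are our own invention, shown in Python for brevity.

```python
class Controller:
    """Minimal model of a tracked controller with a programmable trigger.

    Hypothetical stand-in for the Vive controller API: the program binds
    actions (grab, release, advance slide, ...) to trigger edges.
    """

    def __init__(self):
        self._pressed = False
        self._on_press = []    # callbacks fired when the trigger goes down
        self._on_release = []  # callbacks fired when the trigger is let go

    def bind(self, on_press=None, on_release=None):
        if on_press:
            self._on_press.append(on_press)
        if on_release:
            self._on_release.append(on_release)

    def update(self, pressed):
        """Called once per frame with the current trigger state;
        fires callbacks only on state changes (edge detection)."""
        if pressed and not self._pressed:
            for cb in self._on_press:
                cb()
        elif not pressed and self._pressed:
            for cb in self._on_release:
                cb()
        self._pressed = pressed
```

For example, a grab action would be bound to the press edge and a drop/snap action to the release edge, so holding the trigger across many frames fires each callback only once.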

In the current version of the program, a student can assemble a canine pelvic limb. Each bone in the pelvic limb was 3D-scanned with a laser scanner, then cleaned up and textured under the guidance of the anatomists on the collaboration team. Within the Anatomy Builder VR environment there is an "anti-gravity" field where the user assembles the skeleton. Placing a bone in the anti-gravity field suspends it in place. Picking up another bone and bringing its joint near a matching connection on an already field-bound bone makes a yellow guide line appear; when the user lets go of the controller trigger, the two joints snap together. The user repeats this action until the skeleton is assembled to satisfaction. Reference materials for completing the articulation of the limb are presented on the wall, with navigation arrows underneath; pointing the controller at an arrow and pulling the trigger advances the slides.
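The joint-snapping behaviour can be thought of as a simple proximity test: while a held bone's joint is within some threshold of a compatible joint on a field-bound bone, the yellow guide line would be shown, and on trigger release the held bone is translated so the two joints coincide. A minimal sketch follows, in Python for illustration (the program itself is written in C#); the 5 cm threshold and the dictionary-based bone representation are invented assumptions, not details from the implementation.

```python
import math

SNAP_DISTANCE = 0.05  # metres; hypothetical threshold, not from the paper


def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def nearest_joint(held_joint_pos, anchored_joints):
    """Return the closest anchored joint within snapping range, or None.

    While this returns a joint, the program would draw the yellow guide
    line between the two joints."""
    best, best_d = None, SNAP_DISTANCE
    for j in anchored_joints:
        d = dist(held_joint_pos, j["pos"])
        if d < best_d:
            best, best_d = j, d
    return best


def release(held_bone, held_joint_pos, anchored_joints):
    """On trigger release: snap if close enough, else leave the bone as-is."""
    target = nearest_joint(held_joint_pos, anchored_joints)
    if target is not None:
        # Translate the held bone so its joint coincides with the target joint.
        offset = [t - h for t, h in zip(target["pos"], held_joint_pos)]
        held_bone["pos"] = [p + o for p, o in zip(held_bone["pos"], offset)]
        held_bone["attached_to"] = target["name"]
    return held_bone
```

Repeating this release step joint by joint is what lets the user build up the whole limb inside the field.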

3.1 Interaction Tasks in Anatomy Builder VR

Within the current version of the Anatomy Builder VR environment, there is an “anti-gravity” field where the user places the bones to build the pelvic limb. Interaction tasks for Anatomy Builder VR include:

  • Recognition of bones. For optimal identification of the bones, it is crucial that the user can view the bones from all angles. Therefore, natural head movement is required to be able to inspect individual objects (Fig. 2 left).

    Fig. 2. Picking up a bone using a Vive controller in Anatomy Builder VR (left), transforming a bone (right)

  • Selection of bones. The prerequisite for 3D interaction is the selection of one of the virtual bones. Placing a bone in the anti-gravity field suspends it in place (Fig. 2 left).

  • Transformation of bones. The transformation task includes rotating and translating the 3D bones. Since this is the task on which the student spends most of the time, success in learning the spatial relationships depends heavily on the interaction techniques chosen (Fig. 2 right).

  • Snapping of bones. Selecting and transforming a set of 3D bones is the process of assembling them in the correct positions. Picking up another bone and bringing its joint near a matching connection on an already field-bound bone makes a yellow guide line appear; when the user lets go of the controller trigger, the two joints snap together. The user repeats this action until the skeleton is assembled to satisfaction (Fig. 3 left).

    Fig. 3. Snapping bones (left), reference materials on the non-dominant hand controller (right)

  • Locating reference materials. Appropriate reference materials (2D, 3D, or text) for completing the articulation of the limb are presented wherever the user chooses and are controlled with the non-dominant hand's controller (Fig. 3 right).
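The "anti-gravity" field underlying these tasks can be modeled as a region that disables physics gravity for any bone released inside it, so the bone keeps the pose the user left it in. A minimal sketch, again in Python for illustration (the project is implemented in C# in Unity, where this would toggle a rigidbody's gravity); the axis-aligned field bounds are invented for the example.

```python
# Assumed axis-aligned bounds of the anti-gravity field, in metres.
FIELD_MIN = (-0.5, 0.8, -0.5)
FIELD_MAX = (0.5, 2.0, 0.5)


def in_field(pos):
    """True if a point lies inside the anti-gravity field."""
    return all(lo <= p <= hi for p, lo, hi in zip(pos, FIELD_MIN, FIELD_MAX))


def on_release(bone):
    """Called when the user lets go of the trigger while holding a bone."""
    if in_field(bone["pos"]):
        bone["use_gravity"] = False        # suspend the bone where it was left
        bone["velocity"] = (0.0, 0.0, 0.0)
    else:
        bone["use_gravity"] = True         # outside the field, it falls normally
    return bone
```

A bone dropped inside the bounds stays suspended; one dropped outside is handed back to the physics simulation.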

4 Pilot Study

In the pilot study, we investigated how a virtual reality system with direct manipulation may affect anatomy learning compared with a traditional bone box. The main task was to identify and assemble bones in the same orientation as in a live dog, using real thoracic-limb bones in a bone box and digital pelvic-limb bones in the Anatomy Builder VR program. We recruited 11 undergraduate students aged 19 to 26, with majors from departments across the university; none had taken a college-level anatomy class before. The order of activities was counterbalanced: 6 students started with the bone box and then used virtual reality, while 5 started with virtual reality and then used the bone box. The students completed surveys before the study began, after each activity, and at the end of the study period.

During the bone box activity, each participant was given a set of directions and a box containing the bones of the canine thoracic limb. In the other activity, participants were given a brief introduction to how a VR system works and were then fitted with the Vive headset and controllers. Upon entering the VR anatomy program, the student was given time to become comfortable with the controls before beginning the tasks. Within the VR space were a screen with direction slides, the "anti-gravity" field, and a box containing the bones of the canine pelvic limb.

5 Results

On average, each participant spent 6.3 min with the bone box and 7 min with the VR system. Participants' experiences with the VR system were very positive. In the surveys, most participants (90%) chose "Strongly agree" for the statement "I enjoyed using virtual reality to complete the activity," and 9.1% chose "Agree." For the traditional bone box, 45% chose "Strongly agree," 27% "Agree," 18.2% "Neutral," and 9.1% "Disagree" (Fig. 4).

Fig. 4. Assessment of overall enjoyment of each learning method (VR vs. bone box)

In terms of identifying bones, participants' ratings were very similar: 45.5% "Strongly agree," 45.5% "Agree," and 9.1% "Neutral" for the VR session; 45.5% "Strongly agree," 36.4% "Agree," and 18.2% "Neutral" for the bone box session. For the method with a constructivist focus, on the statement "I was able to manipulate bones and put them together with ease in VR," 27.3% of participants responded "Strongly agree," 63.6% "Agree," and 9.1% "Neutral." For the bone box, by contrast, 36.4% responded "Strongly agree," 36.4% "Agree," and 27.3% "Disagree" (Fig. 5).

Fig. 5. Assessment of identifying bones and assembling bones with each learning method.

In the written responses, some participants expressed difficulties in finding details on the bones (ID06) and rotating a bone with a controller (ID05). Others expressed positive aspects of using the direct manipulation method in Anatomy Builder VR:

“I knew where joints were supposed to fit in the virtual environment since they would snap making it easier for me and making me feel more confident in completing the activity.” (ID 09)

“…being able to leave the bones in a specific orientation in VR was a good compromise for me mentally, because I didn’t have to continually revisit each bone or use energy holding them in the right place two at a time.” (ID 10)

“It actually made it easier because I was able to better manipulate the bones because they were held up “in space.” Also, it made more sense when the bones “connected” to each other.” (ID 11).

6 Discussion and Conclusion

We were interested in how a constructivist learning method in VR can benefit anatomy education, focusing on direct manipulation of bones. The pilot study showed that participants most enjoyed directly interacting with the anatomical content in the VR program. In particular, participants enjoyed assembling a skeletal system inside the anti-gravity field, because the same setup is not possible with the traditional bone box. Participants spent much less time assembling bones in VR, but much longer fine-tuning the orientation of each bone in 3D space. During the bone box activity, most of them did not adjust the model after placing the bones on the table. The Anatomy Builder VR program offered novices a "sandbox," and participants genuinely enjoyed placing bones into an imagined skeleton.

Although we were only able to study the pelvic limb in the VR system, this study showed how a constructivist method can support anatomy education through active, experiential use of virtual reality technology. We are extending the understanding of virtual reality design for anatomy education. In the future, Anatomy Builder VR will include muscle systems and an even richer environment for self-evaluation and collaboration.