1 Introduction

A significant drawback of learning anatomy from traditional reading material is the difficulty students have piecing together disparate 2D images into an understanding of 3D structures. Historically, this gap has been filled by cadaver dissection, which gives students hands-on experience with anatomical structures. Recently, however, dissection has become less practical for a number of reasons. For one, it affords students little room to make mistakes or repeat procedures. Furthermore, as rising ethical concerns limit the availability of cadavers for dissection, the cost of acquiring them rises. In light of these issues, we present FlexAR, an AR application that combines a tangible interface and a GUI to teach anatomy. FlexAR pairs the written information available in traditional reading materials with the spatial learning one would acquire from anatomical dissection. Users can study anatomy at their preferred pace using their study method of choice, freely exploring gross anatomy or selecting specific structures for closer examination.

2 Related Work

In developing a tangible augmented reality (TAR) interface that improves the efficiency of learning gross anatomy in group and individual study settings, we examined prior work on augmented reality (AR) applications for anatomy education. One such application is detailed by Juanes et al. [3], who introduce a tool for augmenting 2D images from a book with static 3D models on mobile devices. This allows the user to view the structure of particular body parts in 3D space without the need for physical models. However, we believe that a dynamic model will improve the user's ability to understand spatial relationships and the effect of movement on different systems. Our research focuses on demonstrating the flexion and extension of various muscle groups as a result of arm movement, using an articulated tangible as a controller. Another related application with a TAR interface is ARnatomy, which aims to create a tangible user interface (TUI) by using dog bones to control the display of information on a mobile device such as a smartphone or tablet [4]. Though this application includes dynamic text, there is little interaction between the user and the tangibles themselves: the tangible controls only the location of the text onscreen, and there is no interaction between the user and the mobile device. Thus, the application is useful primarily as a tool for memorizing written information. In FlexAR we fuse interaction with a graphical user interface (GUI) and a TUI, allowing the user to manipulate a physical model to drive the animation of a 3D digital overlay and to highlight individual muscles and display their information.

3 System Description

FlexAR consists of a camera-equipped device and a tangible human arm skeleton that drives the animation of a 3D model projected over the skeleton on the device's display. The application uses a TAR interface in which the tangible controls interaction with the system, providing an ideal setting for the user to explore the model in 3D space. While traditional instruction employs materials such as books, diagrams, and standalone physical anatomical models, our current prototype combines written information with a tactile model to serve as a self-contained learning module. The GUI within the application supplements the TAR interface with written information, which is displayed over the 3D projection.

The prototype consists of the physical arm model accompanied by our application, which runs on several different devices. The application uses multiple image targets affixed to a physical model of a human arm skeleton to control the animation of a 3D overlay displaying the bones and major muscles of the human arm. As users manipulate the physical model, the animation of the digital overlay updates accordingly, so they can observe the extension and contraction of the muscles, the articulation of the bones, and other major anatomical features. Users can also interact with the application by selecting muscles, which highlights them on the overlaid model and displays their written information. In this way, users can explore anatomical structures from multiple angles and learn at their own pace.

4 Design Implementation

The augmented reality system was implemented using a number of programs and development tools. The 3D overlay was created in Autodesk Maya [1] and imported into the multiplatform game engine Unity [7] for integration with our application; all scripting was done in C# using MonoDevelop, Unity's integrated development environment (IDE). Three implementations of the project were made, one for each supported device class: desktops, tablets, and wearables.

The assets for the 3D overlay were developed in Maya using our physical arm model and Gray’s Anatomy [2] as reference. To enhance immersion, the physical and digital models had to align as closely as possible in appearance and be anatomically correct. After the skeleton was modeled, each muscle was modeled and textured separately in order to allow them to be selected individually. The movement and deformations were created using a combination of a simple rig and blendshapes, which were set up in such a way as to match the range of motion possible with the physical model. Once the assets were completed, they were exported to Unity for integration with the application (Fig. 1).
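To make the deformation setup concrete, the following is a minimal Unity C# sketch of how a blendshape can be driven from a joint's flexion angle. The component name, the "BicepsFlex" blendshape, and the angle range are illustrative assumptions, not code taken from our implementation.

```csharp
using UnityEngine;

// Minimal sketch (hypothetical names): maps the elbow's flexion angle to a
// blendshape weight on the biceps mesh, so the muscle bulges as the arm
// bends. Assumes a SkinnedMeshRenderer with a "BicepsFlex" blendshape and
// a rig transform for the forearm.
public class MuscleBlendshapeDriver : MonoBehaviour
{
    public Transform forearmJoint;              // rotates as the tangible's elbow bends
    public SkinnedMeshRenderer bicepsRenderer;  // mesh carrying the blendshape
    public float minAngle = 0f;                 // full extension (assumed)
    public float maxAngle = 140f;               // full flexion of the physical model (assumed)

    private int blendshapeIndex;

    void Start()
    {
        blendshapeIndex = bicepsRenderer.sharedMesh.GetBlendShapeIndex("BicepsFlex");
    }

    void Update()
    {
        // Read the elbow's local flexion angle and normalize it to 0-100,
        // the weight range Unity uses for blendshapes.
        float angle = forearmJoint.localEulerAngles.x;
        if (angle > 180f) angle -= 360f;  // map to [-180, 180]
        float weight = Mathf.InverseLerp(minAngle, maxAngle, angle) * 100f;
        bicepsRenderer.SetBlendShapeWeight(blendshapeIndex, weight);
    }
}
```

The same idea applies to each muscle whose contraction is visualized: the rig supplies the joint angle, and the blendshape supplies the corresponding change in muscle shape.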

Fig. 1. Front, side, and back views of the final digital assets in their anatomically neutral starting position

To expedite the development process and allow the application to be built for multiple platforms, we built the augmented reality system using the software development kit (SDK) Vuforia, a mobile AR library implemented by QUALCOMM Incorporated [6], as an extension for Unity. Vuforia tracks image targets in the camera feed and projects a 3D overlay relative to the position of each detected target onscreen. For FlexAR we used four targets: one to determine the base position of the arm and three to control the rotation of the shoulder, elbow, and wrist joints of the 3D model (Fig. 2).
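The following Unity C# sketch illustrates how the per-joint targets can drive the rig. The component and field names are hypothetical, and a real implementation would also need to handle targets temporarily lost by the tracker.

```csharp
using UnityEngine;

// Minimal sketch (hypothetical names): one base image target anchors the
// overlay, while per-joint targets drive the shoulder, elbow, and wrist of
// the rigged model. Vuforia updates each target's Transform while it is
// tracked; here we simply copy those rotations onto the rig, expressed
// relative to the preceding target in the chain.
public class TargetJointMapper : MonoBehaviour
{
    public Transform baseTarget;      // target fixed to the base of the arm
    public Transform shoulderTarget;
    public Transform elbowTarget;
    public Transform wristTarget;

    public Transform shoulderJoint;   // corresponding joints on the 3D rig
    public Transform elbowJoint;
    public Transform wristJoint;

    void LateUpdate()
    {
        // Position and orient the whole overlay from the base target.
        transform.position = baseTarget.position;
        transform.rotation = baseTarget.rotation;

        // Drive each joint with its target's rotation relative to the
        // previous target, so bending the tangible's elbow bends the
        // digital elbow too.
        shoulderJoint.localRotation = Quaternion.Inverse(baseTarget.rotation) * shoulderTarget.rotation;
        elbowJoint.localRotation    = Quaternion.Inverse(shoulderTarget.rotation) * elbowTarget.rotation;
        wristJoint.localRotation    = Quaternion.Inverse(elbowTarget.rotation) * wristTarget.rotation;
    }
}
```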

Fig. 2. High-level system overview of the Vuforia SDK Unity extension

In our initial observations, we found that users had difficulty selecting individual muscles in 3D space. A common suggestion was to replace selection in the 3D overlay with direct interaction through a GUI. Rather than having to select individual muscles, users could instead display information using tabs labeled with the names of the muscles. We found that this implementation provided a more intuitive experience for the user.
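A minimal Unity C# sketch of this tab-based design is shown below. The component, the MuscleEntry structure, and the highlight behavior are illustrative assumptions rather than our production code.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch (hypothetical names): a row of tabs, one per muscle.
// Clicking a tab highlights that muscle on the overlay and shows its
// description, replacing direct picking in 3D space.
public class MuscleTabPanel : MonoBehaviour
{
    [System.Serializable]
    public class MuscleEntry
    {
        public string name;                 // e.g. "Biceps brachii"
        [TextArea] public string info;      // written description shown in the GUI
        public Renderer muscleRenderer;     // mesh of this muscle on the overlay
        public Button tab;                  // GUI tab labeled with the muscle's name
    }

    public MuscleEntry[] muscles;
    public Text infoText;                   // panel that displays the description
    public Color highlightColor = Color.yellow;

    private Color defaultColor;

    void Start()
    {
        defaultColor = muscles[0].muscleRenderer.material.color;
        foreach (MuscleEntry m in muscles)
        {
            MuscleEntry entry = m;          // capture for the closure
            entry.tab.onClick.AddListener(() => Select(entry));
        }
    }

    void Select(MuscleEntry selected)
    {
        // Tint the chosen muscle and reset the rest.
        foreach (MuscleEntry m in muscles)
            m.muscleRenderer.material.color = (m == selected) ? highlightColor : defaultColor;
        infoText.text = selected.name + "\n" + selected.info;
    }
}
```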

5 Preliminary Observations

5.1 Protocol

To test FlexAR, we observed participants interacting with our prototype in a lab environment. We observed nine university-level participants from a variety of backgrounds, including art, dance, and computer engineering, divided into three groups of equal size. The project was built for three different devices – a desktop with a webcam, an Android tablet, and Epson Moverio BT-200 glasses – and each group interacted with one of these devices. For this iteration we focused on usability, interaction, and user experience. Participants arrived at the lab one at a time. After a brief instruction period, each was permitted to explore the application on their own. In the first phase, we observed participants interacting with the system through the tangible. Next, participants were instructed to interact with the GUI. Finally, participants were given the opportunity to use the prototype freely.

5.2 Discussion

The feedback we received was generally positive. Participants listed a number of areas in which they believed FlexAR would be useful. Several students compared it favorably to using anatomy textbooks to train medical personnel. A few mentioned its potential as a reference in the process of creating joint systems for 3D animation, and one participant who specialized in sculpting expressed interest in being able to view static structures from multiple angles.

From the preliminary observations we noticed several things. First, the tablet was too large to handle comfortably at the same time as the tangible; participants who had a second person control the tangible did not experience this issue. We inferred that, because of the difficulty of interacting with both the device and the tangible simultaneously without a partner, tablets are unsuitable for individual study using TAR-based applications but may be useful for group study. What we found most interesting, however, were our observations regarding the desktop and the smart glasses. Many participants told us that the desktop would be ideal for learning as a group or class, while the glasses would be most useful in individual learning settings or where mobility was the most important factor. Using the desktop, one person, such as an instructor or group leader, could interact with the application in front of the camera while the others observed the screen; this would be most useful for guided learning. In contrast, the glasses would work best for those wishing to study independently or during individual assignments. Consequently, we plan to focus on these two devices in our continuing research (Fig. 3).

Fig. 3. From left to right: users of a desktop with webcam, Android tablet, and Epson Moverio BT-200 glasses

6 Future Work

Motivated by the positive initial feedback we received, we are currently furthering our research and expanding our prototype to include not only the arm but also other key regions such as the torso. A larger tangible would likely enhance the user’s sense of immersion into the application and bring us closer to our goal of being able to replace cadaver dissection as an effective method of teaching anatomy spatially.

7 Conclusion

FlexAR is a prototype tool for teaching anatomy through the use of augmented reality. We believe that it contributes to education by giving users both written and 3D visual information about anatomy without the need for dissection or traditional study materials. In its current state it is useful as a tool for studying the muscles of the arm, but when expanded shows promise as an application for teaching the anatomy of multiple complete body systems, both for individuals and large groups across a wide range of disciplines.