1 Introduction

With the advancements in virtual reality, applications employing bimanual actions to accomplish tasks and educate users are becoming commonplace. VR applications such as the VR surgical simulator for the da Vinci surgical system (Baheti et al. 2008), 3D sculpting, modeling and animation (Noble and Clapworthy 1998; Green and Halliday 1996; Kiyokawa et al. 1998), immersive 3D medical tele-consultation (Mlyniec et al. 2011) and mathematics and geometry education (Kaufmann et al. 2000) employ two-handed interaction within the virtual environment to train users in performing dexterous tasks that require similar two-handed interactions in the real world.

One area where virtual reality has enormous potential is training users in complex bimanual psychomotor skills in the realm of aviation and automotive technical education. Psychomotor skill is the ability to analyze perceived situations and automatically evoke motor responses to achieve a goal (Rosenbaum et al. 2001; Ericsson and Charness 1994). Psychomotor skill learning is the process of acquiring these skills through extensive practice that ties cognition to motor response. Our emphasis is on the aviation and automotive domains, and specifically on electrical circuitry applications, where these skills include careful analysis of circuits and components, and precision movements while performing diagnostic measurements of electrical parameters and circuit modifications.

VR can facilitate psychomotor skills learning of the complex tasks involved in electrical circuitry measurement by providing scenarios for extensive practice with multisensory feedback. VR offers the ability to employ interaction principles to accurately simulate real-world performance. Moreover, VR simulations can provide a scaffolded learning experience with different step-by-step guided practice scenarios as well as evaluative unguided exercises. VR also makes it possible to implement consistent and reliable situations that can be used to train users anytime and anywhere.

The concept and the simulation were briefly introduced in Parmar et al. (2014). That short poster paper presents the initial idea of the software and describes a pilot study which found that users effectively learned the psychomotor skills pertaining to electrical circuitry using the interactive breadboard activity simulation (IBAS). This research greatly expands upon the cited work by providing details of the software simulation. Further, this work describes a thorough investigation of the effect of two common display metaphors, an HMD with 6-DOF head-tracking (IBAS-HMD) and a DesktopVR display with 3-DOF head-tracking (IBAS-DVR), on psychomotor skills learning with respect to the electrical measurement tasks.

The goals of this research can be stated as follows:

1.

    Using a scaffolded learning approach, our first goal was to develop a virtual reality simulation for learning psychomotor skills related to electrical circuitry, which could be experienced through one of two different viewing metaphors: IBAS-DVR and IBAS-HMD.

2.

    Our next research goal was to investigate whether, overall, there was an improvement in participants' learning outcomes as a result of using the simulation. For this goal, we developed an experiment design with pre- and post-experiment cognition questionnaires conforming to the four condensed levels of Bloom’s cognitive taxonomy.

3.

    Further, we wanted to perform a comparative evaluation to assess whether task performance, cognitive learning and psychomotor skill transference differed as a function of the viewing metaphors, based on participant experience in the different viewing conditions.

4.

    An additional goal was to assess how well the skills learned in the virtual simulation transfer to the real world, by evaluating real-world task performance at the end of the simulation.

2 Related work

Virtual environments have been used extensively to educate users in complex manual tasks. The work of El-Chaar et al. (2011) demonstrates the use of interactive 3D virtual environments for industrial operations training and maintenance. They built a training-centered software platform utilizing 3D modeling, animation and simulation to educate users in complex industrial operations tasks. They found that VR provides enabling circumstances to guide users through the most complex and critical operations. Kotranza et al. (2009) studied the effects of real-time in situ feedback of task performance in mixed environments for learning joint psychomotor-cognitive tasks with respect to clinical breast exams (CBEs). They found that by integrating real-time visual feedback of learners’ quantitatively measured CBE performance, the mixed VE provides on-demand learning opportunities with more objective, detailed feedback than is available with expert observation. Their study highlighted that receiving real-time in situ visual feedback of their performance gives students an advantage, over traditional approaches to learning CBEs, in developing correct psychomotor and cognitive skills. The study of Kaufmann et al. (2000) utilized Construct3D, a three-dimensional geometric construction tool based on the collaborative augmented reality system 'Studierstube', to educate users in mathematics and geometry. Their system utilized a stereoscopic HMD and the Personal Interaction Panel, a two-handed 3D interaction tool, and they found that the use of VR technology in the form of Construct3D facilitates ease of learning and encourages experimentation with geometric constructions. Baheti et al. (2008) studied the effects of VR in the form of a VR robotic surgical simulator for the da Vinci surgical system (DVSS). They developed a two-handed 6-DOF VR trainer for acquiring the basic psychomotor skills that are needed to perform surgery using the DVSS and found that the VR application provides a suitable beginners’ training environment for users before they graduate to the actual DVSS device. Assfalg et al. (2002) utilized a 3D VE to train construction workers in a safety training system, and found that subjects showed an increased interest in the combined use of 3D graphics and multimedia, and appreciated the possibility of seeing such solutions systematically used for their training. In a recent study, we examined the differences in dimensional symmetry by comparing a 3-DOF interaction metaphor to a 6-DOF metaphor (Bertrand et al. 2015) and found that higher degrees of freedom improved skill transference to the real world with respect to the motor aspect of psychomotor skills. This suggests that higher degrees of interaction fidelity may be beneficial for a wide range of training simulations that involve a psychomotor component.

Prior empirical studies have also examined the effects of various display metaphors on learning and task performance. Qi et al. (2006) compared immersive HMD, DesktopVR (DVR) and DesktopVR-with-haptics displays for volume visualization and corresponding task performance. Results from their study showed that the DesktopVR and the DesktopVR-with-haptics groups were significantly more accurate at judging the shape, density, and connectivity of objects and completed the tasks significantly faster than the HMD group. The DesktopVR group was also significantly faster than the haptics group, though there were no statistical differences in accuracy between the two. Arthur et al. (1993) evaluated 3D task performance for Fishtank virtual worlds and traditional workstation graphics displays, comparing head-coupled viewing to stereoscopic viewing. They found that most users strongly preferred head-coupled viewing over stereo viewing. Though both head-coupling and stereo contribute to performance, head-coupling helps to a much greater extent. Demiralp et al. (2003) performed a qualitative and quantitative comparison between CAVE and DesktopVR displays for scientific visualization. The results of the qualitative study showed that users preferred the DesktopVR display to the CAVE system for their scientific visualization application because of the perceived higher resolution, brightness and crispness of imagery, as well as comfort of use. The results of the quantitative study showed that users performed an abstract visual search task significantly more quickly and more accurately on the DesktopVR display system than in the CAVE. They concluded that DesktopVR displays are more effective than CAVEs for applications in which the task occurs outside the user’s reference frame, the user views and manipulates the virtual world from the outside in, and the size of the virtual object that the user interacts with is smaller than the user’s body and fits into the DesktopVR display. Aoki et al. (2008) studied trainees’ orientation and navigation performance during simulated space station emergency egress tasks, comparing immersive HMD and desktop VR systems. Their analyses showed no differences in pointing angular error or egress time among the groups. The HMD group was significantly faster than the desktop group when pointing from destination to start location and from start toward a different destination; however, they suggested that this difference may be attributed to differences in the input device used. All other 3D navigation performance measures were similar between the immersive and non-immersive VR systems, suggesting the simpler desktop VR system may be useful for the specific astronaut 3D navigation training.

Prior research employing virtual technologies in electrical circuitry includes the work of ZheMin and Lingsong (2009) in creating a 2D virtual instrument IDE utilizing a software breadboard and a pipeline component-based assembling technique. Further, Tawfik et al. (2013) describe virtual instrument systems in reality (VISIR) for remote wiring and measurement of electronic circuits on a breadboard. Using VISIR, the user designs a circuit via mouse-based interaction on a simulated 2D workbench. Lingsong and Wei (2011) present a rich internet application-based 2D virtual instrument platform for experiment tests and measurements. Oleagordia Aguirre et al. (2013) present a similar 2D virtual laboratory setting providing a test bed platform for training and performing practical exercises pertaining to electrical circuitry measurements. Research by Menendez et al. (2006) describes a 2D virtual electronics laboratory to improve industrial electronics learning. Richardson and Adamo-Villani (2011) present a 3D virtual embedded microcontroller laboratory for undergraduate education. They developed and evaluated this virtual learning environment (VLE) in an introductory microcontroller undergraduate course at Purdue University, and found the VLE to be easy to use, engaging, useful, and comparable to a physical laboratory experience. A similar experiment conducted by Finkelstein et al. (2005) looked at substituting VR simulations for laboratory equipment in an introductory physics course. Here, one group of students used a computer simulation that modeled electron flow in a circuit, and another group used real equipment. Students using simulated equipment outperformed their counterparts on surveys as well as on tasks of assembling a real circuit and describing how it works. Zacharia (2007) investigated the value of combining real experimentation (RE) with virtual experimentation (VE) for students’ conceptual understanding of electrical circuits. A 2D desktop virtual circuits software package was used in the RE + VE condition. They found that mixing the technologies enhanced students’ conceptual understanding of electric circuits more than RE alone, and their results were in favor of VE.

Our work differs from the discussed related work by providing an immersive virtual experience to train students in the fundamentals of electrical circuitry and measurement with the use of scaffolded learning. The immersive visualization, guided learning, and integrated testing are meant to augment classroom education, reducing the load on teachers to teach introductory electrical circuitry so that they can focus on advanced topics. Our study specifically targets the combination of physical and cognitive skill learning pertaining to electrical circuitry measurement training employing an interactive 3D virtual environment. Further, our research provides a novel contribution in comparing the effects of two common commodity display metaphors (HMD and DesktopVR) on the psychomotor skills learning involved in electrical circuitry measurement. Finally, our research introduces a novel and extensible 3D simulation for educating users in electrical circuitry measurement tasks.

3 Methods

3.1 Participants

The experiment involved 24 volunteer participants, 8 women and 16 men, recruited from the College of Engineering and Science at Clemson University, aged between 19 and 30 (mean = 23, SD = 2.75). All participants had natural or corrected-to-20/20 vision and had little to no prior experience with the instruments covered in the simulation. Ten of the 24 participants played games frequently (at least once a week) and 7 participants reported familiarity with the Razer Hydra or similar input devices. However, all participants reported the simulation to be a novel experience.

3.2 System design and implementation

3.2.1 User experience

IBAS comprised three modules, one for each instrument: voltmeter, ammeter and multimeter. Each module was further subdivided into three sections: introduction, guided practice and exercises. IBAS started with a training module for task familiarity. The IBAS system setup is shown in Fig. 1.

Fig. 1

Experiment setup for the Interactive Breadboard Activity Simulation in the DesktopVR condition (IBAS-DVR) on the left, and the IBAS-HMD condition on the right. The InterSense IS-1200 tracker is attached to the user’s head, and the user is holding a Razer Hydra controller

Training The training section was built specifically for users to get acclimated to the two-handed 3D interactions performed within the VE, specifically picking up and releasing objects, translations and rotations. The users were presented with simple tasks such as picking up a ball and dropping it into a basket, and turning a key inserted into a keyhole. This section also introduced users to head-tracked egocentric viewing through the HMD, or head-tracked perspective-corrected viewing in the case of the DesktopVR condition, and users could look at the VE from different angles to help them complete the task. They received textual as well as auditory instructions and feedback. Visual feedback was presented in the form of highlights and color changes, and auditory feedback in the form of collision sounds.

Introduction This was a textual section that introduced users to the electrical instrument to be learned in that particular module, the structure of the upcoming section and what to expect, the learning goals of the module, and any new electrical components, methodologies or terminologies that would be encountered within that module (Fig. 2). Reading and navigating through this text- and image-based GUI section, at most five pages long, prepared users for the learning task ahead, specifically pertaining to electrical circuitry measurement.

Fig. 2

Introduction section. Provides a slideshow-based introduction to the modules within IBAS (voltmeter example shown)

Guided practice This section provided users with step-by-step guidance, just like a virtual teacher, in learning and performing the actions needed to successfully measure electrical parameters with the measurement instruments (Fig. 3). The users were presented with a VE consisting of the instrument, the breadboard, and other electrical components placed on a workbench within a classroom environment, and they interacted with the VE with either hand. As in the training section, users could view the VE from different angles utilizing the head-tracking on the HMD or the desktop monitor. Textual instructions, overlaid at the top of the screen and also read aloud, prompted users to perform the required action to complete each step. The simulation advanced to the next instruction only upon correctly completing the current instruction step. In addition to the multisensory feedback described in the training section, feedback was also provided by the instruction background turning green or red depending on whether the action was performed correctly or incorrectly. There was also simulated pseudo-haptic feedback in the form of stickiness of the breadboard wells, whereby the probes tended to stick or snap onto the breadboard wells as the user passed over the breadboard. The controllers did not provide any actual haptic feedback, but the participants experienced the need to apply some minimal force to break contact with one breadboard well and move to another. This was done to reduce accuracy errors while placing a probe into a breadboard well. On average, users took about 8 min to complete the guided practice sections (mean = 492.04 s, SD = 137.04 s).

Fig. 3

Guided practice section. Provides step-by-step guidance and feedback to successfully perform measurement activities using the instrument. Example is shown for multimeter guided practice. The blue and red spheres indicate virtual positions of the left and right hand respectively (color figure online)

Exercise This was an open-ended exercise section in which users were presented with tasks similar to the guided practice, but without the helpful step-by-step guided instructions, and with increasing complexity (Fig. 4). There were three exercise tasks in each module, and the users were expected to apply the knowledge gained from the introduction and guided practice sections to perform a successful measurement and answer the question for that particular task. The users were not limited or forced to perform any sequence of steps, and visual feedback was provided for correct or incorrect answers in the form of the question background turning green or red accordingly. Upon successfully completing the exercise, the simulation advanced to the next instrument module. On average, users took about 10 min to complete all the exercises (mean = 601.37 s, SD = 195.28 s).

Fig. 4

Exercise section. Open-ended section where users are asked to take a measurement and are expected to utilize knowledge gained from previous sections. The above image shows an exercise task from the voltmeter module. The blue and red spheres indicate virtual positions of the left and right hand respectively (color figure online)

3.2.2 Modeling, rendering, and animation

We used Unity3D (Unity 2015), a powerful game development and rendering engine, to create IBAS. The 3D objects used in IBAS were modeled using Blender and then imported into Unity3D. All the animations were performed using scripts within the Unity3D development environment. The system ran on a Windows XP machine with an Intel Core 2 Extreme QX6850 processor, an NVIDIA GeForce 8800 GT graphics card, and 4 GB of DDR2 memory. All the software, system drivers and the BIOS were updated to the latest available versions.

3.2.3 3D user interaction

Two-handed 3D user interaction with the VE was enabled using the Razer Hydra gaming controller (Razer 2015), which has 6-DOF magnetic motion tracking. We used the Sixense Unity plug-in to interface with the Hydra from within the Unity development environment. Positions and orientations of each Hydra controller were obtained and applied to virtual manipulators, maintaining a constant distance offset from the actual physical position of the controllers. These virtual manipulators were represented by a blue and a red sphere (selector spheres), as shown in Figs. 3 and 4, one for each hand of the user, which were translated and rotated according to the user’s hand movements. All the tasks in IBAS could be performed using either hand or both hands simultaneously. Visual feedback was provided when the selector spheres collided with an object in the VE, in the form of a highlighted outline around the object and a change in color of the selector spheres. Interactions such as picking up objects and moving them around could be performed by pressing and holding the trigger button on the Hydra. Releasing the trigger button would release the picked-up object. Rotations in the 3D VE, such as rotating the circular knob on the multimeter, were performed by first colliding with the object to be rotated, holding the trigger button down to select the object for rotation, and then rotating the Hydra along the axis of the desired rotation. The ‘2’ button on the Hydra could be pressed to bring up a number pad for entering numbers when reporting measurements. The numbers on the number pad were selected by moving the selector spheres over them, and pressing the trigger button would then enter the corresponding digit in the answer text box.
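The controller-to-manipulator mapping and the trigger-driven grab-and-release behavior can be summarized in a short sketch. IBAS implements this with C# scripts and the Sixense Unity plug-in; the Python sketch below is purely illustrative, and all names (e.g. OFFSET, scene.collide, follow) are hypothetical stand-ins rather than identifiers from the IBAS source.

```python
import numpy as np

# Assumed constant positional offset (m) between the physical controller and its selector sphere.
OFFSET = np.array([0.0, -0.15, 0.25])

class SelectorSphere:
    """One virtual manipulator (blue or red sphere) driven by one Hydra controller."""

    def __init__(self):
        self.position = np.zeros(3)
        self.rotation = np.eye(3)      # controller orientation as a 3x3 rotation matrix
        self.held_object = None        # object currently picked up, if any

    def update(self, hydra_position, hydra_rotation, trigger_pressed, scene):
        # Apply the controller pose to the sphere, keeping the constant distance offset.
        self.position = hydra_position + OFFSET
        self.rotation = hydra_rotation

        hit = scene.collide(self.position)   # hypothetical helper: object touched by the sphere, or None

        if trigger_pressed:
            if self.held_object is None and hit is not None:
                self.held_object = hit       # pick up: attach the object while the trigger is held
            if self.held_object is not None:
                # Translate/rotate the held object with the hand movement.
                self.held_object.follow(self.position, self.rotation)
        else:
            self.held_object = None          # releasing the trigger drops the picked-up object

        # Visual feedback: highlight the touched object (the engine also recolors the sphere).
        scene.set_highlight(hit, enabled=hit is not None)
```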

3D viewing within the VE in both the HMD and DesktopVR conditions was facilitated by head-tracking using the InterSense IS-1200 VisTracker system (InterSense 2015). This system provides 6-DOF tracking using hybrid inertial-optical technology. There was a direct-to-scale transfer of the tracker’s position and orientation data to the camera in the VE, with a transform correction to account for the tracker’s position on the head with respect to the user’s eye position. The users experienced natural head-tracked movements in the HMD condition, and head-tracked, perspective-corrected viewing in the DesktopVR condition.
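The direct-to-scale transfer of the tracker pose to the virtual camera, with the fixed tracker-to-eye correction, amounts to a simple rigid-transform composition. The sketch below is illustrative only; the EYE_OFFSET value is an assumed placeholder, not the calibration used in the study.

```python
import numpy as np

# Assumed offset (m) from the IS-1200 tracker mount to the user's eye midpoint,
# expressed in the tracker's local frame.
EYE_OFFSET = np.array([0.0, -0.08, 0.05])

def camera_pose(tracker_position, tracker_rotation):
    """Map a 6-DOF tracker pose (position in m, 3x3 rotation matrix) to the VE camera pose,
    direct-to-scale, with the head-to-eye correction applied in the tracker frame."""
    cam_rotation = tracker_rotation
    cam_position = tracker_position + tracker_rotation @ EYE_OFFSET
    return cam_position, cam_rotation
```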

3.2.4 Scenario design and generation

The scenarios were designed using a hierarchical task requirement analysis for measurements in electrical circuitry for each of the instruments, based on subject matter expert feedback from technical college instructors. The scenarios were developed for beginner-level users, having little or no knowledge of the subject. The introduction section provided all the basic knowledge required to prepare the users for the guided practice and the exercise sections. The guided practice scenario was designed at the lowest level of complexity, focusing on learning the steps of taking a measurement at the most basic level, and the exercises were designed to further the understanding from guided practice with tasks gradually increasing in level of complexity.

3.2.5 Physics and inter-object interactions

Inter-object collisions and interactions were handled using colliders and rigid bodies within the Unity3D physics engine. Rendering the electrical wires was one special case, where we used Bezier curves to draw curved lines and dynamically update them while the user interacted with the wires. Another special case was the simulated pseudo-haptic stickiness of the probes to the breadboard wells, where we used raycasting to determine whether the probe was hovering over a well and, if so, set the probe’s position to that well. The probe would then provide mild resistance to moving to a different well, giving a simulated sense of stickiness and making the movement somewhat discrete instead of smooth and continuous.
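A minimal sketch of this pseudo-haptic stickiness, assuming a downward raycast that returns the well under the probe and an illustrative breakaway threshold, is given below; the actual implementation uses Unity raycasts and colliders, and the names and threshold here are hypothetical.

```python
import numpy as np

BREAKAWAY_DIST = 0.01   # assumed distance (m) the hand must move before the probe leaves a well

class StickyProbe:
    def __init__(self):
        self.current_well = None   # well the probe tip is currently snapped to

    def update(self, hand_position, breadboard):
        """Snap the probe to the well under the hand; resist small movements away from it."""
        well = breadboard.raycast_down(hand_position)   # hypothetical helper: well under the probe, or None

        if self.current_well is None:
            self.current_well = well
        else:
            # Only break contact once the hand has moved past the breakaway threshold,
            # which makes probe placement discrete rather than smooth and continuous.
            dist = np.linalg.norm(hand_position - self.current_well.position)
            if dist > BREAKAWAY_DIST:
                self.current_well = well

        # The rendered probe tip sits on the snapped well (if any), not at the raw hand position.
        return self.current_well.position if self.current_well is not None else hand_position
```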

3.3 Experiment procedure and design

Participants were randomly assigned to one of two conditions for the interactive breadboard activity simulation (IBAS) in a between subjects design: IBAS with non-stereoscopic, head-tracked head-mounted display (IBAS-HMD) and IBAS with non-stereoscopic, head-tracked and perspective corrected DesktopVR display (IBAS-DVR).

The two viewing metaphors (IBAS-DVR and IBAS-HMD) are of particular interest because of the varied VR experiences each provides. On one hand, the IBAS-HMD viewing metaphor is immersive and provides a first-person perspective, as if the user were in the simulated laboratory environment. The IBAS-DVR, on the other hand, provides a non-immersive view of the environment, like looking through a window into the laboratory environment, with a body-based interaction paradigm at the level of head-tracked perspective correction. However, one can argue that the IBAS-DVR is a cheaper alternative, if perspective correction is provided via off-the-shelf devices such as the Xbox Kinect and a desktop monitor display, as compared to the cost of the eMagin HMD ($1500), although the Oculus HMD is a much cheaper alternative to the eMagin.

3.3.1 Hypothesis and research question

The study aimed to determine the effects of these two common display metaphors on psychomotor skills learning in an interactive 3D virtual environment pertaining to electrical circuitry measurement. We asked the following research questions:

1.

    What are the quantitative differences in users’ psychomotor skills learning and task performance with respect to cognition, time to complete task, accuracy, knowledge transfer to the real world and real-world psychophysical task performance, between the IBAS-HMD and IBAS-DVR conditions?

2.

    What are the qualitative differences in users’ psychomotor skills learning and task performance with respect to cognition, presence, affordance and acceptance of the VE between the conditions IBAS-HMD and IBAS-DVR?

Based on these research questions, we hypothesize the following:

1.

    The overall learning outcomes of the participants will improve after using IBAS, and participants will perform quantitatively better in post-cognition tests as compared to pre-cognition tests.

2.

    Participants in the IBAS-HMD condition will perform quantitatively better as compared to those in IBAS-DVR with respect to (a) cognition, (b) psychomotor skills learning, and (c) real-world psychophysical task performance.

3.

    Participants in the IBAS-HMD condition will report more positively than those in IBAS-DVR with respect to cognition, presence, affordance and acceptance of the VE.

3.3.2 Materials and apparatus

For the IBAS-HMD condition, participants were seated in a chair wearing an eMagin Z800 3DVisor head-mounted display having a 40\(^\circ\) diagonal field of view for each eye at an analog SVGA resolution of 800 \(\times\) 600 at 60 Hz. For the IBAS-DVR condition, participants were seated in a chair 33 in (838.2 mm) away from a 20.4 in (518.4 mm) \(\times\) 12.7 in (324 mm) Dell 2408WFP monitor display to match the field of view of the IBAS-HMD condition. To further keep the viewing conditions consistent, the monitor was set to a resolution of 800 \(\times\) 600 at 60 Hz.
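The 33 in viewing distance follows from matching the monitor’s diagonal field of view to the HMD’s 40\(^\circ\) diagonal field of view; reconstructing the arithmetic from the reported display dimensions:

\[
d_{\mathrm{diag}} = \sqrt{20.4^2 + 12.7^2} \approx 24.0\ \mathrm{in}, \qquad
\theta = 2\arctan\!\left(\frac{d_{\mathrm{diag}}/2}{33\ \mathrm{in}}\right) \approx 2\arctan(0.364) \approx 40^{\circ}.
\]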

An InterSense IS-1200 6-DOF VisTracker system was attached to the frame of the HMD in the IBAS-HMD condition, or to a similar head-worn frame without an HMD in the IBAS-DVR condition, for head-tracking. Participants used the two-handed Razer Hydra game controller with 6-DOF magnetic tracking to interact with the VE. The experiment PC was described earlier in Sect. 3.2.2.

3.3.3 Procedure

The procedure was composed of the following steps:

1.

    Pretrial Upon arrival, participants were provided an informed consent form. After giving consent, participants were asked to complete four pre-experiment questionnaires: (a) the Guilford–Zimmerman (GZ) Spatial Orientation test (Guilford and Zimmerman 1948); (b) the Visual Memory (MV-2) test and (c) the Paper Folding (VZ-2) test of Visualization from the ETS kit of factor-referenced tests (Ekstrom et al. 1976); and (d) a pretrial Cognition Questionnaire. Participants were randomly assigned to either the IBAS-HMD or the IBAS-DVR condition.

2.

    Experiment trial The two-handed game controller, the HMD (in the case of the IBAS-HMD condition) and the head-tracking system were explained to participants, who were then asked for permission to place the tracker on their head. Figure 1 shows this experiment setup. The simulation was then started and participants were asked to follow the instructions given in the simulation. IBAS guided the participant first through a training section, where the users performed simple 3D pick-and-place tasks to acclimatize themselves to the two-handed 3D interaction with the VE, as well as the viewing method. IBAS then led the participant through the introduction, guided practice and exercise sections of each of the instrument modules: analog voltmeter, analog ammeter and digital multimeter. Throughout the experiment, IBAS collected position/orientation and event-specific data for entities in the VE on a separate thread and stored it in an XML file (a logging sketch follows this list).

3.

    Real-world test After the experiment trial concluded, participants were asked to perform three real-world psychophysical tasks of taking electrical measurements on a real breadboard circuit, one for each instrument. Participants’ actions were video recorded during this test for later analysis of user performance.

4.

    Post-trial Participants were asked to complete two post-experiment questionnaires: (a) a post-trial Cognition Questionnaire; and (b) the Witmer–Singer Presence Questionnaire (Witmer and Singer 1998). Finally, the participants were debriefed.
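As noted in the experiment-trial step above, IBAS logged pose and event data on a separate thread into an XML file. A minimal Python sketch of such a logger, using only the standard library, is shown below; the element and attribute names are illustrative, not the actual IBAS logging schema.

```python
import threading
import queue
import time
import xml.etree.ElementTree as ET

class PoseLogger:
    """Background logger: the simulation loop enqueues samples, and a worker thread
    accumulates them; a single XML file is written when the session closes."""

    def __init__(self, path):
        self.path = path
        self.samples = queue.Queue()
        self.root = ET.Element("session", start=str(time.time()))
        self.running = True
        self.worker = threading.Thread(target=self._drain, daemon=True)
        self.worker.start()

    def log(self, entity, position, orientation, event=""):
        # Called from the simulation loop; cheap, non-blocking enqueue.
        self.samples.put((time.time(), entity, position, orientation, event))

    def _drain(self):
        while self.running or not self.samples.empty():
            try:
                t, entity, pos, orient, event = self.samples.get(timeout=0.1)
            except queue.Empty:
                continue
            ET.SubElement(self.root, "sample", time=f"{t:.3f}", entity=entity,
                          position=" ".join(f"{v:.4f}" for v in pos),
                          orientation=" ".join(f"{v:.4f}" for v in orient),
                          event=event)

    def close(self):
        self.running = False
        self.worker.join()
        ET.ElementTree(self.root).write(self.path)
```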

3.3.4 Measures

The independent variable is the viewing method, with two levels: the head-tracked head-mounted display (IBAS-HMD) and the head-tracked, perspective-corrected DesktopVR display (IBAS-DVR). There were a number of dependent variables that we used to evaluate the effect of viewing metaphor on psychomotor skills learning.

1.

    Quantitative measures: Bloom’s cognitive taxonomy Bloom’s taxonomy (Bloom et al. 1956) is a way to organize the levels of cognitive skills learned in a classroom and similar educational situations. This taxonomy has been popular in the VR and education literature, and is utilized frequently to evaluate learning. For example, one study (Jou and Wang 2013) investigated the effects of virtual reality environments on learning performance of technical skills in accordance with Bloom’s Taxonomy. Another study (Schmitz et al. 2012) analyzed the educational potential of augmented reality games for learning using the taxonomy. Also, Bloom’s taxonomy was utilized to study the effects of travel technique on cognition in virtual environments (Zanbaka et al. 2004).

    We analyzed and compared scores of each participant from pre- and post-experiment cognition questionnaires, which were based on four condensed levels of Bloom’s Taxonomy (Crooks’ condensation) (Crooks 1988), namely Knowledge (recall or recognize information), Application (use or apply knowledge), Analysis (interpret elements, analyze or break down) and Evaluation (critical thinking, strategic comparison and review). Below are a few example questions from the questionnaire under each cognitive level:

    Knowledge

    • Identify the measurement instrument shown in the picture.

    • Using the illustration of the analog ammeter scale below, determine (a) the unit of measurement and (b) the number of fractional divisions, or graduations, per unit of measurement.

    Application

    • Give the steps you would follow to measure an electrical parameter using the given instrument.

    • Explain step-by-step the process that you would follow when measuring voltage across a circuit element using the instrument shown.

    Analysis

    • Given a scenario of a colleague using a measurement instrument, analyze whether the colleague is making any mistakes, and provide an explanation and the solution.

    • Imagine you saw a colleague grabbing a 0–10V DC voltmeter to measure the current through an electrical circuit. (a) describe the issues you expect your colleague to encounter, and (b) discuss how you would explain to your colleague why this voltmeter is not appropriate for making this measurement.

    Evaluation

    • Reflect on your personal and work experiences and describe one scenario in which you used or could have used measuring instruments to assist on a task or project.

    Dave’s psychomotor taxonomy Dave’s taxonomy for the psychomotor domain (Dave 1975) organizes motor skills into the following five categories: Imitation (observe and replicate), Manipulation (reproduce activity from instruction or memory), Precision (execute skill reliably, independent of help), Articulation (adapt and integrate expertise to satisfy a non-standard objective), and Naturalization (automated, unconscious mastery of activity and related skills at strategic level). Psychomotor skills can be assessed by mapping measured variables to these categories and analyzing differences across the categories.

    We administered three real-world psychophysical tasks to the participants to study the transfer effects of learning in the virtual environment. On a given electrical circuit assembled over a breadboard, the three tasks were (1) to measure the resistance across a particular electrical element, (2) to measure the voltage across a particular section of the circuit, and (3) to measure the current flowing through the circuit. We used the first 3 levels of Dave’s psychomotor taxonomy, namely the imitation, manipulation and precision levels, for evaluating the psychophysical factors. For the imitation level, we looked at the mean time to complete the guided practice tasks to measure the efficiency of imitation of the given instructions. For the manipulation level, we looked at how well the users remembered and followed the steps learned from the guided practice section while performing the real-world psychophysical task, and also the number of mistakes made while performing the task. Finally, for the precision level, we looked at the time to complete the virtual exercises, the time to complete the real-world task, the number of contacts made with the breadboard wells in the virtual exercises, and the distance the virtual probes traveled in the VE.

    Performance and visual perspective changes We also analyzed various performance-related variables on the movements of the end effectors and the head position/orientation, which attest to the perceptual-motor affordances that learners experienced in the DesktopVR versus HMD viewing during simulation-based education of the psychophysical skills related to electrical circuitry.

2.

    Qualitative measures: In an attempt to better interpret the quantitative measures, we asked the participants a number of open-ended discussion questions. These questions were used to assess their overall experience in the simulation. Some examples of these questions are: “How helpful were the step-by-step guided instructions in learning the tasks?” and “What was the best and worst thing about the simulation?”

3.

    Co-factors: In addition to the main measures described above, we gathered additional information regarding cognitive factors such as the participant’s spatial orientation using the GZ test; visual memory using the MV-2 test; and visualization using the VZ-2 test. We also collected responses from the participants regarding their sense of presence in the VE using the Witmer–Singer presence questionnaire.

4 Results and analysis

Out of a total of 24 participants, 23 were considered for data analysis, with 12 in the HMD condition and 11 in the DesktopVR condition. One participant in the DVR condition was excluded due to technical errors in the data collection modules of the experiment. Overall, no significant differences were found in spatial orientation (GZ), visual memory (MV-2), or visualization (VZ-2) scores between participants in the HMD and DVR viewing conditions. Therefore, our participant pools in both conditions had very similar backgrounds with respect to their innate spatial abilities.

For the participants’ sense of presence scores on the Witmer–Singer questionnaire (Witmer and Singer 1998), we found a significant difference between the HMD and DVR conditions for the question: “How quickly did you adjust to the virtual environment experience?” The responses were scored on a seven-point Likert scale (1 = not at all, 7 = less than a minute). Participants adjusted to the simulation significantly quicker in the HMD condition (M = 5.67, SD = 0.88) as compared to the DVR condition (M = 4.33, SD = 1.15), t(22) = −3.17 and \(p\,<\,0.004\). Since the DVR condition was similar to viewing the environment through a window, the visual disconnect could have been jarring to the participants and could have affected their closed-loop visuomotor response while adjusting to the environment. Objects in the DVR condition can appear distorted when the user tries to position themselves at the right vantage point; this artifact of perspective warping is not present in HMD viewing. There was no significant difference in participants’ sense of presence scores between the HMD and DVR viewing conditions for any of the remaining questions. The quality of the DVR condition was higher than that of a regular desktop application due to the head-coupled perspective correction. However, the quality of the HMD condition was lower than the current state-of-the-art technology, which offers higher FOV and resolution. This could explain why the experience of presence was comparable between participants in the two conditions.

4.1 Quantitative results

4.1.1 Cognitive domain

We analyzed the differences in the scores of the participants on the cognition questionnaire in each of the four condensed levels of Bloom’s Taxonomy (Crooks’ condensation), namely Knowledge, Application, Analysis and Evaluation. In a 2 \(\times\) 2 factorial design, we examined the differences in pre- and post-cognition scores as a within-subjects repeated measure in each of the four levels separately, and the two viewing metaphors, IBAS-DVR versus IBAS-HMD, as a between-subjects blocking factor (Fig. 5). We conducted a 2 \(\times\) 2 mixed model ANOVA analysis and Tukey HSD post hoc tests to perform follow-up comparisons in the examination of differences due to the individual factors, namely the within-subjects experiment session (pre versus post) and the between-subjects viewing condition (HMD vs. DVR). Normality of the interval-scale dependent variable scores was verified prior to the mixed model ANOVA analysis using a statistical test of normality. Table 1 shows a summary of the results.
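For readers who wish to reproduce this style of analysis, a 2 \(\times\) 2 mixed-model ANOVA can be run, for example, with the pingouin package in Python. The sketch below is purely illustrative: the input file and column names are hypothetical, and the follow-up pairwise comparisons shown are simple t tests rather than the Tukey HSD procedure used in the study.

```python
import pandas as pd
import pingouin as pg

# Long-format data: one row per participant per session, e.g.
# columns: participant, condition ("HMD"/"DVR"), session ("pre"/"post"), score (%)
df = pd.read_csv("knowledge_scores.csv")   # hypothetical file name

# 2 x 2 mixed-model ANOVA: session is the within-subjects factor,
# viewing condition the between-subjects factor.
aov = pg.mixed_anova(data=df, dv="score", within="session",
                     subject="participant", between="condition")
print(aov)

# Illustrative follow-up pairwise comparisons (pre vs. post within each condition).
posthoc = pg.pairwise_tests(data=df, dv="score", within="session",
                            subject="participant", between="condition")
print(posthoc)
```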

Table 1 Descriptive statistics and summary of results for quantitative analysis in the cognitive domain
Fig. 5

Mean participant scores (%) across the four condensed levels of Bloom’s Cognitive Taxonomy for each of the IBAS-DVR and IBAS-HMD conditions

In the Knowledge level, the mixed model ANOVA comparing the main effect of experiment session revealed that participants in the pre-session scored significantly lower (M = 42 %, SD = 0.27) than in the post-session (M = 67 %, SD = 0.19), F(1, 42) = 11.65 and \(p\,=\,.0001\). Post hoc evaluation revealed that participants in the HMD condition scored significantly higher in the post-session (M = 68 %, SD = 0.18) as compared to the pre-session (M = 41 %, SD = 0.30), t(22) = 2.60 and \(p\,=\,0.01\). Post hoc evaluation also revealed that participants in the DVR condition scored significantly higher in the post-session (M = 66 %, SD = 0.22) as compared to the pre-session (M = 43 %, SD = 0.25), t(20) = 2.22 and \(p\,=\,0.03\). Participants appear to have improved overall with respect to basic knowledge of electrical circuitry, and to be more consistent in their scores in the post-session as compared to the pre-session.

In the Application level, the mixed model ANOVA comparing the main effect of experiment session revealed that overall participants in the pre-session scored significantly lower (M = 17 %, SD = 0.21) than in the post-session (M = 88 %, SD = 0.09), F(1, 42) = 247.41 and \(p\,<\,0.0001\). The ANOVA comparing the main effect of viewing condition also revealed that overall participants in the HMD condition scored significantly higher (M = 58 %, SD = 0.37) than participants in the DVR condition (M = 47 %, SD = 0.40), F(1, 42) = 6.08 and \(p\,=\,0.01\). Post hoc evaluation revealed that participants in the HMD condition scored significantly higher in the post-session (M = 90 %, SD = 0.08) as compared to the pre-session (M = 26 %, SD = 0.26), t(22) = 8.0 and \(p\,<\,0.0001\). Post hoc evaluation revealed that participants in the DVR condition scored significantly higher in the post-session (M = 86 %, SD = 0.10) as compared to the pre-session (M = 8 %, SD = 0.06), t(20) = 21.7 and \(p\,<\,0.0001\). There was also a significant difference in participants’ scores in the pre-experiment session, in that participants in the HMD condition scored significantly higher (M = 26 %, SD = 0.26) than participants in the DVR condition (M = 8 %, SD = 0.06), t(21) = 2.29 and \(p\,=\,0.03\). Nevertheless, participants seem to have greatly improved in both conditions with respect to the application domain of electrical circuitry learning, and to be more consistent in their scores in the post-session as compared to the pre-session.

In the Analysis level, the mixed model ANOVA comparing the main effect of experiment session revealed that overall participants in the pre-session scored significantly lower (M = 16 %, SD = 0.30) than in the post-session (M = 72 %, SD = 0.27), F(1, 42) = 43.50 and \(p\,<\,0.0001\). Post hoc evaluation revealed that participants in the HMD condition scored significantly higher in the post-session (M = 82 %, SD = 0.15) as compared to the pre-session (M = 13 %, SD = 0.28), t(22) = 7.41 and \(p\,<\,0.0001\). Post hoc evaluation revealed that participants in the DVR condition also scored significantly higher in the post-session (M = 60 %, SD = 0.33) as compared to the pre-session (M = 20 %, SD = 0.34), t(20) = 2.84 and \(p\,=\,0.01\). In both viewing conditions, participants appear to have improved overall with respect to the higher learning process of analysis in the electrical circuitry domain, and to be more consistent in their scores in the post-session as compared to the pre-session.

In the Evaluation level, the mixed model ANOVA comparing the main effect of experiment session revealed that overall participants in the pre-session scored significantly lower (M = 10 %, SD = 0.26) than in the post-session (M = 77 %, SD = 0.28), F(1, 42) = 79.76 and \(p\,<\,0.0001\). The ANOVA comparing the main effect of viewing condition also revealed that overall participants in the HMD condition scored significantly higher (M = 54 %, SD = 0.43) than participants in the DVR condition (M = 32 %, SD = 0.41), F(1, 42) = 7.95 and \(p\,=\,0.007\). Post hoc evaluation revealed that participants in the HMD condition scored significantly higher in the post-session (M = 89 %, SD = 0.10) as compared to the pre-session (M = 19 %, SD = 0.34), t(22) = 6.7 and \(p\,<\,0.0001\). Post hoc evaluation revealed that participants in the DVR condition scored significantly higher in the post-session (M = 65 %, SD = 0.36) as compared to the pre-session (M = 0, SD = 0), t(20) = 5.91 and \(p\,<\,0.0001\). There was also a significant difference in participants’ scores in the post-experiment session, in that participants in the HMD condition scored significantly higher (M = 89 %, SD = 0.10) than participants in the DVR condition (M = 65 %, SD = 0.36), t(21) = 2.14 and \(p\,=\,0.04\). Overall, participants seem to have greatly improved in both conditions with respect to the higher-level learning outcomes pertaining to evaluation. Furthermore, the results indicate that the viewing condition is important in grasping evaluation concepts with psychomotor skills learning, in that participants in the IBAS-HMD viewing condition seem to have learned these concepts better than participants in the IBAS-DVR condition.

4.1.2 Psychomotor domain

Table 2 shows a summary of the results obtained within the psychomotor domain.

Table 2 Descriptive statistics and summary of results for quantitative analysis in the psychomotor domain

For the Imitation level of Dave’s psychomotor taxonomy, we analyzed the mean time to complete the guided practice sessions within the IBAS-HMD and IBAS-DVR conditions (Fig. 6). An independent samples t test revealed that participants in the DVR condition completed the guided practice significantly faster (M = 391.55 s, SD = 101.32 s) than those in the HMD condition (M = 586.48 s, SD = 100.18 s), t(20) = 4.63, \(p\,<\,0.01\).

Fig. 6

Mean time taken (in s) by the participants to complete the guided practice and virtual exercises sections in each of the IBAS-DVR and IBAS-HMD conditions

In the Manipulation level of Dave’s psychomotor taxonomy, we used a two-way mixed model ANOVA to analyze the effect of virtual world versus real world as one factor, and the HMD versus DVR condition as the other factor, on the order of steps performed to complete the task relative to the order learned in the guided practice sessions. However, the results were not significant.

In the Precision level of Dave’s psychomotor taxonomy, we analyzed the mean time to complete the virtual world exercises using an independent samples t test (Fig. 6). Results showed that participants in the DVR condition completed the exercises significantly faster (M = 161.26 s, SD = 52.31 s) than participants in the HMD condition (M = 242.02 s, SD = 50.36 s), t(20) = 3.76, \(p\,<\,0.01\).

Fig. 7

Mean number of probe contacts with the breadboard wells in the virtual environment across the IBAS-DVR and IBAS-HMD conditions

We also analyzed the mean time to complete the real-world tests across the two conditions of HMD and DVR, using a two-tailed independent samples t test. However, the results did not show a significant difference.

Further, we analyzed the number of probe contacts made with the breadboard wells in the virtual world across the DVR and HMD conditions (Fig. 7). An independent samples t test revealed that the number of contacts made with the virtual probes was significantly higher in the HMD condition (M = 614, SD = 178) as compared to the DVR condition (M = 468, SD = 96), t(20) = 2.47, \(p\,<\,0.05\).

Fig. 8

Mean distance traveled by the negative and positive instrument probes in the virtual environment across the IBAS-DVR and IBAS-HMD conditions

Finally, we compared the distance traveled by the negative and positive probes in the virtual world across the two conditions using an independent samples t test (Fig. 8). The distance traveled by the negative probe in the HMD condition was significantly larger (M = 208.01 cm, SD = 69.37) than that in the DVR condition (M = 130.22, SD = 57.99), t(20) = 2.93, \(p\,<\,0.01\). The distance traveled by the positive probe was also significantly larger in the HMD condition (M = 127.86, SD = 36.10) as compared to that in the DVR condition (M = 82.46, SD = 31.59), t(20) = 3.22, \(p\,<\,0.01\).

4.1.3 VR performance variables

Time to complete tasks An independent samples t test comparing the time to complete the various interactive sessions of the interactive breadboard activity simulation (IBAS) was conducted between the two viewing conditions. In the guided practice session, participants in the HMD condition took significantly longer (M = 586.47 s, SD = 100.17) than participants in the DVR condition (M = 391.55 s, SD = 101.32), t(21) = 4.64 and \(p\,=\,0.0001\). In the Voltmeter open exercise session, participants in the HMD condition took significantly more time (M = 250.50 s, SD = 74.67) than participants in the DVR condition (M = 150.73 s, SD = 37.78), t(21) = 3.98 and \(p\,=\,0.0006\). In the Ammeter open exercise session, participants in the HMD condition took significantly longer (M = 272.48 s, SD = 74.39) than participants in the DVR condition (M = 176.69 s, SD = 89.99), t(21) = 2.79 and \(p\,=\,0.01\). Likewise, in the multimeter open exercise session, participants in the HMD condition took significantly more time (M = 203.06 s, SD = 32.29) than participants in the DVR condition (M = 156.33 s, SD = 45.55), t(21) = 2.85 and \(p\,=\,0.009\).

Distance covered during bimanual interaction In order to compare the total distances maneuvered by the participants during two-handed interaction (measured in cm) in the simulated breadboard tasks in the open exercise session of IBAS, we conducted an independent samples t test comparing the distances covered by the Razer Hydra effectors in the dominant and non-dominant hands of the participants between the two viewing conditions. We found that the total distance covered by the controller in the dominant hand was significantly larger in the Voltmeter open exercise session in the HMD condition (M = 1281.85 cm, SD = 745) than in the DVR condition (M = 696.57 cm, SD = 392), t(21) = 2.32 and \(p\,=\,0.03\). We also found that the total distance covered by the dominant hand controller in the Ammeter open exercise session was again significantly larger in the HMD condition (M = 836.01 cm, SD = 427) than in the DVR condition (M = 417.65 cm, SD = 239.74), t(21) = 2.86 and \(p\,=\,0.009\).

Number of virtual probe contacts with the breadboard wells In order to compare the number of contacts that the participants made while placing the probes (+ and −) on the breadboard during two-handed interaction in the simulated breadboard open exercise session, we conducted an independent samples t test comparing the number of virtual probe contacts with the breadboard between the two viewing conditions. We found that participants made a significantly higher number of contacts with the breadboard in the Voltmeter exercise session in the HMD condition (M = 247.58, SD = 127.89) than in the DVR condition (M = 163.63, SD = 32.69), t(21) = 2.11 and \(p\,=\,0.04\). Likewise, we also found that participants made a significantly higher number of contacts with the breadboard in the Ammeter exercise session in the HMD condition (M = 201.16, SD = 32.84) than in the DVR condition (M = 156.54, SD = 64.04), t(21) = 2.23 and \(p\,=\,0.04\).

Although participants could interact with both hands in the simulated breadboard activities in the guided practice and open exercise sessions, we found that participants generally preferred to maneuver the probes and to select and manipulate various parts of the simulated breadboard components using their dominant hand. Also, we carefully matched the constant distance offset between the virtual manipulators and the physical controllers equally in both viewing conditions, in order to be consistent in our experiment design. However, participants seem to have enjoyed a higher level of perceived affordances in the IBAS-HMD condition as opposed to the IBAS-DVR condition, as evidenced by the significantly greater distances covered, number of virtual probe contacts, and time to complete the task during manual dexterous activities in the IBAS-HMD condition as compared to the IBAS-DVR condition.

4.1.4 Visual perspective changes

Here we analyzed participants’ head-tracked position and orientation data, continuously logged by the VR system, in order to ascertain to what extent participants changed their visual perspective and attended to the simulated psychomotor task from a suitable vantage point (Figs. 9, 10). We computed the total Euclidean distance covered by the participants’ head as they moved side to side to adjust the viewing angle in the different viewing conditions. We conducted an independent samples t test on the total distance covered by the participants’ head (measured in cm) and the total head rotation in degrees about the three separate axes of the world coordinate system, in the open exercise session of IBAS. Note that we ensured that the simulation frame rate in both conditions was consistent at around 40 Hz. Participants moved their head a significantly greater distance in the HMD condition (M = 2920 cm, SD = 810.7) than in the DesktopVR condition (M = 1644.47 cm, SD = 895.8), t(21) = 3.58 and \(p\,=\,0.001\). Participants pitched their heads up and down by a significantly larger amount in the HMD condition (M = 3263.85, SD = 925.87) than in the DesktopVR condition (M = 1173.66, SD = 678.60), t(21) = 6.13 and \(p\,<\,0.0001\). Participants panned their heads from side to side by a significantly larger amount in the HMD condition (M = 2248.91, SD = 644.23) than in the DesktopVR condition (M = 1058.65, SD = 410.52), t(21) = 5.23 and \(p\,<\,0.0001\). Also, participants rotated their heads about their viewing direction by a significantly larger amount in the HMD condition (M = 1599.83, SD = 561.76) than in the DesktopVR condition (M = 492.92, SD = 373.37), t(21) = 5.50 and \(p\,<\,0.0001\). Overall, we found that participants exhibited significantly more gaze shifts and changes in their visual perspective in the HMD condition than in the DesktopVR condition while performing the psychomotor skills learning activities.
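These head-movement measures are, in essence, cumulative path lengths over the logged samples: summed Euclidean distances between successive head positions, and summed absolute angle changes about each axis. A sketch of this computation, assuming per-frame positions in cm and Euler angles in degrees (the exact log format and any filtering used in the study are not specified here), is:

```python
import numpy as np

def head_movement_totals(positions, euler_angles):
    """positions: (N, 3) head positions in cm; euler_angles: (N, 3) pitch/yaw/roll in degrees.
    Returns total path length (cm) and total absolute rotation (degrees) about each axis."""
    # Sum of Euclidean distances between consecutive head positions.
    total_distance = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))

    # Unwrap each angle series so a 359 -> 1 degree step counts as 2 degrees, not 358.
    unwrapped = np.degrees(np.unwrap(np.radians(euler_angles), axis=0))
    total_rotation = np.sum(np.abs(np.diff(unwrapped, axis=0)), axis=0)  # per-axis sums

    return total_distance, total_rotation
```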

Fig. 9

Mean distance traveled (in cm) through head movements in the virtual environments across the IBAS-DVR and IBAS-HMD conditions

Fig. 10

Mean amounts of head rotation (in degrees) in the virtual environments across the IBAS-DVR and IBAS-HMD conditions

4.2 Qualitative results

In order to assess the perceived differences in learning the psychomotor skills in the different visual display metaphors, we asked participants to report on the strengths and weaknesses of the HMD and DesktopVR viewing conditions with respect to psychomotor skills learning. We have summarized below, by viewing metaphor, the responses to the question, “What was the best and worst thing about the simulation?”

Strengths of the HMD viewing metaphor:

  • “Practice exercises with feedback after every instrument explanation was very helpful in learning the task step-by-step before the open exercises.”

  • “Best thing is that I can learn various instruments just like sitting in front of the workbench, and the details presented were good.”

  • “Closely resembles the real-world experience!”

Weaknesses of the HMD viewing metaphor:

  • “Although the interaction and 3D simulation was believable, but the head gear hurts. Would be better if I had a more comfortable and less bulky gear.”

  • “I experienced difficulty placing the probes in the correct wells.”

  • “Difficulty getting used to the device for input, and maybe because I never used it before.”

Strengths of the DesktopVR viewing metaphor:

  • “Very engaging similar to the real world, good experience to learn a complicated task using the simulation.”

  • “The look and feel of the simulation was excellent and engaging.”

  • “I liked following a series of steps to complete the task with both hands, and then repeating them on my own later.”

Weaknesses of the DesktopVR viewing metaphor:

  • “The view of the 3D environment felt a little unfamiliar for me, I had to keep the scene within my window.”

  • “The breadboard was difficult to interact with, and I had a difficult time telling the depth.”

  • “Some of the controls were difficult but easy to adjust to.”

Participants generally seemed to prefer the interactive nature of IBAS in learning the psychomotor skills. They liked the guided practice sessions, with feedback, on the breadboard activities, and the open exercise sessions for fine-tuning their performance of the task that they had just learned. Although participants mentioned that the HMD gear was bulky and cumbersome, it was realistic in simulating the experience of performing the breadboard task as if sitting in front of an actual workbench. Participants felt that the DesktopVR viewing was just as engaging, but mentioned that viewing with head-tracked perspective correction while performing the psychomotor task was somewhat unfamiliar (perhaps unnatural) to them. Though viewing conditions were closely matched in both metaphors with respect to resolution, field of view, non-stereoscopic rendering, and head-coupled viewing, participants seemed to perceive the psychomotor learning task as more natural in the HMD viewing as compared to the DesktopVR condition.

4.3 Discussion

This study compared a head-mounted display-based viewing metaphor to a desktop-based viewing metaphor through an empirical evaluation on the task of learning electrical measurement instruments, using the IBAS VR simulation. In general, the participants found IBAS highly engaging and a great learning experience. They gained significant knowledge and understanding of the topic of basic electrical circuitry and parameter measurement, and the results attest to this (hypothesis 1 was supported). VR, therefore, can be an effective tool for education in basic concepts of electrical circuitry, which can help in preparation ahead of learning advanced concepts within a classroom or a training environment.

In a two-pronged approach to analyzing the participant data, we examined the cognitive effects of IBAS using Bloom’s cognitive taxonomy, and the psychomotor effects using Dave’s psychomotor taxonomy. Traditional methods of training and teaching tend to focus on the three lower levels of Bloom’s taxonomy: knowledge, comprehension and application. VR, however, has the potential to impact the higher levels of Bloom’s taxonomy: analysis, synthesis and evaluation. Bell and Fogler (1997) state that these upper levels are more difficult to teach and evaluate than the lower levels, and as a result are not implemented as extensively in most curricula. For IBAS, with respect to the knowledge and analysis levels, the viewing metaphor had little or no significant effect on learning. However, in the application and evaluation levels, participants performed significantly better in the HMD condition than in the DVR condition (hypothesis 2a was partially supported). Therefore, choosing a viewing metaphor between the HMD and DVR conditions would greatly depend on the level that a cognitive task is designed for. An example of such a task for electrical measurements would be training in microchip design and troubleshooting, or electronics maintenance in automobile or aviation equipment. In the case of tasks designed to focus on psychomotor skills rather than cognitive skills, the opposite of hypothesis 2b was partially supported, in that the DVR viewing condition would be preferable, as the results show that participants were better at the imitation and precision levels of Dave’s psychomotor taxonomy in the DVR condition as compared to the HMD condition. Tasks requiring psychomotor precision would include training in medical procedures such as laparoscopic surgery and endoscopy. The study did not find significant effects in virtual world versus real-world task performance across the two conditions, and hypothesis 2c was not supported. One possible reason could be the design and duration of the real-world tasks. A much better design for testing real-world task performance and skill transference is warranted, and a pre- versus post-comparison would yield more robust results in this regard.

The IBAS-DVR condition acted as a window into the virtual world of the simulation, limiting what the user could see of it. The field of view changed with head-tracking and perspective correction so as to correctly display the elements in VR, but the user could not see beyond the limits of the desktop screen. Also, peripheral vision was not blocked in the DVR condition, which would reduce the user's sense of presence. The HMD condition, in contrast, provided a 360° field of regard with a constant field of view. Further, peripheral vision was blocked by the HMD assembly, which strengthened the sense of presence. This would explain why participants spent more time exploring the virtual environment, with significantly more head rotations and orientation changes and significantly more hand movements, in the HMD condition than in the DVR condition. This result supports the observations of Ruddle et al. (1999) in their navigation study, where participants spent less time stationary and looked around significantly more when using an HMD than with desktop viewing. They argue that "one explanation for this behavioral difference may be that the HMD provided an interface in which changes in view direction were natural and required less effort."
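
As an illustration of the head-coupled perspective correction described above, the following is a minimal sketch of the standard off-axis ("fish-tank VR") projection, in which the tracked eye position and the physical screen define an asymmetric viewing frustum. The function name, coordinate conventions, and example values are assumptions for this sketch and are not taken from the IBAS implementation.

```python
import numpy as np

def head_coupled_projection(eye, screen_w, screen_h, near=0.05, far=10.0):
    """Asymmetric (off-axis) perspective frustum for head-coupled desktop viewing.

    `eye` is the tracked head position in metres relative to the centre of the
    monitor, with +z pointing from the screen toward the viewer; the screen is
    assumed to lie in the z = 0 plane.
    """
    ex, ey, ez = eye
    # Frustum extents on the near plane, offset by the eye position so the
    # rendered view stays registered with the physical screen as the head moves.
    left   = (-screen_w / 2.0 - ex) * near / ez
    right  = ( screen_w / 2.0 - ex) * near / ez
    bottom = (-screen_h / 2.0 - ey) * near / ez
    top    = ( screen_h / 2.0 - ey) * near / ez

    # Standard OpenGL-style frustum matrix built from the asymmetric extents.
    proj = np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])
    # View matrix: translate the world so the tracked eye sits at the origin.
    view = np.eye(4)
    view[:3, 3] = [-ex, -ey, -ez]
    return proj @ view

# Example: viewer 60 cm from a 52 cm x 32 cm monitor, head offset 10 cm to the right.
mvp = head_coupled_projection(eye=(0.10, 0.0, 0.60), screen_w=0.52, screen_h=0.32)
```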

Santos et al. (2009) examined various studies comparing VR systems using HMDs and desktops (or similar displays) and found that the differences between these viewing metaphors reported in the literature are either conflicting or insignificant. For example, Pausch et al. (1997) found the HMD condition to be better in a VR search task, specifically when concluding that the searched target was not present, and found no difference in other cases, whereas Robertson et al. (1997) found the desktop condition to be better at search tasks when the target was present, and no difference between the viewing conditions otherwise. Mizell et al. (2002) found no difference between desktop and HMD conditions in their study. Santos et al., in their own study, found that global user performance was better for the desktop than for the HMD viewing. This result is similar to ours: in terms of general VR performance, participants in the DVR condition completed the guided practice and exercise sessions significantly faster than those in the HMD condition. However, this result depends greatly on the type of simulation and the design of the task.

Participants found the VR hardware novel and intriguing, though they seemed to require time and practice to get acclimated to the equipment. Even though the participants enjoyed being immersed in the virtual environment, they stated that they would be more comfortable if the equipment were less bulky. Therefore, for future studies of similar design, care must be taken that either the VR equipment used is comfortable and less bulky, or ample time and practice is provided for adjusting to the equipment.

5 Conclusion and future work

We created an interactive breadboard activity VR simulation (IBAS) to educate users in psychomotor skills pertaining to electrical circuitry. In an empirical evaluation, we compared two popular viewing metaphors (HMD vs. DVR) for psychomotor skills learning using a combination of quantitative cognition variables based on the knowledge, application, analysis, and evaluation levels of Bloom's taxonomy, objective VR performance variables, psychomotor skills assessment task measures, and qualitative subjective questionnaires. In comparing the pre- and post-experiment cognition questionnaire results, we found that in both viewing conditions participants effectively learned the psychomotor skills at all condensed levels of Bloom's taxonomy. However, with respect to the highest level of learning, evaluation, participants in the HMD viewing condition learned the task significantly better than participants in the DVR condition.

In the guided practice and open exercise sessions of IBAS, participants in the HMD viewing condition spent more time exploring the psychomotor task and interacted manually with IBAS more extensively than participants in the DVR viewing condition. Participants in the HMD viewing condition also reported in the subjective self-reports that performing the psychomotor learning task felt more natural. An implication of this study for designers and consumers of VR systems for training and education is that, if higher-level learning pertaining to evaluation is important in psychomotor skills acquisition, then an HMD viewing metaphor may be preferable to a DVR display.

In attempting to carefully match the viewing conditions in the empirical evaluation so as to eliminate confounding variables between them, we introduced several limitations that we hope to address in future work. In both viewing conditions the resolution and field of view were limited. Differences in display resolution can be a confounding factor in measuring performance across different displays, as discussed by Ragan et al. (2013). When comparing DesktopVR to an HMD in a virtual search task, Pausch et al. (1997) likewise maintained the same resolution across conditions to hold variables constant. Further, the display conditions in our study were non-stereoscopic, both to match the HMD condition with the DesktopVR condition and to keep the study design simple by removing stereoscopic vision as a factor. The lack of stereoscopic vision, however, affected depth perception in both scenarios. The controlled environment was primarily motivated by the need for a low-cost VR simulation that we could deploy in partnering technical colleges to teach and train users in the electrical circuitry task in aviation and automotive technical education.

In future studies, we will compare these results against HMD viewing with higher resolution and a wider field of view, and against unrestricted desktop and large-screen displays, to improve the ecological validity of our findings on psychomotor skills learning. We will also compare and contrast the perceived affordances and learning characteristics of our current binocular non-stereoscopic viewing conditions against binocular stereoscopic viewing, to assess the impact of depth perception on psychomotor skills learning. A real-world control group, used to compare the efficacy of skills transfer in the real world with that in the virtual world, will help further generalize our results.