Introduction

The use of simulation technology for teaching and evaluating surgical skills acquisition is driven by the demand for improved quality of care and accountability in surgical outcomes, increasing restrictions on the use of animal models, dwindling resident case logs, medicolegal pressures, and fiscal mandates for cost-effective performance. Laparoscopy and endoscopy have proven ideally suited to virtual-reality simulation training. This approach allows for virtual mentorship, task deconstruction, and the ability to study dynamic comprehensive surgical metrics [1, 2]. With the introduction of the surgical robot in 2001, an additional laparoscopic technological adjunct became available that arguably requires a shorter learning curve for attaining appropriate surgical skills. Robotic surgical training remains restricted, however, because robotic simulators are only now becoming commercially available. Using the actual robot for training can be difficult because of its high utilization for surgery during the week, the high cost of purchasing a robot solely for training purposes, and the space required to house a dedicated training system.

Built to train robotic telesurgical skills, the dV-Trainer (MIMIC Technologies, Inc., Seattle, WA) is a novel portable offline virtual-reality (VR) robotic simulator platform comprising a binocular three-dimensional visual output, two finger telemanipulators similar to the da Vinci robotic system's finger cuff effectors (Intuitive Surgical, Sunnyvale, CA), and an optional three-dimensional (3D) projection screen [3]. The telemanipulators use a cable-driven system to track the surgeon's movements through space and to apply force feedback when appropriate (Fig. 1). Dry-lab tasks used to train learners on a da Vinci system can be digitally recreated for the dV-Trainer with the same motion scaling afforded by the robot.

Fig. 1

Virtual-reality 3D dV-Trainer simulator platform

In order to assess basic aspects of validity of the dV-Trainer, the VR platform was introduced during a postgraduate pediatric robotic-assisted laparoscopic (RAL) urology course at the AUA meeting in Anaheim, June 2007. The goals of the pilot design were to assess whether a subject's actual robotic OR experience correlates with performance metrics on the beta release version of the dV-Trainer VR platform, whether the trainer's components were 'acceptable,' and whether learners deemed the content of its exercises educationally valuable. This was measured through participant post-course evaluations and electronic performance metrics collected on the VR platform. The study's objective is to examine evidence of validity supporting the integration of robotic simulation tools into continuing medical education (CME) courses and surgical residency training.

Materials and methods

This study was approved by the Office of Education, AUA, and each course enrollee completed a consent form detailing the study goals at the beginning of the course. Fifteen learners in the course rotated through a didactics station on robotic surgery applications in pediatric urology, a da Vinci-S dry-lab training station, and a virtual-reality robotics training station. The instructors were blinded to the experience of each learner, and each enrollee was given the opportunity to opt out of the study with no change in their skills lab time. Learners were first surveyed about their perceptions of the value of simulation training and were asked to rate the acceptability of the dry-lab da Vinci-S console and the offline VR platform. Performance metrics were recorded on the dV-Trainer for a ring transfer module that was simulated in both the VR and dry-lab skills stations (Fig. 2). Subjects were given 2–3 min to familiarize themselves with the platform and then 5 min for the task. The module involved grasping three successive rings from a series of pegs on the wall of the module and transferring them from one instrument to another before placing them on an upright peg on the floor of the module. The purpose of the module was to teach proper telemanipulator arm and camera clutching, object transfer from one instrument to another, and object placement. Class participants filled out course evaluations and acceptability questionnaires.

Fig. 2

Ring transfer modules for the dry lab (left) and the VR lab (right)

Performance metric results were divided into those obtained from learners with prior robotic experience and learners completely new to any robotics platform—experienced versus nonexperienced. We were deliberate in not describing learners as 'experts' or 'nonexperts' since to date there have been no standardized qualifications for rating a robotic surgeon with either label. The complete performance metrics recorded from the dV-Trainer were the time (s) to place the first three rings on the floor peg, economy of motion as measured by the distance traveled (mm) by each instrument during the task time, peak ring strain (a surrogate for tissue deformability), number of instrument collisions, time each instrument spent out of view (s), and the time the master telemanipulators were out of center (s). A Likert scale was used to assess platform performance, where a rating of 0 corresponded to totally unacceptable and a rating of 6 corresponded to totally acceptable. Because the two groups were independent, means testing was performed with an unpaired (two-sample) t test with unequal variances using Stata SE 9.2 (StataCorp, College Station, TX), with statistical significance set at P < 0.05.
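The group comparison described above reduces to Welch's two-sample t test (a t test for independent groups with unequal variances). A minimal sketch of the computation, using hypothetical task-time values rather than the study data:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees
    of freedom for two independent groups with unequal variances."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)  # sample variance, group 1
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)  # sample variance, group 2
    se2 = v1 / n1 + v2 / n2                        # squared standard error
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical task times (s): n = 4 experienced, n = 11 nonexperienced
experienced = [112.0, 98.5, 130.2, 105.7]
nonexperienced = [180.3, 205.1, 167.8, 222.4, 191.0, 174.6,
                  199.9, 210.2, 185.5, 178.1, 160.3]

t, df = welch_t(experienced, nonexperienced)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The resulting t statistic is compared against the t distribution with the computed (noninteger) degrees of freedom to obtain the P value, which a statistics package such as Stata reports directly.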

Results

Demographics

Four enrollees cited prior robotic experience whereas 11 (73%) participants had no prior robotic experience, and none of the subjects had previously seen or used the dV-Trainer simulator. Almost half of the learners reported that they play video games, and all subjects were right-handed (Table 1).

Table 1 Subject demographics

An overwhelming majority of learners (88%) believed that there is a role for computerized simulation in robotic surgery training, and almost half believed that simulation should be used for accreditation. Based on the needs assessment survey, participants confirmed that a VR platform for robotic surgical training would be beneficial (Table 2). Users were asked to rate the overall acceptability of the simulator as a whole, as well as its components, on a scale from 0 to 6 (Likert scale), whereby scores greater than 3 indicate acceptability for training (Fig. 3). The overall performance ratings for simulation with the offline trainer and the dry lab were 4.69 and 5.23, respectively, demonstrating face validity. Experienced learners were more critical in all categories of acceptability, including the actual da Vinci-S robot master, rating the overall performance of the dry lab and the offline trainer at 4.75 and 3.75, respectively. Nonexperienced learners rated these training modalities at 5.44 and 5.11, respectively.

Table 2 Needs assessment survey completed by subjects
Fig. 3

Simulation module acceptability results. Likert scale (0–6) with 0 being totally unacceptable and 6 being totally acceptable. Error bars SEM

Construct validity

Surgeons with da Vinci-S experience performed better than inexperienced users. Of the six metrics evaluated, task time, economy of motion, and the time the master telemanipulators spent outside the center of the interface workspace differed to a statistically significant degree between those with and without robot experience. Therefore, the construct that experience correlates with performance on the trainer has validity when these three metrics are used for performance evaluation [4] (Table 3).

Table 3 Subject performance metrics on dV-Trainer simulator divided by experience

Discussion

We undertook this study to assess the performance and acceptability of a novel robotic VR simulator. Since most medical institutions do not have the financial or space allowances for a da Vinci system solely for the purpose of training, a validated simulation platform for robotic surgical skills acquisition would be advantageous. In addition, for an institution to recover the costs of a da Vinci system, the robot needs to be in constant use, precluding training time on the robot during regular working hours. This is particularly challenging when trying to adhere to the Accreditation Council for Graduate Medical Education (ACGME) guidelines for resident duty hours. The training of laparoscopic skills through simulation has been shown to improve operative performance in residents [5, 6], yet robotic simulators have not undergone validation studies as legitimate adjuncts to robotic surgical training.

The results of the acceptability ratings demonstrate that both simulation and dry-lab training are considered acceptable modalities for robotic surgical skills acquisition. Class participants rated simulation training on a level comparable to the dry-lab training. This finding is evidence that access to the actual robot system may not be necessary to teach robotic skills. Recognizing that enrollment in the course was voluntary, we cannot ignore the potential for sampling bias, since participants who signed up for the course are most likely early adopters of technology who view robotics or simulation training as useful. We also acknowledge that, although a learner may not perceive the need to be taught on the da Vinci system to learn robotic skills, acquisition of skills on a VR trainer does not necessarily translate into OR proficiency.

Overall, it was interesting to note that the surgeons who had used the robot prior to this course were much more critical of the dry-lab and VR simulation training than were novice users. This finding is consistent with those of Lin et al., who tested another robotic simulator with experts and nonexperts and observed that nonexperts rated the robotic simulation as reflecting clinical skill better than the experts did [7].

The six metrics chosen to evaluate the course enrollees are similar to those tested in prior laparoscopic simulation training studies [8–10]. Economy of motion (EOM) was chosen as a metric since studies have shown that EOM improves with increased proficiency [9, 11], and our data demonstrated a discernible difference. Another metric that has been linked to RAL skill proficiency is instrument collisions and, although fewer occurred in the experienced group, this did not reach statistical significance. This could be explained by the small sample size or by the short task duration. The two metrics involving telemanipulator and instrument positioning are critical to small-space surgery relevant to pediatric procedures, and errors in these can translate into unwanted tissue trauma and awkward ergonomics for the roboticist. The nonexperienced participants tended to keep their instruments out of view longer than the experienced participants, though not to a degree of statistical significance, yet their master telemanipulators spent a significantly longer time out of the center of the workspace. We would expect experienced roboticists to keep instruments within the visual field, so perhaps expanding the task time would yield significant differences between the groups. The dramatic difference between the groups in the time the telemanipulators spent outside the center of the workspace may reflect the differing impact that unfamiliarity with the dV-Trainer had on the two groups: there may be an inherent understanding of telemanipulator function in the experienced group that was lacking in the nonexperienced group. This has implications for training beginners on a dV-Trainer prior to receiving robot console time. New learners may understand telemanipulator function better having used this technology through simulation, and may attain robotic skills faster.

Despite the low power of this study, we nonetheless demonstrated statistical significance in three performance metrics of construct validity. These results support the initiation of a larger trial involving subjects of many different experience levels.

Conclusion

Our study represents an initial demonstration of the acceptability of a VR simulator for the da Vinci surgical robot system and gives insight into the potential value of offline training for robotic skills acquisition. Acceptability and preliminary face and content validity were demonstrated by the learners in this cohort. We also show evidence that the construct of experience correlated with key performance metrics. The data provide evidence that the dV-Trainer may be appropriate for integration into continuing medical education and residency robotic training curricula. Prospective studies examining a larger pool of learners with varying degrees of both robotic and laparoscopic surgical skill are needed and will help determine the utility of integrating this simulator into the surgical curriculum. Further studies assessing predictive validity are ultimately required to confirm that VR robotic simulation training translates into improved surgical outcomes in patients.