Introduction

The da Vinci® robot (Intuitive Surgical, Inc., Sunnyvale, CA) has gained widespread adoption for minimally invasive procedures in multiple specialties [1]. Skillful use of this system requires training and practice, and in this setting simulation environments offer safe tools for teaching and learning the basic technical principles of robotic surgery [2, 3]. Because the da Vinci® does not provide tactile feedback to the surgeon at the console, the development of da Vinci® simulators is freed from a major challenge of high-fidelity laparoscopic simulation: the replication of tactile feedback. These aspects of the da Vinci® platform make it particularly amenable to virtual reality simulation [1].

The dv-Trainer® (Mimic Technologies, Inc., Seattle, WA) (dVT) is a virtual reality simulator based on a complete kinematic representation of the da Vinci® robot. The hardware design and main components of the dVT simulate the da Vinci® surgeon’s console. The dVT simulation software allows the user to observe a virtual environment in which representations of da Vinci® EndoWrist® instruments interact with inanimate objects or relevant virtual anatomy [4, 5]. The dVT and other robotic simulators have been scrutinized in validation studies suggesting that simulators are promising tools to accelerate initial console-skills training and can support the basic execution and some assessment of most core skills [1, 2, 4,5,6,7,8,9].

In robotic surgery, in addition to training of the surgeon, there is a clear place for teamwork training. In particular, the coordination between the console-side surgeon and the first-assistant is crucial, as surgical performance is influenced by this interaction [10,11,12,13]. From this point of view, it might be worthwhile to consider the training of assistants to further improve perioperative patient outcomes. This hypothesis led to the development of the Xperience™ Team Trainer (XTT). The XTT is the latest virtual reality simulator applied to the da Vinci® system, designed to train the first-assistant’s psychomotor skills and facilitate rehearsal of interaction with the console-side surgeon (Fig. 1) [14]. However, limited validation studies are currently available. The aim of this study is to evaluate preliminary results for face validity, content validity, and the workload imposed by the XTT for psychomotor and communication skills training of the first-assistant in robot-assisted surgical procedures.

Fig. 1

Xperience™ Team Trainer platform. The Xperience™ Team Trainer (Mimic Technologies, Inc., Seattle, WA) is a virtual reality simulator available as an optional hardware component for the dv-Trainer® and simulates the patient-side working environment. It consists of a platform with two movable laparoscopic tool ports with force feedback capability and width adjustment and an integrated video monitor, and it is equipped for ergonomic height, rotation, and tilt adjustment supporting common port positions used in robotic procedures. The surgical simulation software used is M-Sim™ 3.0 (Mimic Technologies, Inc., Seattle, WA)

Methods

Description of the simulator

The XTT is available as an optional hardware component for the dVT and simulates the patient-side working environment [14]. It consists of a platform with two movable laparoscopic tool ports with force feedback capability and width adjustment and an integrated video monitor, and it is equipped for ergonomic height, rotation, and tilt adjustment supporting common port positions used in robotic procedures. The surgical simulation software used is M-Sim™ 3.0 (Mimic Technologies, Inc., Seattle, WA).

Choice for exercises

Three exercises were chosen from the 13 available to assess different aspects of the simulator. “Exercise 1” is a laparoscopy-only exercise (Pick & Place laparoscopic demo), “Exercise 2” is a simple team training task (Pick & Place 2), and “Exercise 3” is a complex team training exercise (Team Match Board 1).

  • Pick & Place laparoscopic demo: scattered colored objects are placed onto their corresponding colored boxes using laparoscopic instruments (Fig. 2a).

    Fig. 2

    a Pick & Place laparoscopic demo: scattered colored objects are placed onto their corresponding colored boxes using laparoscopic instruments. b Pick & Place 2 (team exercise): scattered colored objects are placed onto their corresponding colored boxes, performing transfers from the EndoWrist® instruments controlled by the console-side surgeon to the laparoscopic instruments manipulated by the first-assistant, or vice versa, as requested by the task. c Team Match Board 1 (team exercise): scattered letters and numerals placed on the perimeter of a 3 × 3 letter board are placed onto their corresponding positions, performing transfers from the EndoWrist® instruments controlled by the console-side surgeon to the laparoscopic instruments manipulated by the first-assistant, or vice versa, as requested by the task

  • Pick & Place 2: scattered colored objects are placed onto their corresponding colored boxes, performing transfers from the EndoWrist® instruments controlled by the console-side surgeon to the laparoscopic instruments manipulated by the first-assistant, or vice versa, as requested by the task (Fig. 2b).

  • Team Match Board 1: scattered letters and numerals placed on the perimeter of a 3 × 3 letter board are placed onto their corresponding positions, performing transfers from the EndoWrist® instruments controlled by the console-side surgeon to the laparoscopic instruments manipulated by the first-assistant, or vice versa, as requested by the task (Fig. 2c).

Participants

Residents and senior surgeons were invited to participate in a prospective, institutional review board-approved study. Participants were general surgeons, urologists, or gynecologists with previous laparoscopic experience as first operators and previous experience as first-assistant in more than ten robot-assisted procedures.

They were categorized as “Beginners” (fewer than 50 cases as first-assistant in robot-assisted procedures) or “Experts” (more than 50 cases as first-assistant in robot-assisted procedures). All participants were given a standardized introduction to the XTT and viewed a demonstration of all the exercises before starting their trials.

Console-side surgeon: in the operating room, the first operator during the robot-assisted procedure, working at the da Vinci® console; for the purposes of this study, he/she worked at the dVT during the “team exercises”.

First-assistant: in the operating room, the bedside assistant who works with laparoscopic instruments and helps the first operator perform the robot-assisted procedure when needed (traction, exposure of structures, irrigation, aspiration, passing needles, etc.); for the purposes of this study, he/she worked at the XTT simulator.

Study design

Face validity assesses the realism of the simulator itself and whether it represents what it is supposed to represent [15,16,17]. The questionnaire used to establish the face validity of the XTT rated parameters such as ease of use, realism of the exercises, realism of the movement of the instruments, and realism of the interaction between objects simulated by the XTT. These parameters were scored on a five-point Likert scale (difficult to use/not realistic to very easy to use/very realistic). After two attempts at Exercise 1, each participant completed a questionnaire assessing the XTT as a “laparoscopic” simulator (Table 1A). Subsequently, they performed, as first-assistant, two attempts at Exercise 2 and two attempts at Exercise 3 with the same console-side subject (a surgeon expert in the dVT simulation system), and then completed a questionnaire assessing the XTT as a “team trainer” simulator (Table 1B).

Table 1 Questionnaires

Content validity assesses the simulator as a training device and validates that it teaches what it is supposed to teach [15,16,17]. The questionnaire used to establish the content validity of the XTT rated its utility as an educational tool (overall relevance in the training of the first-assistant for robotic surgery, value as a training tool for novices in laparoscopy or for surgeons or residents with previous laparoscopic experience, and value as training to improve communication skills between the console-side surgeon and the first-assistant). These parameters were scored on a five-point Likert scale (not relevant/not useful to very relevant/very useful). This questionnaire also contained a question regarding the utility of visualizing the “metrics” (completion time, motion-based measures, and error measures) after the exercises and a question about suggestions to improve the XTT simulator (vision, instruments, interaction, ergonomics, “models” used in the exercises). After the “team exercises”, the experts completed the content validity questionnaire (Table 1C and D).
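For illustration only, the following Python sketch shows one way five-point Likert answers of the kind collected by these questionnaires could be summarized into the proportions reported in the Results (the share of answers at or above the scale midpoint). The responses in the example are invented; this is not the authors' analysis code.

```python
# Minimal sketch (invented responses): summarizing five-point Likert answers
# as the share of answers at or above the scale midpoint ("average" = 3),
# mirroring how the Results section reports questionnaire outcomes.
from collections import Counter

# Hypothetical answers to one face validity question, pooled across
# participants; values are Likert scores from 1 (lowest) to 5 (highest).
answers = [4, 5, 3, 4, 2, 5, 4, 3, 3, 5, 4, 4, 2, 1, 5, 4, 3, 4, 5, 3, 4]

counts = Counter(answers)
at_or_above_midpoint = sum(counts[score] for score in (3, 4, 5))
share = at_or_above_midpoint / len(answers)
print(f"{at_or_above_midpoint}/{len(answers)} answers ({share:.1%}) "
      f"rated average or better")
```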

Standard laparoscopy has been shown to impose a greater physical workload than robotic surgery [18]. The workload imposed by the XTT was also assessed using the six-item NASA Task Load Index questionnaire, which measures mental demand, physical demand, temporal demand, performance, effort, and frustration along a 21-increment scale ranging from low through medium to high [19]. All participants completed this questionnaire at the end of the trial (Table 2).

Table 2 Six-item NASA task load index—workload imposed by XTT simulator
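For illustration only, the sketch below shows one way the six NASA-TLX subscale ratings could be tabulated: mean and standard deviation per subscale, plus a coarse low/medium/high banding of the 21-increment scale. The subscale names follow the questionnaire; the ratings and the banding thresholds are assumptions, not the authors' scoring procedure.

```python
# Minimal sketch (not the authors' analysis): tabulating six-item NASA-TLX
# ratings collected on a 21-increment scale (coded here as 1-21).
from statistics import mean, stdev

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

# Hypothetical ratings: one dict per participant, one value per subscale.
ratings = [
    {"mental": 14, "physical": 6, "temporal": 13, "performance": 9, "effort": 15, "frustration": 7},
    {"mental": 12, "physical": 8, "temporal": 11, "performance": 10, "effort": 13, "frustration": 4},
    {"mental": 16, "physical": 5, "temporal": 15, "performance": 8, "effort": 17, "frustration": 9},
]

def band(score: int) -> str:
    """Map a 1-21 rating onto assumed low/medium/high thirds of the scale."""
    if score <= 7:
        return "low"
    return "medium" if score <= 14 else "high"

for sub in SUBSCALES:
    values = [r[sub] for r in ratings]
    print(f"{sub:12s} mean={mean(values):.1f} sd={stdev(values):.1f} "
          f"bands={[band(v) for v in values]}")
```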

Statistical analysis

Data were recorded using de-identified subject IDs in a specifically designed database (Microsoft Excel®, Microsoft Corporation, Redmond, WA, USA), and statistical analysis was performed using a commercially available software package (SPSS 15.0 for Windows®, SPSS Inc., Chicago, IL). The χ² test was used for categorical variables, and the t test was used for continuous variables. A P value less than 0.05 was considered statistically significant.

When applicable, the responses of “beginners” and “experts” were compared.
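For illustration only, the sketch below runs a χ² test on a hypothetical contingency table of categorical questionnaire responses and an unpaired t test on hypothetical continuous workload scores, applying the P &lt; 0.05 threshold described above. All data values are invented (loosely in the range reported in the Results), and the code stands in for, rather than reproduces, the authors' SPSS analysis.

```python
# Minimal sketch (invented data) of the group comparisons described above:
# chi-square test for categorical responses, unpaired t test for continuous
# scores, with P < 0.05 as the significance threshold.
import numpy as np
from scipy import stats

# Hypothetical contingency table for one Likert question:
# rows = beginners/experts, columns = counts per response category (1-5).
contingency = np.array([
    [1, 2, 4, 3, 2],   # beginners
    [0, 1, 3, 3, 2],   # experts
])
chi2, p_cat, dof, _ = stats.chi2_contingency(contingency)
print(f"chi2={chi2:.3f}, dof={dof}, P={p_cat:.3f}")

# Hypothetical NASA-TLX frustration scores for the two groups.
beginners = [7, 9, 6, 11, 5, 8, 7, 6, 9, 4, 8, 6]
experts = [4, 3, 6, 2, 5, 7, 3, 4, 2]
t_stat, p_cont = stats.ttest_ind(beginners, experts)
print(f"t={t_stat:.3f}, P={p_cont:.3f}, significant={p_cont < 0.05}")
```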

Results

Twenty-one consenting participants were included, and all of them completed all the exercises, the face validity questionnaires, and the six-item NASA Task Load Index questionnaire: 12 “Beginners” (11 general surgeons and one urologist) and 9 “Experts” (7 general surgeons, one urologist, and one gynecologist), who also completed the content validity questionnaire.

Face validity

Results for the XTT as a “laparoscopy-only” simulation platform are reported in Table 1A. Overall, 77 out of 84 answers (91.7%) ranged from average to very easy to use/very realistic (ease of use, realism of the exercises, movement simulated by the XTT, and interactions between objects simulated by the XTT). There were no statistically significant differences between beginners’ and experts’ responses for any question (χ² test, P = 0.617, P = 0.755, P = 0.466, and P = 0.457, respectively).

Results for the XTT as a “team trainer” simulator are reported in Table 1B. Overall, 76 out of 84 answers (90.5%) ranged from average to very easy to use/very realistic. There were no statistically significant differences between beginners’ and experts’ responses for any question (χ² test, P = 0.232, P = 0.405, P = 0.294, and P = 0.508, respectively).

Overall, comparing the responses to the two face validity questionnaires, no statistically significant differences were observed between the ratings of the XTT as a “laparoscopy-only” simulation platform and as a “team trainer” simulator (χ² test, P = 1.000, P = 0.986, P = 1.000, and P = 1.000 for questions 1 to 4, respectively).

Content validity

Responses regarding the content validity of the XTT, scored on a five-point Likert scale and completed by the group of nine “experts”, are reported in Table 1C. Overall, 34 out of 36 answers (94.4%) ranged from average to very relevant/useful, and 30 out of 36 answers (83.3%) ranged from nearly very to very relevant/useful.

The content validity questionnaire also contained two other questions (reported in Table 1D): 8/9 “experts” (88.9%) judged the visualization of the “metrics parameters” after the exercises useful. “Instruments out of view” was assessed as the most important feedback parameter (all eight of the “experts” who judged the metrics visualization useful placed it in one of the first three positions when asked to rank the parameters from most to least important), while none of these eight (8/8) considered “completion time” a useful feedback parameter.

The last question of the content validity questionnaire concerned suggestions to improve the XTT simulator (vision, instruments, interaction, ergonomics, “models” used in the exercises). “Interactions” and “instruments” were considered the most important areas for improvement (9/9 and 8/9 of the “experts”, respectively, placed them in one of the first three positions when asked to rank the areas from most to least important to improve).

Workload imposed

Mental demand, temporal demand, and effort were rated from medium to nearly very high by most of the participants (66.7, 66.7, and 80.7%, respectively). Physical demand and frustration were rated from nearly very low to medium by most of the participants (57.1 and 57.1%, respectively). All participants rated their performance from medium to perfect.

Comparing “beginners” and “experts” with respect to the mean scores (± standard deviation) for mental, physical, and temporal demand, performance, and effort, no statistically significant differences were observed (t test, P = 0.892, P = 0.140, P = 0.454, P = 0.259, and P = 0.099, respectively). The mean frustration level was rated significantly lower by the “experts” than by the “beginners” (4.0 ± 2.6 vs. 7.2 ± 3.0; t test, P = 0.019).

Discussion

This study reports preliminary results regarding the validation of the XTT, an add-on to the dVT simulator that allows the execution of “four-hands” tasks, powered by software conceived to obtain high-fidelity interaction between the console-side surgeon’s simulator (dVT) and the first-assistant’s simulator (XTT). Skillful use of the da Vinci® robot requires training and practice, and a learning curve exists for both residents and experienced surgeons. During the learning process, there is an increased propensity for error, which may not be acceptable in clinical practice [2, 20]. Improvements in computer processing have led to more realistic and sensitive virtual reality simulators for robotic surgery training, which now provide statistical feedback on the surgeon’s performance [15]. In recent years, multiple companies have released simulators to address the need for a safe and cost-conscious environment in which residents and surgeons can learn console skills [1, 2].

Moreover, robotic procedures are not “one-man-show” procedures, and good chemistry among the members of the team is essential. In robot-assisted procedures, the first operator is alone at the console and the bedside assistant plays an important role. In addition to training of the surgeon, there is a clear place for learning of the skill by the team as a whole. In particular, the coordination between the console-side surgeon and the first-assistant is crucial, as surgical performance is influenced by this interaction [10,11,12,13]. The first-assistant’s level of experience can be related to post-operative morbidity, as can the main surgeon’s experience [12]. From this point of view, it might be worthwhile to consider the training of assistants to further improve patient outcomes. Furthermore, the robotic surgery operating room is a different environment compared with the classic setting: because the first operator is alone “inside” the console, communication misunderstandings are not rare. In this setting, the XTT has been conceived as a virtual reality simulator applied to the da Vinci® system, designed to develop the first-assistant’s psychomotor and communication skills and to facilitate rehearsal of interaction with the console-side surgeon. At present, no data are available regarding the validation of this new virtual reality simulator.

The first step of our study was to assess the face validity of the XTT’s hardware and software components as a purely laparoscopic simulation platform. In this context, the XTT was shown to possess face validity, as evidenced by the rankings given by both “beginners” and “experts” on the simulator’s ease of use and realism parameters (exercises, movement simulated by the XTT, interactions between objects simulated by the XTT) (Table 1A). These findings were confirmed when the XTT was rated as a “team trainer” simulator (Table 1B). Comfort and the workload imposed by the XTT were also assessed using the six-item NASA Task Load Index. None of the participants expressed significant discomfort while using the XTT, as evidenced by the workload scores. Comparing “beginners” and “experts” with respect to the mean scores for mental, physical, and temporal demand, performance, and effort, no statistically significant differences were observed. This is at least in part because the target of this study was first-assistant training: when the “experts” were asked to complete tasks as “first-assistant”, those who habitually work at the console (4/9 “experts” in our series) found themselves in an unfamiliar position. Hence, contrary to expectations, their mental, physical, and temporal demand and effort were not significantly lower than those of the “beginners”. The mean frustration level was rated significantly lower by the “experts” than by the “beginners” (t test, P = 0.019), probably owing to their different level of experience. Nonetheless, the mean frustration score was consistently low, even in the “beginners” group (7.2 ± 3.0).

The XTT was shown to possess content validity for first-assistant training in robotic surgery, both for novices in laparoscopy and for surgeons with previous laparoscopic experience. All the experts rated the XTT as useful for improving communication skills between the console-side surgeon and the first-assistant. Moreover, most of them (88.9%) judged the visualization of metrics after the exercises useful. “Instruments out of view” was assessed as the most important feedback parameter and “completion time” as the least important. The experts suggested improving the “interactions” and the “instruments” of the XTT simulator. From the opinions collected during the study period, the possibility of working together and of alternating “console-side sessions” and “first-assistant sessions” was also considered an advantage of the XTT platform. However, no objective data were collected in this regard, and further investigations are needed to assess these interesting topics in future studies. The XTT could also be a useful tool for studies of non-technical skills.

We acknowledge that this study has several limitations. In particular, the low number of participants, the arbitrary cut-off of 50 procedures used to separate the groups, and the low number of exercises tested mean that our findings are not sufficient to validate the effectiveness of the XTT simulator. Furthermore, we enrolled highly selected participants (even the “beginners” group was composed of surgeons with previous laparoscopic experience as first operators and 10–49 previous cases as first-assistant in robot-assisted procedures; no novice resident was invited to participate in the study), which ensured high-level feedback in the responses.

In conclusion, the XTT demonstrated excellent face and content validity as well as reasonable workload parameters. Virtual reality team training using the dv-Trainer® in association with the Xperience™ Team Trainer represents a useful tool through which bedside assistant surgical skills can be learned in a safe environment. Further large-scale research is required to confirm the value of the XTT. Reliability, cost-effectiveness [2], construct, concurrent, and predictive validity studies are also needed. Notably, the XTT is only an added component to a platform (the Mimic dv-Trainer®) that has previously shown reliability, construct validity, concurrent validity, and educational impact [2].