
Introduction

Advancements in technology to objectively assess skill, together with more rigorous education efforts to ensure competency and proficiency in trainees, have led to a dramatic shift away from the paradigm of ‘see one, do one, teach one’. In an effort to improve the work and educational environments for trainees and to promote patient-centered care, the implementation of work-hour restrictions and the expectation of direct resident oversight in the operating room have been a forcing function to provide standardized pre-clinical technical and cognitive skills training through simulation [1,2,3]. Surgical training programs are adapting to these curricular changes, and balancing patient-centered care with learner-centered training remains a work in progress. Given this goal, it is of utmost importance that programs have the tools necessary to implement a safe, efficient curriculum with an objective measure of trainees’ technical skills [1, 4]. The Halsteadian apprenticeship model has endured for over a century, yet it has limitations, chief among them its limited ability to standardize the training experience. A ‘train by opportunity’ model is not inclusive and guarantees that some trainees have different experiences than others. Urological training is currently assessed by a combination of the direct preceptor model, case logs, and the written and oral boards. With the introduction of minimally invasive surgery into the urological field over the last two decades, how to safely achieve these goals remains unsettled. Minimally invasive surgery lends itself well to defining metrics of skill through video capture and instrument/user movement tracking. These opportunities have enabled education experts to generalize and standardize training across large groups of learners. Because urology involves several technically challenging procedures, we must leverage these education technologies to advance trainees through a proficiency and competency model [1, 4]. Novel methods for objective skills assessment and skills transfer promise to place urology at the forefront of education among all surgical disciplines. Our aim in this chapter is to describe the current trends in education and simulation in minimally invasive surgery within urology.

Needs Assessment: Why Simulation?

Upwards of 400,000 deaths annually are attributed to medical errors, making them the third leading cause of death in the United States, behind cardiovascular disease and cancer and ahead of pulmonary disease and trauma [5, 6]. The U.S. reports far more errors than other developed countries, and even this is believed to be a vast underestimate. Not every state requires reporting of medical errors, and because so few errors are reported, corrective interventions are unreliably initiated [7].

Furthermore, a surgeon’s technique has been directly related to patient outcomes [3, 8]. Surgical errors are common, but the majority are preventable, with many attributable to surgical technique and communication failures [3]. A single surgical error is estimated to increase the cost of a patient’s care by up to approximately $30,000 [3].

In addition to patient safety, another consideration is the unseen cost of resident surgical training. By 2030, the cost to train enough surgeons to support the expected US population is estimated at $37 billion [9]. These costs include the additional operative time allotted to resident training, which has been estimated to exceed $100,000 per trainee in some settings. Shifting part of the education model into a simulated environment is therefore desirable, decreasing the time to reach competency and potentially decreasing the cost of training. The Accreditation Council for Graduate Medical Education (ACGME) officially incorporated simulation into the curriculum of surgical residency programs in 2008 [3]. Simulation affords educators a means of assessing and tracking skill acquisition and transfer. The end goal is to shorten learning curves and thereby enhance patient care.

Learning Curves in Minimally Invasive Surgery

A learning curve is a “theoretical concept that draws a surgeon’s performance against time” and has been described as the plateau of some defined marker that is felt to demarcate competence in a procedure [10]. The concept was first described in laparoscopic cholecystectomy and has since been studied extensively in urological oncology [10]. As eloquently described elsewhere, “on the way to achieving mastery the (learning) curve represents the initial challenges in competence, and the change in technical proficiency and efficiency with increasing experience” [11]. Time in training does not necessarily reflect competency, and for difficult and invasive procedures it is important to understand when a surgeon in training has reached this important marker [12].
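To make the plateau concept concrete, the following is a minimal sketch, using fabricated data and an assumed exponential-decay model rather than data from any of the cited studies, of how operative time versus case number might be fitted and a competence point estimated; the 5% plateau threshold is likewise an illustrative assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch (fabricated data, assumed model): fit an exponential-decay
# learning curve to operative times and estimate when performance plateaus.
# Model: time(n) = plateau + (initial - plateau) * exp(-n / rate)

def learning_curve(n, plateau, initial, rate):
    return plateau + (initial - plateau) * np.exp(-n / rate)

# Hypothetical data: operative time (minutes) for consecutive cases 1..80
rng = np.random.default_rng(0)
cases = np.arange(1, 81)
true_times = learning_curve(cases, plateau=120, initial=240, rate=15)
observed = true_times + rng.normal(0, 10, size=cases.size)

params, _ = curve_fit(learning_curve, cases, observed, p0=[100, 250, 10])
plateau, initial, rate = params

# One possible operationalization of "competence": the case at which the
# fitted curve comes within 5% of its plateau value (an assumed threshold).
threshold = plateau * 1.05
competence_case = next(n for n in cases if learning_curve(n, *params) <= threshold)
print(f"Estimated plateau: {plateau:.0f} min; competence reached near case {competence_case}")
```

In practice the marker (operative time, fluoroscopy time, complication rate) and the threshold chosen strongly influence the reported case number, which is one reason published learning-curve estimates vary.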

Endourology Learning Curves

Percutaneous nephrolithotomy (PCNL) is the operation of choice for challenging and/or large renal calculi, including staghorn stones, and is therefore an important tool in the urologist’s armamentarium [10]. PCNL is a technically challenging procedure, and a survey of urologists showed that only 11% obtained percutaneous access themselves rather than relying on a radiologist [10, 13], suggesting that training in percutaneous access is inadequate. Obtaining access is the most difficult and most critical step of the procedure. Watterson et al. showed fewer complications and better stone clearance rates when the urologist, rather than the radiologist, performed this crucial step [10, 13].

Allen et al. looked at three defined variables (operating time, fluoroscopic screening time, and radiation dose) that were felt to reflect a surgeon’s level of expertise [12]. Based on the number of cases required for a beginning surgeon to reach a plateau on these parameters, “competence” was felt to be reached after 60 cases and “excellence” after 115 cases: operating time plateaued at 60 cases, and the fluoroscopic measures plateaued at 115 cases. The comparisons were drawn against a senior surgeon who had performed more than 1600 cases. The study was done at a large tertiary referral center where both the novice and senior surgeons obtained their own access and performed the procedure in a similar fashion. The novice surgeon, although not familiar with PCNL, was already proficient in other endourological techniques and was observed for a defined period by the senior surgeon. This limits the applicability of the predicted time to competence for a truly novice surgeon who has no endourological experience and is not in a similarly supportive environment. Importantly, there were no major complications, and the stone-free rates were similar for both surgeons [12].

Ziaee et al. also looked to define the learning curve in PCNL [10]. A single surgeon was prospectively observed for his first 105 solo PCNL cases at a large tertiary referral center. Operation time plateaued at 45 cases. Only minor complications were observed, and these complications were all within the first 45 cases. Competence was therefore achieved at 45 cases. However, stone clearance continued to improve up to the final case, so excellence was felt to be achieved after 105 cases. Of note, the subject was an endourology fellow already adept at other endourological procedures. Applicability of this study to truly novice surgeons or residents remains to be seen [10].

Guiu-Souto et al. looked to break down fluoroscopic measures and apply them to the learning curve. Because urologists receive limited training in radiation exposure, both the urologist and the patient are at risk for significant fluoroscopic exposure during the learning curve. Based on the plateaus achieved in procedure time and exposure time, competence was reached at 50 cases and excellence at 105 cases [14].

Song et al. assessed the learning curve in totally ultrasound-guided PCNL and found that the number of cases needed to achieve competency was similar to that of previously reported studies. They retrospectively compared the outcomes of a novice surgeon with those of a senior surgeon who had performed more than 1000 cases. The study was done at a high-volume tertiary referral center for complex stone disease in China, where ultrasound guidance was used for the entire procedure. Competency was felt to be obtained after 60 cases, with no difference in stone-free rates or complications. It is important to note that a surgeon can still safely and effectively perform the operation while still learning [15].

Laparoscopic Surgery Learning Curves

Ku et al. defined the learning curve for laparoscopic nephrectomy in children as 10 cases. Laparoscopic nephrectomy had previously been shown to be a safe alternative to open surgery in children [16]. Ku et al. felt that the learning curve was defined not only by operating room time and case number, but also by the frequency with which a surgeon performs a procedure. The experience of a single surgeon was retrospectively described over a 5-year period during which he performed 20 consecutive cases; the first 10 cases were compared with the second 10. In children aged 1–15 years, there was no statistical difference between the two groups in patient characteristics. The initial approach was transperitoneal, but with additional experience the retroperitoneal approach was employed. Operating room time decreased significantly (from 181 to 125 min), as did median hospital stay (from 5.4 to 2.5 days). Otherwise, no major complications were seen and both groups had routine postoperative courses. The surgeon was already an expert in open surgery but had not specifically performed laparoscopy in children, and it was unclear whether he had performed laparoscopy in adults prior to this study. This should be taken into account when determining the true learning curve of a novice [16].

Robotic Surgery Learning Curves

Robot-assisted surgery has gained traction since the 1980s, when it was first used in neurosurgery, and has since found increasing utility in adult and pediatric surgery [17]. Unique features of robot-assisted laparoscopy, including 3-dimensional, enhanced (10X) vision and a greater degree of rotational movement, contribute to an easier learning curve [17]. From prior studies that looked to define the learning curve in adult surgery, robot-assisted laparoscopic prostatectomy has a learning curve of 50 cases to achieve competency, with 150–200 cases needed to achieve more nuanced mastery over oncological margins [11]. In surgeons already adept at robotics, the learning curve for robot-assisted laparoscopic cystectomy was defined as 20 cases [11], and for robot-assisted laparoscopic partial nephrectomy competency was seen at 15–30 cases [11]. Abboudi et al. performed a systematic review of studies defining the learning curve in adult urological surgery [18].

A study in the Journal of the American College of Surgeons assessed the anastomosis of a dismembered pyeloplasty performed open, laparoscopically, and robot-assisted in a swine model [17]. The robotic arm had shorter anastomotic times and demonstrated an easier learning curve than the laparoscopic arm. Both the robotic and laparoscopic groups’ procedural times improved with familiarity and approached those of the open arm. The adequacy of the repair was determined through a unique intraoperative design using pressure and volumetric measures to indicate patency; with experience, these parameters also approached those of the open arm. Histology from the robotic arm actually indicated a better profile (less collagen III deposition) than the open and laparoscopic arms [17].

Sorensen et al. completed a retrospective review of the first 33 consecutive children undergoing robot-assisted laparoscopic pyeloplasty after robot-assisted surgery was introduced at their institution in 2006 [11]. The outcomes of two pediatric urologists performing these cases were compared with open controls. Both the robotic and open groups had a 97% success rate at a mean of 1 year of follow-up. Total operative time was used as a marker for defining a learning curve of 15–20 cases. The time for the pyeloplasty itself was examined separately from the set-up time associated with positioning and other ‘peripheral’ time; the improvement in operative time seen in the robotic arm was mostly due to a decrease in the actual pyeloplasty rather than this ancillary time. Early on, there were three robotic failures that required conversion to laparoscopy for part or all of the remaining operation, but no conversion to open surgery was necessary. There were no statistical differences in postoperative complications between the two groups; however, more ‘technical complications’ took place in the early learning period of the robotic arm. The authors highlighted that achieving excellence most likely takes more cases, but that a novice can safely and efficiently perform the procedure within this initial small learning curve. Other points of interest in establishing a robotic program at an institution include training of the support staff, and the authors describe what they believe to be a “synergistic effect, in that experienced robotic surgery staff may accelerate a novice surgeon…and vice versa” [11]. Some advocate separating the introduction of a new technique, as seen in this study, from the learning curve of an already established procedure [18].

Tasian et al. performed a prospective cohort study at an academic institution comparing the outcomes of pediatric urology fellows with those of the attending surgeon [19]. Fellow cases were defined as those in which the fellow performed more than 75% of the console time, versus 100% of the console time being performed by the attending. There were no failures as defined by postoperative imaging, and there were no intraoperative complications in either group. Median operative time was 58 min for the attending. The mean rate of decline in fellow operative time was recorded and used to project a learning curve of 37 cases. Whether this is achievable during a 2-year fellowship depends on the program’s “case volume and supervision” level, and it is not clear how this translates to the post-training period, when a surgeon is operating independently. Other considerations include attendings performing the more difficult cases, defining the cost of obtaining proficiency in a robotic procedure, and progressive involvement of the fellow in more difficult portions of the case (progressing from renal dissection, to anterior anastomosis, to posterior anastomosis) [19].

Team Approach

Sim et al. assessed a unique team approach during introduction of robot-assisted laparoscopic prostatectomy at their institution [20]. A team of three urologists, progressing from bedside assistant to console surgeon, performed a total of 100 cases for organ confined prostatic adenocarcinoma. The first console surgeon had the most experience, however limited, and the other two acted as bedside assistants (one with more active involvement) [20].

Intra-Operative Assessment of Skills

Expert-Based Evaluation: Objective Structured Assessment of Technical Skills

The traditional model for evaluating surgical skills has been direct observation of the trainee at an individual level. This has obvious limitations, one being the subjective nature of the assessment and another its poor reproducibility [21]. Animal models have also been employed to measure surgical skill, which carries its own ethical implications, and bench models have been developed as a more accessible and affordable avenue for the same purpose [21]. Bench models can represent inanimate simulations of tasks experienced in the operating room. The Objective Structured Assessment of Technical Skill (OSATS) was developed as an extension of a previous model designed to objectively assess clinical competency. Martin et al. demonstrated the promise of this test by showing its feasibility in a group of general surgery residents at the University of Toronto, and showed that live and bench models performed comparably. They used a three-pronged scoring system: a procedure-specific checklist, a previously validated global rating scale, and a single pass/fail judgment [21].

Kishore et al. took this format and applied it to endourology, developing a 14-point curriculum to assess resident skills in cystoscopy and ureteroscopy [22]. The curriculum ranged from selection and assembly of instruments and troubleshooting of common problems to patient positioning, combining previous work that had examined these tasks individually [22,23,24]. Construct validity (whether a test measures what it purports to measure) was evaluated in this undertaking; to do so, it was necessary to show that the outcome was associated with the experience or training level of the resident [22]. The authors believed that acquiring the fundamental elements of the procedure was key to understanding the technique, along with the manual skill required to complete each task. They also highlighted the importance of making the assessment accessible to the trainee, simply by scheduling it at the start of the day, separate from clinical duties. This initial phase of the study was done at a single institution, and resident feedback was incorporated into the development of the final tool [22].

Argun et al. went on to show the construct and internal validity of this tool in a multi-institutional setting [1]. Thirty urology residents at three institutions were enrolled. Employing the same tool Kishore and colleagues used, cognitive and psychomotor skills were assessed. Anatomical models of the renal collecting system, reconstructed from CT scans, were used for the psychomotor portion of the test. Using this unique model, the trainee was asked to navigate, basket stones, perform laser lithotripsy, and assemble equipment. A checklist was used to confirm that each proposed step was performed, followed by a debriefing by the faculty examiner and then resident feedback. Construct validity was again assessed in the same manner, and internal validity was felt to be intact because the more subjective global assessment score correlated with the total score on the psychomotor checklist [1].

Institutions are now employing these surgical assessment tools within their residency programs as a means to assess resident progress and provide feedback to the trainee [1]. The next step would be the ability to utilize this in competency assessments for trainee promotion. In order for a construct to be used in this fashion, it would need to be rigorously tested and validated at multiple institutions [1, 4].

Crowdsourced Assessment of Technical Skills

Novel models that use assessment tools in a more blinded and anonymous way have been developed to add objectivity to the process. Ghani et al. used a model created in Michigan, the Michigan Urologic Surgery Improvement Collaborative (MUSIC), to recruit urologists across the state to assess the video-recorded performances of peers through blinded review as a means of coaching one another [25]. This model has the advantage of providing a safe review forum free from the politics of competing urology practices, in part because of the common mission to improve the care of prostate cancer patients throughout the state. The group has even linked assessments to patient outcomes. These collaboratives are powerful assessment models, but they require a significant amount of buy-in from providers. The state of Michigan is also covered by one primary payer, Blue Cross and Blue Shield, which funds this endeavor and hires data abstractors to cull patient care outcomes data from each hospital; in many healthcare environments, this model is difficult to reproduce. Using the same assessment tools, Lendvay et al. have leveraged large groups of anonymous reviewers in a way that is scalable [26].

After validation in dry-lab settings, animate labs, and human surgery, Crowd-Sourced Assessment of Technical Skills (CSATS) has been shown to predict patient outcomes and to correlate with expert reviews of providers’ performances [27]. The technology leverages anonymous crowdworkers from an online platform encompassing over a million reviewers. The reviewers need not be in the medical field; however, the large group of reviewers (30–50 per video) provides an accurate assessment of a surgeon’s technique through video review. These numeric, objective scores have been shown to correlate with the patient outcomes of the surgeons reviewed. The process takes only minutes to hours to review hundreds of surgical videos. The intention of this technology is to make an objective, de-identified review process scalable and rapid, so that the feedback provided can yield positive change before the next surgery is performed [27].
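As a simple illustration of how ratings from a large pool of reviewers might be combined into a single score, the sketch below uses fabricated Likert ratings and a plain mean-and-spread aggregation; it is a generic example, not the CSATS platform’s actual methodology.

```python
import statistics

# Generic illustration of aggregating crowd-sourced Likert ratings for one
# surgical video; the data and the aggregation rule are assumptions, not the
# actual CSATS methodology.

reviewer_ratings = [4, 5, 3, 4, 4, 5, 4, 3, 4, 5, 4, 4, 3, 5, 4,
                    4, 5, 4, 3, 4, 4, 5, 4, 4, 3, 4, 5, 4, 4, 4]  # 30 reviewers

mean_rating = statistics.mean(reviewer_ratings)
spread = statistics.stdev(reviewer_ratings)  # how much reviewers disagree

print(f"Aggregate score: {mean_rating:.2f} +/- {spread:.2f} (n={len(reviewer_ratings)})")
```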

These methods of aggregating large numbers of either expert surgeons or crowdworkers all center on a theme: bringing objectivity to a process that for over a hundred years has been performed by one or two individuals invested in the advancement or credentialing of the performer. It is imperative that our profession ensure public safety through iterative and reliable assessment methods.

Simulation Training in Surgery

Surgeon case volume is one of the most important factors in improving surgical outcomes, reducing surgical complications, and lowering morbidity and mortality. However, in the current training environment, the question arises whether it is ethical to train and practice on live patients. Furthermore, the current residency paradigm is ‘train by opportunity’: if a certain disease is encountered during a rotation or the residency experience, the resident is fortunate to have seen it; if not, the resident may never see that disease. Surgical simulation allows for the development of technical and non-technical skills without risking patient safety and allows every trainee to have a similar experience. These learner-centered education paradigms are increasingly incorporated into surgical training curricula and surgeon credentialing mandates [28]. With the technologies currently available, simulation training has become an important and emphasized part of surgical training. Additionally, simulation can be used at any stage of training, and even for maintenance of skills in surgeons who have completed their training. A wide range of training platforms and curricula have been utilized, each with a focus on different aspects of surgical training. Surgical simulation by a variety of methods has been shown to improve surgeon performance in the operating room, suggesting that simulation training contributes to the acquisition and transfer of skills necessary to achieve surgical proficiency [18, 29].

Task-Based Simulation

Task-based simulation has traditionally been the most common form of surgical simulation. It is the simplest form of simulation and exists on both reality-based and virtual reality platforms. Steigerwald et al. demonstrated that surgical residents of all levels using either a reality-based laparoscopic trainer or a virtual reality laparoscopic trainer improved their scores both in the simulation setting and in the live operating room, with no significant difference in performance between residents using one system or the other [30]. Regardless of the type of trainer, simulation correlates with improved operative performance in both the simulated and live operative settings in laparoscopy, and there is currently no evidence that one method is superior to the other. The tasks themselves are not specific to operations; rather, they are tasks completed with inanimate objects, focused on the use of different surgical instruments and the development of surgical skills. Basic skills include hand-eye and left-to-right-hand coordination, grasping, transferring, cutting, and suturing. These skills are acquired through a variety of common tasks including peg transfer, pattern cutting, ligating loop, clip application, needle driving, suturing, and knot tying [30].

Task-Based Simulation: Reality-Based Simulation

In laparoscopic surgery, physical box or video trainers have made up the majority of surgical simulation. One of the main advantages of these training devices is that they utilize the actual laparoscopic instruments used in surgical procedures. This allows the trainee to become familiar with the instruments and to practice a variety of tasks using them (Figs. 28.1 and 28.2). Simulation with these types of trainers has been established as an effective method of laparoscopic skills acquisition [30].

Fig. 28.1

Reality-based laparoscopic simulation. In this image, the operator is performing a task-based simulation commonly known as “peg transfer” using laparoscopic instruments to transfer objects from pegs on one side to pegs on the other side

Fig. 28.2

Reality-based laparoscopic simulation. In this image, the operator is performing a task-based simulation for simple suturing and knot tying using laparoscopic instruments

Several studies have been published evaluating the utility of laparoscopic surgical training platforms. A systematic review by Dawe et al. summarized the transferability of skills acquired from simulation-based training to the live patient setting. The review included a total of 27 studies: 14 on laparoscopic simulation, 13 on endoscopic simulation, and 7 on other procedures. The vast majority of the studies reviewed demonstrated improved performance in participants with simulation training compared with their peers without it [28].

Task-Based Simulation: Virtual Reality Simulation

In recent years, virtual reality (VR) systems have become more widely available and have been integrated into some simulation curricula. VR simulators use computer-generated environments in which the trainee performs simulation tasks (Figs. 28.3 and 28.4). Similar to reality-based simulators, VR simulation allows the trainee to perform specific tasks. A major advantage of VR simulation is that objective performance metrics beyond task time can be captured and used as feedback to the learner. Common metrics include path length, economy of motion, grasp forces, Cartesian coordinate data, velocities, and object drops, and these metrics correlate with expertise [31] (a simple sketch of computing such metrics follows the figures below). One potential disadvantage of VR simulators in laparoscopic surgery is the lack of haptic feedback, which is dissimilar to real surgery; robotic surgery is the exception, since no haptic feedback exists there either.

Fig. 28.3

A virtual reality simulator demonstrating a computer-generated image of a gallstone within a gallbladder. This simulator can be used for task-based and procedure-based simulation. In this image, the operator can practice using laparoscopic instruments to touch and move the gallstone within the gallbladder

Fig. 28.4

A virtual-reality simulator designed for robot-assisted laparoscopic surgery simulation. This simulation system can be used for task-based and procedure-based simulation in robotic surgery
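The sketch below illustrates how motion metrics of the kind listed above could be computed from time-stamped Cartesian coordinates of an instrument tip; the data format, the units, and the economy-of-motion definition (straight-line distance divided by path length) are assumptions for illustration, as simulators report these metrics in varying ways.

```python
import numpy as np

# Illustrative sketch: compute common motion metrics from time-stamped
# Cartesian coordinates of an instrument tip. The data format, units, and
# the economy-of-motion definition are assumptions, not any vendor's API.

def motion_metrics(timestamps, positions):
    """timestamps: (N,) array in seconds; positions: (N, 3) array in mm."""
    timestamps = np.asarray(timestamps, dtype=float)
    positions = np.asarray(positions, dtype=float)

    # Total path length: sum of distances between consecutive samples
    segment_lengths = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    path_length = segment_lengths.sum()

    # Average tip velocity over the task
    elapsed = timestamps[-1] - timestamps[0]
    avg_velocity = path_length / elapsed if elapsed > 0 else 0.0

    # Economy of motion defined here as straight-line distance / path length
    # (closer to 1.0 means more direct movement); definitions vary by simulator.
    straight_line = np.linalg.norm(positions[-1] - positions[0])
    economy_of_motion = straight_line / path_length if path_length > 0 else 0.0

    return {
        "path_length_mm": float(path_length),
        "avg_velocity_mm_per_s": float(avg_velocity),
        "economy_of_motion": float(economy_of_motion),
    }

# Hypothetical usage with fabricated tracking data (a spiral-like path)
t = np.linspace(0.0, 10.0, 200)
xyz = np.column_stack([20 * np.sin(t), 20 * np.cos(t), 2 * t])
print(motion_metrics(t, xyz))
```

Metrics of this sort are what allow VR platforms to give learners quantitative feedback beyond task completion time.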

Procedure-Based Simulation

Procedure-based simulation allows trainees to apply the fundamental surgical skills they have developed to more complex procedural tasks, in the form of complete surgical procedures or specific steps of procedures. One of the major challenges of procedure-based simulation is realistically representing the operative environment [32]. Santangelo et al. developed a carotid endarterectomy whole-task simulator that demonstrated high face validity among experts and trainees [33]. Similarly, Ghazi et al. developed and tested a simulation model for PCNL [34]. The model included simulation of all steps of the procedure and was tested in experts and trainees, demonstrating excellent face and content validity [34]. Most recently, Weiss et al. demonstrated excellent face and construct validity among experts and trainees with their cervical laminectomy simulator [35]. Future research will help delineate the role and utility of these and other procedure-based simulators in the training and assessment of surgical skills.

Procedure-Based Simulation: Reality-Based Simulation

Procedure-based simulation gives surgical trainees a safe, risk-free environment in which to practice specific portions of a procedure or entire procedures. Millan et al. created a model of laparoscopic ureteral reimplantation and found that use of the simulator improved surgeons’ technical performance [36].

Procedure-Based Simulation: Virtual Reality Simulation

Virtual reality simulation training has been incorporated into some training curricula. Sethi et al. demonstrated face validity of a virtual reality simulator in 20 participants. Skilled surgeons, fellows and medical students performed several tasks using a robotic simulator. All participants found the simulator easy to use and realistic [37]. One study by Chowriappa et al. evaluated the use of virtual reality training in urology trainees. In this study, trainees were randomized to either a control group with typical training or the intervention group in which trainees were given procedure-based virtual reality training in a specific step of a robotic surgical procedure. They found that the virtual reality simulation group had overall higher scores and better performance compared to the control group. In addition, the majority of participants found that the simulation platform was similar to the real surgical procedure [32].

Whitehurst et al. conducted a randomized study comparing virtual reality simulation with dry-lab simulation in robot-naive surgeons and trainees. Participants were randomized to robotic training using either dry-lab task completion on a surgical robot or virtual reality simulation on a robotic surgery simulator. Performance was then assessed by completion of a procedure in a live animal model using the surgical robot. The authors found no difference in performance between the two groups and concluded that virtual reality simulation can be used for training in robotic surgery [38].

3D Simulation Models

Three-dimensional model-based simulation training has recently emerged as another tool for surgical training. Ghazi et al. developed an anatomically correct 3-dimensional model for simulation of percutaneous nephrolithotomy; the model was tested in urology and interventional radiology trainees and experts and was found to have good validity [34]. Cheung et al. developed a 3-dimensional kidney model for laparoscopic pyeloplasty, which was tested by pediatric urology fellows and faculty members and demonstrated usability [39]. The use of these and similar 3D models may allow high-validity, full-procedure simulation to be incorporated into surgical training.

Surgical Warm-Up and Rehearsal

The concept of warming up before performing is common and widely utilized across several disciplines outside of medicine, such as sports and the performing arts. Warm-up prior to sports activities has been shown to enhance performance, reduce fatigue, and reduce errors. In contrast, surgical warm-up prior to operating remains a subject of debate among surgeons. The main concern is the belief that this practice will delay or prolong surgical procedures, and it has not been widely adopted in the surgical community. Surgical warm-up can include both mental and physical warm-up. A variety of studies evaluating the utility of surgical warm-up have demonstrated improvements in intra-operative technical, cognitive, and psychomotor performance [40,41,42]. In a randomized controlled trial, Lendvay et al. demonstrated that expert surgeons performing robotic surgery tasks benefited from a brief VR warm-up session prior to ring transfer and intracorporeal suturing [43]. This has led to an ongoing trial of expert surgeons performing clinical cases in the operating room to see whether brief VR suturing tasks can prime them to perform better in the first 15–20 min of their robotic surgery.

Da Cruz et al. performed a study evaluating the performance of medical students utilizing warm-up prior to laparoscopic cholecystectomy in a porcine model compared to medical students completing the surgery without warm-up. The group participating in warm-up had significantly superior results compared to the group without warm-up. In this group of inexperienced medical students, pre-operative warm-up was effective in improving surgical performance [41].

Polterauer et al. performed a randomized controlled trial in experienced surgeons and residents comparing pre-operative warm-up on a virtual reality simulator before laparoscopic salpingo-oophorectomy with no warm-up. In this study, there was no statistically significant difference in performance between the warm-up and no warm-up groups [44].

Pike et al. performed a systematic review of studies evaluating the effect of pre-operative simulation on surgical performance. The review included 13 studies: 5 randomized controlled trials, 4 randomized cross-over trials and 4 case series. Four studies were on real patients, and the remainder were on simulated outcome measures. All but one of the studies found that warm-up improves operative outcomes, although the specific measures of outcome varied among studies [42].

A systematic review by Abdalla et al. revealed that warming-up before an operative procedure improved trainee performance. This review included six randomized studies comparing the performance of trainees with and without warm-up on laparoscopic surgical performance. Improvement in intraoperative laparoscopic performance was observed with surgical warm-up pre-operatively in 5 of 6 studies [40].

In addition to physical warm-up, mental practice has also been demonstrated to be an important part of performance preparation. A randomized controlled study by Arora et al. evaluated the effect of mental practice on performance of virtual reality laparoscopic cholecystectomy in novice surgeons. In this study of 18 participants, the group utilizing mental practice performed better than the group that did not [45].

Similar to other fields, the use of pre-operative warm-up (both mental and physical) appears to be helpful in improving surgical performance. The positive effect of surgical warm-up appears to be present in novice surgeons, surgical trainees, and experienced surgeons.

Credentialing

Currently, there are no standard US hospital credentialing guidelines for procedures. Hospitals turn to case currency (how many cases a surgeon has done), VR training, animate lab training, and residency/fellowship experience to drive credentialing decisions. With the increasing availability and diversity of surgical training tools, demand for using these tools to demonstrate proficiency in training, and even for credentialing, is likely to increase. One current example is the Fundamentals of Laparoscopic Surgery (FLS) examination, which the American Board of Surgery requires for board certification of general surgery diplomates. In robot-assisted surgery, Goh et al. developed a validated standard assessment tool for surgical skills, the Global Evaluative Assessment of Robotic Skills (GEARS), which consists of 6 domains (depth perception, bimanual dexterity, efficiency, autonomy, force sensitivity, and robotic control), each scored on a 5-point Likert scale [46]. This validated tool has been used by some institutions as part of an integrated robotic surgery training curriculum. Similar simulation training curricula, designed to teach surgical technique and establish proficiency benchmarks in different surgical subspecialties, can be further implemented into surgical training programs. Eventually, demonstration of proficiency on simulation platforms may be required as part of formal credentialing by surgical governing bodies, hospitals, or institutions.
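As a simple illustration of how a GEARS-style assessment might be recorded and tallied, the sketch below stores the six published domains on a 1–5 Likert scale and computes a total score; the proficiency cutoff shown is a hypothetical example, not a validated benchmark.

```python
from dataclasses import dataclass, fields

# Illustrative sketch of tallying a GEARS-style assessment. The six domains
# follow the published tool; the proficiency cutoff below is a hypothetical
# example, not a benchmark from the literature.

@dataclass
class GearsScore:
    depth_perception: int
    bimanual_dexterity: int
    efficiency: int
    force_sensitivity: int
    autonomy: int
    robotic_control: int

    def __post_init__(self):
        for f in fields(self):
            value = getattr(self, f.name)
            if not 1 <= value <= 5:
                raise ValueError(f"{f.name} must be on a 1-5 Likert scale, got {value}")

    def total(self) -> int:
        # Total ranges from 6 (all domains rated 1) to 30 (all domains rated 5)
        return sum(getattr(self, f.name) for f in fields(self))

# Hypothetical usage: one reviewer's rating of a trainee
score = GearsScore(depth_perception=4, bimanual_dexterity=3, efficiency=4,
                   force_sensitivity=3, autonomy=4, robotic_control=4)
PROFICIENCY_CUTOFF = 24  # assumed example threshold
print(score.total(), score.total() >= PROFICIENCY_CUTOFF)
```

A structured record of this kind is what makes it straightforward for institutions to track domain-level trends across a trainee’s assessments over time.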

Conclusions

A recent focus on improving patient outcomes and safety has led to a shift in the approach to medical education, particularly in surgical training. Simulation provides an opportunity to develop and maintain surgical skills in a safe, learner-centered environment with no risk to patients. Surgical simulation has been implemented in several areas of surgical training over the past several decades. A variety of task-based, reality-based, and VR simulation platforms have been utilized and have demonstrated validity and efficacy in improving surgical skills. Furthermore, several studies have demonstrated that skills acquired during simulation translate into improved surgical performance in the live patient setting. More and more institutions are including simulation training in their formal surgical curricula, and this trend is expected to continue. In addition, the use of simulation to demonstrate proficiency will likely play an increasing role in surgical credentialing, and patient-specific rehearsal through simulation may help improve patient outcomes.