Abstract
Introduction
Research has clearly shown the benefits of surgical simulators to train laparoscopic motor skills required for positive patient outcomes. We have developed the Virtual Basic Laparoscopic Skill Trainer (VBLaST) that simulates tasks from the Fundamentals of Laparoscopic Surgery (FLS) curriculum. This study aims to show convergent validity of the VBLaST pattern cutting module via the CUSUM method to quantify learning curves along with motor skill transfer from simulation environments to ex vivo tissue samples.
Methods
18 medical students at the University at Buffalo, with no prior laparoscopic surgical skills, were placed into the control, FLS training, or VBLaST training groups. Each training group performed pattern cutting trials for 12 consecutive days on their respective simulation trainers. Following a 2-week break period, the trained students performed three pattern cutting trials on each simulation platform to measure skill retention. All subjects then performed one pattern cutting task on ex vivo cadaveric peritoneal tissue. FLS and VBLaST pattern cutting scores, CUSUM scores, and transfer task completion times were reported.
Results
Results indicate that the FLS and VBLaST trained groups have significantly higher task performance scores than the control group in both the VBLaST and FLS environments (p < 0.05). Learning curve results indicate that three out of seven FLS training subjects and four out of six VBLaST training subjects achieved the “senior” performance level. Furthermore, both the FLS and VBLaST trained groups had significantly lower transfer task completion times on ex vivo peritoneal tissue models (p < 0.05).
Conclusion
We characterized task performance scores for trained VBLaST and FLS subjects via CUSUM analysis of the learning curves and showed evidence that both groups have significant improvements in surgical motor skill. Furthermore, we showed that learned surgical skills in the FLS and VBLaST environments transfer not only to the different simulation environments, but also to ex vivo tissue models.
Surgical training follows an apprenticeship model in which surgical residents practice operations under the supervision and mentorship of faculty surgeons. This method requires significant time and personal resources while not providing a standardized means of surgical skill evaluation [1, 2]. Traditional surgical assessment methods, such as direct observation by an experienced trainer, are generally subjective and use global rating scales (GRS) to score competency. Methods such as Objective Structured Assessment of Technical Skills (OSATS), Global Operative Assessment of Laparoscopic Surgery (GOALS), and Global Rating Index for Technical Skills (GRITS) allow experienced surgeons to use structured checklists of technical criteria and rate the surgical performance of the trainee under direct observation [3,4,5,6]. However, due to their subjective nature, these methods have drawn serious criticism as a generalized rating assessment across subjects, citing high human resource costs, poor interrater reliability among human observers, and poor correlation between rated technical skill and patient outcomes in the operating room [7, 8].
To provide more objectivity and standardization in laparoscopic skills assessment, the McGill Inanimate System for Training and Evaluation of Laparoscopic Skills (MISTELS) was developed and validated as an effective simulator to teach and assess laparoscopic surgical skills [9]. The Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) adopted MISTELS into a program called the Fundamentals of Laparoscopic Surgery (FLS). Consequently, the FLS program is now the standard for assessing proficiency in laparoscopic skills and has been required for board certification since 2009 [10,11,12]. While the validated FLS simulator has been shown to correlate with clinical skill performance, it has inherent drawbacks, such as subjectivity in task assessment, high cost of test administration, and the significant amount of time required for scoring [13,14,15,16,17]. To address the general limitations of physical trainers, virtual reality-based simulators have been developed and shown to provide a safe and effective training and assessment platform for laparoscopic surgical skills [18, 19]. To specifically address the limitations of the FLS training simulator, we have developed the Virtual Basic Laparoscopic Skills Trainer (VBLaST), which is capable of simulating the five FLS task modules in real time [17, 20,21,22,23]. The benefits of the VBLaST system include automated and robust scoring, introduction of kinematic metrics that correlate with task performance, dramatically increased objectivity in task performance assessment, and the elimination of high costs for administration or testing materials [17, 20,21,22,23]. As with any virtual reality-based simulator, a thorough validation is required to demonstrate its effectiveness as a surgical training and performance assessment tool.
The goal of this study is to demonstrate convergent validation of the VBLaST pattern cutting module as an effective training and task assessment simulator for laparoscopic surgical skills. To achieve this goal, we aim to determine whether there are significant improvements in pattern cutting task performance scores between trained VBLaST subjects and untrained control subjects once training is complete. Furthermore, we aim to determine whether the acquired surgical motor skill of the trained VBLaST subjects transfers to the FLS training simulator and to ex vivo models. We hypothesize that trained VBLaST subjects will outperform control subjects not only in the VBLaST simulation environment, but also in the FLS and ex vivo environments. We propose three different analyses to show validity of the VBLaST pattern cutting system. First, we show task performance learning curves for the FLS and VBLaST trainers that are objectively characterized by the cumulative summation (CUSUM) criterion [17, 24, 25]. Next, we show that there is significant task performance retention and transfer from the FLS to the VBLaST simulation environment, and vice versa (p < 0.05). Finally, we show that task performance transfers from the simulation environments to ex vivo cadaveric models mimicking the pattern cutting task (p < 0.05). Ultimately, we present evidence that we have achieved convergent validity of the VBLaST pattern cutting module and show that it can be used as an effective laparoscopic skills training and task assessment simulator.
Methods
The study was approved by the Institutional Review Boards of the University at Buffalo and Rensselaer Polytechnic Institute.
Subject recruitment
Prior to subject recruitment, we performed an a priori power analysis based on the Mann–Whitney U test to determine the minimum number of subjects required for the FLS training, VBLaST training, and control groups. Using FLS and VBLaST task performance scores from pilot study data, we estimated conservative effect sizes of d = 5.67 for the FLS group and d = 2.57 for the VBLaST group. Based on these effect sizes, a 95% confidence level, and a minimum power of 0.80, we determined that a minimum of four subjects was required for the FLS training group, three for the VBLaST training group, and four for the control group. Consequently, we recruited seven subjects for the FLS training group, six for the VBLaST training group, and five for the control group. To eliminate any bias due to handedness, all recruited subjects were right-handed and had no prior laparoscopic surgical skills. Subjects were monetarily compensated for their participation. The statistical software G*Power was used to determine the effect sizes and the minimum number of subjects required for this study [26].
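The structure of such a sample size estimate can be sketched in a few lines. The following is a normal-approximation sketch (not the G*Power routine the study actually used): it computes the two-sided, two-sample sample size at effect size d and inflates it by the asymptotic relative efficiency of the Mann–Whitney U test versus the t-test under normality (3/π ≈ 0.955), so its numbers will differ somewhat from G*Power's exact small-sample methods.

```python
import math
from statistics import NormalDist

def min_n_per_group(d, alpha=0.05, power=0.80, are=3 / math.pi):
    """Approximate minimum subjects per group for a two-sided,
    two-sample comparison at Cohen's effect size d, corrected by
    the Mann-Whitney asymptotic relative efficiency (3/pi)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
    z_beta = NormalDist().inv_cdf(power)           # ~0.84
    n_ttest = 2 * ((z_alpha + z_beta) / d) ** 2    # t-test approximation
    return math.ceil(n_ttest / are)                # Mann-Whitney inflation

# Pilot effect sizes from the study
print(min_n_per_group(5.67), min_n_per_group(2.57))
```

For the VBLaST effect size (d = 2.57) this approximation reproduces the reported minimum of three subjects per group; for very large effect sizes such as d = 5.67 the normal approximation bottoms out near one subject, so practical minimums (four per group here) also reflect exact small-sample corrections.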
Hardware
Two different simulators were used over the course of this learning curve study. The FLS group trained on a standard SAGES-certified FLS box trainer with the official supplementary materials to administer the pattern cutting task. The VBLaST group trained on the VBLaST system, specifically on the pattern cutting module. The VBLaST system consists of two major components: the hardware interface and the simulation software suite. The hardware interface uses two PHANTOM Omni haptic devices (Geomagic, Morrisville, North Carolina), fitted with surgical tool interfaces, which provide positional tracking and real-time force feedback in the virtual environment. The simulation software uses custom-developed algorithms to simulate tool–cloth interactions in the virtual environment. Figure 1 displays both the FLS box trainer and the VBLaST simulator.
Learning curve and task retention study design
Recruited subjects were randomly split into three groups: FLS training group, VBLaST training group, and control group with no training. All the subjects were given standardized instructions on how to successfully complete the pattern cutting task for the FLS and VBLaST simulators. The untrained control group performed three FLS trials and three VBLaST trials on the first day. The control group then waited 2 weeks and performed three FLS trials and three VBLaST trials as part of the final task retention day without undergoing any laparoscopic skills training. The FLS and VBLaST training groups were instructed to complete up to 10 trials per day for twelve consecutive days on each group’s respective simulator. Following 12 days of training, each group was instructed to wait 2 weeks without undergoing any laparoscopic training before performing three FLS and three VBLaST trials each as part of the final task retention day. A schematic illustrating the study design is shown in Fig. 1B.
Transfer task study design
Following the task retention trials, each subject was asked to perform the FLS pattern cutting task on ex vivo cadaveric peritoneal tissue to assess motor skill transfer from the simulation environments to ex vivo tissue models. The transfer task consisted of replicating the FLS pattern cutting task on marked, excised cadaveric abdominal tissue samples. The official FLS pattern cutting gauze pads were used as a stencil to draw circles on the ex vivo samples, ensuring that the marked circle diameter was identical across all samples. Using a standardized set of instructions, subjects were told to resect the marked peritoneal tissue as accurately and as quickly as possible without damaging the underlying fascia or muscle tissue. Each tissue sample was photographed before and after completion of the transfer task. Figure 2 shows before-and-after images of the transfer task for an example subject.
Task performance metrics
The proprietary FLS scoring metrics for the pattern cutting task were used to manually score each trial for each subject [9]. Each FLS pattern cutting trial's completion time was manually recorded to within ±1 s. The FLS scoring metrics were obtained from the FLS committee under a non-disclosure agreement, so their details cannot be reproduced in this paper. The VBLaST task performance metric reproduces the same undisclosed FLS scoring formulation in the automated VR environment [23]. The FLS and VBLaST pattern cutting performance scores were used as outcome measures for the learning curve and task retention tests. Since video recording was not allowed under the gross anatomy lab's institutional policies, the performance metric for the ex vivo transfer task was completion time: the total time (min) required to completely resect the circle-marked peritoneal tissue from the sample, manually recorded to within ±1 s.
Statistical analysis
Matlab (MathWorks, Natick, MA) was used to perform all statistical analyses in this study. Mann–Whitney U tests at the 95% confidence level were used to determine statistically significant differences between any two groups. All box plots display midlines indicating median values, with whiskers extending to the most extreme data points within approximately ±2.7σ, where σ is the standard deviation (covering 99.3% of a normal distribution). Each box plot represents all trials for all subjects in the respective group on the given training day. CUSUM scores were calculated for each trial per subject, with each consecutive trial flagged as a “success” or a “failure.” A trial was a “success” when the FLS or VBLaST task performance score was equal to or higher than the defined threshold, and a “failure” when the score was lower. In this study, the defined threshold for achieving a “senior” level of mastery is 63 [9]. The acceptable failure rate P0 was set to 5%, and the unacceptable failure rate P1 was set to 10% [17, 27]. Type I and type II error rates were set to 0.05 and 0.2, respectively. Each “success” trial subtracts the parameter s = 0.07 from the CUSUM score; each “failure” trial adds 1 − s = 0.93. These parameters define the decision limits H0 = −2.09 and H1 = 3.71. The parameters s, H0, and H1 are independent of the assessment task and have been well defined in previous studies [17, 27]. Subjects whose CUSUM learning curves drop below the H0 decision limit have a failure rate below 5% for achieving the “senior” mastery level.
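The CUSUM parameters above follow directly from P0, P1, and the chosen error rates. As an illustrative sketch (not the authors' Matlab code), the derivation and the per-trial update can be written as:

```python
import math

def cusum_params(p0=0.05, p1=0.10, alpha=0.05, beta=0.20):
    """Derive the per-trial increment s and the decision limits
    (h0, h1) from the acceptable/unacceptable failure rates and
    the type I/II error rates, per sequential CUSUM analysis."""
    a = math.log((1 - beta) / alpha)
    b = math.log((1 - alpha) / beta)
    p = math.log(p1 / p0)
    q = math.log((1 - p0) / (1 - p1))
    s = q / (p + q)          # ~0.07
    h0 = -b / (p + q)        # lower (acceptable) limit, ~-2.09
    h1 = a / (p + q)         # upper (unacceptable) limit, ~3.71
    return s, h0, h1

def cusum_curve(scores, threshold=63.0):
    """Running CUSUM score: a trial at or above the 'senior'
    threshold subtracts s; a trial below it adds 1 - s."""
    s, _, _ = cusum_params()
    curve, c = [], 0.0
    for score in scores:
        c += -s if score >= threshold else (1.0 - s)
        curve.append(c)
    return curve
```

With these parameter values, a trainee needs 29 consecutive successful trials before the curve first drops below H0 ≈ −2.09, which is consistent with the earliest crossings reported in the Results.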
Results
Figure 3 shows the FLS pattern cutting performance scores, with respect to training days, for the FLS training and control groups. Results show that there are no significant differences between the FLS training group and the control group on the first day of training. FLS pattern cutting retention task scores show that both the FLS-trained (223.5 ± 18) and VBLaST-trained (109.6 ± 26.8) groups significantly outperformed the untrained control group (81.5 ± 25, p < 0.05). Figure 4 shows the VBLaST pattern cutting performance scores, with respect to training days, for the VBLaST training and control groups. Results indicate that there are no significant differences between the VBLaST training group and the control group on the first day of training. However, VBLaST pattern cutting retention task scores indicate that both the VBLaST-trained (209.4 ± 21) and the FLS-trained (175.2 ± 26.3) groups significantly outperformed the untrained control group (155 ± 21.2, p < 0.05).
Figure 5A shows the CUSUM learning curve results for subjects trained on the FLS simulator. Three subjects, FLS2, FLS3, and FLS5, passed the acceptable failure rate of 5% (H0) over the course of the 12-day training period, at trials 71, 85, and 85, respectively. Figure 5B shows the CUSUM learning curve results for subjects trained on the VBLaST simulator, where four subjects, VBLaST1, VBLaST4, VBLaST5, and VBLaST6, all passed the acceptable failure rate of 5% (H0) over the course of the training period, at trials 57, 29, 29, and 29, respectively.
Figure 6 shows the ex vivo transfer task completion times for the trained FLS, trained VBLaST, and untrained control groups. Results indicate that the trained FLS (7.9 ± 3.3 min) and trained VBLaST (12.3 ± 1.9 min) subjects completed the transfer task significantly faster than the untrained control group (18.4 ± 3.1 min, p < 0.05). However, there was no significant difference in transfer task completion time between the trained FLS and VBLaST groups (p > 0.05).
Discussion
In this study, we establish convergent validity for the VBLaST pattern cutting simulator: trained VBLaST subjects significantly outperform untrained control students in both the FLS and VBLaST simulation environments, indicating motor skill retention and transfer to a new simulation environment. These results are benchmarked against the established FLS simulator, where there is also evidence of motor skill learning and transfer to the VBLaST simulation environment. Learning curve studies have shown evidence of laparoscopic skill learning in various laparoscopic-based procedures [17, 27,28,29,30], many using the CUSUM method to quantify learning curve outcomes. Specific to the VBLaST trainer, we previously reported the learning curves for the VBLaST peg transfer simulator with the “junior,” “intermediate,” and “senior” mastery levels [17]. While the pattern cutting and peg transfer tasks are different, we report an increased number of students achieving the “senior” mastery level compared with our previously validated VBLaST peg transfer module [17]. Learning curve results indicated that three of the seven FLS students and four of the six VBLaST students achieved the “senior” mastery level. Although a direct comparison cannot be made, both simulation environments resulted in comparable learning.
While some studies report laparoscopic skills transfer from simulation environments to the operating room [31,32,33,34], we chose an ex vivo cadaveric tissue model to assess laparoscopic motor skill transfer. We observed that transfer task completion times for the trained VBLaST and FLS groups were significantly lower than for the control group, and that there was no significant difference between training on the physical and virtual simulators.
Limitations and future work
Currently, only task performance scores are used to determine surgical motor skill performance on the FLS and VBLaST trainers. Studies have shown that other measures, such as kinematic metrics, can also be effective for assessing surgical skill [35, 36]. However, all of these metrics focus on the outcomes of task performance rather than the underlying neurological responses that drive fine motor skill. Neurophysiological metrics incorporated into surgical simulators could provide an objective measure of motor skill performance by directly measuring cortical activation during a given task [37]. Ultimately, a multivariate approach that combines numerous distinguishable metrics could objectively differentiate and classify laparoscopic motor skills with significantly higher accuracy. Another limitation is the use of CUSUM scores to objectively measure learning curve outcomes in longitudinal studies. The CUSUM method uses a threshold to assign a binary “success” or “failure” to each trial depending on whether the threshold condition is met. However, learning curves are often non-linear, and this non-linearity is not captured by the CUSUM method. Moreover, CUSUM scores rely on threshold values that may not translate directly from one simulation environment to another. Traditionally, transfer tasks have been performed on live patients or animal models to show transfer of laparoscopic motor skills from the simulation environment to clinical environments [31,32,33,34]. Due to the complexity and variability of in vivo clinical environments, it is often difficult to standardize the transfer task for each subject. Furthermore, metrics to assess laparoscopic motor skill transfer are often subjective or depend on GRS that are not robust.
By utilizing ex vivo-based models, it is possible to add more objectivity to the assessment of laparoscopic motor skill transfer, even if the objective measure is as simple as task completion time. We plan to address these limitations regarding objective assessment of motor skill learning and transfer in future studies.
References
Darzi A, Smith S, Taffinder N (1999) Assessing operative skill. Needs to become more objective. BMJ 318:887–888
Wanzel KR, Hamstra SJ, Anastakis DJ, Matsumoto ED, Cusimano MD (2002) Effect of visual-spatial ability on learning of spatially-complex surgical skills. Lancet 359:230–231. doi:10.1016/S0140-6736(02)07441-X
Aggarwal R, Grantcharov T, Moorthy K, Milland T, Papasavas P, Dosis A, Bello F, Darzi A (2007) An evaluation of the feasibility, validity, and reliability of laparoscopic skills assessment in the operating room. Ann Surg 245:992–999. doi:10.1097/01.sla.0000262780.17950.e5
Vassiliou MC, Feldman LS, Andrew CG, Bergman S, Leffondré K, Stanbridge D, Fried GM (2005) A global assessment tool for evaluation of intraoperative laparoscopic skills. Am J Surg 190:107–113. doi:10.1016/j.amjsurg.2005.04.004
Doyle JD, Webber EM, Sidhu RS (2007) A universal global rating scale for the evaluation of technical skills in the operating room. Am J Surg 193:551–555. doi:10.1016/j.amjsurg.2007.02.003
Martin JA, Regehr G, Reznick R, Macrae H, Murnaghan J, Hutchison C, Brown M (1997) Objective structured assessment of technical skill (OSATS) for surgical residents. Br J Surg 84:273–278. doi:10.1046/j.1365-2168.1997.02502.x
Hogle NJ, Chang L, Strong VEM, Welcome AOU, Sinaan M, Bailey R, Fowler DL (2009) Validation of laparoscopic surgical skills training outside the operating room: a long road. Surg Endosc 23:1476–1482. doi:10.1007/s00464-009-0379-5
Moorthy K, Munz Y (2003) Objective assessment of technical skills in surgery. Br Med J 327:1032–1037. doi:10.1136/bmj.327.7422.1032
Fraser SA, Klassen DR, Feldman LS, Ghitulescu GA, Stanbridge D, Fried GM (2003) Evaluating laparoscopic skills: setting the pass/fail score for the MISTELS system. Surg Endosc 17:964–967. doi:10.1007/s00464-002-8828-4
Soper NJ, Fried GM (2008) The fundamentals of laparoscopic surgery: its time has come. Bull Am Coll Surg 93:30–32
Peters JH, Fried GM, Swanstrom LL, Soper NJ, Sillin LF, Schirmer B, Hoffman K (2004) Development and validation of a comprehensive program of education and assessment of the basic fundamentals of laparoscopic surgery. Surgery 135:21–27. doi:10.1016/S0039-6060(03)00156-9
Fried GM (2008) FLS assessment of competency using simulated laparoscopic tasks. J Gastrointest Surg 12:210–212. doi:10.1007/s11605-007-0355-0
Feldman LS, Sherman V, Fried GM (2004) Using simulators to assess laparoscopic competence: ready for widespread use? Special section: competency-when, why, how? Surgery 135:28–42
Feldman LS, Hagarty SE, Ghitulescu G, Stanbridge D, Fried GM (2004) Relationship between objective assessment of technical skills and subjective in-training evaluations in surgical residents. J Am Coll Surg 198:105–110. doi:10.1016/j.jamcollsurg.2003.08.020
Fried GM, Feldman LS, Vassiliou MC, Fraser SA, Stanbridge D, Ghitulescu G, Andrew CG (2004) Proving the value of simulation in laparoscopic surgery. Ann Surg 240:518–525. doi:10.1097/01.SLA.0000136941.46529.56
Vassiliou MC, Ghitulescu GA, Feldman LS, Stanbridge D, Leffondré K, Sigman HH, Fried GM (2006) The MISTELS program to measure technical skill in laparoscopic surgery: evidence for reliability. Surg Endosc 20:744–747. doi:10.1007/s00464-005-3008-y
Zhang L, Sankaranarayanan G, Arikatla VS, Ahn W, Grosdemouge C, Rideout JM, Epstein SK, De S, Schwaitzberg SD, Jones DB, Cao CGL (2013) Characterizing the learning curve of the VBLaST-PT© (Virtual Basic Laparoscopic Skill Trainer). Surg Endosc 27:3603–3615. doi:10.1007/s00464-013-2932-5
Seymour NE, Gallagher AG, Roman SA, O’Brien MK, Bansal VK, Andersen DK, Satava RM (2002) Virtual reality training improves operating room performance: results of a randomized, double-blinded study. Ann Surg 236:458–463. doi:10.1097/01.SLA.0000028969.51489.B4
Gallagher AG, Ritter EM, Champion H, Higgins G, Fried MP, Moses G, Smith CD, Satava RM (2005) Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann Surg 241:364–372
Maciel A, Liu Y, Ahn W, Singh TP, Dunnican W, De S (2008) Development of the VBLaST™: a virtual basic laparoscopic skill trainer. Int J Med Robot Comput Assist Surg 4:131–138. doi:10.1002/rcs.185
Arikatla VS, Sankaranarayanan G, Ahn W, Chellali A, De S, Caroline GL, Hwabejire J, DeMoya M, Schwaitzberg S, Jones DB (2013) Face and construct validation of a virtual peg transfer simulator. Surg Endosc 27:1721–1729. doi:10.1007/s00464-012-2664-y
Sankaranarayanan G, Lin H, Arikatla VS, Mulcare M, Zhang L, Derevianko A, Lim R, Fobert D, Cao C, Schwaitzberg SD, Jones DB, De S (2010) Preliminary face and construct validation study of a virtual basic laparoscopic skill trainer. J Laparoendosc Adv Surg Tech A 20:153–157. doi:10.1089/lap.2009.0030
Chellali A, Ahn W, Sankaranarayanan G, Flinn JT, Schwaitzberg SD, Jones DB, De S, Cao CGL (2015) Preliminary evaluation of the pattern cutting and the ligating loop virtual laparoscopic trainers. Surg Endosc 29:815–821. doi:10.1007/s00464-014-3764-7
Williams SM, Parry BR, Schlup MM (1992) Quality control: an application of the CUSUM. BMJ 304:1359–1361
Biau DJ, Resche-Rigon M, Godiris-Petit G, Nizard RS, Porcher R (2007) Quality control of surgical and interventional procedures: a review of the CUSUM. Qual Saf Health Care 16:203–207. doi:10.1136/qshc.2006.020776
Faul F, Erdfelder E, Lang A-G, Buchner A (2007) G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods 39:175–191. doi:10.3758/BF03193146
Fraser SA, Feldman LS, Stanbridge D, Fried GM (2005) Characterizing the learning curve for a basic laparoscopic drill. Surg Endosc 19:1572–1578. doi:10.1007/s00464-005-0150-5
Bartlett A, Parry B (2001) CUSUM analysis of trends in operative selection and conversion rates for laparoscopic cholecystectomy. ANZ J Surg 71:453–456. doi:10.1046/j.1440-1622.2001.02163.x
Kye B-H, Kim J-G, Cho H-M, Kim H-J, Suh Y-J, Chun C-S (2011) Learning curves in laparoscopic right-sided colon cancer surgery: a comparison of first-generation colorectal surgeon to advance laparoscopically trained surgeon. J Laparoendosc Adv Surg Tech 21:789–796. doi:10.1089/lap.2011.0086
Okrainec A, Ferri LE, Feldman LS, Fried GM (2011) Defining the learning curve in laparoscopic paraesophageal hernia repair: a CUSUM analysis. Surg Endosc 25:1083–1087. doi:10.1007/s00464-010-1321-6
Hyltander A, Liljegren E, Rhodin PH, Lönroth H (2002) The transfer of basic skills learned in a laparoscopic simulator to the operating room. Surg Endosc 16:1324–1328. doi:10.1007/s00464-001-9184-5
Korndorffer JR, Dunne JB, Sierra R, Stefanidis D, Touchard CL, Scott DJ (2005) Simulator training for laparoscopic suturing using performance goals translates to the operating room. J Am Coll Surg 201:23–29. doi:10.1016/j.jamcollsurg.2005.02.021
McCluney AL, Vassiliou MC, Kaneva PA, Cao J, Stanbridge DD, Feldman LS, Fried GM (2007) FLS simulator performance predicts intraoperative laparoscopic skill. Surg Endosc 21:1991–1995. doi:10.1007/s00464-007-9451-1
Sroka G, Feldman LS, Vassiliou MC, Kaneva PA, Fayez R, Fried GM (2010) Fundamentals of Laparoscopic Surgery simulator training to proficiency improves laparoscopic performance in the operating room—a randomized controlled trial. Am J Surg 199:115–120. doi:10.1016/j.amjsurg.2009.07.035
Rosen J, Brown JD, Chang L, Sinanan MN, Hannaford B (2006) Generalized approach for modeling minimally invasive surgery as a stochastic process using a discrete Markov model. IEEE Trans Biomed Eng 53:399–413. doi:10.1109/TBME.2005.869771
Lin HC, Shafran I, Yuh D, Hager GD (2006) Towards automatic skill evaluation: detection and segmentation of robot-assisted surgical motions. Comput Aided Surg 11:220–230. doi:10.3109/10929080600989189
Leff DR, Orihuela-Espina F, Elwell CE, Athanasiou T, Delpy DT, Darzi AW, Yang G-Z (2011) Assessment of the cerebral cortex during motor task behaviours in adults: a systematic review of functional near infrared spectroscopy (fNIRS) studies. Neuroimage 54:2922–2936. doi:10.1016/j.neuroimage.2010.10.058
Acknowledgements
This work is supported by NIBIB 1R01EB014305, NHLBI 1R01HL119248, and NCI 1R01CA197491 Grants awarded to Suvranu De. The authors would like to thank the medical student subjects for their dedication to this study. The authors would also like to thank the anatomical gift program and the gross anatomy lab at the University at Buffalo for their support regarding the ex vivo cadaveric samples.
Ethics declarations
Disclosures
Drs. Arun Nemani, Woojin Ahn, Clairice Cooper, Steven Schwaitzberg, and Suvranu De have no conflict of interest or financial ties to disclose.
Cite this article
Nemani, A., Ahn, W., Cooper, C. et al. Convergent validation and transfer of learning studies of a virtual reality-based pattern cutting simulator. Surg Endosc 32, 1265–1272 (2018). https://doi.org/10.1007/s00464-017-5802-8