Abstract
Background
Laparoscopic training remains largely inaccessible to surgeons in low- and middle-income countries, limiting the widespread adoption of laparoscopy. We developed a novel tool for assessing laparoscopic appendectomy skills within ALL-SAFE, a low-cost laparoscopy training system.
Methods
This pilot study in Ethiopia, Cameroon, and the USA assessed appendectomy skills using the ALL-SAFE training system. Performance measures were captured using the ALL-SAFE verification of proficiency tool (APPY-VOP), consisting of a checklist, modified Objective Structured Assessment of Technical Skills (m-OSATS), and final rating. Twenty participants, including novice (n = 11), intermediate (n = 8), and expert (n = 1), completed an online module covering appendicitis management and psychomotor skills in laparoscopic appendectomy. After viewing an expert skills demonstration video, participants recorded their performance within ALL-SAFE. Using the APPY-VOP, participants rated their own and three peer videos. We used the Kruskal–Wallis test and a Many-Facet Rasch Model to evaluate (i) capacity of APPY-VOP to differentiate performance levels, (ii) correlation among three APPY-VOP components, and (iii) rating differences across groups.
Results
Checklist scores increased from novice (M = 21.02) to intermediate (M = 23.64) and expert (M = 28.25), with differentiation between experts and novices, P = 0.005. All five m-OSATS domains and the global summed, total summed, and final rating scores discriminated across all performance levels (P < 0.001). APPY-VOP final ratings adequately discriminated Competent (M = 2.0), Borderline (M = 1.8), and Not Competent (M = 1.4) performances, Χ2 (2,85) = 32.3, P = 0.001. There was a positive correlation between ALL-SAFE checklist and m-OSATS summed scores, r(83) = 0.63, P < 0.001. Comparison of ratings suggested no differences across expertise levels (P = 0.69) or location (P = 0.66).
Conclusion
APPY-VOP effectively discriminated between novice and expert performance in laparoscopic appendectomy skills in a simulated setting. Scoring alignment across raters suggests consistent evaluation, independent of expertise. These results support the use of APPY-VOP among all skill levels inside a peer rating system. Future studies will focus on correlating proficiency to clinical practice and scaling ALL-SAFE to other settings.
Significant health care disparities persist in access to, and training for, safe surgical care [1]. Appendicitis is one of the most common surgical diseases worldwide, accounting for 17.7 million cases and 1.50 million disability-adjusted life years (DALYs) globally each year [2]. While laparoscopic appendectomy has become the standard of care in high-income countries due to reduced surgical site infections, shorter recovery time and faster return to function, and reduced postoperative pain, the operation remains inaccessible for patients in many low- and middle-income countries (LMICs) [3]. Several challenges impede the widespread uptake of laparoscopy in LMICs, including a lack of resources, insufficient financing, limited opportunities to practice, and conflicting stakeholder priorities [4]. However, a lack of training opportunities, experienced instructors, and accessible curricula in laparoscopy for LMIC surgeons may be the most pressing challenge [5].
Despite the well-known need to train LMIC surgeons in laparoscopy, this gap has gone largely unaddressed. Traditional models for training have focused on one-to-one partnership in which high-income country (HIC) institutions offer personnel training and equipment to singular LMIC partners [6,7,8]. While these efforts do focus on training, they lack clear pathways for scalability and sustainability and can reinforce unhelpful power dynamics by doing little to empower local trainees. Experts have recommended leveraging low-cost laparoscopy training simulators and telemedicine platforms to provide more accessible training options for LMIC surgeons [9]. Despite these calls, few innovative programs have been developed for low-cost training models to teach and perform laparoscopy in remote, simulation-based environments [10]. Those that are developed rarely assess the validity of evaluation measures to legitimately incorporate them as part of a scalable and adaptable surgical training curriculum. To comprehensively address the shortage of laparoscopically trained LMIC surgeons, innovative, low-cost, and scalable training modules must be developed and their associated assessment measures’ validity evidence evaluated.
ALL-SAFE is an initiative between the Pan-African Academy of Christian Surgeons (PAACS) and institutions in the USA aimed at addressing this gap. Since 2021, ALL-SAFE has developed free, open-source, virtual modules with an associated user-built, low-cost simulation system to teach and evaluate different laparoscopic skills in the LMIC setting. A pilot study assessing the ALL-SAFE module for ectopic pregnancy supported the use of the ALL-SAFE simulator and assessment tool to evaluate laparoscopic salpingostomy skills and demonstrated increased knowledge regarding ectopic pregnancy management among trainees [11]. Building on previous ALL-SAFE successes, we developed a novel ALL-SAFE training module and assessment tool to support independent laparoscopic appendectomy practice and skills development. In this pilot study, we evaluated the targeted evidence supporting the performance measures of the novel appendectomy verification of proficiency assessment tool (APPY-VOP), designed to measure laparoscopic appendectomy psychomotor skills. Specifically, we evaluated (i) discrimination between three performance levels (novice, intermediate, and expert), (ii) the correlation between scores among the three APPY-VOP components, and (iii) potential rating differences across the three rater groups.
Materials and methods
Design of the appendicitis simulator
The user-assembled ALL-SAFE box trainer and appendicitis task trainer were designed and constructed using materials readily available in LMICs and costing less than 10 USD (Supplementary files 1 and 2). A video-capable cell phone, laptop computer, and Wi-Fi or Bluetooth connection were recommended for assessment and full module participation. Laparoscopic instruments used in the simulation included a blunt grasper, curved tapered (Maryland) grasper, scissors, needle driver, 2–0 Silk suture (18–26 mm) with a taper needle (0.5 inch), and suture loops or optional pre-tied ligating loops (endoloop; Endoloop®, Ethicon, Raritan, NJ). The operation included identifying the anatomy of the appendix and surrounding structures, mobilizing the appendix with blunt dissection, ligating the appendiceal artery with placement of a figure of eight suture, tying of an intracorporeal knot with a surgeon’s knot, removing the remainder of the mesoappendix, placing two suture or pre-tied ligating loops at the base of the appendix, and transecting and removing the appendix from the laparoscopic box trainer.
Participants
This pilot study was conducted from March to August 2022 at three training hospitals in Sub-Saharan Africa and the USA: Mbingo Baptist Hospital in Cameroon, Soddo Christian Hospital in Ethiopia, and University of Michigan Hospital in the USA. The sites in Sub-Saharan Africa were PAACS training sites. This study received IRB exemption from the University of Michigan’s Institutional Review Board (HUM00199557). Expert laparoscopic surgeons, residents of varying skill levels, and novice medical students were recruited from each study site. All laparoscopic surgeons were rated as expert based on number of laparoscopic operations performed within the last month and over their lifetime. To differentiate skill levels among residents, residency program directors rated their trainees as novice or intermediate based on previous experiences with laparoscopy and level of general surgery residency training. All medical students were considered novice based on no prior experience with laparoscopy.
All participants completed the ALL-SAFE online educational module covering appendicitis management and laparoscopic appendectomy psychomotor skills. After viewing an expert laparoscopic appendectomy demonstration video in the ALL-SAFE simulation system using our low-cost appendix model, participants recorded their own performance within the ALL-SAFE box trainer. Participants were permitted to practice as many times as desired between viewing the expert video and recording their own. Following recording, participants were asked to self-rate their own video and to rate three peer videos uploaded at random by other ALL-SAFE participants across the various training sites. This provided a total of four ratings per uploaded video (one self-rating and three peer ratings). Participants used the ALL-SAFE APPY-VOP to complete this rating.
Design of the verification of proficiency (APPY-VOP) performance assessment tool
The APPY-VOP was designed through expert consensus following review of the Objective Structured Assessment of Technical Skills (OSATS) [12] and the American College of Surgeons (ACS) and Association of Program Directors in Surgery (APDS) online curriculum [13]. A first version of the APPY-VOP was drafted by one co-investigator with extensive laparoscopic surgery experience (MB) and reviewed by the entire research team for content and relevance, including four general surgeons and five learners across the three study sites. The reviewed version was further edited by the Principal Investigator and a Co-investigator to split one item, add three additional "error-based items," and split the final overall rating designation into "Competent, Borderline, and Not Competent." Final review was conducted by a psychometrician (DR) for clarity, relevance, and alignment of questions with psychomotor skills.
The APPY-VOP final version had three components: a 13-item ALL-SAFE psychomotor skills checklist of key psychomotor skills (Checklist), a 5-item modified OSATS (m-OSATS), and 1 final overall competency rating (Final rating) (Supplementary file 3). The 13-item ALL-SAFE skills checklist was designed to assess competency in the critical steps of performing laparoscopic appendectomy, including critical errors. Checklist items 1–3, 5–7, 9–11, and 13 were scored up to 2, while items 4, 8, and 12 were scored up to 3 to differentiate the importance of critical errors most relevant to patient safety, for a possible total of 29 points (Summed). The m-OSATS was a shortened version of the original 6-item OSATS, a tool validated for assessing trainees’ surgical skills across a variety of settings [12]. The m-OSATS was used to measure competency across 5 core laparoscopic skills via 5-point scales, with domains that include “Respect for tissue” and “Instrument handling,” with a possible total of 25 points (Global Summed). The maximum combined sum of the ALL-SAFE skills checklist and m-OSATS was 54 points (Total Summed). Finally, the overall competency (Final rating) assessed overall measures of competency and was scored using a three-point scale (1 = “Not competent,” 2 = “Borderline,” 3 = “Competent”).
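As an illustration of the scoring scheme above, the following Python sketch computes the Summed, Global Summed, and Total Summed scores. The item weights are taken from the text (ten items scored up to 2 points, three critical-error items scored up to 3, and five 5-point m-OSATS domains); the function names and example scores are hypothetical.

```python
# Sketch of APPY-VOP component scoring as described in the text.
# Checklist: items 1-3, 5-7, 9-11, and 13 scored up to 2 points;
# items 4, 8, and 12 (critical errors) up to 3 points -> max 29 (Summed).
# m-OSATS: five 5-point domains -> max 25 (Global Summed).
# Total Summed maximum = 29 + 25 = 54.

THREE_POINT_ITEMS = {4, 8, 12}  # critical-error items weighted more heavily

def checklist_summed(item_scores: dict[int, int]) -> int:
    """Sum the 13 checklist items, validating each against its maximum."""
    assert set(item_scores) == set(range(1, 14)), "expect items 1..13"
    for item, score in item_scores.items():
        max_score = 3 if item in THREE_POINT_ITEMS else 2
        assert 0 <= score <= max_score, f"item {item} out of range"
    return sum(item_scores.values())

def total_summed(item_scores: dict[int, int], osats_domains: list[int]) -> int:
    """Checklist Summed + m-OSATS Global Summed (maximum 54)."""
    assert len(osats_domains) == 5 and all(1 <= d <= 5 for d in osats_domains)
    return checklist_summed(item_scores) + sum(osats_domains)

# Hypothetical perfect performance: full marks on every component.
perfect = {i: (3 if i in THREE_POINT_ITEMS else 2) for i in range(1, 14)}
print(checklist_summed(perfect))               # 29
print(total_summed(perfect, [5, 5, 5, 5, 5]))  # 54
```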
Data analysis
Once the data were confirmed to violate assumptions of normality, the capacity of the three components of the APPY-VOP to differentiate between novice, intermediate, and expert performance levels was evaluated using the Kruskal–Wallis test and substantiated with secondary analysis via a Many-Facet Rasch Model (8 facets; ID × Operator Expertise × Operator Continent × Judge Expertise × Judge Continent × Judge/Evaluator × Final Rating × Item). Correlations among participants' ALL-SAFE checklist summed scores, m-OSATS scores, and overall competency (Final) scores were estimated using Pearson's r. Inter-rater agreement of novice and expert raters was determined using averaged two-way mixed intraclass correlation, ICC(A,k), across 10 randomly selected performances judged by 11 novice and 9 experienced raters.
Rating differences across expertise levels, continent, and site, that would indicate potential rater bias, were calculated using the same Many-Facet Rasch Model. Statistical analyses were performed using SPSS Statistics for Windows v.25 (IBM, Armonk, NY) and Facets software v. 3.50 (Winsteps.com, Beaverton, OR), with P-values of less than 0.05 considered statistically significant.
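The primary comparisons described above can be sketched with SciPy; the score arrays below are fabricated stand-ins for the study data, used only to show the shape of the computation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fabricated checklist Summed scores for three performance groups
# (stand-ins only; the study's actual data appear in Tables 2-3).
novice = rng.normal(21.0, 2.0, size=11)
intermediate = rng.normal(23.6, 2.0, size=8)
expert = np.array([28.0, 28.5])

# Kruskal-Wallis H test: do score distributions differ across groups?
h_stat, p_value = stats.kruskal(novice, intermediate, expert)
print(f"H = {h_stat:.2f}, P = {p_value:.3f}")

# Pearson correlation between two score components, e.g. checklist
# Summed vs m-OSATS Global Summed, on paired per-performance scores.
checklist = np.concatenate([novice, intermediate, expert])
osats = 0.8 * checklist + rng.normal(0.0, 2.0, size=checklist.size)
r, p_r = stats.pearsonr(checklist, osats)
print(f"r = {r:.2f}, P = {p_r:.3f}")
```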
Results
Demographics
Twenty participants across three pilot sites participated in the study (Table 1). Participants included an expert laparoscopic surgeon (n = 1), general surgery residents (n = 11), and medical students (n = 8). The final number of expert (n = 1), intermediate (n = 8), and novice (n = 11) classifications reflected participants' training level and experience with laparoscopy specifically.
Discrimination between performance levels using APPY-VOP
ALL-SAFE checklist
For the checklist items, scores increased with experience level, with the exception of item 12 [Avoids leaving residual appendix on cecum (< 3 mm)]. Despite this positive trend, item-level differences were not statistically significant across the three groups (Table 2). Item-level Rasch analysis supported this positive, but nonsignificant, trend: novice (M = 1.6), intermediate (M = 1.9), and expert (M = 2.2), P = 0.44. The Checklist summed scores increased from novice (M = 21.02) to intermediate (M = 23.64) and expert (M = 28.25) performers, with statistically significant discrimination between novice and expert performances (P = 0.005).
For the m-OSATS, the Kruskal–Wallis test indicated the five domains were able to discriminate across novice, intermediate, and expert performances (Table 3). These findings were supported by secondary Rasch analyses. The m-OSATS global summed and total summed scores (m-OSATS global summed + checklist summed) were also able to discriminate across these three levels of performance (P < 0.001). The m-OSATS final rating also adequately differentiated performance levels: Competent (M = 3.8), Borderline (M = 2.7), and Not Competent (M = 1.8), Χ2 (85) = 243.3, P = 0.001. The Many-Facet Rasch Model supported these findings, with statistically significant ratings across performance levels, including Competent (M = 2.0), Borderline (M = 1.8), and Not Competent (M = 1.4), Χ2 (85) = 32.3, P = 0.001.
Correlation between ALL-SAFE checklist and m-OSATS
Testing correlation of all participants’ ALL-SAFE checklist summed scores with m-OSATS summed scores indicated a positive significant relationship, r(83) = 0.63, P < 0.001 (Fig. 1). Similarly, the ALL-SAFE checklist summed score correlated with the combination of the ALL-SAFE checklist summed score and m-OSATS summed score, r(83) = 0.92, P < 0.001. ALL-SAFE checklist summed scores also correlated with the overall (final) rating scored on a three-point scale, r(83) = 0.58, P < 0.001.
Inter-rater agreement
Inter-rater agreement of m-OSATS overall performance ratings suggested mixed rater agreement across novice and experienced judges, ranging from poor to moderate for m-OSATS domains (Table 4). The poorest inter-rater agreement was estimated for two domains: Economy of Time and Motion (ICC = 0.45) and Flow of Operation (ICC = 0.45). Higher scores were present for Respect for Tissue (ICC = 0.70), indicating moderate agreement, and for Total Summed scores from the combined m-OSATS and checklist scores (ICC = 0.83), indicating good consistency of responses across participants. Rasch analysis suggested no rating differences or biases across expertise levels, continent, or site, P ≥ 0.66.
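For readers unfamiliar with the ICC(A,k) statistic reported here, the following NumPy sketch computes the two-way, absolute-agreement, average-measures intraclass correlation from a performances-by-raters score matrix, following the McGraw and Wong formulation; the rating matrix is fabricated, not the study data.

```python
import numpy as np

def icc_a_k(x: np.ndarray) -> float:
    """ICC(A,k): two-way model, absolute agreement, average of k raters.

    x has shape (n targets, k raters). Per McGraw & Wong:
    ICC(A,k) = (MSR - MSE) / (MSR + (MSC - MSE) / n)
    """
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # rows (targets)
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # columns (raters)
    sse = ((x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

# Fabricated example: 10 performances, each rated by 4 raters with mild noise.
rng = np.random.default_rng(1)
true_scores = rng.normal(20.0, 4.0, size=(10, 1))
ratings = true_scores + rng.normal(0.0, 1.0, size=(10, 4))
print(round(icc_a_k(ratings), 2))
```

When raters agree perfectly, the between-rater and error mean squares vanish and the statistic reaches 1.0, which makes a convenient sanity check.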
Discussion
This study used a novel learning and performance assessment tool, the ALL-SAFE appendectomy skills verification of proficiency tool (APPY-VOP), as a way to measure skills required for laparoscopic appendectomy among a range of learners in three locations across two continents. Our findings indicate the APPY-VOP can discriminate performance levels regardless of rater experience, especially when all three components and the m-OSATS summed scores are considered. Therefore, this study supports the use of the APPY-VOP for performance assessment in the ALL-SAFE appendicitis module among trainees with a variety of experiences.
While individual item levels were unable to discriminate across performance levels, when summed and considered as a whole, the 13-item checklist was able to differentiate across performance levels. The m-OSATS, previously validated to assess surgical performance during simulated ectopic pregnancy, also was able to discriminate across three performance levels when used to measure laparoscopic appendectomy skills performance in the same setting [11]. Additionally, the significant positive correlation between ALL-SAFE checklist summed scores and m-OSATS summed scores and between the ALL-SAFE checklist and ALL-SAFE checklist combined with m-OSATS strongly supports the use of ALL-SAFE checklist to measure laparoscopic skills in this simulated surgical setting. Finally, participants were able to use the overall Final rating to effectively discriminate across three levels of ability, indicating that this singular measurement of competency as part of the APPY-VOP has significant power to separate users’ skill levels.
Furthermore, indistinguishable rating differences via Rasch analysis across novice and expert raters for total summed scores suggests an expert opinion may not be a requirement when evaluating ALL-SAFE users. In practice, this could lessen burdens on surgical faculty who strive to supplement operative training with simulation-based training. Additionally, there is added benefit to engaging trainees in the peer review process, as doing so has been shown to increase individual skills and operational efficiency [14, 15] and may allow reviewers to practice skills for future teaching and mentorship [16]. Similarly, video-based coaching platforms have been shown to improve surgical skills among residents of varying skill levels in other settings [17].
Most importantly, ALL-SAFE addresses one of the key barriers preventing uptake of laparoscopy in LMICs: a lack of accessible training programs and validated assessment tools. Although some LMICs have been able to acquire basic laparoscopic surgery equipment, there is a continued need for proper training such that surgeons in these regions may learn the skills required for laparoscopy [4, 5, 9, 18]. Studies have demonstrated that computer-based, self-directed, and incremental video-based training is effective for teaching surgical skills to learners of all levels [19]. However, these resources are not readily available or co-designed for learners in LMICs. Our system addresses this gap by supporting training in basic laparoscopic skills using materials that are inexpensive and readily available in LMICs. An even greater disparity in laparoscopic surgical access in LMICs exists in training for complex operations beyond the three most common: appendectomy, cholecystectomy, and laparoscopy [20]. Although this study was designed for skills suited to laparoscopic appendectomy, many of the surgical skills, including intracorporeal knot tying, blunt dissection, tissue manipulation, and pre-tied ligating loop placement, have applicability to many laparoscopic operations. While a pilot study, these findings on the APPY-VOP have important implications for laparoscopic education in LMICs. Future studies should focus on correlating trainee proficiency with clinical practice, incorporating artificial intelligence and machine learning into evaluation metrics, and scaling ALL-SAFE to other LMIC settings.
Limitations
This study had several limitations to consider. First, it was conducted with a small group of participants at sites familiar with the ALL-SAFE platform. While appropriate for a pilot study designed to evaluate validity evidence of a novel skills assessment tool, this sampling limits generalizability of findings to other sites. Second, the uneven distribution of participants, chiefly that there was only one expert participant and that the University of Michigan cohort consisted exclusively of "novice" users already familiar with the ALL-SAFE system, may have inadvertently introduced unexpected scoring patterns within the nested design and negatively impacted the inter-rater reliability estimates. Future studies should recruit all skill levels from each site, with every submitted performance evaluated by multiple judges from other sites to maximize the sample. Finally, potential bias from "experienced novices" could be minimized by recruiting new novice groups unfamiliar with the ALL-SAFE platform and by including a wider group of true experts as a "gold standard" comparison group.
Conclusion
This pilot study provided validity evidence for the APPY-VOP, a novel assessment tool for ALL-SAFE, our low-cost laparoscopy training simulator and online learning module. The tool was piloted across three teaching facilities on two continents, among surgical learners of all skill levels. Our findings suggest that most components of the APPY-VOP effectively discriminated novice, intermediate, and expert performance in laparoscopic appendectomy skills, and that rating alignment across novice and expert groups suggested consistent evaluation, independent of expertise. These results support the use of the APPY-VOP among users of all skill levels alongside a peer rating system.
References
Meara JG, Greenberg SL (2015) The Lancet Commission on Global Surgery. Global surgery 2030: evidence and solutions for achieving health, welfare and economic development. Surgery 157(5):834–835. https://doi.org/10.1016/j.surg.2015.02.009
Institute for Health Metrics and Evaluation (2020) Appendicitis—level 3 cause. https://www.healthdata.org/results/gbd_summaries/2019/appendicitis-level-3-cause. Accessed 23 Nov 2022
Rosenbaum AJ, Maine RG (2019) Improving access to laparoscopy in low-resource settings. Ann Glob Health 85(1):114. https://doi.org/10.5334/aogh.2573
Wilkinson E, Aruparayil N, Gnanaraj J, Brown J, Jayne D (2021) Barriers to training in laparoscopic surgery in low- and middle-income countries: a systematic review. Trop Dr 51(3):408–414. https://doi.org/10.1177/0049475521998186
Choy I, Kitto S, Adu-Aryee N, Okrainec A (2013) Barriers to the uptake of laparoscopic surgery in a lower-middle-income country. Surg Endosc 27(11):4009–4015. https://doi.org/10.1007/s00464-013-3019-z
Bentounsi Z, Nazir A (2020) Building global surgical workforce capacity through academic partnerships. J Public Health Emerg. https://doi.org/10.21037/jphe-20-88
Kang MJ, Apea-Kubi KB, Apea-Kubi KAK, Adoula NG, Odonkor JNN, Ogoe AK (2020) Establishing a sustainable training program for laparoscopy in resource-limited settings: experience in Ghana. Ann Glob Health 86(1):89. https://doi.org/10.5334/aogh.2957
Harvey L, Curlin H, Grimm B, Lovett B, Ulysse JC, Sizemore C (2020) Experience with a novel laparoscopic gynecologic curriculum in Haiti: lessons in implementation. Surg Endosc 34(5):2035–2039. https://doi.org/10.1007/s00464-019-06983-9
Chao TE, Mandigo M, Opoku-Anane J, Maine R (2016) Systematic review of laparoscopic surgery in low- and middle-income countries: benefits, challenges, and strategies. Surg Endosc 30(1):1–10. https://doi.org/10.1007/s00464-015-4201-2
Lukish JR, Ellis-Davy J (2022) A novel low cost minimally invasive surgical system allows for the translation of modern pediatric surgical technology to low- and middle-income nations. J Pediatr Surg 57(6):1099–1103. https://doi.org/10.1016/j.jpedsurg.2022.01.027
Rooney DM, Mott NM, Ryder CY, Snell MJ, Ngam BN, Barnard ML, Jeffcoach DR, Kim GJ (2022) Evidence supporting performance measures of laparoscopic salpingostomy using novel low-cost ectopic pregnancy simulator. Glob Surg Educ 1:41. https://doi.org/10.1007/s44186-022-00044-x
Martin JA, Regehr G, Reznick R, MacRae H, Murnaghan J, Hutchison C, Brown M (1997) Objective structured assessment of technical skill (OSATS) for surgical residents. Br J Surg 84(2):273–278. https://doi.org/10.1046/j.1365-2168.1997.02502.x
American College of Surgeons and Association of Program Directors in Surgery (2017) ACS/APDS surgery resident skills curriculum—phase 2. https://www.facs.org/for-medical-professionals/education/programs/acs-apds-surgery-resident-skills-curriculum/. Accessed 15 Mar 2023
Sheahan G, Reznick R, Klinger D, Flynn L, Zevin B (2019) Comparison of faculty versus structured peer-feedback for acquisitions of basic and intermediate-level surgical skills. Am J Surg 217(2):214–221. https://doi.org/10.1016/j.amjsurg.2018.06.028
Vaughn CJ, Kim E, O’Sullivan P, Huang E, Lin MY, Wyles S, Palmer BJ, Pierce JL, Chern H (2016) Peer video review and feedback improve performance in basic surgical skills. Am J Surg 211(2):355–360. https://doi.org/10.1016/j.amjsurg.2015.08.034
Lin J, Reddy RM (2019) Teaching, mentorship, and coaching in surgical education. Thorac Surg Clin 29(3):311–320. https://doi.org/10.1016/j.thorsurg.2019.03.008
Daniel R, McKechnie T, Kruse CC, Levin M, Lee Y, Doumouras AG, Hong D, Eskicioglu C (2022) Video-based coaching for surgical residents: a systematic review and meta-analysis. Surg Endosc. https://doi.org/10.1007/s00464-022-09379-4
Robertson F, Mutabazi Z, Kyamanywa P, Ntakiyiruta G, Musafiri S, Walker T, Kayibanda E, Mukabatsinda C, Scott J, Costas-Chavarri A (2019) Laparoscopy in Rwanda: a national assessment of utilization, demands, and perceived challenges. World J Surg 43(2):339–345. https://doi.org/10.1007/s00268-018-4797-1
Kumins NH, Qin VL, Driscoll EC, Morrow KL, Kashyap VS, Ning AY, Tucker NJ, King AH, Quereshy HA, Dash S, Grobaty L, Zhou G (2021) Computer-based video training is effective in teaching basic surgical skills to novices without faculty involvement using a self-directed, sequential and incremental program. Am J Surg 221(4):780–787. https://doi.org/10.1016/j.amjsurg.2020.08.011
Farrow NE, Commander SJ, Reed CR, Mueller JL, Gupta A, Loh AHP, Sekabira J, Fitzgerald TN (2021) Laparoscopic experience and attitudes toward a low-cost laparoscopic system among surgeons in East, Central, and Southern Africa: a survey study. Surg Endosc 35(12):6539–6548. https://doi.org/10.1007/s00464-020-08151-w
Acknowledgements
The authors would like to thank surgery residents at Soddo Christian Hospital, Ethiopia and Mbingo Baptist Hospital, Cameroon, as well as medical students involved in the Surgery Olympics program at University of Michigan Medical School.
Funding
Christopher W. Reynolds was supported by a grant from the National Institutes of Health during this study (T-35 Short-Term Biomedical Research Training Program). This work was supported by a grant from the Intuitive Foundation in partnership with the Massachusetts Institute of Technology Solve Initiative as part of the Global Surgery Training Challenge.
Ethics declarations
Disclosure
Mr. Christopher W. Reynolds was supported by a grant from the National Institutes of Health during this study (T-35 Short-Term Biomedical Research Training Program). Dr. Grace J. Kim was supported by a grant from the Intuitive Foundation in partnership with the Massachusetts Institute of Technology Solve Initiative as part of the Global Surgery Training Challenge (no grant number). Drs. Deborah M. Rooney, David R. Jeffcoach, Melanie Barnard, Mark J. Snell, Kevin El-Hayek, Blessing Ngoin Ngam, and John Tanyi and Ms. Serena S. Bidwell, Ms. Chioma Anidi, and Ms. C. Yoonhee Ryder have no conflicts of interest or financial ties to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Reynolds, C.W., Rooney, D.M., Jeffcoach, D.R. et al. Evidence supporting performance measures of laparoscopic appendectomy through a novel surgical proficiency assessment tool and low-cost laparoscopic training system. Surg Endosc 37, 7170–7177 (2023). https://doi.org/10.1007/s00464-023-10182-y