Abstract
Background
Digital surgery is a new paradigm within the surgical innovation space that is rapidly advancing and encompasses multiple areas.
Methods
This white paper from the SAGES Digital Surgery Working Group outlines the scope of digital surgery, defines key terms, and analyzes the challenges and opportunities surrounding this disruptive technology.
Results
In its simplest form, digital surgery inserts a computer interface between surgeon and patient. We divide the digital surgery space into the following elements: advanced visualization, enhanced instrumentation, data capture, data analytics with artificial intelligence/machine learning, connectivity via telepresence, and robotic surgical platforms. We will define each area, describe specific terminology, review current advances as well as discuss limitations and opportunities for future growth.
Conclusion
Digital Surgery will continue to evolve and has great potential to bring value to all levels of the healthcare system. The surgical community has an essential role in understanding, developing, and guiding this emerging field.
Digital surgery is the next wave of surgical innovation, enabled by advances made in robotic surgery and preceded by the open and laparoscopic surgical eras. Digital surgery inserts a computer interface between surgeon and patient, and encompasses multiple areas including advanced visualization, enhanced instrumentation, intraoperative data capture, data analytics with artificial intelligence/machine learning, connectivity via telepresence and robotic surgical platforms.
Along with the growing need for digital surgery have come rapid advances in computational power and internet connectivity, decreasing hardware costs, and increasing familiarity with technology in the surgical setting. The digital surgery paradigm has the potential to improve surgical access, bring transparency to the operating room, disrupt conventional methods of surgical education, and create a global framework for surgical evolution. The fundamental goal of this new technology is to improve the quality of surgical care [1].
Advanced visualization
The current scope of advanced visualization comprises the following areas: three-dimensional (3D) visualization, fluorescence-guided surgery (FGS), and augmented/mediated reality (AR/MR).
Three-dimensional visualization
Three-dimensional visualization in surgery has demonstrated potential benefits to operative planning, procedure performance, surgical skill acquisition, and patient outcomes. Comparison of 2D and 3D vision during performance of Fundamentals of Laparoscopic Surgery tasks demonstrates reduced time to task completion and improved ease and efficiency of task performance. Most robotic surgical platforms offer 3D visualization, which allows for improved proficiency, with greater speed to task completion and decreased errors [2, 3]. Robotic surgery platforms have either closed or open consoles. In a closed console system, the surgeon is able to fix their head position for viewing, which leads to a standard field of view. In an open console system, the operator cannot fix their head position and instead moves their head freely during the operation. Although head motion can contribute to decreased efficiency, open console systems can allow for improved communication with team members, since the surgeon is not encumbered behind a closed console.
Fluorescence-guided surgery
Fluorescence-guided surgery (FGS) is an imaging technique that uses a fluorescent dye or a near-infrared-emitting light source in conjunction with a near-infrared camera to identify anatomic structures or evaluate tissue perfusion during surgery [4].
Recent research comparing indocyanine green (ICG) cholangiography to standard cholecystectomy found that ICG use significantly reduced operative time, common bile duct injury, rate of conversion to open operation, hospital length of stay, and mortality [4, 5]. ICG has also been used to visualize and quantify bowel perfusion of colorectal anastomoses, with some studies demonstrating decreased anastomotic leak rates using this technique [6].
Augmented reality/mediated reality
Augmented reality/mediated reality (AR/MR) is a technology that superimposes computer-generated objects onto real images and video in real time. AR is described along a virtuality continuum between the real environment (direct view of the real environment) and virtual reality (immersion in a fully digital environment). Recent definitions of AR/MR platforms require the combination of real and virtual environments to be interactive in real time and registered in 3D [7]. The application of AR/MR to medical imaging data offers a number of advantages. With AR/MR-guided surgery, the proceduralist’s attention is not divided between the navigation method and the patient, resulting in improved hand–eye coordination, accuracy, and time efficiency [8]. In addition, AR/MR platforms enable stereoscopic/3D visualization of volumetric data to improve physician perception and support clinical decision-making.
Following is a discussion of current implementation of AR/MR technology in surgery, as well as a brief description of the key technologies involved in developing AR/MR platforms: medical imaging segmentation and modeling, tracking, registration, and visualization.
Image segmentation
The datasets generated by medical imaging are large, making them challenging to manipulate in real time. Image segmentation is the processing of medical imaging to isolate regions of interest and generate a model to interactively visualize the area. Identifying the relevant anatomy has traditionally been achieved by marking structures either manually or semi-autonomously within each individual image. Although a number of open-source packages are available to facilitate this process, image segmentation remains a limiting factor to the clinical application of AR surgical guidance. Significant research efforts are underway to develop generalizable autonomous image segmentation; however, a proven and clinically accepted method has yet to be established because of similar contrast intensity between neighboring tissues, unclear lesion boundaries, and variation in lesion shape [9].
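As a toy illustration of the underlying idea, the sketch below isolates a bright region of interest from a made-up 5×5 "scan" by simple intensity thresholding. This is only a minimal sketch under idealized assumptions: real medical image segmentation must cope with similar contrast between neighboring tissues and unclear boundaries, which is precisely why simple thresholds fail clinically.

```python
# Illustrative threshold-based segmentation on a toy 2D "scan".
# The image and intensity values are invented for illustration.

def segment(image, threshold):
    """Return a binary mask: 1 where intensity exceeds the threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def region_of_interest(mask):
    """Bounding box (row_min, row_max, col_min, col_max) of the mask."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), max(rows), min(cols), max(cols)

# Toy 5x5 scan: a bright "lesion" (intensity ~200) on a dark background.
scan = [
    [10, 12, 11, 10, 13],
    [11, 205, 210, 12, 10],
    [12, 198, 202, 11, 12],
    [10, 11, 12, 10, 11],
    [13, 10, 11, 12, 10],
]

mask = segment(scan, threshold=100)
print(region_of_interest(mask))  # → (1, 2, 1, 2)
```

When neighboring tissue shares the lesion's intensity range, no single threshold separates them, which is why autonomous segmentation remains an open research problem.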
Registration
The exact alignment of the virtual image to the real environment is crucial to the clinical application of AR platforms. Image registration is the process of determining the spatial correspondence of two or more image sets. In image-guided surgery, the two image sets are defined as a static and a moving set, and an algorithm is applied to determine the optimal transformation that minimizes the difference seen between the virtual and real environments. The accurate alignment of both images can be achieved with coordinated trackers, used to determine the exact position and orientation of the camera and the patient’s body. Marker-based registration relies on rigid calibration of markers to real objects to allow for precise estimation of the real object as detected by either an external or internal sensor. Marker-free registration exploits the natural features observed by the tracking device within the real environment; an example is the simultaneous localization and mapping (SLAM) technique [10]. Registration can also be performed manually.
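A minimal sketch of the static/moving alignment step, assuming idealized marker correspondences and restricting the transformation to translation only: the translation minimizing the sum of squared distances between corresponding markers is the difference of the two centroids. Full rigid registration also solves for rotation (e.g., via the Kabsch algorithm), which is omitted here; the marker coordinates below are hypothetical.

```python
# Translation-only registration between a "static" (preoperative) and
# a "moving" (intraoperative) set of corresponding marker positions.
# All coordinates are hypothetical, in millimeters.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def optimal_translation(static, moving):
    """Translation t such that moving + t best aligns with static
    in the least-squares sense (difference of centroids)."""
    cs, cm = centroid(static), centroid(moving)
    return tuple(s - m for s, m in zip(cs, cm))

def registration_error(static, moving, t):
    """Root-mean-square error after applying translation t to moving."""
    sq = [sum((s[i] - (m[i] + t[i])) ** 2 for i in range(len(t)))
          for s, m in zip(static, moving)]
    return (sum(sq) / len(sq)) ** 0.5

# The intraoperative markers are the preoperative ones shifted by
# roughly (5, -2, 3) mm, with small measurement noise added.
static = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
moving = [(-5.0, 2.0, -3.0), (5.1, 1.9, -3.0), (-5.0, 12.0, -2.9)]

t = optimal_translation(static, moving)
print(t)                                    # ~ (5, -2, 3)
print(registration_error(static, moving, t))  # sub-millimeter residual
```

The residual error after alignment is the registration accuracy figure (e.g., the 0.2–3 mm ranges reported for clinical AR systems).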
Registration is complicated when the organ of interest does not behave as expected. This issue is particularly highlighted in general surgery where target anatomy is not rigid and deforms in a dynamic manner (e.g., deformations with heartbeat and respiration) [11].
Current implementation
AR systems are currently best implemented during surgery when there is little to no movement of the real environment and when tissue deformation is minimal, as these use cases require less tracking and processing power compared with applications on mobile organs, where tracking and display are significantly more complicated. Thus, AR systems have been successfully implemented clinically in neurosurgery, orthopedics, and otolaryngology, including for bone dissection, clipping of cerebral aneurysms, microvascular decompression, and placement of pedicle screws [12,13,14,15]. AR systems have demonstrated accurate registration (0.2–3 mm), but increased operative time and associated financial costs have been reported with the use of AR systems for intraoperative guidance [13, 16].
AR has been difficult to apply to abdominal surgery due to the amount of organ movement. Successful use cases have been reported for liver and pancreatic surgery, where AR has been used to compare the reconstructed virtual model with intraoperative ultrasound to identify lesions for resection [7, 17]. Successful superimposed 3D representation of the patient’s hepatobiliary structures over the surgeon’s field of view has been studied during open liver surgery [16]. Intraoperative AR has also been used to accurately detect sentinel lymph nodes using preoperative SPECT/CT scans, and RSIP Vision has developed RSIP Neph, an intraoperative AR tool that assists surgeons during partial nephrectomy in accurately locating and resecting an intracapsular renal lesion [18, 19].
Limitations and future directions
Advanced visualization can add clinical value by allowing surgeons to see more intraoperatively, enabling improved surgical decision-making as well as improved patient outcomes. Three-dimensional visualization via robotic surgical platforms and near-infrared-enabled surgical cameras are tools available in many operating rooms, and their use should continue to increase as more surgeons learn of the benefits of these modalities.
One limitation of FGS studies is that most have non-randomized designs; larger, well-designed trials are thus imperative to further validate FGS. Research is ongoing to identify novel fluorescent biomarkers for clinical use. Furthermore, new assessment platforms, such as laser speckle contrast imaging (LSCI), are being developed to assess perfusion without the need for a fluorophore [20]. One benefit of LSCI is that it enables real-time, reproducible perfusion assessment, in contrast to ICG, where residual intravascular ICG can sometimes lead to false-positive perfusion results.
Although AR/MR application in surgery has shown great promise through individual specialty applications and pre-clinical research, a number of issues limit further clinical adoption of AR/MR systems. Medical imaging continues to advance to provide more detail for preoperative planning, but the amount of data to be processed to segment image datasets is large, and current segmentation methods are time-intensive and potentially expensive. Further research is needed to identify generalizable segmentation methods [21].
Additional research is required to understand the challenges surgeons may encounter in utilizing augmented information. Head-mounted displays may unintentionally obscure vision of the surgical field, create visual clutter from the virtual model overlay, or distract surgeons from the procedure [2, 22]. Poor ergonomics with head-mounted displays may lead to fatigue, and virtual overlays may cause simulator sickness, exhibited by nausea, headache, and vertigo [23, 24].
Enhanced instrumentation
The growth and adoption of robotic surgery is inextricably tied to the evolution of surgical instrumentation. In open surgery, the surgeon directly “drives” traditional surgical tools that interact with patient tissues, and all information received is direct. In minimally invasive surgery, the links between surgeon and tissues are mediated by laparoscopic or robotic instruments and a video display [25]. The most current enhanced instrumentation consists of devices equipped with one or more of the following elements: a power system, sensors, and automation, as well as safeguards to ensure consistent performance. The main areas of enhanced instrumentation include intelligent staplers and energy devices, as well as roboticized devices.
Recent advances in stapling technology include powered staplers with automated firing mechanisms and tissue compression sensing that provide well-formed, reliable staple lines [26, 27]. The benefit of this technology is observed in gastrointestinal operations. Johnson et al. compared outcomes between powered versus manual staplers for bariatric surgery and demonstrated significantly lower hospital costs and bleeding rates in the powered stapler group. Despite higher device cost, several studies show the benefits of powered stapler use in left-sided colorectal anastomosis, with decreased leak rates, bleeding, ileus, 30-day inpatient readmission, hospital length of stay, and decreased overall healthcare costs [28,29,30].
Advanced bipolar and ultrasonic energy devices are routinely used in the operating room. These systems incorporate software algorithms to measure changes in tissue impedance and adjust energy output to deliver the appropriate amount of energy for the desired tissue effect, with advantages including: limited risk of thermal injury, elimination of dispersive electrodes, enhanced sealing capability, as well as decreased blood loss and procedure time [31,32,33].
Roboticized devices are surgical devices that integrate robotic technology with a hardware component. Examples include the HandX™, a powered laparoscopic device for grasping, ligation, and suturing, and the Neoguide endoscope, which uses a computer algorithm to help optimize scope movement through the bowel [34].
Limitations and future directions
The past two decades have seen incredible growth in precision surgical tools enhanced with a degree of embedded intelligence and autonomy, whose consistent performance has added clinical value by increasing surgeons’ operative capability and improving surgical outcomes. Regarding implementation, enhanced energy and stapling devices are found in most modern operating rooms; in comparison, roboticized devices are less commonly seen. With the explosion of robotic surgical technology, this category of devices should grow in the next decade. One barrier to implementation is the cost of these newer devices; data are thus needed to demonstrate that their added benefits offset their cost.
Some areas of future development include a cordless operating room (OR), wireless data transmission for laparoscopic devices, and handheld robotic tools.
Data capture
There is an abundance of data in the operative environment that can be captured and utilized. These data can be broadly categorized as data from the operating room (e.g., OR personnel, system-level processes, and quality metrics), data from operative equipment (e.g., kinematics data from robotic platforms), and data from the surgical field (e.g., the video stream from image-guided platforms). In addition, there is a growing number of integrated platforms that facilitate and streamline data collection and exchange, so-called “Integrated Operating Rooms.”
Data from the operating room
Various tools to measure OR efficiency exist, including surgical checklists and Metric for Evaluating Task Execution in the Operating Room (METEOR), all of which perform data collection, analysis, evaluation with iterative correction, and dissemination to staff and institution [35,36,37].
The airline industry uses a “black box” recording device to track large amounts of flight data for both real time and future analysis, which has led to quality improvements and unprecedented safety for passengers. Commercially available platforms such as the OR Black Box (Toronto, ON, Canada) allow for the capture of audio, visual, and other data on various elements of a procedure including both in the operative field and in the operating room itself (e.g., tracking team members) and use artificial intelligence/machine learning to help teams improve quality and efficiency. These systems can identify and remove personal information of patients and providers while retaining clinically relevant data and potentially identify intraoperative errors, events, and distractions [38, 39].
Data from operative equipment
Kinematics is the study of the motion of mechanical points, bodies, and systems. The goal of kinematics in surgical robots is to determine tool and joint positions with respect to operator control and patient anatomy [40, 41]. Surgical robots can capture quantitative instrument motion trajectories during surgery, enabling analysis of surgical activity that is not possible with traditional instrumentation [42]. Surgeon kinematics can give insights into operative efficiency as well as guide eventual autonomous robots. Additional operative instruments that supply data include the advanced energy devices and powered staplers discussed in the previous section.
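To make the tool-position idea concrete, the sketch below computes the tip position of a hypothetical two-link planar arm from its joint angles (forward kinematics). Surgical robots apply the same principle, with more joints and in 3D, to relate joint encoder readings to tool pose; the link lengths and angles here are invented for illustration.

```python
import math

# Forward kinematics for a hypothetical two-link planar "instrument
# arm": given link lengths and joint angles, compute the tool tip.

def tool_tip(l1, l2, theta1, theta2):
    """Tip (x, y) of a two-link arm; angles in radians, theta2
    measured relative to link 1."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both links 10 cm; first joint at 90 degrees, second bent back 90:
# link 1 points straight up, link 2 then points along +x.
x, y = tool_tip(10, 10, math.pi / 2, -math.pi / 2)
print(round(x, 6), round(y, 6))  # → 10.0 10.0
```

Inverting this mapping (joint angles from a desired tip position) is the companion problem a robot solves continuously as the surgeon moves the controls.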
Surgical video data
As digital technology continues to advance, so have opportunities for recording and disseminating surgical videos. Surgical video recordings capture many different aspects of the operative process and serve a variety of purposes, including training, coaching, research, assessment, and quality improvement [43,44,45,46,47,48]. With the advent of 4K and 8K video resolution, the amount of surgical video data is increasing exponentially, and there are dozens of commercially available solutions for secure cloud storage of video data, which enable upload of surgical videos without the need for USB drives, DVDs, or encrypted drives.
The use of video to improve performance has been documented in many disciplines, and numerous manuscripts describe the advantages of surgical video recording [43, 47,48,49,50,51,52,53,54]. Videos are used by medical trainees to learn surgical anatomy, procedures, and technical details, and to prepare for cases. Video-based curricula have been shown to increase knowledge base, improve technical performance, and shorten learning curves. Retrospective review of surgeons’ videos augmented with expert critique has been shown to improve patient outcomes [55, 56]. Surgical videos have even made their way to social media platforms in efforts to solicit peer feedback or advice on operative approaches. As the advantages of intraoperative surgical video recording continue to emerge, surgeons performing open operations have adapted technologies to capture open cases. Saun et al. identified 176 clinical applications for open video recording and 125 different types of cameras used to record open intraoperative cases [49].
The integrated operating room
Integrated or Digital Operating Rooms are installations that functionally connect the OR environment and enable various equipment sources to communicate with one another; these can include high resolution video displays, video routing systems, touch-screen control, digital information archiving, as well as a central hub that connects multiple ORs to one another and the outside world. In addition to simplifying workflows for the surgical team and improving safety, the Integrated OR enables live consultation with medical teams (i.e., pathology), real-time collaboration with virtual surgeons, data exchange with the electronic medical record system, and live feeds for training or teaching purposes.
Limitations and future directions
The benefit of operative data capture is that it can be used to improve OR efficiency, which has implications for cost savings, quality improvement, patient satisfaction, and medical team morale [35]. Decades of literature on surgical performance suggest that multiple factors drive positive operative outcomes, especially advanced cognitive skills (e.g., situation awareness, judgment, decision-making), interpersonal skills (e.g., communication, teamwork), personal resourcefulness, and other human factors [57].
Of the types of intraoperative data available, the most widely implemented is surgical video capture. Despite improved accessibility and the abundance of literature supporting the benefits of intraoperative video capture, there are persistent and emerging challenges surrounding this process. These range from ownership of the media and individual patient privacy concerns to hospital-related legal issues. With the exchange of video files, questions have surfaced regarding ownership of the individual media file, and ongoing debate surrounds the rights/responsibilities of the patient, hospital system and operating surgeon. These same questions of ownership apply to other data elements as well.
Some challenges to implementation of an Integrated OR include the reliability and speed of the hospital network, as well as discrepancies between the operating room system and the display technology at an existing remote site. To enable quality communication within this system, the Integrated OR, data network, and remote sites must act as a whole; thus network configuration, bandwidth availability, reliability, and information security must be addressed to ensure compatibility. Adoption of efficiency and quality systems is limited by lack of widespread awareness of their utility, as well as cost constraints. Kinematic data captured by surgical robots are stored by the individual robot companies and are thus not readily accessible for academic research.
Going forward, developing standards defining the appropriate ethical use of perioperative data is necessary. This is a good opportunity for surgeons to advocate for transparency and improved mechanisms for sharing surgical data, as well as help shape policy surrounding data-sharing between healthcare systems, industry, and clinicians.
Data analytics
Data analytics has changed the landscape of everyday life by merging three trends: faster and smaller computer processors, proven statistical methods, and large datasets. The convergence of these factors has transformed many facets of daily life and is now changing healthcare.
Artificial intelligence
One area of analytics that has gained prominence is artificial intelligence (AI), a loosely defined field that seeks to design systems that mimic human thought and behavior. Most AI surgical applications involve machine learning (ML), wherein a machine learns and makes predictions by recognizing patterns; this is useful for identifying subtle patterns in large datasets. ML allows a computer to use partial data labeling (supervised learning) or the structure detected in the data (unsupervised learning) to make predictions without specific instructions [58].
Artificial neural networks (ANNs), a subfield of ML, process signals in layers of simple computational units (nodes). Unlike regression, ANNs are good at managing multidimensional, covariable data. The use of one or more many-layered ANNs to create systems capable of autonomously or semi-autonomously executing tasks is known as “deep learning.”
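The supervised-learning loop underlying ANNs can be sketched with a single logistic node, the basic building block of a network, trained by gradient descent. The two-feature "risk" dataset and its labels below are entirely synthetic, invented for illustration; real clinical models stack many such nodes in layers and train on large patient datasets.

```python
import math

# A minimal single-node "neural network" (logistic unit) trained by
# gradient descent: predict a binary label from two synthetic features.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.5):
    """Repeat the core supervised loop: predict, measure error,
    adjust weights in the direction that reduces the error."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - y                  # gradient of the log-loss
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Synthetic, linearly separable data: higher feature values -> label 1.
data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 0.9), 1)]
w, b = train(data)
print(predict(w, b, (0.1, 0.1)) < 0.5)   # low-feature case -> label 0
print(predict(w, b, (0.9, 0.9)) > 0.5)   # high-feature case -> label 1
```

The node's learned weights are the "pattern" extracted from labeled data; deep learning repeats this adjustment across many layers of such nodes.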
One exciting application of ML is computer vision, loosely defined as machines understanding and interpreting pixel data (i.e., images and videos); it is a rapidly growing area within healthcare. Currently there are several applications of computer-aided diagnosis, mostly in fields outside of surgery (e.g., radiology, pathology) [55,56,57,58,59,60].
AI has broad potential applications in clinical practice, quality improvement, research, and industry. In surgery, its applications are broad and include real-time decision support, surgical education, risk prediction, processes and resource management, and autonomous surgery.
Surgical decision-making and operative resource management
In multiple surgical subspecialties, AI has the potential to predict outcomes, prevent complications and missed diagnoses, lessen the cognitive load of busy physicians, and allow for more informed discussions between physician and patient [61]. AI applications in image recognition may also have far-reaching implications by aiding radiologists and surgeons in detecting or evaluating disease. AI image recognition systems can be applied to video to enhance endoscopic cancer screening, a technique that has been shown to improve adenoma detection during colonoscopy [62]. Computer vision has also been applied intraoperatively. Ample evidence suggests that most surgical adverse events have root causes that occur at the time of surgery and are often due to preventable errors in judgment and decision-making. One potential way to address this is to use AI to augment surgeons’ mental models at the time of surgery. Most proof-of-concept algorithms have been developed in the context of laparoscopic cholecystectomy, such as GoNoGoNet for the identification of safe and dangerous areas of dissection to avoid major bile duct injuries during hepatocystic triangle dissection, and CVSNet for confirming whether the Critical View of Safety has been achieved [63, 64]. With respect to patient flow and operative resource management, ML algorithms have been used to estimate case length, coordinate between the OR and the post-anesthesia care unit, and predict case cancelation [65].
Skills assessment
Surgical skill, as rated by one’s peers, correlates with surgical outcomes [66]. Such ratings are predictive even if the surgeries are not rated by other surgeons: skill assessments can be crowdsourced to trained laypeople with similar results [67]. The realm of automated skill assessment investigates whether AI systems can make similar, automated assessments of surgeon skill. Such a system might play an important role in resident training and evaluation, as well as in mentoring or remediating surgeons in practice. So far, the AI skill assessment systems that have seen good results in trials consist of automated kinematic metrics, combined with human observer-generated skill metrics, both fed into an ANN [68]. Truly automated skill assessment remains a challenge, particularly because of the difficulty of building an AI that can evaluate skills such as needle handling and respect for tissue. Though difficult, such AI-generated assessments of technical skill have achieved good concordance with human graders for certain surgical tasks, though challenges remain with regard to consistency, generalizability, and generating useful feedback for surgeons [69, 70].
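As an illustration of the automated kinematic metrics mentioned above, the sketch below computes path length and economy of motion from hypothetical instrument-tip trajectories. Shorter, smoother paths for the same task generally correlate with higher technical skill; real systems combine many such metrics, along with observer ratings, as inputs to a network. The trajectories here are invented for illustration.

```python
import math

# Two classic kinematic skill metrics computed from a sequence of
# 3D instrument-tip positions (hypothetical coordinates).

def path_length(traj):
    """Total distance traveled along a sequence of 3D points."""
    return sum(math.dist(a, b) for a, b in zip(traj, traj[1:]))

def economy_of_motion(traj):
    """Straight-line distance divided by actual path length (<= 1);
    1.0 means the tip moved along a perfectly direct path."""
    return math.dist(traj[0], traj[-1]) / path_length(traj)

# Two hypothetical trajectories between the same start and end points.
direct = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
wandering = [(0, 0, 0), (1, 1, 0), (1, -1, 0), (2, 0, 0)]

print(economy_of_motion(direct))          # → 1.0
print(economy_of_motion(wandering) < 1.0)  # wandering path is less efficient
```

Metrics like these are objective and cheap to compute from robot kinematic logs, which is why they anchor most automated assessment systems, while harder qualities such as respect for tissue still resist quantification.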
Patient care
Using the extensive American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database, a machine learning predictive algorithm was created to continuously improve risk prediction [71]. ANNs have shown promise in detecting cancers of the lung, breast, and skin, and in predicting outcomes for complex surgical patients [72,73,74]. In critical care, AI-driven tools can predict clinical decompensation hours before it occurs, while experimental systems have shown promise in autonomously determining how to treat patients once they deteriorate [75, 76]. An ML-based hypotension prediction algorithm was shown to reduce time spent in a hypotensive state compared with standard care in a single-center randomized trial [77]. Such AI systems might be used as decision-support aids, lessening the cognitive load on physicians [78].
Automated surgery
Surgery requires a number of basic proficiencies: perception (the ability to discern tissues and planes), intelligence (the ability to decide how to manipulate the tissues), and dexterity (the ability to manipulate the tissues themselves). Several autonomous or semi-autonomous surgical systems already exist that display all or some of these abilities, such as the CyberKnife system for stereotactic body radiotherapy [79]. For the gut and soft tissues, pliability, distensibility, and discernibility limit the ability of AI systems to autonomously perform tasks. Current robotic platforms, like the da Vinci Surgical System, act as surgical “assistants” by stabilizing instruments and interpreting movements, rather than performing the movements themselves. However, there is hope that autonomous surgery might make its way to the gastrointestinal realm. In the lab, autonomous surgical AIs have been able to perform basic tasks like peg transfer and cutting simulated 2D and 3D tissues [80, 81]. Likely the most feature-complete AI system for automated soft tissue surgery is the Smart Tissue Anastomosis Robot (STAR), an automated suturing robot that has successfully completed laparoscopic bowel anastomoses in porcine models [82]. Though impressive, this system still requires an experienced surgeon assistant to prepare and align the bowel for anastomosis and to manage the suture.
Limitations and future directions
While AI systems have shown promise in simulations and in the laboratory setting, no AI system has yet been adopted into widespread clinical practice. Before adoption, AI systems must be demonstrated to be trustworthy; must address pertinent, impactful surgical issues; and must be practical and cost-effective [83]. Furthermore, despite the many AI algorithms reported in the literature, most of these models are not generalizable to real-world data, and the infrastructure and data pipelines needed to make them available for real-time support are lacking. Because AI systems are “trained” on data drawn from actual patients in real-world healthcare systems, they will reflect the systemic biases of the systems on which they were trained; this can disadvantage racial minorities and women, who are underrepresented in patient registry and clinical trial populations [84,85,86]. Surgeons must ensure that clinical AIs are designed to avoid systemic bias, rather than perpetuate it.
Another area of AI which requires further definition is how such services will be billed and reimbursed when used in a medical setting. Recently the American Medical Association released CPT® Appendix S: Artificial Intelligence Taxonomy for Medical Services and Procedures, which provides guidance for classifying various artificial intelligence (AI) applications (e.g., expert systems, machine learning, algorithm-based services) for medical services and procedures into one of three categories: assistive, augmentative, and autonomous [58].
Finally, as AI-enhanced surgery becomes more prevalent, surgeons need to become familiar with the underpinnings of this technology to use it effectively and participate in its development.
Connectivity
The Digital Surgery element of Connectivity comprises novel methods of connecting surgeon to surgeon, such as telementoring in both educational and clinical spaces, as well as connecting surgeon to patient via telesurgery.
Several factors converge to create a need for increased access to surgical care and education beyond the status quo of in-person, unenhanced surgery. The most obvious are the restrictions posed by the COVID-19 pandemic. As in-person attendance became difficult or impossible, virtual access became a necessity. From the perspective of trainees, this challenge affected everyone from students considering a career in medicine to surgical residents and fellows. Practicing surgeons were also not immune; those looking to expand their surgical capabilities were cut off from training courses or from having proctors visit their facilities. Many sites had to halt elective operations altogether due to hospital capacity or equipment limitations (Fig. 1).
Telementoring for education
Even before COVID-19, there was a brewing mismatch between surgical learners and learning opportunities, with the number of students and trainees outpacing the capacity of hospitals to provide sufficient access to operative training experiences. This has been further compounded by decreasing trainee autonomy and work-hour restrictions.
With current surgical training program volumes and population growth rates, a deficit of almost 30,000 surgeons is projected by the year 2030 [87]. It is further estimated that training enough surgeons to address this need would cost an additional $10 billion. This clearly presents a significant challenge, but also an opportunity to reassess training paradigms.
The SAGES Project 6 working group determined that basic technical requirements for telementoring fall into the five key areas of (1) safety, (2) reliability, (3) transmission quality, (4) ease of use, and (5) cost [88]. In addition, key elements of a digital surgery training experience were broken down into recording video for later review (“video coaching”), advanced analysis of surgical video, and telestration [89].
Telementoring in the setting of trainees differs from that of practicing surgeons due to a more variable skill set, the presence of a supervising attending, the need to meet specific educational requirements, and work-hour limitations, among other factors.
A telementoring program for education should follow evidence-based practices in forming the curriculum, evaluating the performance of the mentor, and assessing the improvement of the mentee. Augestad et al. have nicely described components of both a "train the trainer" program and mentee development, again as part of the SAGES Project 6 initiative. They also review methods of providing structured feedback, some of which are summarized in Fig. 2 [90, 91].
Telementoring for clinical practice
Surgical telementoring for clinical practice has been used successfully for over 20 years in various forms. In 1998, Rosser et al. applied teleproctoring to guide a safe laparoscopic cholecystectomy for patients in rural Ecuador [92]. In 2020, amidst the COVID-19 pandemic, telementoring using real-time bidirectional audiovisual communication with digital transmission of live videos and direct observation of the operative field by a remote proctor enabled a valve-in-valve transcatheter mitral valve replacement for an 82-year-old patient [93]. Other excellent clinical outcomes of surgical telementoring have been reported in the literature [94,95,96,97].
Two fundamental applications of surgical telementoring in clinical practice are skill acquisition and virtual intraoperative consultation. In skill acquisition, a remote mentor surgeon proctors the mentee through an entire operation. This application of planned surgical telementoring is a valuable resource for performing new or complex procedures, allowing surgeons to undertake new operations with an experienced proctor virtually present. It also adds value to the local hospital, which can offer more complex procedures while decreasing risk compared with a scenario in which a local surgeon performs a new or complex procedure without assistance.
The latter application, "Virtual Surgical Assist," allows a surgeon to obtain real-time intraoperative consultation during a challenging operation, providing the local surgeon virtual assistance when no physical consultation is possible or when it would cause a delay. This could be used after hours, in cases where an unexpected finding occurs (e.g., a mass or abnormal anatomy), or in practice settings where surgeons with the required expertise cannot be physically available. This application also brings advanced surgical expertise to the local site and can add value by decreasing the need for transfers to a tertiary care facility.
Currently, skill acquisition is active and growing, while Virtual Surgical Assist is used infrequently, largely due to systemic barriers to implementation [98].
Telesurgery
Telesurgery is defined as remote surgery in which the surgeon is not at the immediate site of the patient. Visualization and manipulation of the tissues and equipment are performed using teleoperation [99].
The first telesurgery was a transatlantic robotic cholecystectomy performed in 2001 by Dr. Jacques Marescaux, with the surgical team located in New York and the patient in Strasbourg, France [100, 101]. In 2003, two Canadian hospitals established the first telerobotic surgical service, completing 21 telerobotic surgeries without serious complications and with no conversions to open operations [102]. The remainder of the current experience in telesurgery has used inanimate models [103, 104].
Limitations and future directions
There has not been, to our knowledge, a study of a residency or fellowship with digital surgical modalities formally integrated into its curriculum. Initial experience with this integration will lead to important lessons learned and refinements. It will also be an opportunity to compare traditional models with integrated models in terms of efficacy, efficiency, and practicality. Future directions include guidance on how to integrate telementoring in a stepwise fashion into the curriculum and how to provide training programs with a practical way to adopt the necessary technology.
To grow into its full potential, surgical telementoring requires clarity around legal concerns, licensing/credentialing guidelines, and coding/reimbursement models. A sustainable business model is needed that accounts for costs and potential payers in order to develop a viable reimbursement system. Costs include technology, the mentor's time, mentor training, curriculum development, and legal fees, including malpractice coverage and indemnification. Potential payers include the recipient hospital, the expert hospital, government healthcare programs (Medicare/Medicaid), insurance companies, and industry. Legal barriers inherent to telementoring include litigation risks to the mentor, the mentee, and their respective hospitals; the need for informed patient consent; and the lack of a well-defined licensing and credentialing process.
The mentor–mentee relationship also shapes logistical considerations. Potential scenarios include use within a regional hospital system, a national hospital system, or an unaffiliated relationship. The first is the most likely to emerge: because the surgeons are employed within the same hospital system and state, liability, licensing, and credentialing will be less of a challenge.
Robotic surgical platforms
Since the late 1990s, minimally invasive Robotic-Assisted Surgery (RAS) has become an avenue to integrate current technological advancements into traditional minimally invasive surgery. In the past decade, the number of robotic surgical and endoscopic platforms has rapidly increased [105]. A study examining the use of robotic surgery in Michigan describes an increase in the use of robotic surgery for general surgery procedures from 1.8% in 2012 to 15.1% in 2018, with a concurrent decrease in laparoscopic surgery [106].
Within the surgical arena there are three main types of robotic systems: active systems, which work autonomously to complete pre-programmed tasks; semi-active systems, which allow a surgeon-driven element to complement the pre-programmed element; and passive systems, which are entirely dependent on surgeon activity.
In the U.S., the three FDA-approved robotic platforms for general surgery are the da Vinci Xi and Single Port systems from Intuitive and Senhance from Asensus Surgical. Two additional platforms, FDA pending but currently in use in Europe, are Hugo by Medtronic and Versius by CMR Surgical. As the market continues to diversify, novel modalities with decreased costs and smaller size are emerging [107]. There are over 20 robotic surgical platforms under development, including Ottava by Johnson & Johnson, the SPORT™ Surgical System by Titan Medical, MicroSurge by the DLR Institute of Robotics and Mechatronics, Beta 2 by Vicarious Surgical, and MIRA by Virtual Incision.
Within this space, there has been significant growth in endoluminal robotics, both for laparoendoscopic single-site surgery (LESS) and for natural orifice translumenal endoscopic surgery (NOTES). These approaches help minimize collateral tissue damage. Examples of such systems include the NeoGuide endoscopy system, a computer-aided colonoscope that uses real-time 3D computerized mapping to travel along the natural curves of the colon; the Flex® robotic system from Medrobotics Corp, a joystick-controlled single-port platform used in surgeries of the oropharynx, hypopharynx, and larynx; and STRAS from iCUBE, a flexible endoscopic system developed for single-port intraluminal surgery.
Limitations and future directions
Although the current robotic surgical landscape is dominated by a few monolithic systems, ongoing growth in the number of FDA-approved robotic platforms for general surgery and endoscopy should lead to decreased costs as well as increased adoption and innovation. Limitations of this technology include concerns that it contributes to an escalating cost of care, with limited evidence supporting superior clinical benefit relative to standard MIS techniques.
While all currently FDA-approved robotic systems are essentially robot-assisted telemanipulation devices, the data generated and the augmented reality possible using these platforms are beginning to take shape. Several companies are working diligently to create haptic feedback that allows the surgeon to regain some of the tactile sense lost in a purely robotic procedure. This is not easy to accomplish, as it requires active sensors on the end effectors. The Senhance robot provides moderate haptic feedback by gauging motor resistance, joint angles, and overall robot positioning. Abiri and colleagues recently published early work with experimental force sensors installed on a da Vinci fenestrated bipolar grasper that allow the user to palpate soft and hard objects hidden from view [108]. The ability to physically experience touch while working remotely with a robotic system could significantly alter user experience and potentially benefit patients and surgical teams.
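The idea of inferring tip forces from motor resistance and joint state, as described above, rests on a standard robotics relation: joint torques and the force at the instrument tip are linked through the manipulator Jacobian. The sketch below is a minimal, hypothetical illustration for a planar two-link arm (not the implementation of any commercial system); the link lengths and function names are assumptions for demonstration only.

```python
import numpy as np

def planar_2link_jacobian(q, l1=0.3, l2=0.25):
    """Jacobian of the tip position of a planar 2-link arm w.r.t. joint angles."""
    q1, q2 = q
    return np.array([
        [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
        [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
    ])

def estimate_tip_force(q, joint_torques):
    """Estimate the external tip force from sensed joint torques.

    Uses the statics relation tau = J^T f, so f = pinv(J^T) tau.
    """
    J = planar_2link_jacobian(q)
    return np.linalg.pinv(J.T) @ np.asarray(joint_torques)

# Example: apply a known force at the tip, then recover it from the
# torques the motors would sense.
q = np.array([0.4, 0.8])           # joint angles (rad)
f_true = np.array([1.0, -2.0])     # external force at the tip (N)
tau = planar_2link_jacobian(q).T @ f_true
print(estimate_tip_force(q, tau))  # recovers approximately [1.0, -2.0]
```

Real systems must additionally separate external contact forces from gravity, friction, and inertial torques, which is one reason sensorless haptic estimates remain approximate compared with dedicated end-effector force sensors.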
Furthermore, research into autonomous surgery continues to progress, with recent advances in autonomous robotic suturing and intestinal anastomosis [109]. In a recent article by Saeidi et al., autonomous anastomosis with soft-tissue surgical tracking was achieved in an in vivo model. The authors noted better consistency and spacing of the sutures placed, suggesting the possibility of improved outcomes if such a system were available in current commercial robotics [82]. These capabilities could revolutionize how robotic surgery is used in surgical practice.
Conclusion
Digital surgery is a nascent technology that has great potential to bring value to all levels of the healthcare system. It encompasses a range of subjects including advanced visualization, enhanced instrumentation, data capture, data analytics, connectivity, and robotic surgical platforms. These technologies are on the cusp of a revolution that will impact the surgical field. Surgeons have a uniquely important role to play in guiding these advancements and collaborating with other stakeholders to ensure the maximum safety and quality of this evolving paradigm.
References
Atallah S (ed) (2020) Digital surgery, 1st edn. Springer, Cham
Guanà R, Ferrero L, Garofalo S, Cerrina A, Cussa D, Arezzo A et al (2017) Skills comparison in pediatric residents using a 2-dimensional versus a 3-dimensional high-definition camera in a pediatric laparoscopic simulator. J Surg Educ 74(4):644–649. https://doi.org/10.1016/j.jsurg.2016.12.002
Tanagho YS, Andriole GL, Paradis AG, Madison KM, Sandhu GS, Varela JE et al (2012) 2D versus 3D visualization: impact on laparoscopic proficiency using the fundamentals of laparoscopic surgery skill set. J Laparoendosc Adv Surg Tech A 22(9):865–870. https://doi.org/10.1089/lap.2012.0220
Reinhart MB, Huntington CR, Blair LJ, Heniford BT, Augenstein VA (2016) Indocyanine green: historical context, current applications, and future considerations. Surg Innov 23(2):166–175. https://doi.org/10.1177/1553350615604053
Broderick RC, Lee AM, Cheverie JN, Zhao B, Blitzer RR, Patel RJ et al (2021) Fluorescent cholangiography significantly improves patient outcomes for laparoscopic cholecystectomy. Surg Endosc 35(10):5729–5739. https://doi.org/10.1007/s00464-020-08045-x
Blanco-Colino R, Espin-Basany E (2018) Intraoperative use of ICG fluorescence imaging to reduce the risk of anastomotic leakage in colorectal surgery: a systematic review and meta-analysis. Tech Coloproctol 22(1):15–23. https://doi.org/10.1007/s10151-017-1731-8
Gupta A, Ruijters D, Flexman ML (2021) Augmented reality for interventional procedures. Digital surgery. Springer, Cham
Shuhaiber JH (2004) Augmented reality in surgery. Arch Surg 139(2):170–174. https://doi.org/10.1001/archsurg.139.2.170
Herline AJ, Stefansic JD et al (1999) Image-guided surgery: preliminary feasibility studies of frameless stereotactic liver surgery. Arch Surg 134(6):644–650
Mountney P, Yang G-Z (2010) Motion compensated SLAM for image guided surgery. Med Image Comput Comput Assist Interv 13(Pt 2):496–504. https://doi.org/10.1007/978-3-642-15745-5_61
Bernhardt S, Nicolau SA, Soler L, Doignon C (2017) The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 37:66–90. https://doi.org/10.1016/j.media.2017.01.007
Gmeiner M, Dirnberger J, Fenz W, Gollwitzer M, Wurm G, Trenkler J et al (2018) Virtual cerebral aneurysm clipping with real-time haptic force feedback in neurosurgical education. World Neurosurg 112:e313–e323. https://doi.org/10.1016/j.wneu.2018.01.042
Gsaxner C (2021) Augmented reality in oral and maxillofacial surgery. Computer-aided oral and maxillofacial surgery. Academic Press, Cambridge
Kang X, Azizian M, Wilson E, Wu K, Martin AD, Kane TD et al (2014) Stereoscopic augmented reality for laparoscopic surgery. Surg Endosc 28(7):2227–2235. https://doi.org/10.1007/s00464-014-3433-x
Xin B, Huang X, Wan W, Lv K, Hu Y, Wang J et al (2020) The efficacy of immersive virtual reality surgical simulator training for pedicle screw placement: a randomized double-blind controlled trial. Int Orthop 44(5):927–934. https://doi.org/10.1007/s00264-020-04488-y
Gavaghan KA, Peterhans M, Oliveira-Santos T, Weber S (2011) A portable image overlay projection device for computer-aided open liver surgery. IEEE Trans Biomed Eng 58(6):1855–1864. https://doi.org/10.1109/TBME.2011.2126572
Reitinger B, Bornik A, Beichel R, Schmalstieg D (2006) Liver surgery planning using virtual reality. IEEE Comput Graph Appl 26(6):36–47. https://doi.org/10.1109/mcg.2006.131
Wendler T, Herrmann K, Schnelzer A, Lasser T, Traub J, Kutter O et al (2010) First demonstration of 3-D lymphatic mapping in breast cancer using freehand SPECT. Eur J Nucl Med Mol Imaging 37(8):1452–1461. https://doi.org/10.1007/s00259-010-1430-4
RSIP Vision (2023) RSIP Neph announces a revolutionary intra-op solution for partial nephrectomy surgeries. PR Newswire. https://www.prnewswire.com/news-releases/rsip-neph-announces-a-revolutionary-intra-op-solution-for-partial-nephrectomy-surgeries-301731484.html. Accessed 19 Mar 2023
Shah SK, Nwaiwu CA, Agarwal A, Bajwa KS, Felinski M, Walker PA et al (2021) First-in-human (FIH) safety, feasibility, and usability trial of a laparoscopic imaging device using laser speckle contrast imaging (LSCI) visualizing real-time tissue perfusion and blood flow without fluorophore in colorectal and bariatric patients. J Am Coll Surg 233(5):S45–S46. https://doi.org/10.1016/j.jamcollsurg.2021.07.070
Vávra P, Roman J, Zonča P, Ihnát P, Němec M, Kumar J et al (2017) Recent development of augmented reality in surgery: a review. J Healthc Eng 2017:1–9. https://doi.org/10.1155/2017/4574172
Pratt P, Stoyanov D, Visentini-Scarzanella M, Yang G-Z (2010) Dynamic guidance for robotic surgery using image-constrained biomechanical models. Med Image Comput Comput Assist Interv 13(Pt 1):77–85. https://doi.org/10.1007/978-3-642-15705-9_10
Condino S, Carbone M, Piazza R, Ferrari M, Ferrari V (2020) Perceptual limits of optical see-through visors for augmented reality guidance of manual tasks. IEEE Trans Biomed Eng 67(2):411–419. https://doi.org/10.1109/TBME.2019.2914517
Edwards PJ, Chand M, Birlo M, Stoyanov D (2021) The challenge of augmented reality in surgery. In: Digital surgery. Springer, Cham, pp 121–135
Dario P, Hannaford B, Menciassi A (2003) Smart surgical tools and augmenting devices. IEEE Trans Rob Autom 19(5):782–792. https://doi.org/10.1109/tra.2003.817071
Gaidry AD, Tremblay L, Nakayama D, Ignacio RC Jr (2019) The history of surgical staplers: a combination of Hungarian, Russian, and American innovation. Am Surg 85(6):563–566. https://doi.org/10.1177/000313481908500617
Kim J-S, Park S-H, Kim N-S, Lee IY, Jung HS, Ahn H-M et al (2022) Compression automation of circular stapler for preventing compression injury on gastrointestinal anastomosis. Int J Med Robot 18(3):e2374. https://doi.org/10.1002/rcs.2374
Roy S, Yoo A, Yadalam S, Fegelman EJ, Kalsekar I, Johnston SS (2017) Comparison of economic and clinical outcomes between patients undergoing laparoscopic bariatric surgery with powered versus manual endoscopic surgical staplers. J Med Econ 20(4):423–433. https://doi.org/10.1080/13696998.2017.1296453
Pla-Martí V, Martín-Arévalo J, Moro-Valdezate D, García-Botello S, Mora-Oliver I, Gadea-Mateo R et al (2021) Impact of the novel powered circular stapler on risk of anastomotic leakage in colorectal anastomosis: a propensity score-matched study. Tech Coloproctol 25(3):279–284. https://doi.org/10.1007/s10151-020-02338-y
Pollack E, Johnston S, Petraiuolo WJ, Roy S, Galvain T (2021) Economic analysis of leak complications in anastomoses performed with powered versus manual circular staplers in left-sided colorectal resections: a US-based cost analysis. Clinicoecon Outcomes Res 13:531–540. https://doi.org/10.2147/ceor.s305296
Levy B, Emery L (2003) Randomized trial of suture versus electrosurgical bipolar vessel sealing in vaginal hysterectomy. Obstet Gynecol 102(1):147–151. https://doi.org/10.1016/s0029-7844(03)00405-8
Sutton C, Abbott J (2013) History of power sources in endoscopic surgery. J Minim Invasive Gynecol 20(3):271–278. https://doi.org/10.1016/j.jmig.2013.03.001
Singleton D, Chekan E, Davison M, Mennone J, Hinoul P (2015) Consistency and sealing of advanced bipolar tissue sealers. Med Devices (Auckl). https://doi.org/10.2147/mder.s79642
Eickhoff A, Van Dam J, Jakobs R, Kudis V, Hartmann D, Damian U, Weickert U, Schilling D, Riemann JF (2007) Computer-assisted colonoscopy (the neoguide endoscopy system): results of the first human clinical trial. Am J Gastroenterol 102:261–266
Rothstein DH, Raval MV (2018) Operating room efficiency. Semin Pediatr Surg 27(2):79–85. https://doi.org/10.1053/j.sempedsurg.2018.02.004
Mayer EK, Sevdalis N, Rout S, Caris J, Russ S, Mansell J et al (2016) Surgical checklist implementation project: the impact of variable WHO checklist compliance on risk-adjusted clinical outcomes after national implementation. A longitudinal study. Ann Surg 263(1):58–63. https://doi.org/10.1097/sla.0000000000001185
Russ S, Arora S, Wharton R, Wheelock A, Hull L, Sharma E et al (2013) Measuring safety and efficiency in the operating room: development and validation of a metric for evaluating task execution in the operating room. J Am Coll Surg 216(3):472–481. https://doi.org/10.1016/j.jamcollsurg.2012.12.013
Ayas S, Gordon L, Donmez B, Grantcharov T (2021) The effect of intraoperative distractions on severe technical events in laparoscopic bariatric surgery. Surg Endosc 35(8):4569–4580. https://doi.org/10.1007/s00464-020-07878-w
Jung JJ, Jüni P, Lebovic G, Grantcharov T (2020) First-year analysis of the operating room black box study. Ann Surg 271(1):122–127. https://doi.org/10.1097/sla.0000000000002863
Okamura AM (2016) ICRA lecture 2: kinematics and control of medical robots. In: ICRA 2016 Tutorial on Medical Robotics
Kuo C-H, Dai JS, Dasgupta P (2012) Kinematic design considerations for minimally invasive surgical robots: an overview: kinematic design considerations for MIS robots. Int J Med Robot 8(2):127–145. https://doi.org/10.1002/rcs.453
van Amsterdam B, Clarkson MJ, Stoyanov D (2021) Gesture recognition in robotic surgery: a review. IEEE Trans Biomed Eng 68(6):2021–2035. https://doi.org/10.1109/tbme.2021.3054828
Green CA, Kim EH, O’Sullivan PS, Chern H (2018) Using technological advances to improve surgery curriculum: experience with a mobile application. J Surg Educ 75(4):1087–1095. https://doi.org/10.1016/j.jsurg.2017.12.005
Catchpole K, Perkins C, Bresee C, Solnik MJ, Sherman B, Fritch J et al (2016) Safety, efficiency and learning curves in robotic surgery: a human factors analysis. Surg Endosc 30(9):3749–3761. https://doi.org/10.1007/s00464-015-4671-2
Weigl M, Weber J, Hallett E, Pfandler M, Schlenker B, Becker A et al (2018) Associations of intraoperative flow disruptions and operating room teamwork during robotic-assisted radical prostatectomy. Urology 114:105–113. https://doi.org/10.1016/j.urology.2017.11.060
Law KE, Ray RD, D’Angelo ALD, Cohen ER, DiMarco SM, Linsmeier E et al (2016) Exploring senior residents’ intraoperative error management strategies: a potential measure of performance improvement. J Surg Educ 73(6):e64–e70. https://doi.org/10.1016/j.jsurg.2016.05.016
Ibrahim AM, Varban OA, Dimick JB (2016) Novel uses of video to accelerate the surgical learning curve. J Laparoendosc Adv Surg Tech A 26(4):240–242. https://doi.org/10.1089/lap.2016.0100
Saun TJ, Zuo KJ, Grantcharov TP (2019) Video technologies for recording open surgery: a systematic review. Surg Innov 26(5):599–612. https://doi.org/10.1177/1553350619853099
Abdelsattar JM, Pandian TK, Finnesgard EJ, El Khatib MM, Rowse PG, Buckarma ENH et al (2015) Do you see what I see? How we use video as an adjunct to general surgery resident education. J Surg Educ 72(6):e145–e150. https://doi.org/10.1016/j.jsurg.2015.07.012
Isaacson D, Green C, Topp K, O’Sullivan P, Kim E (2017) Guided laparoscopic video tutorials for medical student instruction in abdominal anatomy. Mededportal 13(1):10559. https://doi.org/10.15766/mep_2374-8265.10559
Dominguez CO, Flach JM, McKellar DP, Dunn M (2002) Using videotaped cases to elicit perceptual expertise in laparoscopic surgery. In: Proceedings third annual symposium on human interaction with complex systems HICS’96. IEEE Computer Society Press
Mazer L, Varban O, Montgomery JR, Awad MM, Schulman A (2022) Video is better: why aren’t we using it? A mixed-methods study of the barriers to routine procedural video recording and case review. Surg Endosc 36(2):1090–1097. https://doi.org/10.1007/s00464-021-08375-4
Green JL, Suresh V, Bittar P, Ledbetter L, Mithani SK, Allori A (2019) The utilization of video technology in surgical education: a systematic review. J Surg Res 235:171–180. https://doi.org/10.1016/j.jss.2018.09.015
Hu Y-Y, Mazer LM, Yule SJ, Arriaga AF, Greenberg CC, Lipsitz SR et al (2017) Complementing operating room teaching with video-based coaching. JAMA Surg 152(4):318–325. https://doi.org/10.1001/jamasurg.2016.4619
Stulberg JJ, Huang R, Kreutzer L, Ban K, Champagne BJ, Steele SR et al (2020) Association between surgeon technical skills and patient outcomes. JAMA Surg 155(10):960–968. https://doi.org/10.1001/jamasurg.2020.3007
Grenda TR, Pradarelli JC, Dimick JB (2016) Using surgical video to improve technique and skill. Ann Surg 264(1):32–33. https://doi.org/10.1097/sla.0000000000001592
Madani A, Vassiliou MC, Watanabe Y, Al-Halabi B, Al-Rowais MS, Deckelbaum DL et al (2017) What are the principles that guide behaviors in the operating room?: Creating a framework to define and measure performance. Ann Surg 265(2):255–267. https://doi.org/10.1097/sla.0000000000001962
Hashimoto DA, Rosman G, Rus D, Meireles OR (2018) Artificial intelligence in surgery: promises and perils. Ann Surg 268(1):70–76. https://doi.org/10.1097/sla.0000000000002693
Ward TM, Mascagni P, Ban Y, Rosman G, Padoy N, Meireles O et al (2021) Computer vision in surgery. Surgery 169(5):1253–1256. https://doi.org/10.1016/j.surg.2020.10.039
Mascagni P, Alapatt D, Sestini L, Altieri MS, Madani A, Watanabe Y et al (2022) Computer vision in surgery: from potential to clinical value. NPJ Digit Med. https://doi.org/10.1038/s41746-022-00707-5
Loftus TJ, Tighe PJ, Filiberto AC, Efron PA, Brakenridge SC, Mohr AM et al (2020) Artificial intelligence and surgical decision-making. JAMA Surg 155(2):148–158. https://doi.org/10.1001/jamasurg.2019.4917
Urban G, Tripathi P, Alkayali T, Mittal M, Jalali F, Karnes W et al (2018) Deep learning localizes and identifies polyps in real time with 96% accuracy in screening colonoscopy. Gastroenterology 155(4):1069-1078.e8. https://doi.org/10.1053/j.gastro.2018.06.037
Madani A, Namazi B, Altieri MS, Hashimoto DA, Rivera AM, Pucher PH et al (2022) Artificial intelligence for intraoperative guidance: using semantic segmentation to identify surgical anatomy during laparoscopic cholecystectomy. Ann Surg 276(2):363–369. https://doi.org/10.1097/sla.0000000000004594
Zhang X, Chen F, Yu T, An J, Huang Z, Liu J et al (2019) Real-time gastric polyp detection using convolutional neural networks. PLoS ONE 14(3):e0214133. https://doi.org/10.1371/journal.pone.0214133
Bellini V, Guzzon M, Bigliardi B, Mordonini M, Filippelli S, Bignami E (2019) Artificial Intelligence: a new tool in operating room management. Role of machine learning models in operating room optimization. J Med Syst 44(1):20. https://doi.org/10.1007/s10916-019-1512-1
Wang P, Liu X, Berzin TM, Glissen Brown JR, Liu P, Zhou C et al (2020) Effect of a deep-learning computer-aided detection system on adenoma detection during colonoscopy (CADe-DB trial): a double-blind randomised study. Lancet Gastroenterol Hepatol 5(4):343–351. https://doi.org/10.1016/S2468-1253(19)30411-X
Mascagni P, Vardazaryan A, Alapatt D, Urade T, Emre T, Fiorillo C et al (2022) Artificial intelligence for surgical safety: automatic assessment of the critical view of safety in laparoscopic cholecystectomy using deep learning. Ann Surg 275(5):955–961. https://doi.org/10.1097/SLA.0000000000004351
Birkmeyer JD, Finks JF, O’reilly A, Oerline M, Carlin AM, Nunn AR et al (2013) Michigan bariatric surgery collaborative. Surgical skill and complication rates after bariatric surgery. N Engl J Med 369:1434–1442
Katz AJ (2016) The role of crowdsourcing in assessing surgical skills. Surg Laparosc Endosc Percutan Tech 26:271–277
Hung AJ, Chen J, Gill IS (2018) Automated performance metrics and machine learning algorithms to measure surgeon performance and anticipate clinical outcomes in robotic surgery. JAMA Surg. https://doi.org/10.1001/jamasurg.2018.1512
Bertsimas D, Dunn J, Velmahos GC, Kaafarani HMA (2018) Surgical risk is not linear: derivation and validation of a novel, user-friendly, and machine-learning-based predictive OpTimal Trees in Emergency Surgery Risk (POTTER) calculator: derivation and validation of a novel, user-friendly, and machine-learning-based predictive OpTimal trees in emergency surgery risk (POTTER) calculator. Ann Surg 268(4):574–583. https://doi.org/10.1097/SLA.0000000000002956
Loftus TJ, Vlaar APJ, Hung AJ, Bihorac A, Dennis BM, Juillard C et al (2022) Executive summary of the artificial intelligence in surgery series. Surgery 171(5):1435–1439. https://doi.org/10.1016/j.surg.2021.10.047
Cheng J-Z, Ni D, Chou Y-H, Qin J, Tiu C-M, Chang Y-C et al (2016) Computer-aided diagnosis with deep learning architecture: applications to breast lesions in US images and pulmonary nodules in CT scans. Sci Rep 6(1):24454. https://doi.org/10.1038/srep24454
Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM et al (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542(7639):115–118. https://doi.org/10.1038/nature21056
Donald R, Howells T, Piper I, Enblad P, Nilsson P, Chambers I et al (2019) Forewarning of hypotensive events using a Bayesian artificial neural network in neurocritical care. J Clin Monit Comput 33(1):39–51. https://doi.org/10.1007/s10877-018-0139-y
CLEW (2021) CLEW Medical receives FDA clearance for AI-based predictive analytics technology to support adult ICU patient assessment. PR Newswire. https://www.prnewswire.com/il/news-releases/clew-medical-receives-fda-clearance-for-ai-based-predictive-analytics-technology-to-support-adult-icu-patient-assessment-301221173.html. Accessed 19 Mar 2023
Wijnberge M, Geerts BF, Hol L, Lemmers N, Mulder MP, Berge P et al (2020) Effect of a machine learning-derived early warning system for intraoperative hypotension vs standard care on depth and duration of intraoperative hypotension during elective noncardiac surgery: the HYPE randomized clinical trial: the HYPE randomized clinical trial. JAMA 323(11):1052–1060. https://doi.org/10.1001/jama.2020.0592
Komorowski M, Celi LA, Badawi O, Gordon AC, Faisal AA (2018) The artificial intelligence clinician learns optimal treatment strategies for sepsis in intensive care. Nat Med 24(11):1716–1720. https://doi.org/10.1038/s41591-018-0213-5
Adler JR Jr, Chang SD, Murphy MJ, Doty J, Geis P, Hancock SL (1997) The cyberknife: a frameless robotic system for radiosurgery. Stereotact Funct Neurosurg 69(1–4 Pt 2):124–128. https://doi.org/10.1159/000099863
Gonzalez GT, Kaur U, Rahman M, Venkatesh V, Sanchez N, Hager G et al (2021) From the dexterous surgical skill to the battlefield—a robotics exploratory study. Robot Explor Study 186:288–294
Murali A, Sen S, Kehoe B, Garg A, McFarland S, Patil S et al (2015) Learning by observation for surgical subtasks: multilateral cutting of 3D viscoelastic and 2D orthotropic tissue phantoms. In: 2015 IEEE international conference on robotics and automation (ICRA). IEEE
Saeidi H, Opfermann JD, Kam M, Wei S, Leonard S, Hsieh MH et al (2022) Autonomous robotic laparoscopic surgery for intestinal anastomosis. Sci Robot 7(62):eabj2908. https://doi.org/10.1126/scirobotics.abj2908
Balch J, Upchurch GR Jr, Bihorac A, Loftus TJ (2021) Bridging the artificial intelligence valley of death in surgical decision-making. Surgery 169(4):746–748. https://doi.org/10.1016/j.surg.2021.01.008
Lifshitz B (2021) Racism is systemic in artificial intelligence systems, too. Georget Secur Stud Rev. https://georgetownsecuritystudiesreview.org/2021/05/06/racism-is-systemic-in-artificial-intelligence-systems-too/. Accessed 19 Mar 2023
Acknowledgements
None.
Funding
This paper received no funding.
Ethics declarations
Disclosures
Yang, Schlachta, Rothenberg, and Reed have no disclosures. Green reports honoraria from Intuitive Surgical for educational events. Hazey reports CME support from Memorial Hospital of Union County, and a patent for an endoluminal gastric restriction device. Madani reports consulting fees from Activ Surgical, and that he is chair of the board for the Global Surgical AI Collaborative. Ponsky reports honoraria and support for travel from MSKCC and Stanford for grand rounds presentations. He received travel support for the AIMED Global Summit 2023. Ali reports participating in the advisory boards of, and owning stock options in, Orchestra Health, OptiSurg, and ClearCam. He has received honoraria and support for travel from AcuityMD. He has received consulting fees from MedTrak, Pristine Surgical, AMBU, and Moon Surgical. Oleynikov reports an NIH grant, honoraria from Medtronic, and stock in Virtual Incision Corp. Szoka reports a research grant from Digbi Health and consulting fees from CSATS for surgical video review services. She is a founder of Endolumik, Inc., in which she owns stock. She holds several patents and is in a licensing agreement with West Virginia University regarding one such patent.
Cite this article
The SAGES Digital Surgery Working Group, Ali JT, Yang G, et al. Defining digital surgery: a SAGES white paper. Surg Endosc 38, 475–487 (2024). https://doi.org/10.1007/s00464-023-10551-7