Digital surgery is the next wave of surgical innovation, enabled by advances made in robotic surgery and preceded by the open and laparoscopic surgical eras. Digital surgery inserts a computer interface between surgeon and patient, and encompasses multiple areas including advanced visualization, enhanced instrumentation, intraoperative data capture, data analytics with artificial intelligence/machine learning, connectivity via telepresence, and robotic surgical platforms.

Along with the growing need for digital surgery have come rapid advances in computational power and internet connectivity, decreasing hardware costs, and growing familiarity with using technology in the surgical setting. The digital surgery paradigm has the potential to improve surgical access, bring transparency to the operating room, disrupt conventional methods of surgical education, and create a global framework for surgical evolution. The fundamental goal of this new technology is to improve the quality of surgical care [1].

Advanced visualization

The current scope of advanced visualization comprises the following areas: three-dimensional (3D) visualization, fluorescence-guided surgery (FGS), and augmented/mediated reality (AR/MR).

Three-dimensional visualization

Three-dimensional visualization in surgery has demonstrated potential benefits to operative planning, procedure performance, surgical skill acquisition, and patient outcomes. Comparisons of 2D versus 3D vision during performance of Fundamentals of Laparoscopic Surgery tasks demonstrate reduced time to task completion and improved ease and efficiency of task performance. Most robotic surgical platforms offer 3D visualization, which allows for improved proficiency with greater speed to task completion and decreased errors [2, 3]. Robotic surgery platforms have either closed or open consoles. In a closed console system, the surgeon is able to fix their head position for viewing, which leads to a standard field of view. In an open console system, the operator cannot fix their head position but instead can move their head freely during the operation; although head motion can contribute to decreased efficiency, open console systems can allow for improved communication with team members since the surgeon is not encumbered behind a closed console.

Fluorescence-guided surgery

Fluorescence-guided surgery (FGS) is an imaging technique that uses a fluorescent dye or a near-infrared-emitting light source in conjunction with a near-infrared camera to identify anatomic structures or evaluate tissue perfusion during surgery [4].

Recent research comparing indocyanine green (ICG) cholangiography to standard cholecystectomy found that ICG use significantly reduced operative time, common bile duct injury, rate of conversion to open operation, hospital length of stay, and mortality [4, 5]. ICG has also been used to visualize and quantify bowel perfusion of colorectal anastomoses, with some studies demonstrating decreased anastomotic leak rates using this technique [6].

Augmented reality/mediated reality

Augmented reality/mediated reality (AR/MR) is a technology that superimposes computer-generated objects onto real images and video in real time. AR lies on a virtuality continuum between the real environment (a direct view of the real world) and virtual reality (immersion in a fully digital environment). Recent definitions of AR/MR platforms require the combination of real and virtual environments to be interactive in real time and registered in 3D [7]. The application of AR/MR to medical imaging data offers a number of advantages. With AR/MR-guided surgery, the proceduralist’s attention is not divided between the navigation display and the patient, resulting in improved hand–eye coordination, accuracy, and time efficiency [8]. In addition, AR/MR platforms enable stereoscopic/3D visualization of volumetric data to improve physician perception and support clinical decision-making.

Following is a discussion of current implementation of AR/MR technology in surgery, as well as a brief description of the key technologies involved in developing AR/MR platforms: medical imaging segmentation and modeling, tracking, registration, and visualization.

Image segmentation

The datasets produced by medical imaging are large, making them challenging to manipulate in real time. Image segmentation is the processing of medical images to isolate regions of interest and generate a model with which to interactively visualize the area. Identifying the relevant anatomy has traditionally been achieved by marking structures either manually or semi-autonomously within each individual image. Although a number of open-source packages are available to facilitate this process, image segmentation remains a limiting factor in the clinical application of AR surgical guidance. Significant research efforts are underway to develop generalizable autonomous image segmentation; however, a proven and clinically accepted method has yet to be established because of similar contrast intensity between neighboring tissues, unclear lesion boundaries, and variation in lesion shape [9].
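To make the idea concrete, the sketch below shows a crude, threshold-based segmentation of a hypothetical CT volume using NumPy and SciPy; the array name, intensity window, and structure of interest are assumptions for illustration only, and clinical segmentation pipelines (manual contouring or learned models) are far more involved.

```python
import numpy as np
from scipy import ndimage

def segment_by_intensity(ct_volume, lower_hu=150, upper_hu=600):
    """Isolate a region of interest from a CT volume by intensity thresholding.

    ct_volume: 3D NumPy array of Hounsfield units (hypothetical input).
    Returns a boolean mask of the largest connected component in the
    chosen intensity window, a crude stand-in for true segmentation.
    """
    # Keep only voxels within the expected intensity window
    mask = (ct_volume >= lower_hu) & (ct_volume <= upper_hu)

    # Close small gaps so the target structure forms one connected region
    mask = ndimage.binary_closing(mask, iterations=2)

    # Label connected components and keep the largest one
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    largest = 1 + int(np.argmax(sizes))
    return labels == largest
```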

Registration

The exact alignment of the virtual image to the real environment is crucial to the clinical application of AR platforms. Image registration is the process of determining the spatial correspondence of two or more image sets. In image-guided surgery, the two image sets are defined as a static and a moving set, and an algorithm is applied to determine the optimal transformation that minimizes the difference between the virtual and real environments. Accurate alignment of both images can be achieved with a coordinated set of trackers used to determine the exact position and orientation of the camera and the patient’s body. Marker-based registration relies on rigid calibration of markers to real objects to allow for precise estimation of the real object as detected by either an external or internal sensor. Marker-free registration exploits the natural features observed by the tracking device within the real environment; an example is the simultaneous localization and mapping (SLAM) technique [10]. Registration can also be performed manually.
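As a minimal illustration of marker-based rigid registration, the sketch below estimates the rotation and translation aligning two sets of corresponding fiducial points using the SVD-based (Kabsch) least-squares solution; the point arrays are hypothetical inputs, and deformable or SLAM-style marker-free registration is considerably more complex.

```python
import numpy as np

def rigid_registration(static_pts, moving_pts):
    """Estimate rotation R and translation t aligning moving_pts to static_pts.

    Both inputs are (N, 3) arrays of corresponding fiducial coordinates
    (hypothetical, e.g., markers located in CT space and in tracker space).
    Uses the SVD-based (Kabsch) solution that minimizes least-squares error.
    """
    static_c = static_pts.mean(axis=0)
    moving_c = moving_pts.mean(axis=0)

    # Cross-covariance of the centered point sets
    H = (moving_pts - moving_c).T @ (static_pts - static_c)
    U, _, Vt = np.linalg.svd(H)

    # Guard against a reflection (determinant of -1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = static_c - R @ moving_c
    return R, t

def registration_error_mm(static_pts, moving_pts, R, t):
    """Root-mean-square residual after registration (in the input units)."""
    aligned = moving_pts @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - static_pts) ** 2, axis=1))))
```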

Registration is complicated when the organ of interest does not behave as expected. This issue is particularly highlighted in general surgery where target anatomy is not rigid and deforms in a dynamic manner (e.g., deformations with heartbeat and respiration) [11].

Current implementation

AR systems are currently best implemented during surgery when there is little to no movement of the real environment and tissue deformation is minimal, as these use cases require less tracking and processing power than applications on mobile organs, where tracking and display are significantly more complicated. Thus, clinical application of AR systems has been successful in the fields of neurosurgery, orthopedics, and otolaryngology, including for bone dissection, clipping of cerebral aneurysms, microvascular decompression, and placement of pedicle screws [12,13,14,15]. AR systems have demonstrated accurate registration (0.2–3 mm), but increased operative time and associated financial costs have been reported with the use of AR systems for intraoperative guidance [13, 16].

AR has been difficult to apply to abdominal surgery due to the amount of organ movement. Successful use cases have been reported for liver and pancreatic surgery, where AR has been used to compare the reconstructed virtual model with intraoperative ultrasound to identify lesions for resection [7, 17]. Superimposing a 3D representation of the patient’s hepatobiliary structures over the surgeon’s field of view has been studied during open liver surgery [16]. Intraoperative AR has also been used to accurately detect sentinel lymph nodes using preoperative SPECT/CT, and RSIP Vision has developed RSIP Neph, an intraoperative AR tool that assists surgeons during partial nephrectomy in accurately locating and resecting intracapsular renal lesions [18, 19].

Limitations and future directions

Advanced visualization can add clinical value by allowing surgeons to see more intraoperatively, which can enable improved surgical decision-making as well as improved patient outcomes. Three-dimensional visualization via robotic surgical platforms and NIR-enabled surgical cameras are tools available in many operating rooms, and their use should continue to increase as more surgeons learn of the benefits of these modalities.

One limitation of FGS studies is that most use non-randomized designs; larger, well-designed trials are needed to further validate FGS. Research is ongoing to identify novel fluorescent biomarkers for clinical use. Furthermore, new assessment platforms, such as laser speckle contrast imaging (LSCI), are being developed to assess perfusion without the need for a fluorophore [20]. One benefit of LSCI is that it enables real-time, reproducible perfusion assessment, in contrast to ICG, where residual intravascular dye can sometimes lead to false-positive perfusion results.
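The core computation behind LSCI is comparatively simple: flowing blood blurs the laser speckle pattern, lowering the local contrast (standard deviation divided by mean) of the raw image. A minimal sketch follows, assuming a single grayscale speckle frame as a NumPy array; commercial systems add calibration, temporal averaging, and conversion of contrast to perfusion units.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast_map(frame, window=7):
    """Compute local speckle contrast K = sigma / mean over a sliding window.

    frame: 2D NumPy array from a raw laser speckle camera (hypothetical input).
    Lower K indicates more motion blur from flowing blood, i.e., higher perfusion.
    """
    frame = frame.astype(np.float64)
    mean = uniform_filter(frame, size=window)
    mean_sq = uniform_filter(frame ** 2, size=window)
    variance = np.clip(mean_sq - mean ** 2, 0.0, None)
    return np.sqrt(variance) / np.maximum(mean, 1e-9)
```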

Although AR/MR application in surgery has shown great promise through individual specialty applications and pre-clinical research, a number of issues limit further clinical adoption of AR/MR systems. Medical imaging continues to advance to provide more detail for preoperative planning, but the amount of data that must be processed to segment image datasets is large, and current segmentation methods are time-intensive and potentially expensive. Further research is needed to identify generalizable segmentation methods [21].

Additional research is required to understand the challenges surgeons may encounter in utilizing augmented information. Head-mounted displays may unintentionally obscure vision of the surgical field, create visual clutter from the virtual model overlay, or may distract surgeons from the procedure [2, 22]. Poor ergonomics with head-mounted displays may lead to fatigue and virtual overlays may cause simulator sickness exhibited by nausea, headache, and vertigo [23, 24].

Enhanced instrumentation

The growth and adoption of robotic surgery is inextricably tied to the evolution of surgical instrumentation. In open surgery the surgeon directly “drives” traditional surgical tools which interact with patient tissues, and all information received is direct information. In minimally invasive surgery, the direct links between surgeon and tissues are mediated by laparoscopic or robotic instruments and a video display [25]. The most current enhanced instrumentation consists of devices equipped with one or more of the following elements: a power system, sensors, and automation, as well as safeguards to ensure consistent performance. The main areas of enhanced instrumentation include intelligent staplers and energy devices, as well as robotized devices.

Recent advances in stapling technology include powered staplers with automated firing mechanisms and tissue compression sensing that provide well-formed, reliable staple lines [26, 27]. The benefit of this technology is observed in gastrointestinal operations. Johnson et al. compared outcomes between powered and manual staplers in bariatric surgery and demonstrated significantly lower hospital costs and bleeding rates in the powered stapler group. Despite higher device cost, several studies show the benefits of powered stapler use in left-sided colorectal anastomosis, with decreased leak rates, bleeding, ileus, 30-day inpatient readmission, hospital length of stay, and overall healthcare costs [28,29,30].

Advanced bipolar and ultrasonic energy devices are routinely used in the operating room. These systems incorporate software algorithms that measure changes in tissue impedance and adjust energy output to deliver the appropriate amount of energy for the desired tissue effect, with advantages including limited risk of thermal injury, elimination of dispersive electrodes, enhanced sealing capability, and decreased blood loss and procedure time [31,32,33].

Roboticized devices are surgical devices that integrate robotic technology with a hardware component. Examples include the HandX™, a powered laparoscopic device for grasping, ligation, and suturing, and the NeoGuide endoscope, which uses a computer algorithm to help optimize scope movement through the bowel [34].

Limitations and future directions

The past two decades have seen incredible growth in precision surgical tools enhanced with a degree of embedded intelligence and autonomy, whose consistent performance has added clinical value by increasing surgeon operative capability and improving surgical outcomes. Regarding implementation, enhanced energy and stapling devices are found in most modern operating rooms; by comparison, roboticized devices are less commonly seen. With the explosion of robotic surgical technology, this category of devices should grow in the next decade. One barrier to implementation is the cost of these newer devices; thus, data are needed to demonstrate that their added benefits offset their cost.

Some areas of future development include a cordless operating room (OR), wireless data transmission for laparoscopic devices, and handheld robotic tools.

Data capture

There is an abundance of data in the operative environment that is being captured and utilized. These data can be broadly categorized as data from the operating room (e.g., OR personnel, system-level processes, and quality metrics), data from operative equipment (e.g., kinematics data from robotic platforms), and data from the surgical field (e.g., the video stream from image-guided platforms). In addition, there is a growing number of integrated platforms that facilitate and streamline data collection and data exchange, so-called “Integrated Operating Rooms.”

Data from the operating room

Various tools to measure OR efficiency exist, including surgical checklists and the Metric for Evaluating Task Execution in the Operating Room (METEOR), all of which involve data collection, analysis, evaluation with iterative correction, and dissemination to staff and institution [35,36,37].

The airline industry uses a “black box” recording device to track large amounts of flight data for both real-time and future analysis, which has led to quality improvements and unprecedented safety for passengers. Commercially available platforms such as the OR Black Box (Toronto, ON, Canada) allow for the capture of audio, visual, and other data on various elements of a procedure, both in the operative field and in the operating room itself (e.g., tracking team members), and use artificial intelligence/machine learning to help teams improve quality and efficiency. These systems can identify and remove personal information of patients and providers while retaining clinically relevant data, and can potentially identify intraoperative errors, events, and distractions [38, 39].

Data from operative equipment

Kinematics is the study of the motion of mechanical points, bodies, and systems. In surgical robots, kinematics is used to determine tool and joint positions with respect to operator control and patient anatomy [40, 41]. Surgical robots can capture quantitative instrument motion trajectories during surgery, enabling analysis of surgical activity that is not possible with traditional instrumentation [42]. Surgeon kinematics can give insight into operative efficiency as well as guide eventual autonomous robots. Additional operative instruments that supply data include the advanced energy devices and powered staplers discussed in the previous section.
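As a toy illustration of what kinematic data encode (not any vendor's actual model), the sketch below computes the tip position of a simple two-joint planar instrument from logged joint angles, the kind of conversion that precedes downstream motion analysis.

```python
import numpy as np

def planar_tool_tip(theta1, theta2, l1=0.30, l2=0.25):
    """Forward kinematics of a two-link planar arm, a toy stand-in for a
    surgical instrument. Angles in radians, link lengths in meters.
    Returns the (x, y) position of the instrument tip."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

# A logged trajectory of joint angles (hypothetical) can then be converted
# to a trajectory of tip positions for motion analysis.
joint_angles = np.linspace(0.0, np.pi / 4, 100)
tip_positions = np.array([planar_tool_tip(a, a / 2) for a in joint_angles])
```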

Surgical video data

As digital technology continues to advance, so have opportunities for recording and disseminating surgical videos. Surgical video recordings capture many different aspects of the operative process and serve a variety of purposes including training, coaching, research, assessment, and quality improvement [43,44,45,46,47,48]. With the advent of 4K and 8K video resolution, the amount of surgical video data is increasing exponentially, and there are dozens of commercially available solutions for secure cloud storage of video data, which enable upload of surgical videos without the need for USB drives, DVDs, or encrypted drives.
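A rough back-of-envelope calculation, using assumed compressed bitrates rather than measured figures, illustrates why storage demands grow so quickly with resolution:

```python
# Back-of-envelope storage estimate for recorded cases (assumed bitrates).
HOURS_PER_CASE = 2
SECONDS = HOURS_PER_CASE * 3600

# Typical compressed stream bitrates in megabits per second (assumption)
bitrate_mbps = {"1080p": 8, "4K": 25, "8K": 100}

for resolution, mbps in bitrate_mbps.items():
    gigabytes = mbps * 1e6 * SECONDS / 8 / 1e9
    print(f"{resolution}: ~{gigabytes:.0f} GB per {HOURS_PER_CASE}-hour case")

# At ~23 GB per 4K case, a service recording 1,000 cases per year would
# need on the order of 20+ TB of storage annually.
```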

The use of video to improve performance has been documented in many disciplines, and numerous manuscripts describe the advantages of surgical video recording [43, 47,48,49,50,51,52,53,54]. Videos are used by medical trainees to learn about surgical anatomy, procedures, and technical details, and to prepare for cases. Video-based curricula have been shown to increase knowledge, improve technical performance, and shorten learning curves. Retrospective review of surgeons’ videos augmented with expert critique has shown improved patient outcomes [55, 56]. Surgical videos have even made their way to social media platforms in efforts to solicit peer feedback or advice on operative approaches. As the advantages of intraoperative surgical video recording continue to emerge, surgeons performing open operations have adapted technologies to capture open cases. Saun et al. identified 176 clinical applications for open video recording and 125 different types of cameras used to record open intraoperative cases [49].

The integrated operating room

Integrated or Digital Operating Rooms are installations that functionally connect the OR environment and enable various equipment sources to communicate with one another; these can include high-resolution video displays, video routing systems, touch-screen control, digital information archiving, as well as a central hub that connects multiple ORs to one another and the outside world. In addition to simplifying workflows for the surgical team and improving safety, the Integrated OR enables live consultation with medical teams (e.g., pathology), real-time collaboration with virtual surgeons, data exchange with the electronic medical record system, and live feeds for training or teaching purposes.

Limitations and future directions

The benefit of operative data capture is that it can be used to improve OR efficiency, which has implications for cost savings, quality improvement, patient satisfaction, and medical team morale [35]. Decades of literature on surgical performance suggest that multiple factors drive positive operative outcomes, especially advanced cognitive skills (e.g., situation awareness, judgment, decision-making), interpersonal skills (e.g., communication, teamwork), personal resourcefulness, and other human factors [57].

Of the types of intraoperative data available, the most widely implemented is surgical video capture. Despite improved accessibility and the abundance of literature supporting the benefits of intraoperative video capture, there are persistent and emerging challenges surrounding this process. These range from ownership of the media and individual patient privacy concerns to hospital-related legal issues. With the exchange of video files, questions have surfaced regarding ownership of the individual media file, and ongoing debate surrounds the rights/responsibilities of the patient, hospital system and operating surgeon. These same questions of ownership apply to other data elements as well.

Some challenges to implementation of an Integrated OR include the reliability and speed of the hospital network, as well as discrepancies between the operating room system and the display technology at an existing remote site. To enable quality communication within this system, the Integrated OR, data network, and remote sites must act as a whole; thus, network configuration, bandwidth availability, reliability, and information security must be addressed to ensure compatibility. Adoption of efficiency and quality systems is limited by lack of widespread awareness of their utility, as well as cost constraints. Kinematic data captured by surgical robots are stored by the individual robot companies and thus are not readily accessible for academic research.

Going forward, developing standards defining the appropriate ethical use of perioperative data is necessary. This is a good opportunity for surgeons to advocate for transparency and improved mechanisms for sharing surgical data, as well as help shape policy surrounding data-sharing between healthcare systems, industry, and clinicians.

Data analytics

Data analytics has changed the landscape of everyday life by merging three trends: faster and smaller computer processors, proven statistical methods, and large data sets. The convergence of these factors has changed many facets of our everyday lives and is now changing healthcare.

Artificial intelligence

One area of analytics that has gained prominence is artificial intelligence (AI), a loosely defined field that seeks to design systems that mimic human thought and behavior. Among AI surgical applications, most involve machine learning (ML), wherein a machine learns and makes predictions by recognizing patterns; this is useful for identifying subtle patterns in large datasets. ML allows a computer to use partial data labeling (supervised learning) or the structure detected in the data (unsupervised learning) to make predictions without specific instructions [58].

Artificial neural networks (ANNs), a subfield of ML, process signals in layers of simple computational units (nodes). Unlike regression, ANNs are well suited to managing multidimensional, covariable data. The application of one or more multilayer ANNs to create a system capable of autonomously or semi-autonomously executing tasks is known as “deep learning.”
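The sketch below is a minimal, hypothetical example of the supervised workflow described above: a small multilayer neural network from scikit-learn trained on synthetic tabular features. Real surgical models are trained on far larger, curated clinical datasets.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data: rows are patients, columns are preoperative
# features (all values are made up purely for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
# The label depends on a nonlinear combination of features, which a linear
# model would struggle to capture but a small ANN can learn.
y = ((X[:, 0] * X[:, 1] + np.sin(X[:, 2])) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small multilayer network: layers of simple nodes trained on labeled
# examples, i.e., supervised learning.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```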

One exciting application of ML is computer vision, loosely defined as machines understanding and interpreting pixelated data (i.e., images and videos); it is a rapidly growing area within healthcare. Currently there are several applications for computer-aided diagnosis, mostly in fields outside of surgery (e.g., radiology, pathology) [55,56,57,58,59,60].

AI has broad potential applications in clinical practice, quality improvement, research, and industry. In surgery, its applications are broad and include real-time decision support, surgical education, risk prediction, processes and resource management, and autonomous surgery.

Surgical decision-making and operative resource management

In multiple surgical subspecialties, AI has potential to predict outcomes, prevent complications and missed diagnoses, lessen the cognitive load of busy physicians, and allow for more informed discussions between physician and patient [61]. AI applications in image recognition may also have far-reaching implications by aiding radiologists and surgeons in detecting or evaluating disease. AI image recognition systems can be applied to video to enhance endoscopic cancer screening, a technique that has been shown to improve adenoma detection during colonoscopy [62]. Computer vision has also been applied intraoperatively. Ample evidence suggests that most surgical adverse events have root causes that occur at the time of surgery and are often due to preventable errors in judgment and decision-making. One potential way to address this is to use AI to augment surgeons’ mental models at the time of surgery. Most proof-of-concept algorithms have been developed in the context of laparoscopic cholecystectomy, such as GoNoGoNet for identifying safe and dangerous areas of dissection to avoid major bile duct injuries during hepatocystic triangle dissection, and CVSNet for confirming whether or not a critical view of safety has been achieved [63, 64]. With respect to patient flow and operative resource management, ML algorithms have been used to estimate case length, coordinate between the OR and post-anesthesia care unit, and predict case cancelation [65].

Skills assessment

Surgical skill, as rated by one’s peers, correlates with surgical outcomes [66]. Such ratings are predictive even when the raters are not other surgeons; skill assessments can be crowdsourced to trained laypeople with similar results [67]. The realm of automated skill assessment investigates whether AI systems can make similar, automated assessments of surgeon skill. Such a system might play an important role in resident training and evaluation, as well as in mentoring or remediating surgeons in practice. So far, the AI skill assessment systems that have seen good results in trials combine automated kinematic metrics with human observer-generated skill metrics, both fed into an ANN [68]. Truly automated skill assessment remains a challenge, particularly because of the difficulty of building an AI that can evaluate skills such as needle handling and respect for tissue. Though difficult, such AI-generated assessments of technical skill have achieved good concordance with human graders for certain surgical tasks, though challenges remain with regard to consistency, generalizability, and generating useful feedback for surgeons [69, 70].
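A hypothetical sketch of this general approach follows: simple hand-crafted kinematic metrics (path length, completion time, smoothness) are computed from tool-tip trajectories and fed to a classifier. This mirrors the concept, not any published system's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def kinematic_features(tip_positions, dt=0.01):
    """Simple motion metrics from an (N, 3) tool-tip trajectory sampled every dt seconds.

    Returns [path length, completion time, mean squared jerk], common
    hand-crafted proxies for economy and smoothness of motion.
    """
    steps = np.diff(tip_positions, axis=0)
    path_length = np.sum(np.linalg.norm(steps, axis=1))
    completion_time = dt * (len(tip_positions) - 1)
    velocity = steps / dt
    jerk = np.diff(velocity, n=2, axis=0) / dt ** 2  # third derivative of position
    mean_sq_jerk = float(np.mean(np.sum(jerk ** 2, axis=1)))
    return np.array([path_length, completion_time, mean_sq_jerk])

def train_skill_classifier(trajectories, labels):
    """trajectories: list of (N, 3) arrays (hypothetical, e.g., exported from a
    robotic platform); labels: 1 = expert, 0 = novice."""
    X = np.vstack([kinematic_features(t) for t in trajectories])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```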

Patient care

Using the extensive American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database, a machine learning predictive algorithm was created to continuously improve risk prediction [71]. ANNs have shown promise in detecting cancers of the lung, breast, and skin, and in predicting outcomes for complex surgical patients [72,73,74]. In critical care, AI-driven tools can be used to predict clinical decompensation hours before it occurs, while experimental systems have shown promise in autonomously determining how to treat patients once they deteriorate [75, 76]. An ML-based hypotension prediction algorithm was shown to reduce time spent in a hypotensive state compared with standard care in a single-center randomized trial [77]. Such AI systems might be used as decision-support aids, lessening cognitive load for physicians [78].
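As a toy illustration of the kind of early-warning logic such decision-support tools embody (and emphatically not the validated algorithm cited above), the sketch below extrapolates a recent mean arterial pressure trend and flags a projected drop below a threshold.

```python
import numpy as np

def hypotension_warning(map_values, sample_interval_s=20,
                        horizon_min=10, threshold_mmhg=65):
    """Flag impending hypotension by linear extrapolation of recent MAP readings.

    map_values: recent mean arterial pressure readings in mmHg, oldest first.
    Returns True if the fitted trend crosses the threshold within the horizon.
    This is a toy heuristic for illustration, not a validated prediction model.
    """
    t = np.arange(len(map_values)) * sample_interval_s
    slope, intercept = np.polyfit(t, map_values, deg=1)
    t_future = t[-1] + horizon_min * 60
    predicted = slope * t_future + intercept
    return bool(predicted < threshold_mmhg)

# Example: MAP drifting downward over the last few minutes (made-up values)
recent_map = [78, 77, 75, 74, 72, 71, 70, 69]
print(hypotension_warning(recent_map))  # True if the trend projects below 65 mmHg
```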

Automated surgery

Surgery requires a number of basic proficiencies: perception (the ability to discern tissues and planes); intelligence (the ability to decide how to manipulate the tissues); and dexterity (the ability to manipulate the tissues themselves). Several autonomous or semi-autonomous surgical systems already exist that display all or some of these abilities, such as the CyberKnife system for stereotactic body radiotherapy [79]. For the gut and soft tissues, pliability, distensibility, and discernibility limit the ability of AIs to autonomously perform tasks. Current robotic platforms, like the da Vinci Surgical System, act as surgical “assistants” by stabilizing instruments and interpreting movements, rather than performing the movements themselves. However, there is hope that autonomous surgery might make its way to the gastrointestinal realm. In the lab, autonomous surgical AIs have been able to perform basic tasks like peg transfer and cutting simulated 2D and 3D tissues [80, 81]. Likely the most feature-complete AI system for automated soft tissue surgery is the Smart Tissue Anastomosis Robot (STAR), an automated suturing robot that has successfully completed laparoscopic bowel anastomoses in porcine models [82]. Though impressive, this AI system still requires an experienced surgeon assistant to prepare and align the bowel for anastomosis and to manage suture.

Limitations and future directions

While AI systems have shown promise in simulations and in the laboratory setting, no AI system has yet been adopted into widespread clinical practice. Before adoption, AIs must be demonstrated to be trustworthy; must address pertinent, impactful surgical issues; and must be practical and cost-effective [83]. Furthermore, despite the many AI algorithms reported in the literature, most of these models are not generalizable to real-world data, and there is a lack of infrastructure and data pipelines to make them available for real-time support. Because AI systems are “trained” on data drawn from actual patients in real-world healthcare systems, they will reflect the systemic biases of the systems on which they were trained; this can disadvantage racial minorities and women, who are underrepresented in patient registries and clinical trial populations [84,85,86]. Surgeons must ensure that clinical AIs are designed to avoid systemic bias, rather than perpetuate it.

Another area of AI which requires further definition is how such services will be billed and reimbursed when used in a medical setting. Recently the American Medical Association released CPT® Appendix S: Artificial Intelligence Taxonomy for Medical Services and Procedures, which provides guidance for classifying various artificial intelligence (AI) applications (e.g., expert systems, machine learning, algorithm-based services) for medical services and procedures into one of three categories: assistive, augmentative, and autonomous [58].

Finally, as AI-enhanced surgery becomes more prevalent, surgeons need to become familiar with the underpinnings of this technology to use it effectively and participate in its development.

Connectivity

The Digital Surgery element of Connectivity comprises novel methods of connecting surgeon to surgeon, such as telementoring in both educational and clinical spaces, as well as connecting surgeon to patient via telesurgery.

Several factors converge to create a need for increased access to surgical care and education beyond the status quo of in-person, unenhanced surgery. The most obvious are the restrictions posed by the COVID-19 pandemic. As in-person attendance became difficult or impossible, virtual access became a necessity. From the perspective of trainees, this challenge affected everyone from students considering a career in medicine to surgical residents and fellows. Practicing surgeons were also not immune; those looking to expand their surgical capabilities were cut off from training courses or from having proctors visit their facilities. Many sites had to halt elective operations altogether due to hospital capacity or equipment limitations (Fig. 1).

Fig. 1 The elements of digital surgery include advanced visualization, enhanced instrumentation, robotic surgical platforms, connectivity, data capture, and data analytics

Telementoring for education

Even before COVID-19, there was a growing mismatch between surgical learners and learning opportunities, with the number of students and trainees outpacing the capacity of hospitals to provide sufficient access to operative training experiences. This has been further compounded by decreasing autonomy and work-hour restrictions.

With current surgical training program volumes and population growth rates, there is an estimated deficit of almost 30,000 surgeons by the year 2030 [87]. It is further estimated that training enough surgeons to address this need would cost an additional $10 billion. This clearly presents a significant challenge, but also an opportunity to reassess training paradigms.

The SAGES Project 6 working group determined that basic technical requirements for telementoring fall into the five key areas of (1) safety, (2) reliability, (3) transmission quality, (4) ease of use, and (5) cost [88]. In addition, key elements of a digital surgery training experience were broken down into recording video for later review (“video coaching”), advanced analysis of surgical video, and telestration [89].

Telementoring in the setting of trainees differs from that of practicing surgeons due to a more variable skill set, the presence of a supervising attending, the need to meet specific educational requirements, and work-hour limitations, among other factors.

A telementoring program for education should follow evidence-based practices in forming the curriculum, evaluating the performance of the mentor, and assessing the improvement in the mentee. Augestad et al. have nicely described components of both a “train the trainer” program and mentee development, again as part of the SAGES Project 6 initiative. They also review methods of providing structured feedback, some of which are summarized in Fig. 2 [90, 91].

Fig. 2 Overview of frameworks when integrating telementoring into post-graduate training. (1) Objective Structured Assessment of Technical Skills. (2) Global Operative Assessment of Laparoscopic Skills. (3) Global Evaluative Assessment of Robotic Skills. (4) Structured Training Trainer Assessment Report

Telementoring for clinical practice

Surgical telementoring for clinical practice has been used successfully for over 20 years in various forms. In 1998, Rosser et al. applied teleproctoring to guide a safe laparoscopic cholecystectomy for patients in rural Ecuador [92]. In 2020, amidst the COVID-19 pandemic, telementoring using real-time bidirectional audiovisual communication with digital transmission of live videos and direct observation of the operative field by a remote proctor enabled a valve-in-valve transcatheter mitral valve replacement for an 82-year-old patient [93]. Other excellent clinical outcomes of surgical telementoring have been reported in the literature [94,95,96,97].

Two fundamental applications of surgical telementoring in clinical practice are skill acquisition and virtual intraoperative consultation. An example of skill acquisition is a remote mentor surgeon proctoring the mentee through an entire operation. This application of planned surgical telementoring is a valuable resource for performing new or complex procedures, allowing surgeons to undertake new operations with an experienced proctor virtually present. It also adds value to the local hospital, which is able to offer more complex procedures, while decreasing risk compared with a scenario in which a local surgeon performs a new or complex procedure without assistance.

An example of the latter application is the “Virtual Surgical Assist,” in which a surgeon obtains real-time intraoperative consultation during a challenging operation, giving the local surgeon virtual assistance when physical consultation is not possible or would cause a delay. This could be used after hours, in cases where an unexpected finding occurs (e.g., a mass or abnormal anatomy), or in practice settings where surgeons with the required expertise cannot be physically available. This application also brings advanced surgical expertise to the local site and can add value by decreasing the need for transfers to a tertiary care facility.

Currently, skill acquisition applications are active and growing, while the Virtual Surgical Assist is used infrequently, largely due to systemic limitations to implementation [98].

Telesurgery

Telesurgery is defined as remote surgery in which the surgeon is not at the immediate site of the patient. Visualization and manipulation of the tissues and equipment are performed using teleoperation [99].

The first telesurgery was a transatlantic robotic cholecystectomy performed in 2001 by Dr. Jacques Marescaux, with the surgical team located in New York and the patient in Strasbourg, France [100, 101]. In 2003, two Canadian hospitals established the first telerobotic surgical service, completing 21 telerobotic surgeries without serious complications and with no conversions to open operations [102]. The remainder of the current experience in telesurgery has used inanimate models [103, 104].

Limitations and future directions

To our knowledge, there has not been a study of a residency or fellowship with digital surgical modalities formally integrated into its curriculum. Initial experience with this integration will lead to important lessons learned and refinements. It will also be an opportunity to compare traditional models with integrated models in terms of efficacy, efficiency, and practicality. Future directions include guidance on how to integrate telementoring in a stepwise fashion into the curriculum and how to provide training programs with a practical way to integrate the necessary technology.

To grow into its full potential, surgical telementoring requires clarity around legal concerns, licensing/credentialing guidelines, and coding/reimbursement models. A sustainable business model is needed that accounts for costs and potential payors in order to develop a viable reimbursement system. Costs include technology, the mentor’s time, mentor training, curriculum development, and legal fees, including malpractice coverage and indemnification. Potential payers include the recipient hospital, the expert hospital, government healthcare providers (Medicare/Medicaid), insurance companies, and industry. Legal barriers inherent to telementoring include litigation risks to the mentor, mentee, and respective hospitals; the need for informed patient consent; and a better-defined licensing and credentialing process.

The mentor–mentee relationship also impacts logistical considerations. Potential scenarios include use within a regional hospital system, a national hospital system, or an unaffiliated relationship. The first is the most likely scenario to emerge, given that the surgeons will be employed within the same hospital system and state; therefore, liability, licensing, and credentialing will be less of a challenge.

Robotic surgical platforms

Since the late 1990s, minimally invasive robotic-assisted surgery (RAS) has become an avenue for integrating current technological advancements into traditional minimally invasive surgery. In the past decade, the number of robotic surgical and endoscopic platforms has rapidly increased [105]. A study examining the use of robotic surgery in Michigan describes an increase in the use of robotic surgery for general surgery procedures from 1.8% in 2012 to 15.1% in 2018, with a concurrent decrease in laparoscopic surgery [106].

Within the surgical arena there are three main types of robotic systems: active systems, which work autonomously to complete pre-programmed tasks; semi-active systems, which allow a surgeon-driven element to complement the pre-programmed element; and systems that are entirely dependent on surgeon activity.

In the U.S., the three FDA-approved robotic platforms for general surgery are the da Vinci Xi and Single Port systems from Intuitive, and Senhance from Asensus Surgical. Two additional platforms, pending FDA approval but currently in use in Europe, are Hugo by Medtronic and Versius by CMR Surgical. As the market continues to diversify, novel modalities with decreased costs and smaller size are emerging [107]. There are over 20 robotic surgical platforms under development, including Ottava by Johnson & Johnson, the SPORT™ Surgical System by Titan Medical, MicroSurge by the DLR Institute of Robotics and Mechatronics, Beta 2 by Vicarious Surgical, and MIRA by Virtual Incision.

Within this space, there has been significant growth in endoluminal robotics, both for laparoendoscopic single-site surgery (LESS) and natural orifice translumenal endoscopic surgery (NOTES). These approaches help minimize collateral tissue damage. Examples of such systems include the NeoGuide endoscopy system, a computer-aided colonoscope that uses real-time 3D computerized mapping to travel along the natural curves of the colon; the Flex robotic system from Medrobotics Corp, a joystick-controlled single-port platform used in surgeries of the oropharynx, hypopharynx, and larynx; and STRAS from iCUBE, a flexible endoscopic system developed for single-port intraluminal surgery.

Limitations and future directions

Although the current robotic surgical landscape is dominated by a few monolithic systems, ongoing growth in the number of FDA-approved robotic platforms for general surgery and endoscopy should lead to decreased costs as well as increased adoption and innovation. Limitations of this technology include concerns that it contributes to an escalating cost of care with limited evidence supporting superior clinical benefits relative to standard MIS techniques.

While all currently FDA-approved robotic systems are essentially robotically assisted telemanipulation devices, the data generated and the augmented reality possible using these platforms are beginning to take shape. Several groups are working diligently to create haptic feedback to allow the surgeon to regain some of the feeling lost in a purely robotic procedure. This is not easy to accomplish, as it requires active sensors on the end effectors. The Senhance robot provides moderate haptic feedback by gauging the resistance of motors, joint angles, and overall robot positioning. Abiri and colleagues recently published early work with experimental force sensors installed on a da Vinci fenestrated bipolar grasper that allow the user to palpate soft and hard objects hidden from view [108]. The ability to physically experience touch while working remotely with a robotic system could significantly alter the user experience and potentially benefit patients and surgical teams.

Furthermore, research into autonomous surgery continues to progress, with recent advances in autonomous robotic suturing and intestinal anastomosis [109]. In a recent article by Saeidi et al., autonomous anastomosis with soft tissue tracking was demonstrated in an in vivo model; the authors noted better consistency and spacing of the sutures placed, suggesting the possibility of improved outcomes if such a system were available in current commercial robotics [82]. These capabilities could revolutionize how robotic surgery is used in surgical practice.

Conclusion

Digital surgery is a nascent field that has great potential to bring value to all levels of the healthcare system. It encompasses a range of subjects including advanced visualization, enhanced instrumentation, data capture, data analytics, connectivity, and robotic surgical platforms. These technologies are on the cusp of a revolution that will impact the surgical field. Surgeons have a uniquely important role to play in guiding these advancements and collaborating with other stakeholders to ensure maximum safety and quality of this evolving paradigm.