Abstract
The growth of medical robotics since the mid-1980s has been striking. From a few initial efforts in stereotactic brain surgery, orthopaedics, endoscopic surgery, microsurgery, and other areas, the field has expanded to include commercially marketed, clinically deployed systems, and a robust and exponentially expanding research community. This chapter will discuss some major themes and illustrate them with examples from current and past research. Further reading providing a more comprehensive review of this rapidly expanding field is suggested in Sect. 63.4.
Medical robots may be classified in many ways: by manipulator design (e. g., kinematics, actuation); by level of autonomy (e. g., preprogrammed versus teleoperation versus constrained cooperative control), by targeted anatomy or technique (e. g., cardiac, intravascular, percutaneous, laparoscopic, microsurgical); or intended operating environment (e. g., in-scanner, conventional operating room). In this chapter, we have chosen to focus on the role of medical robots within the context of larger computer-integrated systems including presurgical planning, intraoperative execution, and postoperative assessment and follow-up.
First, we introduce basic concepts of computer-integrated surgery, discuss critical factors affecting the eventual deployment and acceptance of medical robots, and introduce the basic system paradigms of surgical computer-assisted planning, execution, monitoring, and assessment (surgical CAD/CAM) and surgical assistance. In subsequent sections, we provide an overview of the technology of medical robot systems and discuss examples of our basic system paradigms, with brief additional discussion of remote telesurgery and robotic surgical simulators. We conclude with some thoughts on future research directions and provide suggested further reading.
1 Core Concepts
A fundamental property of robotic systems is their ability to couple complex information to physical action in order to perform a useful task. This ability to replace, supplement, or transcend human performance has had a profound influence on many fields of our society, including industrial production, exploration, quality control, and laboratory processes. Although robots have often been first introduced to automate or improve discrete processes such as welding or test probe placement or to provide access to environments where humans cannot safely go, their greater long-term impact has often come indirectly as essential enablers of computer integration of entire production or service processes.
1.1 Medical Robotics, Computer-Integrated Surgery, and Closed-Loop Interventions
Medical robots have a similar potential to fundamentally change surgery and interventional medicine as part of a broader, information-intensive environment that exploits the complementary strengths of humans and computer-based technology. The robots may be thought of as information-driven surgical tools that enable human surgeons to treat individual patients with greater safety, improved efficacy, and reduced morbidity than would otherwise be possible. Further, the consistency and information infrastructure associated with medical robotic and computer-assisted surgery systems have the potential to make computer-integrated surgery as important to health care as computer-integrated manufacturing is to industrial production.
Figure 63.1 illustrates this view of computer-integrated surgery (CIS). The process starts with information about the patient, which can include medical images (computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), etc.), lab test results, and other information. This patient-specific information is combined with statistical information about human anatomy, physiology, and disease to produce a comprehensive computer representation of the patient, which can then be used to produce an optimized interventional plan. In the operating room, the preoperative patient model and plan must be registered to the actual patient. Typically, this is done by identifying corresponding landmarks or structures on the preoperative model and the patient, either by means of additional imaging (x-ray, ultrasound, video), by the use of a tracked pointing device, or by the robot itself. If the patient's anatomy has changed, then the model and plan are updated appropriately, and the planned procedure is carried out with the assistance of the robot. As the intervention continues, additional imaging or other sensing is used to monitor the progress of the procedure, to update the patient model, and to verify that the planned procedure has been successfully executed. After the procedure is complete, further imaging, modeling, and computer-assisted assessment are performed for patient follow-up and to plan subsequent interventions, if any should be required. Further, all the patient-specific data generated during the planning, execution, and follow-up phases can be retained. These data can subsequently be analyzed statistically to improve the rules and methods used to plan future procedures.
1.2 Factors Affecting the Acceptance of Medical Robots
Medical robotics is ultimately an application-driven research field. Although the development of medical robotic systems requires significant innovation and can lead to very real, fundamental advances in technology, medical robots must provide measurable and significant advantages if they are to be widely accepted and deployed. The situation is complicated by the fact that these advantages are often difficult to measure, can take an extended period to assess, and may be of varying importance to different groups. Table 63.1 lists some of the more important factors that researchers contemplating the development of a new medical robot system should consider in assessing their proposed approach.
Broadly, the advantages offered by medical robots may be grouped into three areas. The first is the potential of a medical robot to significantly improve surgeons’ technical capability to perform procedures by exploiting the complementary strengths of humans and robots summarized in Table 63.2. Medical robots can be constructed to be more precise and geometrically accurate than an unaided human. They can operate in hostile radiological environments and can provide great dexterity for minimally invasive procedures inside the patient’s body. These capabilities can both enhance the ability of an average surgeon to perform procedures that only a few exceptionally gifted surgeons can perform unassisted and can also make it possible to perform interventions that would otherwise be completely infeasible.
A second, closely related capability is the potential of medical robots to promote surgical safety both by improving a surgeon’s technical performance and by means of active assists such as no-fly zones or virtual fixtures (Sect. 63.2.3) to prevent surgical instruments from causing unintentional damage to delicate structures. Furthermore, the integration of medical robots within the information infrastructure of a larger CIS system can provide the surgeon with significantly improved monitoring and online decision supports, thus further improving safety.
A third advantage is the inherent ability of medical robots and CIS systems to promote consistency while capturing detailed online information for every procedure. Consistent execution (e. g., in spacing and tensioning of sutures or in placing of components in joint reconstructions) is itself an important quality factor. If saved and routinely analyzed, the flight data recorder information inherently available with a medical robot can be used both in morbidity and mortality assessments of serious surgical incidents and, potentially, in statistical analyses examining many cases to develop better surgical plans. Furthermore, such data can provide valuable input for surgical simulators, as well as a database for developing skill assessment and certification tools for surgeons.
1.3 Medical Robotics System Paradigms: Surgical CAD/CAM and Surgical Assistance
We call the process of computer-assisted planning, registration, execution, monitoring, and assessment surgical CAD/CAM, emphasizing the analogy to manufacturing CAD/CAM. Just as in manufacturing, robots can be critical in this CAD/CAM process by enhancing the surgeon’s ability to execute surgical plans. The specific role played by the robot depends somewhat on the application, but current systems tend to exploit the geometric accuracy of the robot and/or its ability to function concurrently with x-ray or other imaging devices. Typical examples include radiation therapy delivery robots such as Accuray’s CyberKnife [63.2] (Accuray, Inc., Sunnyvale, CA), shaping of bone in orthopaedic joint reconstructions (discussed further in Sect. 63.3.2) and image-guided placement of therapy needles (Sect. 63.3.3).
Surgery is often highly interactive; many decisions are made by the surgeon in the operating room and executed immediately, usually with direct visual or haptic feedback. Generally, the goal of surgical robotics is not to replace the surgeon so much as to improve his or her ability to treat the patient. The robot is thus a computer-controlled surgical tool in which control of the robot is often shared in one way or another between the human surgeon and a computer. We thus often speak of medical robots as surgical assistants.
Broadly, robotic surgical assistants may be broken into two subcategories. The first category, surgeon extender robots, manipulate surgical instruments under the direct control of the surgeon, usually through a teleoperation or hands-on cooperative control interface. The primary value of these systems is that they can overcome some of the perception and manipulation limitations of the surgeon. Examples include the ability to manipulate surgical instruments with superhuman precision by eliminating hand tremor, the ability to perform highly dexterous tasks inside the patient's body, or the ability to perform surgery on a patient who is physically remote from the surgeon. Although setup time is still a serious concern with most surgeon extender systems, the greater ease of manipulation that such systems offer has the potential to reduce operative times. One widely deployed example of a surgeon extender is the da Vinci system [63.3] (Intuitive Surgical Systems, Sunnyvale, CA) shown in Fig. 63.2. Other examples (among many) include the Sensei catheter system [63.7] (Hansen Medical Systems, Mountain View, CA), the Johns Hopkins University (JHU) Steady Hand microsurgery robot [63.4, 63.5, 63.6] shown in Fig. 63.3 and discussed in Sect. 63.3, the Rio orthopaedic robot [63.8] (Mako Surgical Systems, Ft. Lauderdale, Florida), the DLR Miro system [63.9], Surgica Robotica's Sergenius system (Surgica Robotica, Udine), and Titan Medical's Amadeus system (Titan Medical, Toronto, Canada).

The second category, auxiliary surgical support robots, generally work alongside the surgeon and perform such routine tasks as tissue retraction, limb positioning, or endoscope holding. One primary advantage of such systems is their potential to reduce the number of people required in the operating room, although that advantage can only be achieved if all the tasks routinely performed by an assisting individual can be automated. Other advantages can include improved task performance (e. g., a steadier endoscopic view), safety (e. g., elimination of excessive retraction forces), or simply giving the surgeon a greater feeling of control over the procedure. One of the key challenges in these systems is providing the required assistance without posing an undue burden on the surgeon's attention. A variety of control interfaces are common, including joysticks, head tracking, voice recognition systems, and visual tracking of the surgeon and surgical instruments; for example, the Aesop endoscope positioner [63.10] used both a foot-actuated joystick and a very effective voice recognition system. Again, further examples are discussed in Sect. 63.3.
It is important to realize that surgical CAD/CAM and surgical assistance are complementary concepts. They are not at all incompatible, and many systems have aspects of both.
2 Technology
The mechanical design of a surgical robot depends crucially on its intended application. For example, robots with high precision, stiffness, and (possibly) limited dexterity are often very suitable for orthopaedic bone shaping or stereotactic needle placement, and medical robots for these applications [63.17, 63.18, 63.19, 63.20] frequently have high gear ratios and, consequently, low back-drivability, high stiffness, and low speed. On the other hand, robots for complex, minimally invasive surgery (MIS) on soft tissues require compactness, dexterity, and responsiveness. These systems [63.21, 63.3] frequently have relatively high speed, low stiffness, and highly back-drivable mechanisms.
2.1 Mechanical Design Considerations
Many early medical robots [63.17, 63.20, 63.22] were essentially modified industrial robots. This approach has many advantages, including low cost, high reliability, and shortened development times. If suitable modifications are made to ensure safety and sterility, such systems can be very successful clinically [63.18], and they can also be invaluable for rapid prototyping and research use.
However, the specialized requirements of surgical applications have tended to encourage more specialized designs. For example, laparoscopic surgery and percutaneous needle placement procedures typically involve the passage or manipulation of instruments about a common entry point into the patient's body. There are three basic design approaches. The first approach uses a passive wrist to allow the instrument to pivot about the insertion point and has been used in the commercial Aesop and Zeus robots [63.21, 63.23], as well as in several research systems. The second approach mechanically constrains the motion of the surgical tool to rotate about a remote center of motion (RCM) distal to the robot's structure. In surgery, the robot is positioned so that the RCM point coincides with the entry point into the patient's body. This approach has been used by the commercially developed da Vinci robot [63.3], as well as by numerous research groups, using a variety of kinematic designs [63.24, 63.25, 63.26]. Finally, a third approach uses an active external wrist [63.17, 63.9] and thus supports robot-assisted interventions that do not require a pivot point, potentially extending robotic surgery to other fields of surgery.

The emergence of minimally invasive surgery has created a need for robotic systems that can provide high degrees of dexterity in very constrained spaces inside the patient's body, and at smaller and smaller scales. Figure 63.4 shows several typical examples of current approaches. One common response has been to develop cable-actuated wrists [63.3]. However, a number of investigators have explored other approaches, including bending structural elements [63.11], shape-memory alloy actuators [63.27, 63.28], microhydraulic systems [63.29], and electroactive polymers [63.30]. Similarly, the problem of providing access to surgical sites inside the body has led several groups to develop semiautonomously moving robots for epicardial [63.31] or endoluminal applications [63.32, 63.33].
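The fulcrum constraint underlying these RCM designs can be made concrete with a small kinematic helper: given the fixed entry point and a desired tip target inside the body, the tool shaft must lie on the line joining them, so its direction and insertion depth are fully determined. A minimal sketch (function name and coordinates are illustrative, not taken from any particular system):

```python
import numpy as np

def shaft_pose_for_target(fulcrum, target):
    """Given the fixed entry (fulcrum) point and a desired tool-tip
    position inside the body, return the unit direction of the tool
    shaft and the insertion depth past the fulcrum.

    The fulcrum constraint means the shaft always lies on the line
    joining entry point and tip, so only four degrees of freedom
    (two rotations about the fulcrum, insertion, and shaft roll)
    remain available inside the patient.
    """
    fulcrum = np.asarray(fulcrum, dtype=float)
    target = np.asarray(target, dtype=float)
    v = target - fulcrum
    depth = np.linalg.norm(v)
    if depth == 0.0:
        raise ValueError("target coincides with the fulcrum")
    return v / depth, depth
```

An RCM mechanism enforces this constraint in hardware, while passive-wrist designs let the port itself impose it; either way, the controller must plan tip motions consistent with the reduced degrees of freedom computed here.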
Two growing trends in MIS are natural orifice transluminal endoscopic surgery (NOTES) [63.34, 63.35] and single-port laparoscopy (SPL) [63.36]: the idea is to gain access to the abdominal cavity by using natural orifices and internal incisions (in NOTES) or existing scars (e. g., the navel, in SPL). From the mechanical design viewpoint, there is a need to develop deployable surgical instruments or accessorised endoscopes and to combine flexibility (to reach the target) with stability of the platform (to achieve precision). Clashing of instruments and difficulty in triangulation are the main limitations that companies and research groups are trying to address [63.37, 63.38].

The problem of distal operation, already present in MIS, becomes more acute in NOTES and SPL, and several solutions for assisting surgical tasks that require triangulation have been developed by different research groups [63.39]. They are based on magnetic fields, which can generate an internal force without constraining the internal tool to the access port [63.40, 63.41, 63.42].
Another significant development in recent years has been the emergence and widespread deployment of three-dimensional (3-D) printing and other rapid prototyping technologies for clinically usable medical devices and medical robot components, as well as for construction of realistic patient-specific models [63.43, 63.44]. This trend has promoted very rapid progress in medical robot design and will be increasingly important in coming years.

Although most surgical robots are mounted to the surgical table, to the operating room ceiling, or to the floor, there has been growing interest in developing systems that attach directly to the patient [63.45, 63.46], and clinically deployed examples exist [63.47]. The main advantage of this approach is that the relative position of the robot and patient is unaffected if the patient moves. The challenges are that the robot must be smaller and that relatively nonintrusive means for mounting it must be developed.
Finally, robotic systems intended for use in specific imaging environments pose additional design challenges. First, there is the geometric constraint that the robot (or at least its end-effector) must fit within the scanner along with the patient. Second, the robot’s mechanical structure and actuators must not interfere with the image formation process. In the case of x-ray and CT, satisfying these constraints is relatively straightforward. The constraints for MRI are more challenging [63.48].
2.2 Control Paradigms
Surgical robots assist surgeons in treating patients by moving surgical instruments, sensors, or other devices in relation to the patient. Generally, these motions are controlled by the surgeon in one of three ways:
-
Preprogrammed, semi-autonomous motion: The desired behavior of the robot’s tools is specified interactively by the surgeon, usually based on medical images. The computer fills in the details and obtains the surgeon’s concurrence before the robot is moved. Examples include the selection of needle target and insertion points for percutaneous therapy and tool cutter paths for orthopaedic bone machining.
-
Teleoperator control: The surgeon specifies the desired motions directly through a separate human interface device and the robot moves immediately. Examples include common telesurgery systems such as the da Vinci [63.3]. Although physical master manipulators are the most common input devices, other human interfaces are also used, notably voice control [63.21].
-
Hands-on compliant control: The surgeon grasps the surgical tool held by the robot or a control handle on the robot’s end-effector. A force sensor senses the direction that the surgeon wishes to move the tool and the computer moves the robot to comply. Early experiences with Robodoc [63.17] and other surgical robots [63.25] showed that surgeons found this form of control to be very convenient and natural for surgical tasks. Subsequently, a number of groups have exploited this idea for precise surgical tasks, notably the JHU Steady Hand microsurgical robot [63.4] shown in Fig. 63.3, the Rio orthopaedic robot [63.8] (Mako Surgical Systems, Ft. Lauderdale, Florida) and the Imperial College Acrobot orthopaedic system [63.49] shown in Fig. 63.5c,d.
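The hands-on mode above is typically realized as an admittance-type control law: the measured handle force is mapped to a commanded tool velocity, so the robot moves only while the surgeon pushes on it. A minimal sketch (the gain, deadband, and speed-limit values are illustrative, not taken from any particular system):

```python
import numpy as np

def admittance_velocity(f_sensor, gain=0.002, deadband=0.5, v_max=0.02):
    """Map the force (N) the surgeon applies to the tool handle into a
    commanded Cartesian velocity (m/s): v = gain * f, with a deadband
    to reject sensor noise and a speed clamp for safety."""
    f = np.asarray(f_sensor, dtype=float)
    mag = np.linalg.norm(f)
    if mag < deadband:                       # ignore noise-level forces
        return np.zeros(3)
    v = gain * (mag - deadband) * f / mag    # comply along the applied force
    speed = np.linalg.norm(v)
    if speed > v_max:                        # enforce a hard speed limit
        v *= v_max / speed
    return v
```

In a real controller this mapping runs at the servo rate, and the resulting velocity command remains subject to the safety checks discussed in Sect. 63.2.4; because the stiff, nonback-drivable mechanism filters out tremor, the tool follows only the surgeon's deliberate motions.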
These control modes are not mutually exclusive and are frequently mixed. For example, the Robodoc system [63.17, 63.18] uses hands-on control to position the robot close to the patient's femur or knee and preprogrammed motions for bone machining. Similarly, the IBM/JHU LARS robot [63.25] used both cooperative and telerobotic control modes. The cooperatively controlled Acrobot [63.49] uses preprogrammed virtual fixtures (Sect. 63.1.3) derived from the implant shape and its planned position relative to medical images.
Each mode has advantages and limitations, depending on the task. Preprogrammed motions permit complex paths to be generated from relatively simple specifications of the specific task to be performed. They are most often encountered in surgical CAD/CAM applications where the planning uses two-dimensional (2-D) or three-dimensional (3-D) medical images. However, they can also provide useful complex motions combining sensory feedback in teleoperated or hands-on systems. Examples might include passing a suture or inserting a needle into a vessel after the surgeon has prepositioned the tip. On the other hand, interactive specification of motions based on real-time visual appreciation of deforming anatomy would be very difficult.

Teleoperated control provides the greatest versatility for interactive surgery applications, such as dexterous MIS [63.21, 63.26, 63.3, 63.50] or remote surgery [63.51, 63.52]. It permits motions to be scaled and (in some research systems) facilitates haptic feedback between master and slave systems. The main drawbacks are complexity, cost, and the disruption to standard operating room work flow associated with having separate master and slave robots.
Hands-on control combines the precision, strength, and tremor-free motion of robotic devices with some of the immediacy of freehand surgical manipulation. These systems tend to be less expensive than telesurgical systems, since there is less hardware, and they can be easier to introduce into existing surgical settings. They exploit a surgeon’s natural eye–hand coordination in an intuitively appealing way, and they can be adapted to provide force scaling [63.4, 63.5]. Although direct motion scaling is not possible, the fact that the tool moves in the direction that the surgeon pulls it makes this limitation relatively unimportant when working with a surgical microscope. The biggest drawbacks are that hands-on control is inherently incompatible with any degree of remoteness between the surgeon and the surgical tool and that it is not practical to provide hands-on control of instruments with distal dexterity.
Teleoperation and hands-on control are both compatible with shared control modes in which the robot controller constrains or augments the motions specified by the surgeon, as discussed in Sect. 63.2.3.
2.3 Virtual Fixtures and Human–Machine Cooperative Systems
Although one goal of both teleoperation and hands-on control is often transparency, i. e., giving the surgeon the ability to move an instrument with the freedom and dexterity he or she would expect of a handheld tool, the fact that a computer is actually controlling the robot's motion creates many more possibilities. The simplest is a safety barrier or no-fly zone, in which the robot's tool is constrained from entering certain portions of its workspace. More sophisticated versions include virtual springs, dampers, or complex kinematic constraints that help a surgeon align a tool, maintain a desired force, or maintain a desired anatomical relationship. The Acrobot system shown in Fig. 63.5c,d represents a successful clinical application of this concept, which has many names, of which virtual fixtures seems to be the most popular [63.53, 63.54]. A number of groups are exploring extensions of the concept to active cooperative control, in which the surgeon and robot share or trade off control of the robot during a surgical task or subtask. As the ability of computers to model and follow along surgical tasks improves, these modes will become more and more important in surgical assistant applications. Figure 63.6 illustrates the overall concept of human–machine cooperative systems in surgery, and Fig. 63.7 illustrates the use of registered anatomical models to generate constraint-based virtual fixtures. These approaches are equally valid whether the surgeon interacts with the system through classical teleoperation or through hands-on compliant control. See also Chap. 43.
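As an illustration, the simplest case above, a planar no-fly zone, can be implemented as a velocity filter that removes only the motion component penetrating the boundary, letting the tool slide along it. This is a sketch under simplifying assumptions (a single planar barrier); clinical systems typically formulate virtual fixtures as more general constrained optimizations:

```python
import numpy as np

def apply_no_fly_zone(tip_pos, v_cmd, plane_point, plane_normal):
    """Filter a commanded tool velocity so the tip cannot cross a planar
    forbidden-region boundary (normal points toward the allowed side).
    Inside the allowed region the command passes through unchanged; at
    or beyond the boundary, the velocity component directed into the
    forbidden region is removed, so the tool may still slide along it."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    v = np.asarray(v_cmd, dtype=float)
    # signed distance of the tip from the barrier plane
    dist = np.dot(np.asarray(tip_pos, float) - np.asarray(plane_point, float), n)
    v_n = np.dot(v, n)
    if dist <= 0.0 and v_n < 0.0:   # at/inside the barrier, moving further in
        v = v - v_n * n             # cancel the penetrating component
    return v
```

The same projection idea generalizes to curved anatomical surfaces by substituting the local tangent plane at the closest surface point, which is how registered anatomical models (Fig. 63.7) are turned into constraints.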
Both teleoperation and hands-on control are likewise used in human–machine cooperative systems for rehabilitation and disability assistance systems. Constrained hands-on systems offer special importance for rehabilitation applications and for helping people with movement disorders. Similarly, teleoperation and intelligent task following and control are likely to be vital for further advances in assistive systems for people with severe physical disabilities. See Chap. 64 for a further discussion of human–machine cooperation in assistive systems.
2.4 Safety and Sterility
Medical robots are safety-critical systems, and safety should be considered from the very beginning of the design process [63.55, 63.56]. Although there is some difference in detail, government regulatory bodies require a careful and rigorous development process with extensive documentation at all stages of design, implementation, testing, manufacturing, and field support. Generally, systems should have extensive redundancy built into hardware and control software, with multiple consistency conditions constantly enforced. The basic consideration is that no single point of failure should cause the robot to go out of control or to injure a patient. Although there is some difference of opinion as to the best way to make trade-offs, medical manipulators are usually equipped with redundant position encoders and ways to mechanically limit the speed and/or force that the robot can exert. If a consistency check failure is detected, two common approaches are to freeze robot motion or to cause the manipulator to go limp. Which is better depends strongly on the particular application.
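A redundant-encoder consistency check of the kind described above reduces to comparing two independent measurements of the same joint on every control cycle. The following sketch is illustrative only (counts-per-radian and tolerance values are hypothetical); on disagreement, the calling safety layer would freeze the robot or let it go limp, as appropriate to the application:

```python
def check_joint_consistency(primary_counts, secondary_counts,
                            counts_per_rad=(10000.0, 10000.0),
                            tolerance_rad=0.002):
    """Compare two independent encoders on the same joint and report
    whether their angle estimates agree within tolerance.  A False
    result means a single-point sensing failure may have occurred,
    and the safety reaction must be triggered by the caller."""
    a = primary_counts / counts_per_rad[0]    # angle from primary encoder
    b = secondary_counts / counts_per_rad[1]  # angle from secondary encoder
    return abs(a - b) <= tolerance_rad
```

Analogous cross-checks are applied to commanded versus measured positions, velocity limits, and watchdog timers, so that no single failed component can silently drive the robot out of control.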
Sterilizability and biocompatibility are also crucial considerations. Again, the details are application dependent. Common sterilization methods include gamma rays (for disposable tools), autoclaving, soaking or gas sterilization, and the use of sterile drapes to cover unsterile components. Soaking or gas sterilization is less likely to damage robot components, but very rigorous cleaning is required to prevent extraneous foreign matter from shielding microbes from the sterilizing agent.
Careful attention to higher levels of application protocols is also essential. Just like any other tool, surgical robots must be used correctly by surgeons, and careful training is essential for safe practice. Surgeons must understand both the capabilities and limitations of the robot and of the surgical process as well, since safety is a systemic property. This adds new requirements to training programs, which must include robotic capabilities and nontechnical skills (Sect. 63.3.8). In surgical CAD/CAM applications, the surgeon must understand how the robot will execute the plan and be able to verify that the plan is being followed. If the surgeon is interactively commanding the robot, it is essential that the robot interpret these commands correctly. Similarly, it is essential that the robot's model of its task environment correspond correctly to the actual environment. The availability of task models is necessary for the development of autonomous execution of robotic gestures, as well as for the formal verification of task correctness [63.57]. Although careful design and implementation can practically eliminate the likelihood of a runaway condition by the manipulator, this will do little good if the robot is badly registered to the patient images used to control the procedure. If the robot fails for any reason, there must be well-documented and planned procedures for recovery (and possibly continuing the procedure manually).
Finally, it is important to remember that a well-designed robot system can actually enhance patient safety. The robot is not subject to fatigue or momentary lapses of attention. Its motions can be more precise and there is less chance that a slip of the scalpel may damage some delicate structure. In fact, the system can be programmed to provide virtual fixtures (Sect. 63.2.3) preventing a tool from entering a forbidden region unless the surgeon explicitly overrides the system.
2.5 Imaging and Modelling of Patients
As the capabilities of medical robots continue to evolve, the use of computer systems to model dynamically changing patient-specific anatomy will become increasingly important. There is a robust and diverse research community addressing a very broad range of research topics, including the creation of patient-specific models from medical images, techniques for updating these models based upon real-time image and other sensor data, and the use of these models for planning and monitoring of surgical procedures. Some of the pertinent research topics include the following:
-
Medical image segmentation and image fusion to construct and update patient-specific anatomic models
-
Biomechanical property measurement and modelling for analyzing and predicting tissue deformations and functional factors affecting surgical planning, control, and rehabilitation
-
Optimization methods for treatment planning and interactive control of systems
-
Methods for registering the virtual reality of images and computational models to the physical reality of an actual patient
-
Methods for characterizing treatment plans and individual task steps such as suturing, needle insertion, or limb manipulation for purposes of planning, monitoring, control, and intelligent assistance
-
Real-time data fusion for such purposes as updating models from intraoperative images
-
Methods for human–machine communication, including real-time visualization of data models, natural language understanding, gesture recognition, etc.
-
Methods for characterizing uncertainties in data, models, and systems and for using this information in developing robust planning and control methods.
An in-depth examination of this research is beyond the scope of this article. A more complete discussion of these topics may be found in the suggested further reading in Sect. 63.4.
2.6 Registration
Geometric relationships are fundamental in medical robotics, especially in surgical CAD/CAM. There is an extensive literature on techniques for coregistering coordinate systems associated with robots, sensors, images, and the patient [63.58, 63.59]. Following [63.59], we briefly summarize the main concepts here. Suppose that we have coordinates x_A and x_B corresponding to comparable locations in two coordinate systems Ref_A and Ref_B. Then the process of registration is simply that of finding a function T_AB(·) such that

x_B = T_AB(x_A) .

Generally, T_AB is assumed to be a rigid transformation of the form

T_AB(x) = R_AB x + p_AB ,

where R_AB represents a rotation and p_AB represents a translation, but nonrigid transformations are becoming increasingly common. There are hundreds of methods for computing T_AB. The most common for medical robotics involve finding a set of corresponding geometric features Γ_A and Γ_B whose coordinates can be determined in both coordinate systems and then finding a transformation that minimizes some distance function d(T_AB(Γ_A), Γ_B). Typical features can include artificial fiducial objects (pins, implanted spheres, rods, etc.) or anatomical features such as point landmarks, ridge curves, or surfaces.
One common class of methods is based on the iterated closest-point algorithm of Besl and McKay [63.60], for example, 3-D robot coordinates aj may be found for a collection of points known to be on the surface of an anatomical structure that can also be found in a segmented 3-D image. Given an estimate Tk of the transformation between image and robot coordinates, the method iteratively finds corresponding points bj on the surface that are closest to and then finds a new transformation
The process is repeated until some suitable termination condition is reached.
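The closest-point iteration above can be sketched concretely in a few lines of NumPy. This is a minimal illustration, not production registration code: the SVD-based solver is the standard least-squares solution for the inner rigid-fit step, and a brute-force nearest-neighbor search over sampled surface points stands in for a true surface model.

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rigid transform (R, p) mapping points A onto B.

    Solves argmin_{R,p} sum_j ||R a_j + p - b_j||^2 via the SVD method.
    A, B: (n, 3) arrays of corresponding points.
    """
    a_bar, b_bar = A.mean(axis=0), B.mean(axis=0)
    H = (A - a_bar).T @ (B - b_bar)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    p = b_bar - R @ a_bar
    return R, p

def icp(A, surface_pts, R, p, iters=20):
    """Iterated closest point: refine (R, p) so that R a_j + p lies on the surface.

    surface_pts: (m, 3) dense sampling of the segmented surface, used here as a
    stand-in for a true closest-point query against a surface model.
    """
    for _ in range(iters):
        mapped = A @ R.T + p
        # For each transformed point, pick the closest surface sample b_j.
        d = np.linalg.norm(mapped[:, None, :] - surface_pts[None, :, :], axis=2)
        B = surface_pts[d.argmin(axis=1)]
        R, p = best_rigid_transform(A, B)
    return R, p
```

Once the correspondences become correct, the inner solve recovers the registration exactly; in practice, convergence depends on a reasonable initial estimate, which is why the surgeon hand-guides the robot to an approximate starting position first.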
3 Systems, Research Areas, and Applications
Medical robots are not ends in themselves. As the late Hap Paul often remarked, the robot is a surgical tool designed to improve the efficacy of a procedure. (Paul was the founder of Integrated Surgical Systems. Along with William Bargar, he was one of the first people to recognize the potential of robots to fundamentally improve the precision of orthopaedic surgery.)
3.1 Nonrobotic Computer-Assisted Surgery: Navigation and Image Overlay Devices
In cases where the role of the robot is placing instruments on targets determined from medical images, surgical navigation is often a superior alternative. In surgical navigation [63.61], the positions of instruments relative to the reference markers on the patient are tracked using specialized electromechanical, optical, electromagnetic, or sonic digitizers or by more general computer vision techniques. After the relationships between key coordinate systems (patient anatomy, images, surgical tools, etc.) are determined through a registration process (Sect. 63.2.6), a computer workstation provides graphical feedback to the surgeon to assist in performing the planned task, usually by displaying instrument positions relative to medical images, as shown in Fig. 63.8a. Although the registration is usually performed computationally, a simple mechanical alignment of an image display with an imaging device can be surprisingly effective in some cases. One example [63.62] is shown in Fig. 63.8b.
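In code, the navigation display reduces to composing a short chain of tracked and registered transforms. The sketch below uses made-up poses and hypothetical names; in a real system the tracker poses update continuously and the registration transform comes from the process of Sect. 63.2.6.

```python
import numpy as np

def pose(R, p):
    """4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

# Hypothetical poses reported by an optical tracker (tracker coordinates):
T_tracker_tool = pose(np.eye(3), [10.0, 0.0, 50.0])   # marker body on the tool
T_tracker_ref  = pose(np.eye(3), [0.0, 0.0, 40.0])    # reference body on the patient
# From registration: patient-reference coordinates -> image coordinates.
T_image_ref = pose(np.eye(3), [5.0, 5.0, 5.0])

# Tool-tip offset in tool-marker coordinates (from a pivot calibration).
tip_tool = np.array([0.0, 0.0, 15.0, 1.0])

# Chain: image <- ref <- tracker <- tool, then apply the tip offset.
T_image_tool = T_image_ref @ np.linalg.inv(T_tracker_ref) @ T_tracker_tool
tip_image = T_image_tool @ tip_tool   # position drawn over the image slices
```

The identity rotations keep the arithmetic transparent; with real tracker data every factor carries a full rotation, and the same composition applies unchanged.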
The main advantages of surgical navigation systems are their versatility, their relative simplicity, and their ability to exploit the surgeon’s natural dexterity and haptic sensitivity. They are readily combined with passive fixtures and manipulation aids [63.65, 63.66]. The main drawbacks, compared to active robots, are those associated with human limitations in accuracy, strength, ability to work in certain imaging environments, and dexterity inside the patient’s body (Table 63.2).
Because these advantages often outweigh the limitations, surgical navigation systems are achieving widespread and increasing acceptance in such fields as neurosurgery, otolaryngology, and orthopaedics. Since much of the technology of these systems is compatible with surgical robots and since technical problems such as registration are common among all these systems, we may expect to see a growing number of hybrid applications combining medical robots and navigation.
3.2 Orthopaedic Systems
Orthopaedic surgery represents a natural surgical CAD/CAM application, and both surgical navigation systems and medical robots have been applied to orthopaedics. Bone is rigid and is easily imaged in CT and intraoperative x-rays, and surgeons are accustomed to doing at least some preplanning based on these images. Geometric accuracy in executing surgical plans is very important, for example, bones must be shaped accurately to ensure proper fit and positioning of components in joint replacement surgery. Similarly, osteotomies require both accurate cutting and placement of bone fragments. Spine surgery often requires screws and other hardware to be placed into vertebrae without damage to the spinal cord, nerves, and nearby blood vessels.
The Robodoc system shown in Fig. 63.5a,b represents the first clinically applied robot for joint reconstruction surgery [63.17, 63.18]. Since 1992, it has been applied successfully to both primary and revision hip replacement surgery, as well as knee surgery. Since this system exhibits many of the characteristics of surgical CAD/CAM, we will discuss it in some detail. In the surgical CAD phase, the surgeon selects the desired implant based on preoperative CT images and interactively specifies the desired position of the implant components. In the surgical CAM phase, surgery proceeds normally up to the point where the patient's bones are to be prepared to receive the implant. The robot is moved up to the operating table, the patient's bones are attached rigidly to the robot's base, and the robot is registered to the CT images either by use of implanted fiducial pins or by use of a 3-D digitizer to match bone surfaces to the CT images. After registration, the surgeon's hand guides the robot to an approximate initial starting position. Then, the robot autonomously machines the desired shape with a high-speed rotary cutter while the surgeon monitors progress. During cutting, the robot monitors cutting forces, bone motion, and other safety sensors, and either the robot controller or the surgeon can pause execution at any time. If the procedure is paused for any reason, there are a number of error recovery procedures available to permit the procedure to be resumed or restarted at one of several defined checkpoints. Once the desired shape has been machined, surgery proceeds manually in the normal manner.
Subsequently, several other robotic systems for joint replacement surgery have been introduced or proposed. The references in Sect. 63.4 provide numerous examples. Notable hands-on guided systems include the RIO surgical robot [63.8] (Mako Surgical, Ft. Lauderdale, Florida) and the Acrobot [63.49] system for knee surgery shown in Fig. 63.5c,d. Similarly, several groups have recently proposed small orthopaedic robots that attach directly to the patient's bones [63.45], as well as completely freehand systems such as the NavioPFS surgical system (Blue Belt Technologies, Pittsburgh, Pa.), which combines surgical navigation with very fast on-off control of a surgical cutter [63.67, 63.68]. A recent example of a hybrid passive-active robot is described in [63.69].
3.3 Percutaneous Needle Placement Systems
Needle placement procedures have become ubiquitous in image-guided intervention; they are typically performed through the skin but may also pass through body cavities. These procedures fit within the broader paradigm of surgical CAD/CAM: using patient images to identify targets within the patient and to plan needle trajectories; inserting the needles and verifying their placement; performing some action, such as delivering an injection or taking a biopsy sample; and assessing the results. In most cases, an accuracy of 1–2 mm is acceptable, which is not easy to achieve freehand because the target is not directly visible, soft tissues tend to deform and deflect, and needles often bend. The procedures typically rely on some form of intraoperative imaging (x-ray, CT, MRI, or ultrasound) for both guidance and verification. The surgical motion sequence typically has three decoupled phases: place the needle tip at the entry point, orient the needle by pivoting around the needle tip, and insert the needle into the body along a straight trajectory. These motions are often performed freehand, with varying degrees of information feedback for the physician. However, passive, semiautonomous, and active robotic systems have all been introduced. Figure 63.9 shows several clinically deployed systems for needle placement.
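The three decoupled phases follow directly from the planned entry and target points; a minimal sketch (illustrative function name, units assumed to be millimeters):

```python
import numpy as np

def plan_needle(entry, target):
    """Decompose an image-planned needle path into the three motion phases.

    entry, target: 3-D points in (registered) robot coordinates.
    Returns the entry point to move the tip to, the unit insertion
    direction to pivot the needle toward, and the depth to insert.
    """
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    axis = target - entry
    depth = np.linalg.norm(axis)          # straight-line insertion distance
    return entry, axis / depth, depth

entry, direction, depth = plan_needle([0.0, 0.0, 0.0], [30.0, 0.0, 40.0])
```

In a rigid-tissue idealization this is the whole plan; the deformation, deflection, and needle bending noted above are exactly why verification imaging and steering methods are needed.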
3.3.1 Freehand Needle Placement Systems
Freehand needle placement with CT and MRI guidance uses skin markers to locate the entry point [63.62], reference to the scanner's alignment laser to control needle direction, and markers on the needle to control depth. With ultrasound, the primary reliance is on surgeon experience or on some form of needle guide to drive the needle to the target while it remains visible in the ultrasound plane. Tracked ultrasound snapshot guidance combines orthogonal viewpoints with frozen ultrasound image frames [63.73]. Mechanical needle guides [63.61], hand-held navigation guidance [63.74], and optical guides have been combined with most imaging modalities; these include laser guidance devices [63.75] and augmented reality systems [63.76]. Augmented reality with 2-D ultrasound [63.77] and CT/MRI slices [63.62] (Fig. 63.8b) has been developed, in which a semitransparent mirror is used together with a flat-panel display to create the appearance of a virtual image floating inside the body in the correct position and size.
3.3.2 Passive and Semiautonomous Devices for Needle Placement
Passive, encoded manipulator arms have been proposed for image-guided needle placement [63.78]: following a registration step, the position and orientation of a passive needle guide are tracked, and the corresponding needle path is displayed on CT or MRI images. Semiautonomous systems allow remote, interactive image-guided placement of needles, such as transrectal prostate needle placement in the MRI environment [63.72], in which a manipulator is actuated from outside the scanner bore while the needle driver is tracked in MRI with active coils.
3.3.3 Active Robots for Needle Placement
Neurosurgery, a natural application for surgical CAD/CAM, was one of the first clinical applications of active robots [63.19, 63.20, 63.22]. The entry and target points are planned on CT/MRI images, the robot coordinate system is registered to the image coordinate system (typically with markers affixed to the patient's head), and then the robot positions a needle or drill guide. The marker structure may be a conventional stereotactic head frame or, as in the Neuromate system [63.19], registration may be achieved by simultaneous tracking of the robot and of markers attached to the patient's skull. Spatial constraints in needle placement led to the development of structures achieving remote center of motion (RCM) or fulcrum motion [63.24, 63.25]. In these systems, the RCM is positioned at the entry point, typically with an active Cartesian stage or a passive adjustable arm, and the robot sets the needle direction and (sometimes) the depth. To speed up imaging, planning, registration, and execution, the robot can work concurrently with imaging devices; variants of an RCM-based system have been deployed with x-ray [63.24] and CT guidance [63.70, 63.71]. In [63.71], a marker structure was incorporated in the needle driver to register the robot with a single image slice. MRI has excellent potential for guiding, monitoring, and controlling therapy, prompting intensive research on MRI-compatible robotic systems for needle placement [63.79] and other more interactive procedures [63.50]. Ultrasound guidance offers many unique advantages: it is relatively inexpensive and compact, provides real-time images, does not involve ionizing radiation, and does not impose significant materials constraints on robot design. Several robotic systems have been proposed for prostate interventions [63.80] using transrectal ultrasound guidance.
For other ultrasound-guided needle placement applications, examples include experimental systems for the liver [63.64, 63.81], gallbladder [63.82], and breast [63.83]. Figure 63.8d shows one example of the use of information overlay to assist in needle placement in a telesurgical application [63.64]. Whatever form of image feedback is available, steering flexible needles to hit desired targets while avoiding obstacles is a ubiquitous problem; it has motivated several novel approaches [63.84, 63.85] and was most recently reviewed in [63.86]. Concentric tube robots (also interchangeably called active cannulas because of their usefulness in medicine) consist of several precurved elastic tubes nested within one another. Actuators grasp these tubes at their bases and extend them telescopically while also applying axial rotations. These movements cause the overall device to elongate and bend, providing a needle-sized device that moves in a manner analogous to a tentacle. Chronologically, a precursor of current concentric tube robots was an early steerable needle design in which a curved stylet at the tip of a needle was used to induce steering [63.87]. These robots fall within the broader class of continuously flexible robots called continuum robots (see [63.15] for a review). Over the past few years, mechanics-based models of active cannulas have rapidly matured through the parallel efforts of several research groups [63.14, 63.88]. Today, these models provide the basis for an active subfield of research on teleoperative control, surgery-specific design, and motion planning. Surgical applications suggested for concentric tube robots are in various stages of development, ranging from purely conceptual to animal studies, which will make the next several years exciting for translational clinical research.
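For overlapping tube sections, the torsionally rigid first-order model commonly used in this literature has a simple closed form: the assembly bends with the stiffness-weighted average of the individual tubes' rotated precurvature vectors. A minimal sketch with illustrative numbers:

```python
import numpy as np

def combined_curvature(stiffness, precurv, theta):
    """Stiffness-weighted resultant curvature of overlapping concentric tubes.

    First-order, torsionally rigid model: tube i, with bending stiffness
    E_i I_i, precurvature kappa_i (1/mm), and axial rotation theta_i (rad),
    contributes a planar curvature vector; the assembly takes the
    stiffness-weighted mean of these vectors.
    """
    k = np.asarray(stiffness, float)     # E_i I_i for each tube
    kap = np.asarray(precurv, float)
    th = np.asarray(theta, float)
    vec = (k * kap)[:, None] * np.stack([np.cos(th), np.sin(th)], axis=1)
    return vec.sum(axis=0) / k.sum()
```

Two identical tubes rotated 180° apart cancel each other's curvature, so the pair goes straight; rotating them into alignment restores the full precurvature. Real designs must also account for torsional compliance, which this first-order sketch ignores.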
3.4 Telesurgical Systems
The concepts of telemedicine, telesurgery, and telepresence in surgery date from the 1970s. Since then, the potential for telesurgical systems to facilitate effective interventions in remote or hostile environments such as the battlefield, space, or thinly populated areas has continued to be recognized [63.89], and there have been some spectacular demonstrations, including a transatlantic cholecystectomy [63.51] in 2001 and experiments in Italy [63.90] and Japan [63.91], as well as more nearly routine use in Canada [63.52]. However, the operational difficulties caused by the intrinsic communication delay of long-distance telesurgery affect usability and safety and keep its regular use quite uncommon.
However, the primary uses of telesurgical systems have been with the surgeon and patient in the same operating room. Teleoperated robots have been used for over 15 years in MIS, both as auxiliary surgical support systems to hold endoscopes or retractors [63.23, 63.25, 63.92, 63.93, 63.94] and as surgeon extender systems to manipulate surgical instruments [63.26, 63.3]. There has also been recent work to develop telesurgical systems for use within imaging environments such as MRI [63.95].
A primary challenge for auxiliary support systems is to permit the surgeon to command the robot while his or her hands are otherwise occupied. Typical approaches have included conventional foot switches [63.23], instrument-mounted joysticks [63.25], voice control [63.10, 63.25], and computer vision [63.25, 63.96, 63.97].
A common goal in surgeon extender systems is to provide a measure of telepresence to the surgeon, specifically, to give the surgeon the sensation of performing open surgery from inside the patient. In early work, Green et al. [63.98] developed a successful prototype system for telesurgery combining remote manipulation, force feedback, stereoscopic imaging, ergonomic design, etc. Subsequently, several commercial telesurgical systems have been applied clinically for MIS. Of these, Intuitive Surgical’s da Vinci [63.3] has been the most successful, with over 2500 systems deployed as of 2013. Experience with these systems has demonstrated that a high-dexterity wrist is often critical for surgeon acceptance. Although originally targeted at cardiac surgery, as well as more general interventions, the most successful clinical applications to date are radical prostatectomies, where significant improvements in outcomes have been reported [63.99], and hysterectomy, where the clinical benefits compared to conventional laparoscopy are still being studied [63.100].
One emerging area for research exploits the inherent ability of telesurgical systems to act as flight data recorders during surgical procedures. Several authors [63.101, 63.102, 63.103, 63.104] have begun analyzing such data for such purposes as measuring surgical skill, learning surgical gestures and motions, and providing data for surgical simulators. Another emerging area for research [63.105] focuses on semi-automation of surgical gestures between the surgeon and the robot, often based on learned models. Other research [63.106, 63.107, 63.108] exploits augmented reality methods to enhance the information available to the surgeon during telesurgical procedures.
3.5 Microsurgery Systems
Although microsurgery is not a consistently defined term, it generally indicates procedures performed on very small, delicate structures, such as those found in the eye, brain, spinal cord, small blood vessels, or nerves. Microsurgical procedures are commonly performed under direct or video visualization, using some form of magnification (e. g., microscope, surgical loupes, high-magnification endoscope). The surgeon typically has little or no tactile appreciation of the forces being exerted by the surgical instruments, and physiological hand tremor can be a significant factor limiting surgical performance. Robotic systems can help overcome these human sensory-motor limitations, and efforts are under way to develop specialized systems for applications such as ophthalmology [63.109, 63.110, 63.111, 63.112, 63.113, 63.114, 63.115, 63.6] and otology [63.116, 63.117, 63.118]. Several groups have also experimented with magnetic manipulation for these applications [63.113, 63.119]. There have been several efforts to compare microsurgical anastomosis procedures using laparoscopic telesurgical systems to conventional microsurgery. Schiff et al. [63.120], among others, reported significant reductions in tremor with either robot and significantly improved technical quality and operative times compared to conventional microsurgery. As the number of da Vinci systems has proliferated, such applications are increasingly common [63.121]. A number of groups have implemented telesurgery systems specifically for microsurgery [63.122, 63.123, 63.124, 63.13, 63.109, 63.95]. These systems are in various stages of development, from laboratory prototype to preliminary clinical experimentation.
Not all microsurgical robots are teleoperated. For example, the cooperatively controlled JHU Steady Hand robots [63.4, 63.5] [63.115, 63.6] shown in Fig. 63.3 are being developed for retinal, head-and-neck, neurosurgery, and other microsurgical applications. A modified version of this system has also been used for microinjections into single mouse embryos [63.125].
There have also been efforts to develop completely hand-held instruments that actively cancel physiological tremor. For example, Riviere et al. [63.126, 63.127, 63.128] have developed an ophthalmic instrument that uses optical sensors to sense handle motion and adaptive filtering to estimate the tremulous component of instrument motion. A micromanipulator built into the instrument deflects the tip with an equal but opposite motion, compensating for the tremor. Simple mechanical devices [63.129] for reducing tremor in specific tasks have also been developed.
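The adaptive filtering step can be illustrated with the basic Fourier linear combiner, which LMS-adapts sine/cosine weights at an assumed known tremor frequency; Riviere's weighted-frequency variant also adapts the frequency itself. Gains, rates, and amplitudes below are purely illustrative.

```python
import numpy as np

def flc_cancel(signal, f_tremor, fs, mu=0.002):
    """Fourier linear combiner: LMS-adapt sine/cosine weights at an assumed
    tremor frequency, then subtract the tremor estimate from the signal."""
    w = np.zeros(2)
    out = np.empty_like(signal)
    for k, d in enumerate(signal):
        phase = 2 * np.pi * f_tremor * k / fs
        x = np.array([np.sin(phase), np.cos(phase)])  # reference inputs
        est = w @ x                                   # current tremor estimate
        w += 2 * mu * (d - est) * x                   # LMS weight update
        out[k] = d - est                              # tremor-suppressed output
    return out

fs = 250.0
t = np.arange(0, 8, 1 / fs)
voluntary = 0.5 * np.sin(2 * np.pi * 0.5 * t)   # slow intended motion
tremor = 0.05 * np.sin(2 * np.pi * 10.0 * t)    # ~10 Hz physiological tremor
residual = flc_cancel(voluntary + tremor, 10.0, fs)
```

After the initial adaptation transient, the residual tracks the slow voluntary motion while the tremor component is largely removed; in the instrument, the negated tremor estimate drives the tip micromanipulator.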
An additional type of hand-held microsurgical and microtherapeutic device is reported in [63.130], which describes an active microendoscope for neuroendoscopy and therapy of the spinal cord that can safely navigate in the subarachnoid space and avoid dangerous contact with delicate internal structures thanks to a hydrojet-based system. Hydrojets issue from the lateral surface of the catheter and, appropriately tuned and oriented, allow the tip of the endoscope to advance without touching the internal walls surrounding the spinal cord. The shared control system of the neuroendoscope, based on processing, segmentation, and analysis of the endoscopic images, assists the safe advancement of the tool in real time [63.131].
3.6 Endoluminal Robots
The term endoluminal surgery was first coined by Cuschieri et al. [63.132] as a major component of endoscopic surgery. Endoluminal procedures consist of bringing a set of advanced therapeutic and surgical tools to the area of interest by navigating in the lumina (i. e., the tube-like structures) of the human body, such as the gastrointestinal (GI) tract, the urinary tract, the circulatory system, etc. Currently, most endoluminal robots are designed for gastrointestinal applications, although there has been some initial work for other areas. From a robotics research perspective, the GI tract offers several advantages: it is not sterile, it is relatively large in diameter, and it can be punctured intentionally to reach other abdominal cavities in NOTES approaches.
Traditionally, catheters and flexible endoscopes for endoluminal procedures have been inserted and manipulated manually from outside the body with the assistance of one or more visualization systems (e. g., direct endoscopic video, x-ray fluoroscopy, ultrasound). One major challenge is limited dexterity, which makes it difficult to reach the desired target. Typically, flexible endoscopes have a bendable tip that can be steered by means of cable drives, whereas catheters may have only a fixed bend on a guide wire. There is also the inherent difficulty of pushing a rope, which some companies are trying to address by using external magnetic fields (e. g., [63.133]). Once the target site is reached, these limitations become even more significant. Very simple instruments can be inserted through working channels or slid over guide wires, but dexterity is severely limited and there is no force feedback beyond what can be felt through the long, flexible instrument shaft.
These limitations have led a number of researchers to explore integration of more degrees of freedom in the catheter/endoscope body, as well as the design of intelligent tips with higher dexterity and sensing capabilities. Early work by Ikuta et al. led to the development of a five-segment, 13 mm-diameter sigmoidoscope using shape-memory alloy (SMA) actuators. Subsequently, Ikuta et al. developed 3 mm-diameter active endovascular devices using hydraulic actuators incorporating a novel band-pass valve fabricated using microstereolithographic techniques [63.29]. Several examples exist of instrumented catheter tips with force sensors [63.134] that allow the correct branch of the circulatory system to be found by estimating the force generated between the tip and the vessel walls. Such sensorized endoluminal devices belong to the larger group of micro-electromechanical systems (MEMS)-instrumented surgical devices, and the same sensing technologies can also be exploited for microsurgery. A survey article by Rebello [63.135] provides an excellent overview of sensorized catheters and other MEMS-based devices in endoluminal and microsurgical applications.
Another approach to endoluminal robots is represented by systems that move under their own power through the body, rather than being pushed. Early work on such systems is well summarized in [63.136]. In 1995, Burdick et al. developed an inchworm-like mechanism for use in the colon. This device combined a central extensor for propulsion with inflatable balloons for friction enhancement against the slippery colon tissue. A more advanced inchworm design for a semiautonomous robotic colonoscope was developed by Dario et al. [63.33] (Fig. 63.10). This device consists of a central silicone elongator, two clamping systems based on suction and gentle mechanical grasping of the colon tissue, and a steerable silicone tip integrating a complementary metal-oxide-semiconductor (CMOS) camera and a light-emitting diode (LED)-based illumination system. Thanks to its intrinsic flexibility, the robotic colonoscope applies forces on colon tissue that are ten times lower than those produced by traditional colonoscopes. This system is now being tested clinically [63.137], and similar devices combining flexibility and painless operation have been proposed by some companies [63.138, 63.139]. Although the application is not endoluminal, the HeartLander system of Riviere et al. [63.31] shown in Fig. 63.11a,b shares many of the characteristics of these systems: it uses an inchworm-like gait to traverse the surface of the heart and to perform simple operations. An instructive paper on combining flexibility and stiffness in endoscopic devices, allowing painless maneuverability together with stable anchoring when performing surgical tasks, has recently been published [63.140]. It presents a number of design criteria and solutions for transforming endoscopic devices from diagnostic tools into stable surgical platforms.
The natural evolution of GI endoscopic devices is represented by endoscopic capsules [63.141], which hold the promise of making endoscopy of the GI tract a screening method highly tolerated by patients. To transform simple swallowable CMOS cameras with illumination and transmission functionalities into useful diagnostic devices, several research groups have explored a variety of solutions for active capsule locomotion. An example of a legged locomotion capsule for GI applications is illustrated in Fig. 63.11c and described in [63.142, 63.28]. To overcome severe powering problems, magnetic capsules propelled by permanent magnets [63.143] or by modified MRI fields [63.144] have been proposed. An earlier application of electromagnetic manipulation of an object within the body was the video tumor fighter of Ritter et al. [63.145].
3.7 Sensorized Instruments and Haptic Feedback
Surgical procedures almost always involve some form of physical interaction between surgical instruments and patient anatomy, and surgeons are accustomed to using their haptic appreciation of tool-to-tissue interaction forces in performing surgical procedures. In situations where the clumsiness of instrumentation or human sensory-motor limitations limit the surgeon's ability to feel these forces, surgeons have been trained to rely on visual or other cues to compensate. Apart from the need for haptic feedback in diagnostic procedures and tissue identification, it has been demonstrated that reduced tactile and force information can result in the application of unnecessarily large forces on the patient during surgery, with possible harm to the patient [63.146]. The quality of haptic feedback available in currently deployed surgical robots is poor or nonexistent. Current research addresses these limitations firstly by integrating force and other sensors into surgical end-effectors and secondly by developing improved methods for processing and conveying the sensed information to the surgeon. However, the debate over the usefulness of force feedback in robot-assisted surgery is still open, made more complex by the many scientific and technical difficulties arising from the intrinsic presence of (possibly variable) communication time delay in the robotic system [63.147].
Although the most obvious way to display haptic information in a telesurgical system is directly through the master hand controllers, this method has several limitations, including friction and limited bandwidth in the hand controllers. Although these issues may be addressed through specialized design (which may raise costs and require other compromises) and improved control, there has been considerable interest in sensory substitution schemes [63.149, 63.150, 63.151] in which force or other sensor information is displayed visually, aurally, or by other means. Figure 63.8c shows one example of sensory substitution for warning when a da Vinci robot is about to break a suture [63.63].
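At its simplest, visual sensory substitution is just a mapping from a sensed force to a display cue, such as the colored overlay in the suture-breakage warning of Fig. 63.8c. The sketch below uses purely illustrative thresholds; the function and values are not taken from any deployed system.

```python
def force_cue(force_n, warn=1.0, danger=2.0):
    """Map a sensed tool-tissue force (newtons) to a display cue.

    The warn/danger thresholds are illustrative, not clinically
    validated values; a real system would calibrate them per task.
    """
    if force_n < warn:
        return "none"     # below the warning range: no overlay
    if force_n < danger:
        return "yellow"   # approaching the dangerous range: warn
    return "red"          # excessive force: prominent warning
```

The same mapping can drive an auditory cue (e. g., pitch or repetition rate) instead of a colored overlay, which keeps the surgeon's visual attention on the surgical field.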
Starting in the 1990s, several groups [63.151, 63.152, 63.153] have sensorized surgical instruments for microsurgery and MIS by equipping them with force sensors. Generally, these efforts relied on sensory substitution to display force data, either for freehand or telesurgical application. For example, Poulose et al. [63.152] demonstrated that a force-sensing instrument used together with an IBM/JHU LARS robot [63.25] could significantly reduce both the average retraction force and the variability of retraction force during Nissen fundoplication. The first surgical robot with force feedback used in in vivo experiments was the JPL-NASA RAMS system, which was tested on animal models for vascular anastomosis [63.154] and carotid arteriotomies [63.155]. Rosen et al. [63.153] developed a force-controlled teleoperated endoscopic grasper equipped with position sensors and actuated by direct-current (DC) motors whose output torque is sensed and fed back through motors integrated in a grasping handle. A similar approach was used by Menciassi et al. [63.156] for a microsurgery gripper equipped with semiconductor strain gauges and a PHANTOM (SensAble Technologies, Inc.) haptic interface. More recent work at Johns Hopkins has focused on incorporation of optical fiber force sensors [63.157, 63.158, 63.159, 63.160] and OCT sensors [63.161, 63.162, 63.163] into microsurgical tools. Both video and auditory sensory substitution [63.164], as well as direct feedback to robotic devices, have been used to help surgeons control tool-tissue interaction forces and distances.
Several researchers [63.165] have focused on specialized fingers and display devices for palpation tasks requiring delicate tactile feedback (e. g., for detecting hidden blood vessels or cancerous tissues beneath normal tissues). There has also been work to integrate nonhaptic sensors into surgical instruments; for example, Fischer et al. have developed instruments measuring both force and tissue oxygenation levels [63.166]. This information can be used for such purposes as assessing tissue viability, distinguishing between different tissue types, and controlling retraction so as not to cause ischemic tissue damage.
Finally, it is important to note that sensorized surgical tools have important applications beyond their direct use in surgical procedures; for example, one use is in biomechanical studies that measure organ and tissue mechanical properties in order to improve surgical simulators.
3.8 Surgical Simulators and Telerobotic Systems for Training
Medical education is undergoing significant changes. The traditional paradigm for surgical technical training is often summarized as see one, do one, teach one. This method can be effective in open surgery, where the surgical trainee directly observes the expert surgeon's hands, sees the instrument motion, and follows the organ manipulation. However, in endoscopic surgery it is difficult to observe both the surgeon's hand movements (outside the body) and the surgical tool actions (inside the body and visible only on a video monitor). In addition, endoscopic surgery requires different skills than open surgery, such as spatial orientation, interpretation of 2-D images in 3-D, and manipulation of instruments through entry portals. These considerations led to the introduction of surgical simulation systems of varying degrees of complexity and realism for endoscopic and other minimally invasive procedures. Nowadays, training simulators have achieved widespread acceptance in anaesthesia, intensive care, flexible endoscopy, surgery, interventional radiology, and other fields. The use of simulators for training is so common that working groups have been set up to evaluate these training systems against shared guidelines [63.167], and many teaching hospitals have extensive simulation training centers. Simulators are being validated on their basic parameters (face, content, and construct validity) [63.168], whereas results on the concurrent and predictive validity of simulation training are still not available.
The survey in [63.169] divides training simulators into three groups, depending on the type of technology used: mechanical, hybrid, and virtual reality.
Mechanical simulators are boxes where objects or organs are placed and manipulated using surgical instruments. The manipulation is observed through a laparoscope and a screen. The simulated surgical task is observed by an experienced surgeon, who gives feedback to the trainee. Generally, there are no registration processes and the simulator must be reassembled after any training session. The LapTrainer with SimuVision (Simulab Inc., Seattle, USA) is a training system with a simulated laparoscope that consists of a boom-mounted digital camera in an open box trainer.
A hybrid simulator uses a box with objects or organs, as in a mechanical simulator, but in addition the performance of the trainee is monitored by a computer, which gives guidance for task execution and objective feedback based on preplanned metrics. Thanks to this feedback, the assistance and judgement of an experienced surgeon are not strictly necessary. For example, the ProMIS (Haptica Inc., Boston, USA) is a typical hybrid simulator for training basic skills such as suturing and knot tying. Surgical instruments are passed through dedicated ports, and during manipulation in the box the trainee receives the same haptic feedback as in real surgery. In addition, ProMIS analyzes the trainee's performance by tracking the instrument position in 3-D and by measuring the execution time, path length, and smoothness of task execution. Another recent hybrid simulator is the Perk Tutor open-source platform for ultrasound-guided percutaneous needle insertion training [63.170], which has been successfully applied in teaching ultrasound-guided facet joint injections [63.171]. Specific to robotic surgery is the RoSS system [63.172], which simulates the operator console of the da Vinci robotic system. Intuitive Surgical currently markets the dv-Trainer [63.173] simulator for the da Vinci system, comprising the surgeon console from a da Vinci Si system and a computer back end to simulate the patient-side manipulators; its efficacy is being evaluated [63.174].
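Metrics like those such simulators report can be computed directly from the tracked 3-D instrument trajectory; the sketch below shows path length and one common dimensionless jerk-based smoothness score (several normalizations exist in the literature).

```python
import numpy as np

def path_length(traj):
    """Total tip path length of a sampled 3-D trajectory, shape (n, 3)."""
    return np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()

def dimensionless_jerk(traj, fs):
    """Jerk-based smoothness score (lower is smoother), normalized by
    trial duration and path length so the result is dimensionless."""
    jerk = np.diff(traj, n=3, axis=0) * fs**3          # third finite difference
    duration = len(traj) / fs
    integral = np.sum(np.linalg.norm(jerk, axis=1) ** 2) / fs
    return duration**5 / path_length(traj) ** 2 * integral

# A constant-velocity straight line is maximally smooth: zero jerk score.
line = np.column_stack([np.linspace(0.0, 1.0, 101)] + [np.zeros(101)] * 2)
```

Execution time comes directly from the sample count and rate; in practice the raw tracker data would also be low-pass filtered before differencing, since triple differentiation amplifies measurement noise.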
Finally, virtual-reality training systems combine visualization and haptic interfaces to enable surgeons to interact efficiently and naturally with real-time computational models of patient anatomy [63.175]. The development of these systems is inherently a multidisciplinary effort, drawing on real-time computer graphics, high-bandwidth haptic devices, real-time biomechanical modeling of organs and tool–tissue interactions, expertise in training assessment, and human–machine information exchange [63.176]. Research in these areas is closely related to and synergistic with comparable developments in technology and systems for performing real interventions; for example, modeling of organ motion in response to forces is necessary to improve the accuracy of needle placement procedures, and haptic feedback devices must meet similar requirements whether the forces displayed are simulated or measured directly in telesurgery. Finally, as noted earlier, sensorized instruments and real-time imaging are critical sources of data needed to create realistic biomechanical models. The use of the graphics processing unit (GPU) allows the simulation of organs at rates in excess of 10 kHz [63.177], which enables good haptic feedback. Similarly, new physics-based graphical libraries support the development of low-cost virtual-reality simulators that correctly render the interaction of instruments and anatomical parts, and thus can support large training classes [63.178].

The variety of interface devices and virtual-reality laparoscopic simulators is quite wide, and increasing numbers of systems are becoming commercially available. The Phantom interface (SensAble Technologies Inc., Woburn, MA, USA) is used in conjunction with virtual simulators to provide users with realistic haptic feedback. The Xitact LS500 laparoscopy simulator (Xitact S.A., Lausanne, Switzerland) is a modular virtual-reality simulation platform with software for training and assessing performance in laparoscopic tasks. It is an open system including all or some of the following subsystems: laparoscopic tools, a mechanical abdomen, a personal computer (PC) providing the virtual-reality scenario, a haptic interface, a force feedback system, and a tracking system for the tools. Several other systems for virtual-reality simulation exist that exploit the hardware from Xitact or Immersion Medical, Inc. (Gaithersburg, MD, USA) and that are dedicated to specific surgical tasks: LapMentor [63.179], the Surgical Education Platform [63.180], LapSim [63.181], Procedicus MIST [63.182], EndoTower [63.183], the Reachin laparoscopic trainer [63.184], Simendo [63.185], and the VEST system [63.186]. Specifically for training eye surgeons, the surgical simulator EYESi [63.187] uses advanced computer technology and virtual reality to simulate the feel of real eye surgery, making it possible for surgeons at all levels to acquire new skills and perfect their techniques in preparation for surgery on the human eye. Training for dexterity skills in robotic surgery is supported by the new virtual simulator XRON, developed at the University of Verona, which interfaces physics-based simulation with several commercial joysticks and provides evaluation metrics of interactive tasks [63.188].

Making a comparison between these different categories of simulators is not trivial. Box trainers and hybrid simulators require experienced technicians for the setup and involve some organizational logistics, owing to legal and ethical factors related to the storage of freshly explanted organs. The most evident advantage of these simulators is that the tactile response from the manipulated objects is the same as in real surgery, and complicated models of organs and tissue–tool interaction are not required. On the other hand, completely virtual-reality trainers are potentially very flexible and can take advantage of powerful graphical engines, but they are limited by the availability of realistic calibration data for the anatomical and biomechanical models.
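The emphasis on kilohertz-range update rates can be illustrated with a toy virtual-wall model. This is an assumption-laden sketch, not any commercial simulator's control code: the 0.1 kg tool mass, 2000 N/m wall stiffness, and 2 N pressing force are arbitrary choices. The point it demonstrates is standard in haptics: a wall stiffness that renders stably at a 1 kHz servo rate can become unstable when the same loop runs at 50 Hz.

```python
def wall_force(x, v, k=2000.0, b=1.0):
    """Spring-damper virtual wall occupying x >= 0 (units: m, m/s).
    Returns the rendered force (N); zero when the tool is in free space."""
    if x <= 0.0:
        return 0.0
    return -k * x - b * v          # restoring force opposes penetration

def simulate(rate_hz, steps=300, m=0.1, push=2.0):
    """Toy haptic servo loop: a tool of mass m pressed into the wall by a
    constant force `push`, integrated at the given update rate."""
    dt = 1.0 / rate_hz
    x, v, peak = 0.002, 0.0, 0.0   # start 2 mm inside the wall, at rest
    for _ in range(steps):
        f = wall_force(x, v) + push   # wall force sampled once per tick
        v += (f / m) * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak                    # largest excursion from the wall surface

# At 1 kHz the contact stays within a few millimetres of the surface;
# at 50 Hz the same wall stiffness gains energy each tick and the
# interaction becomes unstable.
print(simulate(1000.0) < 0.01, simulate(50.0) > 0.01)
```

The design implication is the one the text draws: simulating deformable organs fast enough to feed a kilohertz force loop is what makes GPU-rate physics (and the cited >10 kHz organ simulation) relevant to haptic fidelity.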
Although demonstrations exist of the ability of simulators to record, objectively score, and hone the psychomotor skills of novice surgeons [63.189], the debate about their value remains open because no multicenter trials have yet been conducted to determine their efficacy. Furthermore, it has been shown that simulation training must be embedded in structured curricula in robotic surgery, in which trainees are also introduced to basic concepts of robotics and, in particular, to the so-called nontechnical skills, e. g., organization, leadership, and communication, which significantly affect the surgical outcome [63.190].
3.9 Other Applications and Research Areas
The research areas described above illustrate major themes within medical robotics, but they are by no means exhaustive. Many important application areas such as otolaryngology [63.191, 63.192, 63.193, 63.194] and radiation therapy have necessarily been excluded for space reasons. For a fuller exploration, readers should consult the further reading suggestions in Sect. 63.4.
4 Conclusion and Future Directions
Medical robotics (and the larger field of computer-integrated interventional medicine) has great potential to revolutionize clinical practice by:
- Exploiting technology to transcend human limitations in treating patients
- Improving the safety, consistency, and overall quality of interventions
- Improving the efficiency and cost-effectiveness of robot-assisted patient care
- Improving training through the use of validated simulators, quantitative data capture and skill assessment methods, and the capture and playback of clinical cases
- Promoting more effective use of information at all levels, both in treating individual patients and in improving treatment processes.
From being the stuff of late-night comedy and science fiction 20 years ago, the field has reached a critical threshold, with clinically useful systems and commercial successes. The scope and number of research programs have grown exponentially in the past 8 years, and this chapter is by no means a comprehensive survey of the field. In fact, such a survey would be practically impossible in less than 100 pages. Many important emerging systems and applications are necessarily left out. Interested readers are urged to refer to the further reading section for more complete treatments. In particular, the survey articles and books listed at the end of this section collectively contain somewhat fuller bibliographies than space permits here.
In the future, we can expect to see continued research in all aspects of technology and system development, with increasing emphasis on clinical applications. As this work proceeds, it is important that researchers remember several basic principles. The first, and arguably most important, principle is that medical robotics is fundamentally a team activity, involving academic researchers, clinicians, and industry. Each of these groups has unique expertise, and success comes from effective, highly interactive partnerships drawing upon this expertise. Building these teams takes a long-term commitment, and the successes in recent years are largely the pay-off from investments in creating these teams. Second, it is important to work on problems with well-defined clinical and technical goals, in which the criteria for measuring success are ultimately related to real advantages in treating patients. In working toward these goals, it is important to have measurable and meaningful milestones and to emphasize rapid iteration with clinician involvement at all stages. Finally, it is essential that all team members recognize the level of commitment that is required to achieve success and that they enjoy what they are doing.
Abbreviations
- 2-D: two-dimensional
- 3-D: three-dimensional
- CAD: computer-aided design
- CAM: computer-aided manufacturing
- CIS: computer-integrated surgery
- CMOS: complementary metal-oxide-semiconductor
- CT: computed tomography
- DC: direct current
- DLR: Deutsches Zentrum für Luft- und Raumfahrt
- GI: gastrointestinal
- GPU: graphics processing unit
- HMCS: human–machine cooperative system
- JHU: Johns Hopkins University
- JPL: Jet Propulsion Laboratory
- LED: light-emitting diode
- MEMS: microelectromechanical system
- MIS: minimally invasive surgery
- MRI: magnetic resonance imaging
- NOTES: natural orifice transluminal endoscopic surgery
- OCT: optical coherence tomography
- OR: operating room
- PC: personal computer
- PET: positron emission tomography
- RAMS: robot-assisted microsurgery
- RCM: remote center of motion
- SMA: shape memory alloy
- SPL: single-port laparoscopy
- THR: total hip replacement
References
R.H. Taylor, L. Joskowicz: Computer-integrated surgery and medical robotics. In: Standard Handbook of Biomedical Engineering and Design, ed. by M. Kutz (McGraw-Hill, New York 2003) pp. 325–353
J.R. Adler, M.J. Murphy, S.D. Chang, S.L. Hancock: Image guided robotic radiosurgery, Neurosurgery 44(6), 1299–1306 (1999)
G.S. Guthart, J.K. Salisbury: The intuitive telesurgery system: Overview and application, IEEE Int. Conf. Robot. Autom. (ICRA), San Francisco (2000) pp. 618–621
R. Taylor, P. Jensen, L. Whitcomb, A. Barnes, R. Kumar, D. Stoianovici, P. Gupta, Z.X. Wang, E. deJuan, L. Kavoussi: Steady-hand robotic system for microsurgical augmentation, Int. J. Robotics Res. 18, 1201–1210 (1999)
D. Rothbaum, J. Roy, G. Hager, R. Taylor, L. Whitcomb: Task performance in stapedotomy: comparison between surgeons of different experience levels, Otolaryngol. Head Neck Surg. 128, 71–77 (2003)
A. Uneri, M. Balicki, J. Handa, P. Gehlbach, R. Taylor, I. Iordachita: New steady-hand eye robot with microforce sensing for vitreoretinal surgery research, Proc. Int. Conf. Biomed. Robotics Biomechatronics (BIOROB), Tokyo (2010) pp. 814–819
A. Amin, J. Grossman, P.J. Wang: Early experience with a computerized robotically controlled catheter system, J. Int. Card. Electrophysiol. 12, 199–202 (2005)
B. Hagag, R. Abovitz, H. Kang, B. Schmidtz, M. Conditt: RIO: Robotic-arm interactive orthopedic system MAKOplasty: User interactive haptic orthopedic robotics. In: Surgical Robotics: Systems Applications and Visions, ed. by J. Rosen, B. Hannaford, R. Satava (Springer, New York 2011) pp. 219–246
R. Konietschke, D. Zerbato, R. Richa, A. Tobergte, P. Poignet, F. Froehlich, D. Botturi, P. Fiorini, G. Hirzinger: Integration of new features for telerobotic surgery into the MiroSurge system, Appl. Bionics Biomech. 8(2), 116 (2011)
L. Mettler, M. Ibrahim, W. Jonat: One year of experience working with the aid of a robotic assistant (the voice-controlled optic holder AESOP) in gynaecological endoscopic surgery, Hum. Reprod. 13, 2748–2750 (1998)
N. Simaan, R. Taylor, P. Flint: High dexterity snake-like robotic slaves for minimally invasive telesurgery of the throat, Proc. Int. Symp. Med. Image Comput. Comput. Interv. (2004) pp. 17–24
N. Suzuki, M. Hayashibe, A. Hattori: Development of a downsized master-slave surgical robot system for intragastric surgery, Proc. ICRA Surg. Robotics Workshop, Barcelona (2005)
K. Ikuta, K. Yamamoto, K. Sasaki: Development of remote microsurgery robot and new surgical procedure for deep and narrow space, Proc. IEEE Conf. Robot. Autom. (ICRA) (2003) pp. 1103–1108
R.J. Webster, J.M. Romano, N.J. Cowan: Mechanics of precurved-tube continuum robots, IEEE Trans. Robotics 25(1), 67–78 (2009)
R.J. Webster, B.A. Jones: Design and kinematic modeling of constant curvature continuum robots: A review, Int. J. Robotics Res. 29(13), 1661–1683 (2010)
J. Ding, R.E. Goldman, K. Xu, P.K. Allen, D.L. Fowler, N. Simaan: Design and coordination kinematics of an insertable robotic effectors platform for single-port access surgery, IEEE/ASME Trans. Mechatron. 99, 1–13 (2012)
R.H. Taylor, H.A. Paul, P. Kazandzides, B.D. Mittelstadt, W. Hanson, J.F. Zuhars, B. Williamson, B.L. Musits, E. Glassman, W.L. Bargar: An image-directed robotic system for precise orthopaedic surgery, IEEE Trans. Robot. Autom. 10, 261–275 (1994)
P. Kazanzides, B.D. Mittelstadt, B.L. Musits, W.L. Bargar, J.F. Zuhars, B. Williamson, P.W. Cain, E.J. Carbone: An integrated system for cementless hip replacement, IEEE Eng. Med. Biol. 14, 307–313 (1995)
Q. Li, L. Zamorano, A. Pandya, R. Perez, J. Gong, F. Diaz: The application accuracy of the NeuroMate robot – A quantitative comparison with frameless and frame-based surgical localization systems, Comput. Assist. Surg. 7(2), 90–98 (2002)
P. Cinquin, J. Troccaz, J. Demongeot, S. Lavallee, G. Champleboux, L. Brunie, F. Leitner, P. Sautot, B. Mazier, A. Perez, M. Djaid, T. Fortin, M. Chenic, A. Chapel: IGOR: image guided operating robot, Innov. Technonogie Biol. Med. 13, 374–394 (1992)
H. Reichenspurner, R. Demaino, M. Mack, D. Boehm, H. Gulbins, C. Detter, B. Meiser, R. Ellgass, B. Reichart: Use of the voice controlled and computer-assisted surgical system Zeus for endoscopic coronary artery surgery bypass grafting, J. Thorac. Cardiovasc. Surg. 118, 11–16 (1999)
Y.S. Kwoh, J. Hou, E.A. Jonckheere, S. Hayati: A robot with improved absolute positioning accuracy for CT guided stereotactic brain surgery, IEEE Trans. Biomed. Eng. 35, 153–161 (1988)
J.M. Sackier, Y. Wang: Robotically assisted laparoscopic surgery. From concept to development, Surg. Endosc. 8, 63–66 (1994)
D. Stoianovici, L. Whitcomb, J. Anderson, R. Taylor, L. Kavoussi: A modular surgical robotic system for image-guided percutaneous procedures, Proc. Med. Image Comput. Comput. Interv. (MICCAI), Cambridge (1998) pp. 404–410
R.H. Taylor, J. Funda, B. Eldridge, K. Gruben, D. LaRose, S. Gomory, M. Talamini, L.R. Kavoussi, J. Anderson: A telerobotic assistant for laparoscopic surgery, IEEE Eng. Med. Biol. Mag. 14, 279–287 (1995)
M. Mitsuishi, T. Watanabe, H. Nakanishi, T. Hori, H. Watanabe, B. Kramer: A telemicrosurgery system with colocated view and operation points and rotational-force-feedback-free master manipulator, Proc. 2nd Int. Symp. Med. Robot. Comput. Assist. Surg., Baltimore (1995) pp. 111–118
K. Ikuta, M. Tsukamoto, S. Hirose: Shape memory alloy servo actuator system with electric resistance feedback and application for active endoscope. In: Computer-Integrated Surgery, ed. by R.H. Taylor, S. Lavallee, G. Burdea, R. Mosges (MIT Press, Cambridge 1996) pp. 277–282
A. Menciassi, C. Stefanini, S. Gorini, G. Pernorio, P. Dario, B. Kim, J.O. Park: Legged locomotion in the gastrointestinal tract problem analysis and preliminary technological activity, Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS) (2004) pp. 937–942
K. Ikuta, H. Ichikawa, K. Suzuki, D. Yajima: Multi-degree of freedom hydraulic pressure driven safety active catheter, Proc. IEEE Int. Conf. Robotics Autom. (ICRA), Orlando (2006) pp. 4161–4166
S. Guo, J. Sawamoto, Q. Pan: A novel type of microrobot for biomedical application, Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS) (2005) pp. 1047–1052
N. Patronik, C. Riviere, S.E. Qarra, M.A. Zenati: The heartlander: A novel epicardial crawling robot for myocardial injections, Proc. 19th Int. Congr. Comput. Assist. Radiol. Surg. (2005) pp. 735–739
P. Dario, B. Hannaford, A. Menciassi: Smart surgical tools and augmenting devices, IEEE Trans. Robotics Autom. 19, 782–792 (2003)
L. Phee, D. Accoto, A. Menciassi, C. Stefanini, M.C. Carrozza, P. Dario: Analysis and development of locomotion devices for the gastrointestinal tract, IEEE Trans. Biomed. Eng. 49, 613–616 (2002)
N. Kalloo, V.K. Singh, S.B. Jagannath, H. Niiyama, S.L. Hill, C.A. Vaughn, C.A. Magee, S.V. Kantsevoy: Flexible transgastric peritoneoscopy: A novel approach to diagnostic and therapeutic interventions in the peritoneal cavity, Gastrointest. Endosc. 60, 114–117 (2004)
S.N. Shaikh, C.C. Thompson: Natural orifice translumenal surgery: Flexible platform review, World J. Gastrointest. Surg. 2(6), 210–216 (2010)
M. Neto, A. Ramos, J. Campos: Single port laparoscopic access surgery, Tech. Gastrointest. Endosc. 11, 84–93 (2009)
Karl Storz Endoskope - Anubis: https://www.karlstorz.com/cps/rde/xchg/SID-8BAF6233-DF3FFEB4/karlstorz-en/hs.xsl/8872.htm (2012)
S.J. Phee, K.Y. Ho, D. Lomanto, S.C. Low, V.A. Huynh, A.P. Kencana, K. Yang, Z.L. Sun, S.C. Chung: Natural orifice transgastric endoscopic wedge hepatic resection in an experimental model using an intuitively controlled master and slave transluminal endoscopic robot (MASTER), Surg. Endosc. 24, 2293–2298 (2010)
M. Piccigallo, U. Scarfogliero, C. Quaglia, G. Petroni, P. Valdastri, A. Menciassi, P. Dario: Design of a novel bimanual robotic system for single-port laparoscopy, IEEE/ASME Trans. Mechatron. 15(6), 871–878 (2012)
D.J. Scott, S.J. Tang, R. Fernandez, R. Bergs, M.T. Goova, I. Zeltser, F.J. Kehdy, J.A. Cadeddu: Completely transvaginal NOTES cholecystectomy using magnetically anchored instruments, Surg. Endosc. 21, 2308–2316 (2007)
S. Tognarelli, M. Salerno, G. Tortora, C. Quaglia, P. Dario, A. Menciassi: An endoluminal robotic platform for Minimally Invasive Surgery, Proc. IEEE Int. Conf. Biomed. Robotics Biomechatronics (BIOROB), Pisa (2012) pp. 7–12
M. Simi, R. Pickens, A. Menciassi, S.D. Herrell, P. Valdastri: Fine tilt tuning of a laparoscopic camera by local magnetic actuation: Two-Port Nephrectomy Experience on Human Cadavers, Surg. Innov. 20(4), 385–394 (2013)
T. Leuth: MIMED: 3D printing - Rapid technologies, http://www.mimed.mw.tum.de/research/3d-printing-rapid-technologies/ (2013)
S. G. O’Reilly: Medical manufacturing technology: 3D printing and medical-device development, Medical Design, http://medicaldesign.com/Medical-Manufacturing-Technology-3D-printing-medical-device-development/index.html (2012)
C. Plaskos, P. Cinquin, S. Lavallee, A.J. Hodgson: Praxiteles: A miniature bone-mounted robot for minimal access total knee arthroplasty, Int. J. Med. Robot. Comput. Assist. Surg. 1, 67–79 (2005)
P.J. Berkelman, L. Whitcomb, R. Taylor, P. Jensen: A miniature microsurgical instrument tip force sensor for enhanced force feedback during robot-assisted manipulation, IEEE Trans. Robot. Autom. 19, 917–922 (2003)
D.P. Devito, L. Kaplan, R. Dietl, M. Pfeiffer, D. Horne, B. Silberstein, M. Hardenbrook, G. Kiriyanthan, Y. Barzilay, A. Bruskin, D. Sackerer, V. Alexandrovsky, C. Stüer, R. Burger, J. Maeurer, D.G. Gordon, R. Schoenmayr, A. Friedlander, N. Knoller, K. Schmieder, I. Pechlivanis, I.-S. Kim, B. Meyer, M. Shoham: Clinical acceptance and accuracy assessment of spinal implants guided with SpineAssist surgical robot -- Retrospective study, Spine 35(24), 2109–2115 (2010)
K. Chinzei, R. Gassert, E. Burdet: Workshop on MRI/fMRI compatible robot technology – A critical tool for neuroscience and image guided intervention, Proc. IEEE Int. Conf. Robotics Autom. (ICRA), Orlando (2006)
M. Jakopec, S.J. Harris, F.R.Y. Baena, P. Gomes, J. Cobb, B.L. Davies: The first clinical application of a hands-on robotic knee surgery system, Comput. Aided Surg. 6, 329–339 (2001)
D.F. Louw, T. Fielding, P.B. McBeth, D. Gregoris, P. Newhook, G.R. Sutherland: Surgical robotics: A review and neurosurgical prototype development, Neurosurgery 54, 525–537 (2004)
J. Marescaux, J. Leroy, M. Gagner, F. Rubino, D. Mutter, M. Vix, S.E. Butner, M.K. Smith: Transatlantic robot-assisted telesurgery, Nature 413, 379–380 (2001)
M. Anvari, T. Broderick, H. Stein, T. Chapman, M. Ghodoussi, D.W. Birch, C. Mckinley, P. Trudeau, S. Dutta, C.H. Goldsmith: The impact of latency on surgical precision and task completion during robotic-assisted remote telepresence surgery, Comput. Aided Surg. 10, 93–99 (2005)
M. Li, M. Ishii, R.H. Taylor: Spatial motion constraints in medical robot using virtual fixtures generated by anatomy, IEEE Trans. Robotics 23, 4–19 (2007)
S. Park, R.D. Howe, D.F. Torchiana: Virtual fixtures for robotic cardiac surgery, Proc. 4th Int. Conf. Med. Image Comput. Comput. Interv. (2001)
B. Davies: A discussion of safety issues for medical robots. In: Computer-Integrated Surgery, ed. by R. Taylor, S. Lavallée, G. Burdea, R. Moesges (MIT Press, Cambridge 1996) pp. 287–296
P. Varley: Techniques of development of safety-related software in surgical robots, IEEE Trans. Inform. Technol. Biomed. 3, 261–267 (1999)
R. Muradore, D. Bresolin, L. Geretti, P. Fiorini, T. Villa: Robotic surgery – Formal verification of plans, IEEE Robotics Autom. Mag. 18(3), 24–32 (2011)
J.B. Maintz, M.A. Viergever: A survey of medical image registration, Med. Image Anal. 2, 1–37 (1998)
S. Lavallee: Registration for computer-integrated surgery: Methodology, state of the art. In: Computer-Integrated Surgery, ed. by R.H. Taylor, S. Lavallee, G. Burdea, R. Mosges (MIT Press, Cambridge 1996) pp. 77–98
P.J. Besl, N.D. McKay: A method for registration of 3-D shapes, IEEE Trans. Pattern Anal. Mach. Intell. 14, 239–256 (1992)
R.J. Maciunas: Interactive Image-Guided Neurosurgery (AANS, Park Ridge 1993)
G. Fichtinger, A. Degeut, K. Masamune, E. Balogh, G. Fischer, H. Mathieu, R.H. Taylor, S. Zinreich, L.M. Fayad: Image overlay guidance for needle insertion on CT scanner, IEEE Trans. Biomed. Eng. 52, 1415–1424 (2005)
T. Akinbiyi, C.E. Reiley, S. Saha, D. Burschka, C.J. Hasser, D.D. Yuh, A.M. Okamura: Dynamic augmented reality for sensory substitution in robot-assisted surgical systems, Proc. 28th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (2006) pp. 567–570
J. Leven, D. Burschka, R. Kumar, G. Zhang, S. Blumenkranz, X. Dai, M. Awad, G. Hager, M. Marohn, M. Choti, C. Hasser, R.H. Taylor: DaVinci canvas: A telerobotic surgical system with integrated, robot-assisted, laparoscopic ultrasound capability, Proc. Med. Image Comput. Comput. Interv. Palm Springs (2005)
R.H. Taylor, H.A. Paul, C.B. Cutting, B. Mittelstadt, W. Hanson, P. Kazanzides, B. Musits, Y.Y. Kim, A. Kalvin, B. Haddad, D. Khoramabadi, D. Larose: Augmentation of human precision in computer-integrated surgery, Innov. Technol. Biol. Med. 13, 450–459 (1992)
J. Troccaz, M. Peshkin, B. Davies: The use of localizers, robots and synergistic devices in CAS, Lect. Notes Comput. Sci. 1205, 727–736 (1997)
G. Brisson, T. Kanade, A.M. DiGioia, B. Jaramaz: Precision freehand sculpting of bone, Proc. Med. Image Comput. Comput.-Assist. Interv. (MICCAI) (2004) pp. 105–112
R.A. Beasley: Medical Robots: Current systems and research directions, J. Robotics 2012, 401613 (2012)
S. Kuang, K.S. Leung, T. Wang, L. Hu, E. Chui, W. Liu, Y. Wang: A novel passive/active hybrid robot for orthopaedic trauma surgery, Int. J. Med. Robotics Comput. Assist. Surg. 8, 458–467 (2012)
S. Solomon, A. Patriciu, R.H. Taylor, L. Kavoussi, D. Stoianovici: CT guided robotic needle biopsy: A precise sampling method minimizing radiation exposure, Radiology 225, 277–282 (2002)
K. Masamune, G. Fichtinger, A. Patriciu, R. Susil, R. Taylor, L. Kavoussi, J. Anderson, I. Sakuma, T. Dohi, D. Stoianovici: System for robotically assisted percutaneous procedures with computed tomography guidance, J. Comput. Assist. Surg. 6, 370–383 (2001)
A. Krieger, R.C. Susil, C. Menard, J.A. Coleman, G. Fichtinger, E. Atalar, L.L. Whitcomb: Design of a novel MRI compatible manipulator for image guided prostate intervention, IEEE Trans. Biomed. Eng. 52, 306–313 (2005)
T. Ungi, P. Abolmaesumi, R. Jalal, M. Welch, I. Ayukawa, S. Nagpal, A. Lasso, M. Jaeger, D. Borschneck, G. Fichtinger, P. Mousavi: Spinal needle navigation by tracked ultrasound snapshots, IEEE Trans. Biomed. Eng. 59(10), 2766–2772 (2012)
D. DallAlba, B. Maris, P. Fiorini: A compact navigation system for free hand needle placement in percutaneous procedures, Proc. IEEE/RSI Int. Conf. Intell. Robots Syst. (IROS), Vilamoura (2012) pp. 2013–2018
G.A. Krombach, T. Schmitz-Rode, B.B. Wein, J. Meyer, J.E. Wildberger, K. Brabant, R.W. Gunther: Potential of a new laser target system for percutaneous CT-guided nerve blocks: Technical note, Neuroradiology 42, 838–841 (2000)
M. Rosenthal, A. State, J. Lee, G. Hirota, J. Ackerman, K. Keller, E.D. Pisano, M. Jiroutek, K. Muller, H. Fuchs: Augmented reality guidance for needle biopsies: A randomized, controlled trial in phantoms, Proc. 4th Int. Conf. Med. Image Comput. Comput. Interv. (2001) pp. 240–248
G. Stetten, V. Chib: Overlaying ultrasound images on direct vision, J. Ultrasound Med. 20, 235–240 (2001)
H.F. Reinhardt: Neuronavigation: A ten year review. In: Computer-Integrated Surgery, ed. by R. Taylor, S. Lavallée, G. Burdea, R. Moegses (MIT Press, Cambridge 1996) pp. 329–342
E. Hempel, H. Fischer, L. Gumb, T. Hohn, H. Krause, U. Voges, H. Breitwieser, B. Gutmann, J. Durke, M. Bock, A. Melzer: An MRI-compatible surgical robot for precise radiological interventions, Comput. Aided Surg. 8, 180–191 (2003)
Z. Wei, G. Wan, L. Gardi, G. Mills, D. Downey, A. Fenster: Robot-assisted 3D-TRUS guided prostate brachytherapy: system integration and validation, Med. Phys. 31, 539–548 (2004)
E. Boctor, G. Fischer, M. Choti, G. Fichtinger, R.H. Taylor: Dual-armed robotic system for intraoperative ultrasound guided hepatic ablative therapy: A prospective study, Proc. IEEE Int. Conf. Robot. Autom. (ICRA) (2004) pp. 377–382
J. Hong, T. Dohi, M. Hashizume, K. Konishi, N. Hata: An ultrasound-driven needle-insertion robot for percutaneous cholecystostomy, Phys. Med. Biol. 49, 441–455 (2004)
G. Megali, O. Tonet, C. Stefanini, M. Boccadoro, V. Papaspyropoulis, L. Angelini, P. Dario: A computer-assisted robotic ultrasound-guided biopsy system for video-assisted surgery, Proc. Med. Image Comput. Comput. Interv. (MICCAI), Utrecht (2001) pp. 343–350
R.J. Webster III, J.S. Kim, N.J. Cowan, G.S. Chirikjian, A.M. Okamura: Nonholonomic modeling of needle steering, Int. J. Robotics Res. 25(5-6), 509–525 (2006)
J.A. Engh, D.S. Minhas, D. Kondziolka, C.N. Riviere: Percutaneous intracerebral navigation by duty-cycled spinning of flexible bevel-tipped needles, Neurosurgery 67(4), 1117–1122 (2010)
N. Cowan, K. Goldberg, G. Chirikjian, G. Fichtinger, R. Alterovitz, K. Reed, V. Kallem, W. Park, S. Misra, A.M. Okamura: Robotic needle steering: Design, modeling, planning, and image guidance. In: Surgical Robotics – Systems Applications and Visions, ed. by J. Rosen, B. Hannaford, R.M. Satava (Springer, New York 2011)
S. Okazawa, R. Ebrahimi, J. Chuang, S. Salcudean, R. Rohling: Hand-held steerable needle device, IEEE/ASME Trans. Mechatron. 10, 285–296 (2005)
P.E. Dupont, J. Lock, B. Itkowitz, E. Butler: Design and control of concentric-tube robots, IEEE Trans. Robotics 26(2), 209–225 (2010)
G.H. Ballantyne: Robotic surgery, telerobotic surgery, telepresence and telementoring, Surg. Endosc. 16, 1389 (2002)
A. Rovetta, R. Sala, R. Cosmi, X. Wen, S. Milassesi, D. Sabbadini, A. Togno, L. Angelini, A.K. Bejczy: A new telerobotic application: Remote laparoscopic surgery using satellites and optical fiber networks for data exchange, Int. J. Robotics Res. 15(3), 267–279 (1996)
J. Arata, H. Takahashi, S. Yasunaka, K. Onda, K. Tanaka, N. Sugita, K. Tanoue, K. Konishi, S. Ieiri, Y. Fujino, Y. Ueda, H. Fujimoto, M. Mitsuishi, M. Hashizume: Impact of network time-delay and force feedback on tele-surgery, Int. J. Comput. Assist. Radiol. Surg. 3(3–4), 371–378 (2008)
J.A. McEwen, C.R. Bussani, G.F. Auchinleck, M.J. Breault: Development and initial clinical evaluation of pre-robotic and robotic retraction systems for surgery, Proc. 2nd Workshop Med. Health Care Robotics, Newcastle (1989) pp. 91–101
P. Berkelman, P. Cinquin, J. Troccaz, J. Ayoubi, C. Letoublon, F. Bouchard: A compact, compliant laparoscopic endoscope manipulator, Proc. IEEE Int. Conf. Robotics Autom. (ICRA) (2002) pp. 1870–1875
K. Olds, A. Hillel, J. Kriss, A. Nair, H. Kim, E. Cha, M. Curry, L. Akst, R. Yung, J. Richmon, R. Taylor: A robotic assistant for trans-oral surgery: The robotic endo-laryngeal flexible (Robo-ELF) scope, J. Robotic. Surg. 6(1), 13–18 (2012)
G.R. Sutherland, S. Lama, L.S. Gan, S. Wolfsberger, K. Zareinia: Merging machines with microsurgery: Clinical experience with neuroArm, J. Neurosurg. 118, 521–529 (2013)
A. Nishikawa, T. Hosoi, K. Koara, T. Dohi: FAce MOUSe: A novel human-machine interface for controlling the position of a laparoscope, IEEE Trans. Robot. Autom. 19, 825–841 (2003)
A. Krupa, J. Gangloff, C. Doignon, M.F. deMathelin, G. Morel, J. Leroy, L. Soler, J. Marescaux: Autonomous 3-D positioning of surgical instruments in robotized laparoscopic surgery using visual servoing, IEEE Trans. Robotics Autom. 19, 842–853 (2003)
P. Green, R. Satava, J. Hill, I. Simon: Telepresence: Advanced teleoperator technology of minimally invasive surgery (abstract), Surg. Endosc. 6, 90 (1992)
T. Ahlering, D. Woo, L. Eichel, D. Lee, R. Edwards, D. Skarecky: Robot-assisted versus open radical prostatectomy: A comparison of one surgeon’s outcomes, Urology 63, 819–822 (2004)
J.D. Wright, C.V. Ananth, S.N. Lewin, W.M. Burke, Y.-S. Lu, A.I. Neugut, T.J. Herzog, D.L. Hershman: Robotically assisted vs laparoscopic hysterectomy among women with benign gynecologic disease, J. Am. Med. Assoc. 309(7), 689–698 (2013)
J. Rosen, J.D. Brown, L. Chang, M. Barreca, M. Sinanan, B. Hannaford: The BlueDRAGON – a system for measuring the kinematics and the dynamics of minimally invasive surgical tools in-vivo, Proc. IEEE Int. Conf. Robotics Autom. (ICRA) (2002) pp. 1876–1881
H.C. Lin, I. Shafran, T.E. Murphy, A.M. Okamura, D.D. Yuh, G.D. Hager: Automatic detection and segmentation of robot-assisted surgical motions, Proc. Med. Image Comput. Comput.-Assist. Interv. (MICCAI) (2005) pp. 802–810
H. Mayer, F. Gomez, D. Wierstra, I. Nagy, A. Knoll, J. Schmidhuber: A system for robotic heart surgery that learns to tie knots using recurrent neural networks, Proc. Int. Conf. Intell. Robots Syst. (IROS) (2006) pp. 543–548
C.E. Reiley, H.C. Lin, D.D. Yuh, G.D. Hager: A review of methods for objective surgical skill evaluation, Surg. Endosc. 25(2), 356–366 (2011)
N. Padoy, G. Hager: Human-machine collaborative surgery using learned models, Proc. IEEE Int. Conf. Robotics Autom. (ICRA) (2011) pp. 5285–5292
L.-M. Su, B.P. Vagvolgyi, R. Agarwal, C.E. Reiley, R.H. Taylor, G.D. Hager: Augmented reality during robot-assisted laparoscopic partial nephrectomy: Toward real-time 3D-CT to stereoscopic video registration, Urology 73(4), 896–900 (2009)
F. Volonté, N.C. Buchs, F. Pugin, J. Spaltenstein, B. Schiltz, M. Jung, M. Hagen, O. Ratib, P. Morel: Augmented reality to the rescue of the minimally invasive surgeon. The usefulness of the interposition of stereoscopic images in the Da Vinci robotic console, Int. J. Med. Robotics Comput. Assist. Surg. 3(9), 34–38 (2013)
D. Cohen, E. Mayer, D. Chen, A. Anstee, J. Vale, G.-Z. Yang, A. Darzi, P. Edwards: Augmented reality image guidance in minimally invasive prostatectomy, Lect. Notes Comput. Sci. 6367, 101–110 (2010)
Y. Ida, N. Sugita, T. Ueta, Y. Tamaki, K. Tanimoto, M. Mitsuishi: Microsurgical robotic system for vitreoretinal surgery, Int. J. Comput. Assist. Radiol. Surg. 7(1), 27–34 (2012)
R. Taylor, J. Kang, I. Iordachita, G. Hager, P. Kazanzides, C. Riviere, E. Gower, R. Richa, M. Balicki, X. He, X. Liu, K. Olds, R. Sznitman, B. Vagvolgyi, P. Gehlbach, J. Handa: Recent work toward a microsurgical assistant for retinal surgery, Proc. Hamlyn Symp. Med. Robotics, London (2011) pp. 3–4
B. Becker, S. Yang, R. MacLachlan, C. Riviere: Towards vision-based control of a handheld micromanipulator for retinal cannulation in an eyeball phantom, Proc. IEEE RAS EMBS Int. Conf. Biomed. Robotics Biomechatronics (BIOROB) (2012) pp. 44–49
M. Balicki, T. Xia, M.Y. Jung, A. Deguet, B. Vagvolgyi, P. Kazanzides, R. Taylor: Prototyping a hybrid cooperative and tele-robotic surgical system for retinal microsurgery, Proc. MICCAI Workshop Syst. Archit. Comput. Assist. Interv., Toronto (2011), Insight, http://www.midasjournal.org/browse/publication/815
C. Bergeles, B.E. Kratochvil, B.J. Nelson: Visually servoing magnetic intraocular microdevices, IEEE Trans. Robotics 28(4), 798–809 (2012)
N. Simaan, J.T. Handa, R.H. Taylor: A system for macro-micro distal dexterity enhancement in microsurgery of the eye, US Patent 20110125165 A1 (2011)
X. He, D. Roppenecker, D. Gierlach, M. Balicki, K. Olds, J. Handa, P. Gehlbach, R.H. Taylor, I. Iordachita: Toward a clinically applicable steady-hand eye robot for vitreoretinal surgery, Proc. ASME Int. Mech. Eng. Congr., Houston (2012) p. 88384
J.K. Niparko, W.W. Chien, I. Iordachita, J.U. Kang, R.H. Taylor: Robot-assisted, sensor-guided cochlear implant electrode insertion (Abstract in Proceedings), Proc. 12th Int. Conf. Cochlear Implant. Other Implant. Auditory Technol., Baltimore (2012)
D. Schurzig, R.F. Labadie, A. Hussong, T.S. Rau, R.J. Webster: Design of a tool integrating force sensing with automated insertion in cochlear implantation, IEEE/ASME Trans. Mechatron. 17(2), 381–389 (2012)
B. Bell, C. Stieger, N. Gerber, A. Arnold, C. Nauer, V. Hamacher, M. Kompis, L. Nolte, M. Caversaccio, S. Weber: A self-developed and constructed robot for minimally invasive cochlear implantation, Acta Oto-Laryngol. 132(4), 355–360 (2012)
J.R. Clark, L. Leon, F.M. Warren, J.J. Abbott: Magnetic guidance of cochlear implants: Proof-of-concept and initial feasibility study, ASME J. Med. Devices 6(4), 35002 (2012)
J. Schiff, P. Li, M. Goldstein: Robotic microsurgical vasovasostomy and vasoepididymostomy: A prospective randomized study in a rat model, J. Urol. 171, 1720–1725 (2004)
S. Parekattil, M. Cohen: Robotic microsurgery. In: Robotic Urologic Surgery, ed. by V.R. Patel (Springer, London 2012) pp. 461–470
P.S. Jensen, K.W. Grace, R. Attariwala, J.E. Colgate, M.R. Glucksberg: Toward robot assisted vascular microsurgery in the retina, Graefes Arch. Clin. Exp. Ophthalmol. 235, 696–701 (1997)
H. Das, H. Zak, J. Johnson, J. Crouch, D. Frambaugh: Evaluation of a telerobotic system to assist surgeons in microsurgery, Comput. Aided Surg. 4, 15–25 (1999)
K. Hongo, S. Kobayashi, Y. Kakizawa, J.I. Koyama, T. Goto, H. Okudera, K. Kan, M.G. Fujie, H. Iseki, K. Takakura: NeuRobot: Telecontrolled micromanipulator system for minimally invasive microneurosurgery – Preliminary results, Neurosurgery 51, 985–988 (2002)
A. Kapoor, R. Kumar, R. Taylor: Simple biomanipulation tasks with a steady hand cooperative manipulator, Proc. 6th Int. Conf. Med. Image Comput. Comput. Assist. Interv. (MICCAI), Montreal (2003) pp. 141–148
R. MacLachlan, B. Becker, J. Cuevas-Tabarés, G. Podnar, L. Lobes, C. Riviere: Micron: An actively stabilized handheld tool for microsurgery, IEEE Trans. Robotics 28(1), 195–212 (2012)
B. Becker, R. MacLachlan, L. Lobes, G. Hager, C. Riviere: Vision-based control of a handheld surgical micromanipulator with virtual fixtures, IEEE Trans. Robotics 29(3), 674–683 (2013)
C. Riviere, J. Gangloff, M. de Mathelin: Robotic compensation of biological motion to enhance surgical accuracy, Proceedings IEEE 94(9), 1705–1716 (2006)
W. Armstrong, A. Karamzadeh, R. Crumley, T. Kelley, R. Jackson, B. Wong: A novel laryngoscope instrument stabilizer for operative microlaryngoscopy, Otolaryngol. Head Neck Surg. 132, 471–477 (2004)
L. Ascari, C. Stefanini, A. Menciassi, S. Sahoo, P. Rabischong, P. Dario: A new active microendoscope for exploring the sub-arachnoid space in the spinal cord, Proc. IEEE Conf. Robotics Autom. (ICRA) (2003) pp. 2657–2662
U. Bertocchi, L. Ascari, C. Stefanini, C. Laschi, P. Dario: Human-robot shared control for robot-assisted endoscopy of the spinal cord, Proc. IEEE/RAS/EMBS Int. Conf. Biomed. Robot. Biomechatronics (2006) pp. 543–548
A. Cuschieri, G. Buess, J. Perissat: Operative Manual of Endoscopic Surgery (Springer, Berlin, Heidelberg 1992)
Stereotaxis: The epoch (TM) solution: The heart of innovation, http://www.stereotaxis.com/ (2013)
Endosense: TactiCath Quartz – The first contact force ablation catheter, http://neomed.net/portfolio/endosense_sa (2013)
K.J. Rebello: Applications of MEMS in surgery, Proceedings IEEE 92(1), 43–55 (2004)
I. Kassim, W.S. Ng, G. Feng, S.J. Phee: Review of locomotion techniques for robotic colonoscopy, Proc. Int. Conf. Robotics Autom. (ICRA), Taipei (2003) pp. 1086–1091
Endotics: The endotics system, http://www.endotics.com/en/product (2012)
Gi-View: The AeroScope, http://www.giview.com/ (2013)
Invendo-Medical: Gentle colonoscopy with invendoscopy, http://www.invendo-medical.com/ (2013)
A. Loeve, P. Breeveld, J. Dankelman: Scopes too flexible … and too stiff, IEEE Pulse 1(3), 26–41 (2010)
G. Ciuti, A. Menciassi, P. Dario: Capsule endoscopy: From current achievements to open challenges, IEEE Rev. Biomed. Eng. 4, 59–72 (2011)
C. Stefanini, A. Menciassi, P. Dario: Modeling and experiments on a legged microrobot locomoting in a tubular, compliant and slippery environment, Int. J. Robotics Res. 25, 551–560 (2006)
G. Ciuti, P. Valdastri, A. Menciassi, P. Dario: Robotic magnetic steering and locomotion of capsule endoscope for diagnostic and surgical endoluminal procedures, Robotica 28, 199–207 (2010)
J.F. Rey, H. Ogata, N. Hosoe, K. Ohtsuka, N. Ogata, K. Ikeda, H. Aihara, I. Pangtay, T. Hibi, S. Kudo, H. Tajiri: Feasibility of stomach exploration with a guided capsule endoscope, Endoscopy 42, 541–545 (2010)
R.C. Ritter, M.S. Grady, M.A. Howard, G.T. Gilles: Magnetic stereotaxis: Computer assisted, image guided remote movement of implants in the brain. In: Computer Integrated Surgery, ed. by R.H. Taylor, S. Lavallee, G.C. Burdea, R. Mosges (MIT Press, Cambridge 1995) pp. 363–369
C.R. Wagner, N. Stylopoulos, P.G. Jackson, R.D. Howe: The benefit of force feedback in surgery: Examination of blunt dissection, Presence: Teleoperators Virtual Environ. 16(3), 252–262 (2007)
P.F. Hokayem, M.W. Spong: Bilateral teleoperation: A historical survey, Automatica 42, 2035–2057 (2006)
M. Quirini, A. Menciassi, S. Scapellato, C. Stefanini, P. Dario: Design and fabrication of a motor legged capsule for the active exploration of the gastrointestinal tract, IEEE/ASME Trans. Mechatron. 13(1), 169–179 (2008)
A.M. Okamura: Methods for haptic feedback in teleoperated robot-assisted surgery, Ind. Robot 31, 499–508 (2004)
M.J. Massimino: Improved force perception through sensory substitution, Contr. Eng. Pract. 3, 215–222 (1995)
P. Gupta, P. Jensen, E. de Juan: Quantification of tactile sensation during retinal microsurgery, Proc. 2nd Int. Conf. Med. Image Comput. Comput.-Assist. Interv. (MICCAI), Cambridge (1999)
P.K. Poulose, M.F. Kutka, M. Mendoza-Sagaon, A.C. Barnes, C. Yang, R.H. Taylor, M.A. Talamini: Human versus robotic organ retraction during laparoscopic nissen fundoplication, Surg. Endosc. 13, 461–465 (1999)
J. Rosen, B. Hannaford, M. MacFarlane, M. Sinanan: Force controlled and teleoperated endoscopic grasper for minimally invasive surgery – Experimental performance evaluation, IEEE Trans. Biomed. Eng. 46, 1212–1221 (1999)
M. Siemionow, K. Ozer, W. Siemionow, G. Lister: Robotic assistance in microsurgery, J. Reconstr. Microsurg. 16(8), 643–649 (2000)
P.D.L. Roux, H. Das, S. Esquenazi, P.J. Kelly: Robot-assisted microsurgery: A feasibility study in the rat, Neurosurgery 48(3), 584–589 (2001)
A. Menciassi, A. Eisinberg, M.C. Carrozza, P. Dario: Force sensing microinstrument for measuring tissue properties and pulse in microsurgery, IEEE/ASME Trans. Mechatron. 8, 10–17 (2003)
S. Sunshine, M. Balicki, X. He, K. Olds, J. Kang, P. Gehlbach, R. Taylor, I. Iordachita, J. Handa: A force-sensing microsurgical instrument that detects forces below human tactile sensation, Retina 33(1), 200–206 (2013)
I. Iordachita, Z. Sun, M. Balicki, J.U. Kang, S.J. Phee, J. Handa, P. Gehlbach, R. Taylor: A sub-millimetric, 0.25 mN resolution fully integrated fiber-optic force-sensing tool for retinal microsurgery, Int. J. Comput. Assist. Radiol. Surg. 4(4), 383–390 (2009)
I. Kuru, B. Gonenc, M. Balicki, J. Handa, P. Gehlbach, R.H. Taylor, I. Iordachita: Force sensing micro-forceps for robot assisted retinal surgery, Proc. Eng. Med. Biol., San Diego (2012) pp. 1401–1404
X. Liu, I.I. Iordachita, X. He, R.H. Taylor, J.U. Kang: Miniature fiber-optic force sensor for vitreoretinal microsurgery based on low-coherence Fabry-Perot interferometry, Biomed. Opt. Express 3, 1062–1076 (2012)
M. Balicki, J.-H. Han, I. Iordachita, P. Gehlbach, J. Handa, R.H. Taylor, J. Kang: Single fiber optical coherence tomography microsurgical instruments for computer and robot-assisted retinal surgery, Proc. Med. Image Comput. Comput.-Assist. Interv. (MICCAI), London (2009) pp. 108–115
M. Balicki, R. Richa, B. Vagvolgyi, J. Handa, P. Gehlbach, J. Kang, P. Kazanzides, R. Taylor: Interactive OCT annotation and visualization system for vitreoretinal surgery, Proc. MICCAI Workshop Augment. Environ. Comput. Interv. (AE-CAI), Nice (2012)
X. Liu, Y. Huang, J.U. Kang: Distortion-free freehand-scanning OCT implemented with real-time scanning speed variance correction, Opt. Express 20(15), 16567–16583 (2012)
N. Cutler, M. Balicki, M. Finkelstein, J. Wang, P. Gehlbach, J. McGready, I. Iordachita, R. Taylor, J. Handa: Auditory force feedback substitution improves surgical precision during simulated ophthalmic surgery, Investig. Ophthalmol. Vis. Sci. 4(2), 1316–1324 (2013)
R.D. Howe, W.J. Peine, D.A. Kontarinis, J.S. Son: Remote palpation technology, IEEE Eng. Med. Biol. 14(3), 318–323 (1995)
G. Fischer, T. Akinbiyi, S. Saha, J. Zand, M. Talamini, M. Marohn, R. Taylor: Ischemia and force sensing surgical instruments for augmenting available surgeon information, Proc. IEEE Int. Conf. Biomed. Robot. Biomechatronics (BioRob), Pisa (2006)
F.J. Carter, M.P. Schijven, R. Aggarwal, T. Grantcharov, N.K. Francis, G.B. Hanna, J.J. Jakimowicz: Consensus guidelines for validation of virtual reality surgical simulators, Surg. Endosc. 19, 1523–1532 (2005)
A. Gavazzi, W.V. Haute, K. Ahmed, O. Elhage, P. Jaye, M.S. Khan, P. Dasgupta: Face, content and construct validity of a virtual reality simulator for robotic surgery (SEP Robot), Ann. R. Coll. Surg. Engl. 93, 146–150 (2011)
F.H. Halvorsen, O.J. Elle, E. Fosse: Simulators in surgery, Minim. Invasive Ther. 14, 214–223 (2005)
T. Ungi, D. Sargent, E. Moult, A. Lasso, C. Pinter, R. McGraw, G. Fichtinger: Perk Tutor: An open-source training platform for ultrasound-guided needle insertions, IEEE Trans. Biomed. Eng. 59(12), 3475–3481 (2012)
E. Moult, T. Ungi, M. Welch, J. Lu, R. McGraw, G. Fichtinger: Ultrasound-guided facet joint injection training using Perk Tutor, Int. J. Comput. Assist. Radiol. Surg. 8(5), 831–836 (2013)
T.K.A. Kesavadas, G. Srimathveeravalli, S. Karimpuzha, R. Chandrasekhar, G. Wilding, G.K.Z. Butt: Efficacy of robotic surgery simulator (RoSS) for the da Vinci surgical system, J. Urol. 181(4), 823 (2009)
C. Perrenot, M. Perez, N. Tran, J.-P. Jehl, J. Felblinger, L. Bresler, J. Hubert: The virtual reality simulator dV-Trainer is a valid assessment tool for robotic surgical skills, Surg. Endosc. 26(9), 2587–2593 (2012)
R. Korets, J. Graversen, M. Gupta, J. Landman, K. Badani: Comparison of robotic surgery skill acquisition between DV-Trainer and da Vinci surgical system: A randomized controlled study, J. Urol. 185(4), e593 (2011)
K. Moorthy, Y. Munz, S.K. Sarker, A. Darzi: Objective assessment of technical skills in surgery, Br. Med. J. 327, 1032–1037 (2003)
C. Basdogan, S. De, J. Kim, M. Muniyandi, H. Kim, M.A. Srinivasan: Haptics in minimally invasive surgical simulation and training, IEEE Comput. Graph. Appl. 24(2), 56–64 (2004)
M. Altomonte, D. Zerbato, S. Galvan, P. Fiorini: Organ modeling and simulation using graphical processing, Poster Sess. Comput. Assist. Radiol. Surg., Berlin (2007)
D. Zerbato, L. Vezzaro, L. Gasperotti, P. Fiorini: Virtual training for US guided needle insertion, Proc. Comput. Assist Radiol. Surg., Berlin (2012) pp. 27–30
Lapmentor: http://www.simbionix.com
Surgical Education platform: http://www.meti.com
LapSim: http://www.surgical-science.com
MIST: http://www.mentice.com
EndoTower: http://www.verifi.com
Laparoscopic Trainer: http://www.reachin.se
Simendo: http://www.simendo.nl
Vest System: http://www.select-it.de
EYESI: http://www.vrmagic.com/
U. Verona: Xron - Surgical simulator by ALTAIR lab, http://metropolis.scienze.univr.it/xron/ (2013)
F. Cavallo, G. Megali, S. Sinigaglia, O. Tonet, P. Dario: A biomechanical analysis of surgeon’s gesture in a laparoscopic virtual scenario. In: Medicine Meets Virtual Reality, Vol. 14, ed. by J.D. Westwood, R.S. Haluck, H.M. Hoffmann, G.T. Mogel, R. Phillips, R.A. Robb, K.G. Vosburgh (IOS, Amsterdam 2006) pp. 79–84
S. Yule, R. Flin, S. Paterson-Brown, N. Maran: Non-technical skills for surgeons: A review of the literature, Surgery 139, 140–149 (2006)
K.T. Kavanagh: Applications of image-directed robotics in otolaryngologic surgery, Laryngoscope 194, 283–293 (1994)
J. Wurm, H. Steinhart, K. Bumm, M. Vogele, C. Nimsky, H. Iro: A novel robot system for fully automated paranasal sinus surgery, Proc. Comput. Assist. Radiol. Surg. (CARS) (2003) pp. 633–638
M. Li, M. Ishii, R.H. Taylor: Spatial motion constraints in medical robot using virtual fixtures generated by anatomy, IEEE Trans. Robot. 2, 1270–1275 (2006)
G. Strauss, K. Koulechov, R. Richter, A. Dietz, C. Trantakis, T. Lueth: Navigated control in functional endoscopic sinus surgery, Int. J. Med. Robot. Comput. Assist. Surg. 1, 31–41 (2005)
Video-References
- Da Vinci Surgery on a grape, available from http://handbookofrobotics.org/view-chapter/63/videodetails/823
- Da Vinci Xi introduction | Engadget, available from http://handbookofrobotics.org/view-chapter/63/videodetails/824
- Intuitive surgical Da Vinci single port robotic system, available from http://handbookofrobotics.org/view-chapter/63/videodetails/825
- SPORT system by Titan Medical, available from http://handbookofrobotics.org/view-chapter/63/videodetails/826
- Robot for single port surgery by Nebraska University, available from http://handbookofrobotics.org/view-chapter/63/videodetails/827
- Magnetic and needlescopic instruments for surgical procedures, available from http://handbookofrobotics.org/view-chapter/63/videodetails/828
- CardioArm, available from http://handbookofrobotics.org/view-chapter/63/videodetails/829
- Snake robot for surgery in tight spaces, available from http://handbookofrobotics.org/view-chapter/63/videodetails/830
- IREP robot – Insertable robotic effectors in single port surgery, available from http://handbookofrobotics.org/view-chapter/63/videodetails/831
- Variable stiffness manipulator based on layer jamming, available from http://handbookofrobotics.org/view-chapter/63/videodetails/832
- Reconfigurable and modular robot for NOTES applications, available from http://handbookofrobotics.org/view-chapter/63/videodetails/833
- SPRINT robot for single port surgery, available from http://handbookofrobotics.org/view-chapter/63/videodetails/834
- A micro-robot operating inside the eye, available from http://handbookofrobotics.org/view-chapter/63/videodetails/835
Copyright information
© 2016 Springer-Verlag Berlin Heidelberg
Cite this chapter
Taylor, R.H., Menciassi, A., Fichtinger, G., Fiorini, P., Dario, P. (2016). Medical Robotics and Computer-Integrated Surgery. In: Siciliano, B., Khatib, O. (eds) Springer Handbook of Robotics. Springer Handbooks. Springer, Cham. https://doi.org/10.1007/978-3-319-32552-1_63
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-32550-7
Online ISBN: 978-3-319-32552-1