Introduction

Robotic technology has been applied to surgery for many years [1–3]. As in industrial manufacturing applications, robotics has demonstrated precision and accuracy exceeding human ability, particularly for repetitive tasks [4]. Human cognition, by contrast, excels at non-linear processing that incorporates incomplete data sets and prior experience, and is necessary for anticipatory action [5]. At the human-machine interface (HMI), these complementary attributes of robot and human may well result in improved surgical intervention and outcome. Most medical robotic devices exploit these advantages through a master-slave configuration [6]; safe automation, in contrast, would require a fundamental change to processing architecture and software design.

Following the modification of an industrial robot (Programmable Universal Machine for Assembly (PUMA), Advanced Research and Robotics, Oxford, Connecticut) for neurosurgical use, various robotic systems have been adapted to or created for neurosurgery [2]. These systems have seen only isolated acceptance, largely due to a combination of safety concerns, complexity of use, intra-operative imaging incompatibility, cost, and restricted range of application. In 2002, development of an MR-compatible robot was initiated at the University of Calgary in an effort to integrate the precision and accuracy of robotics with the imaging capabilities of an innovative intra-operative MRI (ioMRI) system based on a moveable magnet [7, 8]. The master-slave dynamic governing neuroArm is discussed, together with the neurobiological mechanisms which separate human and computer decision-making capability.

Materials and Methods

The design and manufacture of neuroArm has been previously reported [3, 9]. In summary, the system consists of two MR-compatible manipulators mounted on a mobile base, a main system controller, and a human-machine interface (HMI), or workstation. The manipulators, or arms, have seven degrees of freedom (DOF), and are able to grasp and manipulate a variety of neurosurgical instruments. The main system controller processes the computational needs of robotic manipulation, and mediates the reciprocal exchange of information between surgeon and machine. Finally, the HMI provides the sensory milieu in which the surgeon is able to perform surgery within an image-guided environment.

To develop an image-guided robotic system, manipulator arms were constructed from MR-compatible materials, such as titanium and polyetheretherketone. Piezoelectric motors were chosen for their MR-compatibility, 20,000-h lifetime, and intrinsic braking characteristics in the event of sudden power loss. The system includes dual-redundant rotary electric encoders, which record joint position to an accuracy of 0.01 degrees. The end-effector incorporates a unique mechanism for tool grasping, roll, and actuation, such as opening or closing bipolar forceps or scissors. In addition, the configuration maintains sterility via end-effector attachments which pierce the manipulator drapes while capping the break in the sterile field (Fig.1). Two titanium six-axis force/torque sensors in contact with the tool generate force data in three translational DOF, which is fed back to the surgeon via two handcontrollers. For microsurgery, both manipulators are transferred to a mobile base, along with a 6-DOF digitizing arm. The digitizing arm is used to register the manipulators to fiducials located within the radiofrequency (RF) coil, and hence to pre- or intra-operatively acquired MR images. For stereotaxy, one manipulator is transferred to a polyetheretherketone platform attached to the gradient insert within the bore of the magnet. The platform includes two MR-compatible cameras, which allow visualization of the operative site and manipulator from the workstation monitors. In this configuration, the manipulator is registered to the magnet isocenter.
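The paper does not describe the registration algorithm used with the digitizing arm. As illustration only, rigid registration of fiducial positions measured in the manipulator frame to the same fiducials localized in MR image space is commonly computed with a Kabsch-style least-squares fit; the function names below are hypothetical:

```python
import numpy as np

def register_fiducials(robot_pts, image_pts):
    """Rigid (rotation + translation) registration of paired fiducial
    points: finds R, t minimizing ||R p + t - q|| over all pairs.
    Illustrative sketch; not the actual neuroArm implementation."""
    P = np.asarray(robot_pts, float)
    Q = np.asarray(image_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def to_image_space(R, t, p):
    """Map a point from the manipulator frame into image space."""
    return R @ np.asarray(p, float) + t
```

With at least three non-collinear fiducials, the recovered transform maps any manipulator-frame coordinate into pre- or intra-operatively acquired MR images.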

Fig.1

Integration of human and machine: the figure shows neuroArm positioned for removal of an olfactory groove meningioma. The right arm holds a bipolar forceps and the left a bayonet-shaped sucker. Stereoscopic images from two high-definition cameras attached to the operating microscope are projected to the surgeon at the workstation (insert). Also shown in the insert are the 3D MRI display (left) and virtual manipulators registered to the RF-coil (right)

The main system controller governs the transmission of input signals to the robot, as well as position and force data returning to the HMI. The software currently being utilized in the main system controller took over four years to develop, given the requirements of image guidance, quantified force feedback, and safety. This software not only acts as a throughput for data, but actively monitors all aspects of the system. In the event of divergence from intended manipulator motion, the embedded safety mechanisms halt further movement. However, while system software can recognize positional error, the surgeon may be able to recognize the context of unintended movement more rapidly, and thus avoid injury to adjacent neural tissue. This is particularly relevant given the speed of surgery, in which small variations in reaction time can result in substantially different outcomes. To avoid such a critical event, a foot-pedal, used as a dead-man switch and wired directly into the process logic controller, was added to allow the surgeon to rapidly halt the manipulators in the event of unintended movement.
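The safety logic described above (halt on divergence from intended motion, plus a dead-man pedal) can be sketched as a simple watchdog. This is an illustrative abstraction under assumed names and thresholds, not the neuroArm controller software:

```python
from dataclasses import dataclass, field

@dataclass
class SafetyMonitor:
    """Illustrative watchdog: halts motion when commanded and
    encoder-reported joint positions diverge beyond a tolerance,
    or when the dead-man foot-pedal is released.
    The 0.5-degree tolerance is a hypothetical value."""
    max_divergence_deg: float = 0.5
    halted: bool = field(default=False)

    def check(self, commanded_deg, measured_deg, pedal_pressed):
        """Return True if motion may continue, False if halted."""
        divergence = max(abs(c - m)
                         for c, m in zip(commanded_deg, measured_deg))
        if not pedal_pressed or divergence > self.max_divergence_deg:
            self.halted = True  # latch: requires explicit reset
        return not self.halted
```

The latch mirrors the design intent described here: once unintended motion is detected, or the pedal released, the manipulators stop until the surgeon deliberately re-engages.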

Union of the precision and accuracy of machine technology and human executive capacity is made possible by the HMI apparatus [10]. The surgeon is immersed in sensory data, including 3D intra-operative MR images with tool overlay, virtual position of the manipulators relative to the RF coil, stereoscopic display of the surgical field from the operating microscope, an image of the operative area, and recreated sense of touch, which is displayed in Newtons and also pictorially represented. Force feedback, essential for microsurgery, permits quantification of the forces of dissection and the potential to set explicit limits to applied force. In addition, the surgeon can construct virtual surgical corridors on the MRI display (Fig.2). Once established, tool manipulation may occur only within the limits of these pre-determined pathways. Within this sensory immersive environment, the surgeon is able to safely control the manipulators.

Fig.2

Three-dimensional MRI display: the image demonstrates a selected surgical corridor with tool overlay. Once delineated, the tools may only pass within the limits of the pre-defined corridor

Results

As previously published, the surgical robotic system was designed, manufactured, and integrated into a neurosurgical operating room over a 6-year interval [3]. During the subsequent 18 months, deficiencies in both hardware and software were identified and resolved. The foot-pedal, added during this interval, was found to increase manipulator control, further enhancing safety. With usage, errors related to positioning data and image registration were likewise identified and resolved.

Pre-clinical trials were performed using rats and cadavers [9]. To establish microsurgical application, two qualified microsurgeons achieved comparable results using either established microsurgical techniques or neuroArm. The tested procedures consisted of splenectomy, nephrectomy, and submandibular gland resection in a rodent model. Navigational accuracy was established by means of nanoparticle implantation into predetermined targets using a cadaveric model. Accuracy was found to be equal to or better than existing surgical navigation technology. Unlike traditional navigation technology, the location of the implanting device was confirmed with real-time imaging prior to nanoparticle release.

Clinical application was initially demonstrated in five patients with various intracranial neoplasms [9]. NeuroArm was introduced in a staged manner to isolate specific issues relating to its use in the operating room. Early cases focused on the positioning and draping of the manipulators for use within the sterile field. Assessment of image-guidance accuracy was then performed, followed by initial use during neoplasm resection. During the fifth case, an encoder failure occurred, resulting in uncontrolled motion that caused the suction tool to come into contact with a retractor. This event triggered a safety review, out of which the aforementioned foot-pedal was developed and incorporated. Despite these minor technical complications, patient safety was never compromised.

Sample size during initial clinical use was limited by the decision to upgrade the 1.5 T ioMRI system to 3.0 T. This was complicated by the decision to shift from local to whole-room RF-shielding. NeuroArm was integrated into this new ioMRI environment. To avoid RF-interference from the electronic cables controlling neuroArm, a penetration panel was constructed. At the site of shield penetration, RF-noise is eliminated with line filtration and copper-to-fiber-optic converters. Conversion to whole-room shielding simplified the method of registration during stereotaxy. Manipulator placement on a polyetheretherketone platform, attached to the gradient insert within the magnet bore, allows image registration to be based on the magnet isocenter (Fig.3).

Fig.3

Image-guided robotic-assisted stereotaxy at 3.0 T: this photograph shows the right manipulator, located on a platform attached to the gradient insert within the bore of the moveable 3.0 T magnet. Cables are connected to two MR-compatible cameras, allowing visualization of the manipulator and surgical field

Discussion

Pre-clinical trials and initial clinical experience with neuroArm have demonstrated that the system safely unites robotic precision and accuracy with human decision-making. This connection is based on a master-slave configuration, similar to most surgical robotic systems. Such a design was chosen largely due to limitations in contemporary computer technology, but was fortuitous nonetheless. Indeed, the master-slave dynamic is indispensable to the practice of robotic surgery, owing to the nature of human decision-making. While the superior precision and accuracy of robotic movement is accepted [2], the mechanisms separating human and computer decision-making capability in the surgical setting have not been well-studied.

Human decision-making has been studied extensively in the fields of neuroscience, psychology, and economics [11–13]. In vivo experiments have been able to demonstrate the neurobiological underpinnings of decision-making processes during simple sensory-motor tasks [14]. Researchers have delineated complex neural circuits involved in the processes of choice valuation and decision execution; within associated regions, distinct populations of neurons respond independently to each available option [15]. The result is a non-linear computational system capable not only of evaluating multiple choice alternatives simultaneously, but also continued processing and modification of such choices after the action has been initiated. Consequences of a decision are then incorporated as experience into future heuristics. Presently, computers, which do not rely on the relatively slow synaptic connections of neuronal transmission, utilize computational circuitry and software designed with inherent linearity, although this may change [16]. The surgeon controlling neuroArm can evaluate the context of unintended motion, resulting in a much faster reaction than the main system controller.

In addition to differences at the level of basic computation, higher-level data processing is critical to human executive functioning. Step-by-step deduction in computer processing requires the evaluation of all possible choices to conclusion or a pre-determined fail condition, prior to selecting a single option with the highest likelihood of success. When applied to surgery, these distinctions have substantial impact as decision complexity increases. At a given point in time, there exist almost infinite possible actions within the surgical field, organized in ascending levels of importance and complexity. Were true automation attempted, so-called combinatorial explosion would result. The surgeon, presented with immersive sensory data at the workstation, can rapidly combine multiple incomplete data sets in order to execute manipulator motion. This is the basis for efficacy during pre-clinical studies and success in the early clinical use of neuroArm.

Small-scale anatomical variability further complicates the process of automation by adding layers of ambiguity and disorganization to the surgical data set, thereby increasing computational density. Though intra-operative imaging and intuitive HMI design attempt to minimize this uncertainty, the surgeon’s decision-making is not paralyzed by incomplete data, as some degree of sensory uncertainty is tolerated. Surgeons routinely apply knowledge of anticipated structural relationships to actively guide dissection, prior to full visualization of the relevant anatomy. Such action is predicated on computation that allows the brain to both infer probability from uncertain data and determine the threshold of sensory certainty required to initiate action [17]. The use of a foot-pedal to halt unintended manipulator motion takes advantage of the human capacity to recognize error in the setting of contextual dissonance, even when the system software does not recognize the technical failure. While neuroArm is capable of extremely basic automation in the form of a partially-automated tool exchange, it can produce only a single solution for a given movement. Surgeons, on the other hand, can intuitively alter speed, force, and hand position, efficiently completing a task through a variety of approaches.

Technological developments have presented neurosurgeons with increasingly informative imaging modalities and tools that have substantially altered the scope and nature of neurosurgical intervention. NeuroArm combines advanced image guidance, robotic accuracy and precision, and the neural processing mechanisms employed during human control. The master-slave organization of this system links human executive function and robotic accuracy in a way that may generate surgical outcomes beyond those currently possible.

Conflict of interest statement

Dr. G.R. Sutherland and A.D. Greer hold shares in IMRIS (Winnipeg, Canada). M.J. Lang declares no conflict of interest.