Abstract
Methods
This manuscript describes the development and ongoing integration of neuroArm, an image-guided MR-compatible robot.
A neurosurgical robotics platform was developed, including MR-compatible manipulators, or arms, with seven degrees of freedom, a main system controller, and a human-machine interface. This system was evaluated during pre-clinical trials and subsequent clinical application, combined with intra-operative MRI, at both 1.5 and 3.0T.
Results
An MR-compatible surgical robot was successfully developed and merged with ioMRI at both 1.5 and 3.0T. Image-guidance accuracy and microsurgical capability were established in pre-clinical trials. Early clinical experience demonstrated feasibility and showed the importance of a master-slave configuration. Surgeon-directed manipulator control improved performance and safety.
Conclusion
NeuroArm successfully united the precision and accuracy of robotics with the executive decision-making capability of the surgeon.
Introduction
Robotic technology has been applied to surgery for many years [1–3]. As with industrial manufacturing applications, robotics has demonstrated precision and accuracy that exceed human ability, particularly for repetitive tasks [4]. Meanwhile, human capacity for non-linear processing incorporates incomplete data sets and prior experience, and is necessary for anticipatory action [5]. At the human-machine interface (HMI), the complementary attributes of robots and humans may well result in improved surgical intervention and outcome. These advantages are exploited in the master-slave configuration of most medical robotic devices [6], and explain why safe automation would require a fundamental change in processing architecture and software design.
Following the modification of an industrial robot (Programmable Universal Machine for Assembly (PUMA), Advanced Research and Robotics, Oxford, Connecticut) for neurosurgical use, various robotic systems have been adapted to or created for neurosurgery [2]. These systems have seen only isolated acceptance, largely due to a combination of safety concerns, complexity of use, incompatibility with intra-operative imaging, cost, and restricted range of application. In 2002, development of an MR-compatible robot was initiated at the University of Calgary in an effort to integrate the precision and accuracy of robotics with the imaging capabilities of an innovative intra-operative MRI (ioMRI) system based on a moveable magnet [7, 8]. The master-slave dynamic governing neuroArm is discussed, together with the neurobiological mechanisms that separate human and computer decision-making capability.
Materials and Methods
The design and manufacture of neuroArm has been previously reported [3, 9]. In summary, the system consists of two MR-compatible manipulators mounted on a mobile base, a main system controller, and a human-machine interface (HMI), or workstation. The manipulators, or arms, have seven degrees of freedom (DOF), and are able to grasp and manipulate a variety of neurosurgical instruments. The main system controller processes the computational needs of robotic manipulation, and mediates the reciprocal exchange of information between surgeon and machine. Finally, the HMI provides the sensory milieu in which the surgeon is able to perform surgery within an image-guided environment.
To develop an image-guided robotic system, manipulator arms were constructed from MR-compatible materials, such as titanium and polyetheretherketone. Piezoelectric motors were chosen for their MR-compatibility, 20,000-h lifetime, and intrinsic braking characteristics in the event of sudden power loss. The system includes dual-redundant rotary electric encoders, which record joint position to an accuracy of 0.01 degrees. The end-effector incorporates a unique mechanism for tool grasping, roll, and actuation, such as opening or closing bipolar forceps or scissors. In addition, the configuration maintains sterility via end-effector attachments which pierce the manipulator drapes while capping the break in the sterile field (Fig. 1). Two titanium six-axis force/torque sensors in contact with the tool generate force data in three translational DOF, which is fed back to the surgeon via two handcontrollers. For microsurgery, both manipulators are transferred to a mobile base, along with a 6-DOF digitizing arm. The digitizing arm is used to register the manipulators to fiducials located within the radiofrequency (RF) coil, and hence to pre- or intra-operatively acquired MR images. For stereotaxy, one manipulator is transferred to a polyetheretherketone platform attached to the gradient insert within the bore of the magnet. The platform includes two MR-compatible cameras, which allow visualization of the operative site and manipulator from the workstation monitors. In this configuration, the manipulator is registered to the magnet isocenter.
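The fiducial-based registration step described above can be illustrated as a least-squares rigid fit between paired point sets. The sketch below uses the standard Kabsch (SVD) solution; it is a generic example under that assumption, not the neuroArm registration code, and the function name is hypothetical.

```python
import numpy as np

def register_rigid(robot_pts, image_pts):
    """Least-squares rigid registration (Kabsch/SVD) mapping robot-frame
    fiducial positions onto their MR-image coordinates.

    robot_pts, image_pts: (N, 3) arrays of paired fiducial locations.
    Returns a 3x3 rotation R and translation t such that
    image_pt ~= R @ robot_pt + t for each pair.
    """
    # Centre both point clouds on their centroids
    p = robot_pts - robot_pts.mean(axis=0)
    q = image_pts - image_pts.mean(axis=0)
    # Cross-covariance and its SVD
    U, _, Vt = np.linalg.svd(p.T @ q)
    # Guard against a reflection (improper rotation) solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = image_pts.mean(axis=0) - R @ robot_pts.mean(axis=0)
    return R, t
```

With at least three non-collinear fiducials and noise-free measurements, this recovers the transform exactly; in practice, residual fiducial registration error would be reported alongside the fit.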
The main system controller governs the transmission of input signals to the robot, as well as position and force data returning to the HMI. The software currently being utilized in the main system controller took over four years to develop, given the requirements of image guidance, quantified force feedback, and safety. This software not only acts as a throughput for data, but actively monitors all aspects of the system. In the event of divergence from intended manipulator motion, the embedded safety mechanisms halt further movement. However, while system software can recognize positional error, the surgeon may be able to recognize the context of unintended movement more rapidly, and thus avoid injury to adjacent neural tissue. This is particularly relevant given the speed of surgery, in which small variations in reaction time can result in substantially different outcomes. To avoid such a critical event, a foot-pedal, used as a dead-man switch and wired directly into the process logic controller, was added to allow the surgeon to rapidly halt the manipulators in the event of unintended movement.
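The safety logic described above, halting on divergence between commanded and encoder-reported motion, with the foot-pedal acting as a dead-man switch, can be sketched as a simple interlock check. The function, names, and tolerance below are illustrative assumptions, not the neuroArm controller software.

```python
# Assumed tolerance between commanded and encoder-reported joint angles;
# the real system's threshold is not specified in the text.
DIVERGENCE_LIMIT_DEG = 0.5

def motion_permitted(commanded, measured, pedal_pressed,
                     limit=DIVERGENCE_LIMIT_DEG):
    """Return True if manipulator motion may continue, False if the
    arms must halt.

    commanded, measured: per-joint angles in degrees.
    pedal_pressed: state of the surgeon's dead-man foot-pedal.
    """
    if not pedal_pressed:
        # Dead-man switch released: halt immediately
        return False
    for cmd, enc in zip(commanded, measured):
        if abs(cmd - enc) > limit:
            # Encoder diverges from command: possible unintended motion
            return False
    return True
```

A check of this kind would run every control cycle, so that either the software (positional error) or the surgeon (contextual error, via the pedal) can stop the manipulators.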
Union of the precision and accuracy of machine technology and human executive capacity is made possible by the HMI apparatus [10]. The surgeon is immersed in sensory data, including 3D intra-operative MR images with tool overlay, virtual position of the manipulators relative to the RF coil, stereoscopic display of the surgical field from the operating microscope, an image of the operative area, and recreated sense of touch, which is displayed in newtons and also pictorially represented. Force feedback, essential for microsurgery, permits quantification of the forces of dissection and the potential to set explicit limits to applied force. In addition, the surgeon can construct virtual surgical corridors on the MRI display (Fig. 2). Once established, tool manipulation may occur only within the limits of these pre-determined pathways. Within this sensory immersive environment, the surgeon is able to safely control the manipulators.
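The corridor and force-limit constraints described above amount to clamping commanded motion and force to pre-set bounds. The sketch below models a corridor as a simple cylinder around an approach axis; the geometry, function names, and limits are hypothetical illustrations, not neuroArm's internal representation.

```python
import numpy as np

def clamp_to_corridor(tip, axis_start, axis_end, radius):
    """Pull a commanded tool-tip position back inside a cylindrical
    virtual corridor of the given radius around the segment
    axis_start -> axis_end (an assumed stand-in for a surgical path)."""
    axis = axis_end - axis_start
    # Parameter of the nearest point on the axis segment, clipped to [0, 1]
    s = np.clip(np.dot(tip - axis_start, axis) / np.dot(axis, axis), 0.0, 1.0)
    closest = axis_start + s * axis
    offset = tip - closest
    dist = np.linalg.norm(offset)
    if dist <= radius:
        return tip                     # already inside the corridor
    return closest + offset * (radius / dist)  # project onto the wall

def limit_force(force_vec, max_newtons):
    """Scale a commanded force vector so its magnitude never exceeds
    the surgeon-set limit."""
    mag = np.linalg.norm(force_vec)
    if mag <= max_newtons:
        return force_vec
    return force_vec * (max_newtons / mag)
```

Constraints of this form let the workstation enforce the pre-determined pathways and force ceilings regardless of the hand-controller input.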
Results
As previously published, the surgical robotic system was designed, manufactured, and integrated into a neurosurgical operating room over a 6-year interval [3]. During the subsequent 18 months, deficiencies in both hardware and software were identified and resolved. The foot-pedal, added during this interval, was found to increase manipulator control, further enhancing safety. With usage, errors related to positioning data and image registration were likewise identified and resolved.
Pre-clinical trials were performed using rats and cadavers [9]. To establish microsurgical application, two qualified microsurgeons achieved comparable results using either established microsurgical techniques or neuroArm. The tested procedures consisted of splenectomy, nephrectomy, and submandibular gland resection in a rodent model. Navigational accuracy was established by means of nanoparticle implantation into predetermined targets using a cadaveric model. Accuracy was found to be equal to or better than that of existing surgical navigation technology. Unlike traditional navigation technology, the location of the implanting device was confirmed with real-time imaging prior to nanoparticle release.
Clinical application was initially demonstrated in five patients with various intracranial neoplasms [9]. NeuroArm was progressively introduced in a staged manner, so as to isolate specific issues relating to its use in the operating room. Early cases focused on the positioning and draping of the manipulators for use within the sterile field. Assessment of image-guidance accuracy was then performed, followed by initial use during neoplasm resection. During the fifth case, an encoder failure occurred, resulting in uncontrolled motion causing the suction tool to come into contact with a retractor. This event triggered a safety review, out of which the aforementioned foot-pedal was developed and incorporated. Despite the occurrence of minor technical complications, patient safety remained uncompromised.
Sample size during initial clinical use was limited by the decision to upgrade the 1.5 T ioMRI system to 3.0 T. This was complicated by the decision to shift from local to whole-room RF-shielding. NeuroArm was integrated into this new ioMRI environment. To avoid RF-interference from electronic cables controlling neuroArm, a penetration panel was constructed. At the site of shield penetration, RF-noise is eliminated with line filtration and copper-to-fiber-optic converters. Conversion to whole-room shielding simplified the method for registration during stereotaxy. Manipulator placement on a polyetheretherketone platform, attached to the gradient insert within the magnet bore, allows image registration to be based on the magnet isocenter (Fig. 3).
Discussion
Pre-clinical trials and initial clinical experience with neuroArm have demonstrated that the system safely unites robotic precision and accuracy with human decision-making. This connection is based on a master-slave configuration, similar to most surgical robotic systems. Such a design was chosen largely due to limitations in contemporary computer technology, but was fortuitous nonetheless. Indeed, the master-slave dynamic is indispensable to the practice of robotic surgery, owing to the nature of human decision-making. While the superior precision and accuracy of robotic movement is accepted [2], the mechanisms separating human and computer decision-making capability in the surgical setting have not been well-studied.
Human decision-making has been studied extensively in the fields of neuroscience, psychology, and economics [11–13]. In vivo experiments have been able to demonstrate the neurobiological underpinnings of decision-making processes during simple sensory-motor tasks [14]. Researchers have delineated complex neural circuits involved in the processes of choice valuation and decision execution; within associated regions, distinct populations of neurons respond independently to each available option [15]. The result is a non-linear computational system capable not only of evaluating multiple choice alternatives simultaneously, but also continued processing and modification of such choices after the action has been initiated. Consequences of a decision are then incorporated as experience into future heuristics. Presently, computers, which do not rely on the relatively slow synaptic connections of neuronal transmission, utilize computational circuitry and software designed with inherent linearity, although this may change [16]. The surgeon controlling neuroArm can evaluate the context of unintended motion, resulting in a much faster reaction than the main system controller.
In addition to differences at the level of basic computation, higher-level data processing is critical to human executive functioning. Step-by-step deduction in computer processing requires the evaluation of all possible choices to conclusion or a pre-determined fail condition, prior to selecting a single option with the highest likelihood of success. When applied to surgery, these distinctions have substantial impact as decision complexity increases. At a given point in time, there exist almost infinite possible actions within the surgical field, organized in ascending levels of importance and complexity. Were true automation attempted, so-called combinatorial explosion would result. The surgeon, presented with immersive sensory data at the workstation, can rapidly combine multiple incomplete data sets in order to execute manipulator motion. This is the basis for efficacy during pre-clinical studies and success in the early clinical use of neuroArm.
Small-scale anatomical variability further complicates the process of automation by adding layers of ambiguity and disorganization to the surgical data set, thereby increasing computational density. Though intra-operative imaging and intuitive HMI design attempt to minimize this uncertainty, the surgeon’s decision-making is not paralyzed by incomplete data, as some degree of sensory uncertainty is tolerated. Surgeons routinely apply knowledge of anticipated structural relationships to actively guide dissection, prior to full visualization of the relevant anatomy. Such action is predicated on computation that allows the brain to both infer probability from uncertain data and determine the threshold of sensory certainty required to initiate action [17]. The use of a foot-pedal to halt unintended manipulator motion exploits the human capacity to evaluate an error in the setting of contextual dissonance, even when the system software does not recognize the technical failure. While neuroArm is capable of extremely basic automation in the form of a partially-automated tool exchange, it can produce only a single solution to generate a given movement. Surgeons, on the other hand, can intuitively and efficiently alter speed, force, and hand position, completing a task through any of a variety of approaches.
Technological developments have presented neurosurgeons with increasingly informative imaging modalities and tools that have substantially altered the scope and nature of neurosurgical intervention. NeuroArm combines advanced image guidance, robotic accuracy and precision, and the neural processing mechanisms employed during human control. The master-slave organization of this system links human executive function and robotic accuracy in a way that may generate surgical outcomes beyond those currently possible.
Conflict of interest statement
Dr. G.R. Sutherland and A.D. Greer hold shares in IMRIS (Winnipeg, Canada). M.J. Lang declares no conflict of interest.
References
Ballantyne GH, Moll F (2003) The da Vinci telerobotic surgical system: the virtual operative field and telepresence surgery. Surg Clin North Am 83:1293–1304, vii
Lwu S, Sutherland GR (2009) The development of robotics for interventional MRI. Neurosurg Clin N Am 20:193–206
Sutherland GR, Latour I, Greer AD, Fielding T, Feil G, Newhook P (2008) An image-guided magnetic resonance-compatible surgical robot. Neurosurgery 62:286–292
Stefanidis D, Wang F, Korndorffer JR Jr, Dunne JB, Scott DJ (2009) Robotic assistance improves intracorporeal suturing performance and safety in the operating room while decreasing operator workload. Surg Endosc 18:18
Yang T, Shadlen MN (2007) Probabilistic reasoning by neurons. Nature 447:1075–1080
Ewing DR, Pigazzi A, Wang Y, Ballantyne GH (2004) Robots in the operating room–the history. Semin Laparosc Surg 11:63–71
Sutherland GR, Kaibara T, Louw D, Hoult DI, Tomanek B, Saunders J (1999) A mobile high-field magnetic resonance system for neurosurgery. J Neurosurg 91:804–813
Sutherland GR, Latour I, Greer AD (2008) Integrating an image-guided robot with intraoperative MRI: a review of the design and construction of neuroArm. IEEE Eng Med Biol Mag 27:59–65
Pandya S, Motkoski JW, Serrano-Almeida C, Greer AD, Latour I, Sutherland GR (2009) Advancing neurosurgery with image-guided robotics. J Neurosurg 17:17
Greer AD, Newhook PM, Sutherland GR (2008) Human-machine interface for robotic surgery and stereotaxy. IEEE ASME Trans Mechatron 13:355–361
Dehaene S, Spelke E, Pinel P, Stanescu R, Tsivkin S (1999) Sources of mathematical thinking: behavioral and brain-imaging evidence. Science 284:970–974
Padoa-Schioppa C, Assad JA (2006) Neurons in the orbitofrontal cortex encode economic value. Nature 441:223–226
Platt ML, Huettel SA (2008) Risky business: the neuroeconomics of decision making under uncertainty. Nat Neurosci 11:398–403
Resulaj A, Kiani R, Wolpert DM, Shadlen MN (2009) Changes of mind in decision-making. Nature 461:263–266
Furman M, Wang XJ (2008) Similarity effect and optimal control of multiple-choice decision making. Neuron 60:1153–1168
Denning PJ, Tichy WF (1990) Highly parallel computation. Science 250:1217–1222
Knill DC, Pouget A (2004) The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosci 27:712–719
Acknowledgements
This work was supported by grants from the Canada Foundation for Innovation, Western Economic Diversification, and Alberta Advanced Education and Technology.
Author information
Authors and Affiliations
Corresponding author
Editor information
Editors and Affiliations
Rights and permissions
Copyright information
© 2011 Springer-Verlag/Wien
About this chapter
Cite this chapter
Lang, M.J., Greer, A.D., Sutherland, G.R. (2011). Intra-operative Robotics: NeuroArm. In: Pamir, M., Seifert, V., Kiris, T. (eds) Intraoperative Imaging. Acta Neurochirurgica Supplementum, vol 109. Springer, Vienna. https://doi.org/10.1007/978-3-211-99651-5_36
Download citation
DOI: https://doi.org/10.1007/978-3-211-99651-5_36
Published:
Publisher Name: Springer, Vienna
Print ISBN: 978-3-211-99650-8
Online ISBN: 978-3-211-99651-5
eBook Packages: MedicineMedicine (R0)