Introduction

Teaching process is a key element during the training of professionals. This has been characterized as a professor-based approach, which is considered as the most effective mean to acquire new knowledge [1]. Since the origin of computers, researchers have been looking for new ways to improve and reduce the costs in the teaching process. Therefore, different types of simulations, control systems and learning environments via the Internet have emerged in the field of Technology Enhanced Learning (TEL) and e-learning [2].

One of the main areas where TEL environments have been developed is the medical field. Surgical procedures have a high degree of difficulty and complexity. It is necessary to provide students or practitioners a proper and extensive learning process before they can perform surgeries correctly. The learning curve in the medical process is a concept focused on two aspects: optimization of operating time [3] and reduction of patients bleeding [4]. Vickers et al. [5] stated that about 750 operations are needed to improve surgical procedures. They also found that patients who have been treated by doctors with surgical experience, that have performed between 750 and 10,250 procedures, tend to have fewer health problems than patients who were treated by doctors with less experience.

Considering advances in technology and new laws in the medical field, various solutions have been pursued to allow medical students to acquire the necessary skills involved in medical procedures. Some solutions involve the use of animals in operating rooms, which must have similar conditions as those related to humans. Devices that use synthetic materials to resemble human skin have also been implemented. Nevertheless, they fail to emulate the real characteristics of human skin. Other solutions suggest the use of virtual environments, which can simulate situations with different levels of risk. Former virtual environments only exploited the senses of sight and hearing; consequently, they excluded other interaction possibilities.

Several companies and authors had created e-learning environments for medical education. These advances have been reviewed to assess the impact in medical education [6, 7]. However, Secin et al. [8] stated that current surgical environments did not provide the realism needed to train future doctors during operations, they did not provide users the sense of force strength during the task, or they were not able to adequately develop the necessary skills.

As a solution, haptic simulation has great potential to improve medical training. A haptic device is a mechanical input/output device that enables users interact with virtual environments by adding the sense of touch, which enhances the learning quality [9]. The incorporation of haptic technologies in medical software and simulations has grown [10, 11] and various companies developed medical stations using haptic devices [1217].

The use of haptic simulators provides new alternative solutions that also allow the development of new teaching methods. The advantage of haptic technologies in virtual environments is the ease they have to recreate difficult situations originated during real practices. They provide new means of exploration and representation. They enable the creation of systems that are capable of implementing new methods or procedures, and in some cases, these systems can generate uncommon anatomies by modifying specific models with patients information, as it has been reviewed [9]. Therefore, they provide students the ability to practice surgery as often as necessary.

This paper reviews recent advances in medical training simulators with the focus on haptic technologies. In particular, virtual environments that use commercial haptic devices were chosen because these have already been benchmarked and they accomplish security and performance standards. This review is focused on stitching, palpation, dental, endoscopy, laparoscopy, orthopaedics, and miscellaneous procedures. Existing works and simulators are reviewed and compared from the point of view of used technology, the number of degrees of freedom and degrees of force feedback, immersion, and learning feedback provided to the user. At the end, several observations per area are provided and suggestions for future work are proposed.

Haptics

Haptic devices

Haptic devices are electro-mechanical devices with handlers (Fig. 1) that allow motion with several Degrees of Freedom (DoF). When coupled with virtual simulators, they provide the user the sense of touch in addition to the sight. In haptic environments, touch sensation can be performed by humans, machines or a combination of both, while objects and/or environments may be real, virtual, or a combination of both. Current physical haptic devices always present a residual inner friction that can be perceived as noise, which can even fatigue the user in some cases. Additionally, the device itself has a certain degree of inertia, which present a problem if the user moves the haptic device quickly. Haptic devices balance to compensate for external forces of the systems, such as gravity. They also have mechanisms that provide sufficient reaction to stimulation and effort sense to detect hard surfaces [19]. Their resolution, i.e., the amount of feedback per unit of distance, needs to be high in order to provide a greater detail of textures in virtual environments, and the movement area must be large enough in order to simulate the actual workspace. Moreover, haptic devices have different DoF depending on the amount of directions they can move. The most common are haptic devices with 3-DoF that can follow the XYZ axis.

Fig. 1
figure 1

Commonly used haptic devices in virtual simulators. a) Phantom Omni Ⓡ, b) Phantom Desktop Ⓡ. c) Phantom Premium Ⓡ, and d) Novint Falcon Ⓡ

Current haptic devices use two basic variations to control interaction: impedance control and admittance control. In impedance control, the user moves the device, and it sends the data back to the computer; therefore, the application is responsible for controlling the feedback. Prime examples of this type of device are Phantom Ⓡ, built by Geomagic Ⓡ (previously known as SensAble Technologies Ⓡ), and Falcon Ⓡ from Novint Technologies Inc Ⓡ (Fig. 1).

In contrast, by using the admittance control devices users exert a force on the device, which reacts by displacing it in a proportional distance. This action is translated in the device as displacement of the input and force feedback as the output of the system. This type of control provides the users with the freedom in the mechanical design of the devices, and these devices are able to produce movement with greater force and stiffness. However, due to their complexity, they are usually very large and they must be designed carefully to interact safely with humans. Consequently, they are not commonly used in the training field. An example of these devices is HapticMaster Ⓡ, manufactured by Mog Corporation Ⓡ (previously known as FCS Control Systems Ⓡ) [20].

Haptic rendering

The goal of haptic rendering is to enable the user to feel, touch, and manipulate virtual objects through a haptic interface. The type of interaction defines the procedure for haptic rendering and how the forces are rendered. These methods can be identified by the way they model the interaction of the haptic device in the virtual environment. There are three kinds of haptic rendering: a) point-based, b) ray-based, and c) based on a 3D object made by a group of points, lines and polygons [18] as shown in Fig. 2.

Fig. 2
figure 2

Haptic rendering techniques: based on points (left), line segments (middle), and a 3D object (right) [18]

Rendering of deformable objects is often required in medical procedures. Visual rendering has been studied extensively in the area of computer graphics [21]. Basdogan et al, have divided rendering techniques of deformable objects into two types: geometry-based and physics-based [18].

Geometry-based

techniques deform the object based on geometric manipulations. For instance, the user manipulates the vertices or control points around the 3D object to modify the shape of the object. These techniques are usually fast and relatively easy to implement; however, they focus mainly on the visual representation, which does not necessarily simulates the underlying mechanical deformation.

Physics-based

algorithms add physics simulation to the modification of geometry by modeling the physical laws involved in the movement of the object and the dynamics of the interaction within it. Physical approaches are necessary to simulate a realistic behavior of deformable objects. Nevertheless, physical rendering is computationally more expensive than pure geometry-based modeling.

Haptics in medical training

We describe and analyze recent virtual simulators that use haptic devices to practice medical procedures. These are organized by types of medical practices they simulate.

Stitching techniques

Simulation of stitching procedures is one of the areas where haptic technology has been implemented to create learning simulators. Skin and organs have flexible features, so stitching simulators consider mainly rendering techniques of deformable objects. Additionally, haptic simulators combined with active learning environments can provide users with features such as deformation of the suture thread, knot tying, and interaction between tools, the needle and the environment, as can be seen in [22].

Jia and Pan developed a stitching simulator which can simulate interaction and deformation of the suture thread and objects [23]. They applied the Follow the Leader (FTL) algorithm [24] to simulate the thread, where each link can rotate freely at their connecting vertex. On the side of the deformable object, a tensor mass-spring deformation model in a tetrahedral mesh was implemented to simulate the skin. For collision detection in the environment their simulator uses an Axis-Aligned Bounding Box (AABB) tree, where the shape of the bounding boxes are updated during the simulation. The simulator uses a single Phantom Desktop Ⓡ haptic device; nevertheless, the proper process of stitching simulators is two handed. The novelty of the work presented by Jian and Pan is the description of four possible states for the position of the needle according to its interaction with the virtual skin:

  1. 1.

    the needle is completely outside of the deformable object,

  2. 2.

    the end of the needle is in contact with the skin (end touching),

  3. 3.

    the tip of the needle is in contact with the skin, but it has not pierced the skin yet (tip touching), and

  4. 4.

    the needle is piercing the skin.

Their environment provides a functional approach to suture tasks, and it uses shadows for the 3D location of the haptic probe; however, the simulator lacks the skin texture, the knot tying action of the suture, the needle is modeled as a line segment, and, as mentioned before, it only implements a single haptic device.

Similar research was used by Payandeh and Shi [25] who proposed a serious gaming platform to teach suturing and knotting for simple skin or soft tissue wound closure. The haptic devices used in the experimental study were two Phantom Omni Ⓡ. They also used a mass-spring model to create the deformable skin and the suturing material and they used Bounding-Volume Hierarchy (BVH) to detect collisions and self-collisions in the suture thread (Fig. 3). They used Virtual Reality Modeling Language (VRML) files produced in Autodesk Ⓡ 3ds Max Ⓡ to create 3D models of pre-wounded skin. Latter, the simulator imports these files to build the virtual objects.

Fig. 3
figure 3

Suture simulator developed by Payandeh and Shi [23]. The skin model (a) is constructed as a mass-spring system (b) and the system uses two haptic device to operate the suture thread

Payandeh and Shi presented a suture simulator based on physics models, and it allows users to tie a knot. One feature that has to be remarked in this work is that it implements tissue tearing. This is made by finding intersection points between each polygon and the tearing path. However, if the user repeatedly inserts the needle at the same location, the mesh can be subdivided, which can generate unwanted small triangles.

Another interesting research is the one made by Choi et al. [27]. They used the physical rendering engine NVIDIA Ⓡ PhysX Ⓡ in collaboration with OpenGL to create a system for developing manual skills in the fields of medicine and nursing. The system uses two Phantom Omni Ⓡ haptic devices. The prototype uses a model of springs interconnected to simulate the deformable skin. The models of the needle and thread are created as a chain of spheres interconnected as a segment, which facilitates the flexibility of the elements and allows a simple way to reprodcure thread trimming. The FTL algorithm was used to simulate the suture thread; however, FTL was modified to use springs and springs with mass as the joints of the thread. This approach provide a realistic sensation. Furthermore, bending can obtained by adding torsion springs in the model.

A recent work in the suture field is the one made by Ricardez et al. [26] that was focused on generating an external suture environment named SutureHap. This simulator simulates sensations that are perceived in medical rooms or offices, as can be seen in Fig. 4. This feature enables a new alternative in the learning process of doctors. The system is based on the architecture proposed by [28]. SutureHap used NVIDIA Ⓡ PhysX Ⓡ physical engine, OpenGL Ⓡ to render the graphical environment, and Phantom Omni Ⓡ haptic devices. The system is based on the elaboration of a suturing knot using a real technique, which was consulted among medical staff.

Fig. 4
figure 4

SutureHap environment created by Ricardez et al. [26] The left image shows a user performing the stitching process in the simulator. The use of two Phantom Omni Ⓡ haptic devices is implemented to recreate similar conditions as the ones experienced in real stitching process. On the right side a close-up of the virtual environment can be seen

For haptic rendering, three blocks were developed. Collision detection and calculation of force feedback response between the cloth and the haptic cursor were performed by using the simplified model, where a novel algorithm was used. This algorithm is based on the addition of multiple rays that are emanating from the center of the haptic avatar in different directions. Finally, response force between solid objects was developed according to the ”penalty method” standard technique.

In the environment of Ricardez et al. further changes such as improvement of interactions between objects in the scene, and the implementation of collision handling and haptic feedback between surgical tools should be considered to improve the performance of the simulator.

Both Jia and Pan and Payandeh and Shi do not implement or use proper medical tools. The incorporation of tweezers with realistic appearance to the simulator will show students the correct tools to use in operations and surgical tasks, as Ricardez et al. did. Moreover, by simulating physical properties of tools, such as shape, students will learn how to interact with them and take them into consideration in the workspace. Finally, another aspect that Ricardez et al. and Payandeh and Shi should consider to provide is appropriate space location. Sometimes, students could get lost if they do not perceive a visual cue in navigation. Therefore, the addition of shadows of objects could improve the perception of the 3D environment.

Palpation

Palpation is the process where surgeons analyze, via their fingers, tissues or organs to detect anomalies on the surface. Stiffness is essential in this medical procedure. Areas that are stiffer than other can be considered as potential tumors. Therefore, correct calculation of force feedback is necessary during the creation of virtual simulators with haptic devices.

Li et al. [29] developed a tumor location simulator based on soft tissue probing data. They used a Microsoft Ⓡ Kinect Ⓡ device to create a tissue model using depth data from a silicone phantom tissue and Finite Element Model (FEM) techniques. This enables the skin to be a geometrical deformable soft tissue that can be modified in real time. The novelty of this study is the use of a rolling indentation sensor to obtain friction and stiffness values, which are stored in a look-up table. Therefore, the force feedback to the user is based on the values from the table that is accessed during the simulation.

A case test study was made with twenty students, who used the phantom tissue and the simulator. They were asked to identify two zones that are stiffer than others. For the simulator, they used a Phantom Omni Ⓡ to navigate the system. Results showed that the simulator could be seen as a feasible and efficient alternative solution of manual palpation to train students in this medical field.

Another palpation simulator is described in the work of Ullrich and Kuhlen [30]. The system implements needle insertion features, which is a task that is usually done after palpation. In their approach, the authors modeled a deformable skin using FEM methods and tetrahedral meshes. For collision detection, they used the Bullet Ⓡ Physics Library [31]. The simulation implements the use of hand models and illumination to provide space location cues; therefore, the authors proposed a new algorithm for tissue dragging to provide proper interactions.

The simulator uses two Phantom Omni Ⓡ haptic devices; however, Ullrich and Kuhlen modified the end effector of one to provide a lightweight palpation pad (Fig. 5). This enables the haptic device to be used by two fingers, which is the minimum number of fingers used during palpation. The system was evaluated by 23 beginner anesthesiology students and 17 experts, where results established average acceptance of the simulator. The simulator should be modified and better improvements of the hardware should be included; however, the results also showed that the modification of the haptic device provided better sensations that the ones that are experienced with the original stylus.

Fig. 5
figure 5

Palpation environments have been developed by modifying current haptic devices. Ullrich and Kuhlen developed one where they use a lightweight palpation pad. [30]. Left figure shows current rendering of the virtual environment. Right figure displays a user using the environment. It can be seen the modification to the haptic device on the left hand

Finally, modifications to typical work stations have been made in this area, as the incorporation of Augmented Reality (AR). PalpSim is a simulator made by Coles et al. [32]. This simulator can be seen in Fig. 6. PalpSim was designed to replicate real scenarios in virtual environments by using AR. In their workstation, the authors have guaranteed visuohaptic alignment by fixing a LCD display and using a camera under it to capture users’ hands movements. By doing the latter, the authors could use chroma-key techniques to attain AR. Therefore, users can see in the screen the virtual environment with their hands’ real time movements.

Fig. 6
figure 6

Another simulator that modified haptic devices is the one developed by Coles et al. [32]. This simulator uses two Falcon \({\circledR }\) haptic devices coupled with a palpation pad (right side). Additionally, the authors used Augmented Reality techniques to provide a realistic approach (left side)

Coles et al. created a work station where the user handles three haptic devices under the LCD screen: two Falcon Ⓡ haptic devices are coupled to emulate palpation, and one Phantom Omni Ⓡ haptic device to manipulable the suture insertion task. The end effector of both devices were modified to provide real sensation for the tasks. Additionally, the authors did in vivo force measurement for palpation and needle insertion tasks. Data obtained were used to calculate and recreate force feedback in the simulator. The simulator was validated by a user test, where seven experts in the field used the workstation. At the end of the simulation, they were asked to answer a 29 question survey to provide feedback, and it was strongly agreed that the simulator provides realistic and correct stimuli.

Dental procedures

Dental training has been done using haptics feedback with Phantom Head Ⓡ [33] or artificial teeth. However, these methods present similar limitations as other surgical areas, such as spatial location difficulties. Ethical constraints prohibit the use of real human teeth, and different teeth materials lack the appropriate standards or they can be used only once per procedure on the same tooth.

A simulator for typical dental procedures called hapTEL was created by Tse et al. [34] and it allows dental drilling, caries removal, and cavity preparation for tooth restoration. In the first stages of hapTEL, Falcon Ⓡ and Force Dimension Ⓡ Omega 3 Ⓡ haptic devices were used. The results stated that the quality of haptic rendering needed to be improved and modifications to the physical environment were necessary that were implemented in a second prototype. Falcon Ⓡ devices were modified by adding an additional 6-DoF passive measurement device, and they were mounted in an upward position. Additionally, a magnetic ball bearing socket system was developed to provide 180 of rotation. This socket system avoided the presence of singularities.

The simulation uses polygonal models that were converted into tetrahedra by Tetgen Ⓡ [35]. The haptic algorithm to render forces displays four different types of materials and physical properties: enamel, dentine, pulp, and cavity. hapTEL novelty includes the use of different force feedback depending on the material, but mainly the addition of a device to the Falcon Ⓡ. This setup improved the realism of the virtual mouth model and let students feel as if they were in a typical dental operation.

Another common field of dental procedures is oral implant therapy. It has been accepted worldwide as an integral part of dental practice and it has become even more important over the past few years. The placement of implants takes into consideration anatomically complex operation sites in the cranio-maxillofacial region, which increase the risk of damage in operations. Computer-aided surgeries have helped students improve their skills; however, their application in dental implant surgery is rarely reported.

A virtual training system for oral implantation was developed by Chen et al. [36]. CAPPOIS (Computer Assisted Preoperative Planning for Oral Implant Surgery) helps users establish a medical environment, and lets users transfer the data to the training station. Chen et al. stated that this pilot study proved that CAPPOIS helps inexperienced doctors grasp the feel of the osteotomy procedure during dental implant surgery. However, more clinical cases need to be done to show the feasibility and reliability of CAPPOIS. CAPPOIS used various algorithms in the field of computer graphics, and several medical image processing methods are involved, including: DICOM Ⓡ file parsing, image segmentation and 3D-visualization, surface decimation, cutting, spatial search and 3D distance computing, volume measurement, spline curve generation, multi(curved)-planar reconstruction, registration, among others.

CAPPOIS was implemented by using Chai3D Ⓡ and Qt Ⓡ, and the system uses Omega 6 Ⓡ haptic device from Force Dimension Ⓡ. Additionally, CAPPOIS allows the use of 3D glasses to view the work place in 3D. This environment was made to guide users in preoperative planning. Moreover, CAPPOIS allows users perceive the position and orientation of oral implants, do bone density analysis and bone volume measurements for maxillary defects, among others.

Soft tissue simulation has been studied extensively in computer graphics area [18]. A recent research made by Hui and Dang-xiao in the area has expanded this topic by including bimanual interaction to attain practical applications in the medical field [38]. They developed a dental surgery focused on the interactions between the dental tools and tissues, in this case cheeks and tongue. They based their deformation model on a mass-spring model. The novel part of their work is the addition of a special kind of springs, ”returning spring”. This spring enable their meshes to be incompressible and resistant to bending by connecting each mass of the mesh to its original position.

Force feedback provided to the user is calculated from the penetration depth made by the tip of the tool. Hui and Dang-xiao simplified the interaction of the tools and the deformable skin as a case of rigid bodies. Therefore, only the tip of the tool is interacting with the mesh and the force that is acting on the mesh is calculated by distributing the force applied to the contact point to all of its neighbor vertexes.

They applied their approach to a virtual simulator where they modeled a face, in which cheeks are made up of 143 masses and 951 springs and the tongue is made up of 213 masses and 1747 springs. Hui and Dang-xiao used two Phantom Omni haptic devices in the simulator, and they modeled their cursors as a Mouth Mirror and a Dental Explorer for exploration and interaction purposes.

One type of teaching-learning techniques used in education are collaborative environments where students can interact among them or with a professor to solve problems and learn as a team. As stated before, TEL options allow students to enhance their learning by allowing them to acquire knowledge and skills via virtual environments [39]. The implementation of these environments in the medical area can help teachers guide students in medical operations. Kosuki and Okada developed a collaborative training system [37]. The simulator was created using a tool called IntelligentBox, which is a 3D visual software development system based on the Model-View-Controller architectural pattern [40]. The authors have implemented components for medical operations in IntelligentBox, and they have developed specific modules for touch and drilling actions to design a dental training environment (Fig. 7).

Fig. 7
figure 7

Current prototype for typical dental procedures made by Kosuki and Okada [37]. The simulator was made using IntelligentBox, and it provides students a collaborative environment where they can work in turns to solve a task

IntelligentBox defines 3D objects as polygonal models, which are represented as a set of several polygons. For object deformations, the system uses a very simple method based on the movements of the haptic device pointer. This method enables the user to use push and pull interaction on the environment. On the side of collision detection, IntelligentBox solves an equation based on the vertex locations and the interaction point, where only if the interaction point is the opposite side of the vertex, the system detects a collision. Finally, the dental training station uses Phantom Omni Ⓡ haptic device and IntelligentBox provides a distributed model sharing mechanism called RoomBoxes. RoomBoxes provides students a virtual environment in which they can share user-operation events.

Evaluations of the learning curve and acquisition of skills using haptic technologies have been reviewed in previous work [41]. Assessment and evaluation of students training can be found in the literature [42]. Wang et al. [42] designed a simulator called iDental to provide a preliminary user evaluation. Teeth models were obtained via point-cloud data from Computed Tomography (CT) images and scans and were refined using 3ds Max Ⓡ. The result is a VRML file that is the input for the virtual environment. The meshes of tissues for rigid and elastic objects are handled as triangles, and GHOST SKD Ⓡ handled collision [43].

The simulator uses one Phantom Desktop Ⓡ haptic device and runs on a workstation with one display. Performance and user tests with 10 dentists and 19 postgraduate students with one to two years of experience in dental procedures were made. Results obtained showed that the current simulator provides relative realism similar to that experienced with real patients. The results suggest that virtual reality simulators give undergraduate students more flexible training experiences. However, the current simulator needs to improve its haptic and graphic feedback, as well as the physical hardware, which is not consistent with the current technique.

Endoscopy

Endoscopy procedures examine the interior of a hollow organ or cavity of the body using an endoscope. Neurosurgery is one of the surgical specialties within endoscopy with the highest risks [44]. It requires a safe manipulation of instruments around fragile tissues; consequently, the most frequently reported errors in neurosurgery are technical ones [45].

Delorme et al. [46] developed NeuroTouch Ⓡ, a virtual simulator for cranial microsurgery training (Fig. 8). The neurosurgical simulator was developed using a physic-based engine and is composed of a 3D graphics rendering system (stereoscope), haptic devices, other controls, and one or two computers. Two different types of haptic systems can be used: the Phantom Desktop Ⓡ or a Freedom 6S Ⓡ from MPB Technologies Ⓡ. Delorme et al. developed a software for the simulations to design and mimic a neurosurgical microscope. The simulation uses multiple rates: graphics at 60 Hz, haptics at 1k Hz, and tissue mechanics at 100 Hz. NeuroTouch was developed to enable residents to practice their skills in tumor-debulking and tumor cauterization tasks. This system computes the deformation of the tissues and topology changes according to tissue rupture, cut, or removal. The mechanical behavior of tissues is modeled as viscoelastic solids using a quasilinear viscoelastic constitutive model for the viscous part of the environment.

Fig. 8
figure 8

Neurosurgery resident testing a NeuroTouch prototype [46]. The workstation implements a stereoscope as an immersion device to help the user locate himself in the virtual world. The system uses two Freedom 6S Ⓡ haptic devices. Picture comes from the National Research Council Canada

NeuroTouch is a virtual simulator with haptic feedback that was designed for the acquisition and assessment of technical skills involved in craniotomy-based procedures. Delorme et al. stated that usability feedback from doctors provided NeuroTouch Ⓡ ways to improve its functionality. They have praised the visual portion of the simulator, and suggested modifications in the ergonomic design. Finally, by adding a modification of parameters of the model, this system can be extended to cover patient-specific operations, which would increase the adaptability of the environment and cover a wider range of scenarios.

Another simulator was developed by Jiang et al. [47] and it allows a simulation of endoscopic third ventriculostomy. This procedure is used to treat hydrocephalus by making a perforation in the floor of the third ventricle of the brain under endoscopic guidance. Jiang et al. adapted the stereoscope design by Delorme et al. The proposed system adapted the NeuroTouch Ⓡ microneurosurgery to achieve the practices needed in the simulation. They used four mirrors and a screen instead of two mirrors and two screens. To provide haptic feedback, a neuro-endoscope handle with a blunt probe instrument inserted in its working channel was mounted on a Phantom Omni Ⓡ haptic device. The simulator was created using their software simulation engine Blade. Jiang et al. designed the environment through close collaboration with the end-users. It enables two tasks: one is the burr-hole position and entry orientation selection, and the second is the navigation inside the ventricular system, reaching the third ventricle and perforating the third ventricle floor.

Other studies have focused on developing prototypes of other kinds of endoscopic surgeries. Previous work in the area of endonasal endoscopy made by Neubauer [48], used a joystick to navigate and access tumor areas in the environment. This approach reduces the realism of the simulator due to its lack of haptic interactions with tools. Additionally, they did not implement deformations modules of tissues due to its navigation nature. Perez-Gutierrez et al. [49]. divided a research study in three steps to develop an endoscopic endonasal haptic surgery simulator and they simulated a rigid endoscope device with 4-DoF and modeled the nasal tissue.

Perez-Gutierrez et al. used a force feedback joystick to provide the haptic interaction. The simulator provides the user graphical feedback using the camera movements in the virtual environment, and through the joystick, the user can sense the interaction forces to reach the tumor, as can be seen in Fig. 9. The system was created using Nukak3D Ⓡ [50]. In this research, further work should be considered to improve aspects such as immersion and interaction.

Fig. 9
figure 9

Current visualization for an endoscope approaching to the middle turbinate in an endonasal haptic surgery simulator made by Perez-Gutierrez et al. [49]

Advances in endoscopy simulators can also be found in the area of surgical suturing. Punak et al. developed a surgical suturing simulator for wound closure using the simulation of the Covidien Ⓡ (previously known as Autosuture Ⓡ) Endo Stich Ⓡ suturing tool in [51]. The simulator uses two Phantom Omni Ⓡ haptic devices: one for a grasper and the other for the simulated Endo Stich Ⓡ suturing device.

The model of the wound is based on FEM techniques, which in this case used the linear hexahedral model. Additionally, the wound is simulated as a triangular surface mesh embedded in the linear hexahedral. This allowed Punak et al. to modify the surface of the wound or change the grid resolution of the mesh without changing the global environment . On the other hand, the suture model was based on a simplified Cosserat theory of elastic rods, which uses the CoRDE model [52], enabling the model to be discretized into a coupling of a chain of mass points and a chain of orientations. The simulator uses bounding volumes in the tools and BVH for the collision detection of between the open wound and the thread. Finally, they used a finite state machine to animate the knot tying sequence.

The suture was simulated with 100 points, and the triangular surface mesh of the wound was composed of 2,178 vertices and 4,352 triangles. The simulation ran at 20 fps when there were null or minor intersections, and it ran at approximately 10 fps with complex collisions and interactions. Punak et al. aimed to create low-cost and simple simulators that help trainees learn surgical skills. User testing of the system is necessary to observe if the environment provides realistic behavior and allows the user to be trained in the correct way. Additionally, diversity of wounds and methods of suture are being planned. Most of the surgical procedures are done with at least 6 DoF. However, Punak et al. proposed to test if simulators with three degrees of force feedback can be a feasible option instead of using a higher cost simulator offering more degrees of force feedback, so further field research should be done.

Laparoscopy

One of the most common surgeries is laparoscopic cholecystectomy, which is the removal of a diseased gallbladder. This surgery is often used as the training case for laparoscopy due to its high frequency and perceived low risk. Park et al. [53] developed a low cost virtual reality surgical simulator. It consists of three elements: virtual reality, motion sensor, and haptic feedback. The authors used the Microsoft Ⓡ Kinect Ⓡ 3D camera to capture and store information associated with the movements of the body of a user. To provide haptic feedback, this work uses Nintendo Ⓡ Wiimote Ⓡ controllers.

To create the virtual environment, Park et al. construct an expert knowledge database derived from video footage taken from a surgical endoscope during real cholecystectomy operations (Fig 10). A set of 15 key stages were defined and a database links the corresponding image frames of the video footage to the relevant key stages that were used to construct an interactive video tutorial. Finally, the system uses Hidden Markov Models (HMMs) to compare the actions of users with actions from the expert knowledge database.

Fig. 10
figure 10

Virtual reality simulator made by Park et al. for a laparoscopic operation. The image depicts the video tutorial that users can watch (right), and the surgical instruments that the user manipulates using Wiimotes Ⓡ (left) [53]

By using Wiimotes Ⓡ and the Kinect Ⓡ sensor, it is possible to capture the actions of the user. However, the environment loses realism due to the fact that it lacks simulated physical surgical devices. Laparoscopic workstations usually involve the handling of physical parts while the surgeon proceeds with the operation. However, the workstation proposed by Park et al. was aimed as an in-home training system for students, and it can help them practice anywhere.

Another low cost training simulator is eLaparo4D, which was created by Gaudina et al. [54]. They designed training exercises for medicine students in realistic scenarios of videolaparoscopic surgeries. The system is based on a node.js application server, which enables all the visualization, communications and administration. The user interface uses HTML5 Ⓡ, which runs a Unity3D Ⓡ engine plugin [55]. The meshes in the simulation were developed in Blender Ⓡ 3D [56].

Haptic feedback is provided by using three Phantom Omni Ⓡ, where the first two are used as tool handlers (grasper, hook or scissors) and the third one is used to move the camera within the virtual abdomen. By using three haptic devices, students would experience a realistic approach of the laparoscopic equipment. Gaudina et al. used an Arduino Ⓡ board connected to a vibrating motor that also has a vibration feedback. Additionally, the way the simulator was made allows haptic-based remote guidance provided by a supervisor. This guidance can be provided via web to show students the proper way to execute a critical task. Finally, the system enables users to have their own profile, which is tracked over time and users can monitor their progress. The system was designed in a way in which each exercise has its allowed and not allowed actions, which adds or subtracts points from the user’s score.

Gaudina et al. stated that eLaparo4D has many aspects that need to be improved. One is the physical modeling to simulate the interactions of elements properly, such as body parts that could interfere with the surgical procedure. Moreover, the definition of exercises in the simulator are being developed to reach a certain level of mastery. Finally, surgical procedures need to provide performance feedback, so eLaparo4D is evaluating the implementation of a tracking module of the students’ learning curve.

Serious games are employed to teach skills on surgery-based training applications. Serious games provide high fidelity simulations of environments and situations. De Paolis presented a serious game for training suture skills in laparoscopic surgery [57]. The system is focused on physical modeling, and it established a set of parameters to determine the development of trainees. The simulator uses a pair of haptic devices, which can be two Phantom Omni Ⓡ or two Falcon Ⓡ using the multidevice HAPI Ⓡ library. The system uses PhysX Ⓡ physics engine and Ogre3D Ⓡ graphics engine [58].

De Paolis used the mass-spring method to design the tissue in PhysX Ⓡ, and he used Ogre3D Ⓡ to render the tissue. On the other hand, the thread model was built using follow-the-leader techniques in PhysX Ⓡ. The thread is composed of cylinders that are connected through a spherical joint. This enables the rotation of the elements relative to each other. The architecture of the system was developed using the architectural pattern of Model-View-Controller, which manages the behavior and responses of the environment and objects.

In the assessment of suturing procedure, De Paolis considered the following parameters to evaluate the medical process: duration time, accuracy, force peak, tissue damage, angle of entry, and needle distance. The software architecture was developed using the architectural pattern Model-View-Controller, which manages the behavior of objects in the environment, responds to requests for information about their states and to instructions to change state. De Paolis [57] concluded that the serious game of laparoscopic suturing could be improved by modifying some aspects of the environment. One is improving the interaction between the tissue and suture thread, which is one of the areas in suture modeling that is currently researched.

Other improvements in the area of laparoscopic simulators include the addition of visual effects. For instance, smoke and bleeding models are not developed due to its lesser significance compared to other major modules, such as collision detection and physics simulation. Nevertheless, Halic and De developed a GPU based method to implement realistic smoke and bleeding effects in virtual reality simulators [59] (Fig. 11). These effects were applied in the Laparoscopic Adjustable Gastric Banding (LAGB) simulator [60].

Fig. 11
figure 11

In order to promote realism in simulators, Halic and De developed (a) smoke effects and (b) bleeding effects [59]. These effects were applied to LAGB simulator [60]

In order to obtain the desired effects, Halic and De analyzed videos obtained from LAGB and they identified two different kinds of smoke effects. One is the environmental smoke effect, which often obstructs the camera view. The other is the smoke effect originated at the tip of the cautery tool that spreads around the tissue slowly. Both types are blended to generate the final environment. For bleeding effects, an attribute variable was assigned to all vertexes of the mesh structure. The processing is made when the interpolated value of this variable is compared with the noise texture. If it exceeds a certain value, bleeding is rendered in that particular fragment.

The simulator and the effects were written with GLSL Ⓡ shader. The smoke video fetching and bleeding rendering were performed within the same shader. Halic and De used the GPU for bleeding and smoke simulation; therefore, it decreased the computational load of the CPU with a slight reduction in the rendering performance. However, the performance is better compared to a full-fledged simulation of both effects. Halic and De specified that they are planning to optimize the texture processing in CPU and CPU-GPU data transfer to avoid the loss of performance obtained.

Finally, assistant modules are currently being applied in the area of virtual simulators, where one of the most common is the technique of Virtual Fixtures (VF). VF is a technique that is used to guide students while they are working on a specific task. VF applies forces in the haptic device to maintain the students outside of prohibited work space areas. Research made by Hernansanz et al. is focused on VF [61]. The paper discusses the advantages of VR to improve basic learning skills in laparoscopic surgery. They used the Skill-Rule-Knowledge taxonomy [62] to assess assisting technologies applied in abilities acquisition.

Hernansanz et al. implemented as VF: visual guidance, audio guidance, motion scaling, magnification, and force feedback. These provide the student an assistance environment. The simulator that Hernansanz et al. used was the one developed by Zerbato [63]. The system uses a single Phantom Omni Ⓡ haptic device.

Orthopaedics

Arthroscopy [9] is another medical surgery that has been implemented in virtual environments. Arthroscopic procedures are performed to evaluate or treat orthopaedic conditions. Heng et al. developed a virtual reality system for knee arthroscopic surgery using OpenGL Ⓡ and C++ [73]. They created soft tissue deformable model using Finite-Element Analysis (FEA) with realistic feedback. They developed their own 4-DoF haptic device with 3-Degrees of Force Feedback (DoFF) to satisfy their requirements, and they used slides from the Visible Human Project image dataset combined with CT real images to obtain better visual realism.

In this simulator, two types of meshes are generated: one to model non-deformable organs (i.e., bones), and the other using tetrahedral meshes to represent deformable organs. Using an in-house architecture, these two meshes are rendered in a preprocessing phase of the system. To improve the performance, they propose regions with different properties to balance the computation time and the level of simulation realism. Finally, the simulator developed procedures to cut tetrahedral meshes and implemented optimized collision detection methods. An endoscope and a cutter were modeled to provide real tool models in this simulation. The simulator includes the functionality to record the surgical process. This feature enables medical students to playback the recorded procedure. This system was evaluated by medical professionals and the results reveal a satisfactory tactile feedback.

A typical task in orthopaedics is bone drilling. This process is an essential step before the insertion of screws or pins. Chen and He created a simulator for bone drilling of a femur bone in a hip fracture surgery [64]. This simulator can be seen in Fig. 12. The key feature of this study is the use of a Phantom Premium Ⓡ, which provide 6 DoFF. In the simulator, the bone is modeled as a voxel, which is obtained from CT or Magnetic Resonance Imaging (MRI) scans and the drilling probe is modeled as a cylindrical volume.

Fig. 12
figure 12

Chen and He developed a bone drilling simulator, where users can feel, by force feedback, the different components of the bone [64]. In figure (a) a user is performing the task using a Phantom Premium \({\circledR }\), and on figure (b) a close-up of the environment can be seen

In this simulator two important aspects for drilling were taken into consideration: bone properties and drill parameters. The bone properties was modeled according to its heterogeneity (hyaline cartilage, spongy bone, marrow cavity, and compact bone). For drill parameters, drill speed, type of drills and feed rates were considered. Finally, calculation of force feedback (drilling force and rolling torque) is based on the material removal rate, which depends on drill diameter and feed-rate. Additionally, the travel distance of the drill is calculated using feed-rate and time interval parameters.

In orthopaedics, one specific surgery process for fractures on the human femur is the Less Invasive Stabilization System (LISS). LISS allows surgeons to insert a percutaneous plate and fix it in the distal femur with screws to help bones recover from fractures. For this operation, Cecil et al. developed a virtual collaborative simulator for LISS training [65]. The simulator uses FEM approach to model deformations when surgical tools interact with the surfaces of the bone (Fig. 13).

Fig. 13
figure 13

Another environment that uses the technique of collaborative learning is the one made by Cecil et al. [65]. Picture on the right shows the surgery process for fractures on the human femur was designed. By applying the methodology of collaborative environments, Cecil et al. enables users to practice the surgery process, and on another workstation an expert surgeon can check the process. On the right side, a close-up of the virtual simulator can be seen

The importance of this research was the implementation of the collaborative mode. Each user has their own virtual environment on their work station; the control of the simulation remains with a specific user until he grants permissions to another user, in a token-based approach. In addition to the collaborative mode, Cecil et al. designed this simulator as two different platforms to be used as an evaluation and planning tool for surgeons. The first platform is a simulator, which was created using C++ and Chai3D Ⓡ. This simulator incorporates a Phantom Omni Ⓡ haptic device to interact with the environment and obtain force feedback from it. The second platform was created to visualize the surgery process obtained in the process. This system was considered as a tool to help surgeons teach the process and evaluate medical residents.

Miscellaneous procedures

Additional to the previous medical areas, there have been simulators in other specific areas or tasks. One of them is in the training of biopsy, which is the sampling of cells or tissues for examination. This procedure is most commonly performed to evaluate whether there are inflammatory or cancerous conditions in organs. Training of this task can only be performed on live patients, where consequent risks may occur in patients. Therefore, the importance of creating systems that provide visual realism and controlled environments is emphasized. In this area, Ni et al. developed a virtual reality simulator for Ultrasound-Guided (UG) liver biopsy training [66].

The authors combined images obtained from CT with ultrasound images to provide higher realism. In addition, the generated deformable model is able to simulate the breathing of a patient by changing controlled parameters. For haptic feedback, they used two haptic devices that are managed in independent routines, a Phantom Omni Ⓡ to simulate an ultrasonic scanning probe and a Phantom Premium to handle the virtual needle.

Ni et al. show in the simulator the liver model and the ultrasonic sensor image that can be seen in real operations, as can be seen in Fig. 14. User tests were made to evaluate different aspects of realism, such as images and tactile effects. The evaluation was conducted with experts and trainees, where satisfactory results were obtained. Nevertheless, it was highlighted that the performance of experts is not as high as expected when breathing is enabled in the simulation.

Fig. 14
figure 14

Ni et al. developed a virtual biopsy simulator [66]. In this, the user can perfom interactive anatomical navigations, where according to the movements of the probe, a CT view that corresponds to it is shown

A Transrectal Ultrasound (TRUS) guided prostate biopsy virtual environment using haptic devices has been developed in recent studies [67]. Selmi et al. designed a complete learning environment where various exercises were implemented to provide didactic learning. To achieve this goal, the dedicated clinical case database from Koelis Ⓡ UroStation Ⓡ system was connected to the system to provide diverse patient cases. This database covers a large variety of important situations that surgeons typically encounter during real surgeries.

The simulator let practitioners use a Phantom Omni Ⓡ haptic device to navigate the environment on one hand, and on the other hand they use CamiTK Ⓡ GUI [68], which provides data and feedback to the user during the simulation. Moreover, the simulator allows practitioners to access their performance record, such as average score and procedure average time of the procedure. Additionally, to improve realism, students have to complete a checklist, which is typically done prior to the operation.

The simulator of Selmi et al. is used in two phases. In the first one, students are asked to perform seven specific exercises, which help them to understand the procedures during the biopsy and let them practice hand-eye interaction in the simulator. The second phase offers them the option to perform a virtual biopsy process, where the student can select the desired position to do the biopsy (decubitus or lateral) and the location to start it (left central base, left lateral base, right central base or right lateral base).

Training environments have also been created for liver surgery. Yi et al. proposed a method to adapt and calibrate shape and size of organs by establishing feature points on their surface [69]. The organs are formed by triangle meshes that follow the spring-mass damper model to achieve deformations. Yi et al. used a simplified Lin-Canny algorithm to calculate the nearest point from the surgical tool in order to deploy the corresponding interaction forces and collision detection. This simulator enables medical students practice surgeries in virtual environments by applying images of patient’s organs in the simulator.

Virtual training environments have also been designed in optometry. Optometry is usually practiced in close supervision of experts due to the fact that these procedures are time-consuming and unrepeatable. Tradicional techniques in the area focus on the use of artificial or bull eyes. Medical students in optometry need training environments to develop their skills with superior realism. Wei et al. [70] created an immersive extendable and configurable work station that implements the use of a Phantom Omni Ⓡ haptic device and a Head Mounted Display (HMD) to provide AR techniques. This work station can be seen in Fig. 15. The simulation was developed using C++ and Chai3D Ⓡ.

Fig. 15
figure 15

A workstation for optometry tasks was developed by Wei et al. [70]. The workstation provides a realistic approach to real operations. Figure (a) shows that the system is made from a head display and the equipment that is used in the operation. Additionally, a display and a Phantom Omni are mounted. On figure (b) screenshot of the body removal task is shown

The eye model was implemented as a polygonal mesh with separate component and textures. By doingthis segmentation, interaction between the surgical tools and the eye provide different physical properties, according to the region that the user is navigating. Moreover, the model was built implicitly from the interactions with the deformable object; therefore, data-driven models were used in this simulator.

In the virtual environment, illumination and needle sharpness modification were adapted to provide the best suitable and similar conditions as the ones experienced during real surgeries . Additionally,, the simulator considered the use of the HMD to adjust the visual rendering effect, and AR techniques to modify the slit lamp orientation during the simulation. Finally, the simulator was tested by five optometry students who already knew the procedures. They had to perform five different tests and their performance was recorded by the system. By using the simulator, they were evaluated by assessing 1) distance estimation between the tool and the eye, 2) foreign body location, 3) foreign body removal task, 4) the angle and position of the needle, and 5) needle insertion.

The authors stated that in the first trials made by the students, they had bad performance because they were still unfamiliar with the workstation. The principal problem was in the area of space location (Z-axis or depth). At the end of the evaluation, students were asked to answer a survey about the system. In this questionnaire, immersion, interactivity and effectiveness of the system were asked. Most students agreed that the simulator could enable users to finish the tasks faster and safer due to the force feedback provided on their hand.

Besides advances in training students for specific medical tasks, physio-therapeutic systems have been made using haptics to relieve the work load of physiotherapists. Surgeries are usually followed by a rehabilitation program. Typical feedback provided to patients is given using audiovisual cues [71]. However, in some cases, using a mechanical or haptic feedback can be implemented as an alternative or complement treatment. A recent research made by Rajanna et al. proposed a system called KinoHaptics, which is an autonomous, wearable, haptic device to help during the recovery process of patients [72].

KinoHaptics hardware is based on an in-house developed armband (with vibration motors), and a Kinect Ⓡ 2.0 to track the motion of users. By adding the armband with vibro-haptic feedback, Rajanna et al. simulates a physiotherapist that accompanies patients during the training session. Kinect Ⓡ 2.0 is used to capture the movement of the patient to recreate it in a virtual scheme. The virtual scheme and the user interface were made using Unity3D Ⓡ framework.

The system was tested by 14 students. Test subjects were chosen according to the following categories: has suffered an injury, has limited motion in their limbs, and has no prior injury or limited motion of their limbs. Finally, results shows that users liked KinoHaptics and felt it convenient to be used for self-care after having a surgery.

Discussion

In this section, we provide a comparison and summary of the technical aspects used in haptic virtual simulations. We then discuss the visual realism of the simulations and, finally, the training aspects of virtual environments. The papers discussed in this work are summarized in Tables 1 and 2.

Table 1 Comparison of haptic simulations
Table 2 Comparison of haptic simulations (cont)

Technical comparison

Haptic devices in virtual simulators

Current simulators mainly use haptic devices with three or six DoF. The most common are the Falcon Ⓡ and the Phantom Omni Ⓡ due to their affordable price (a Falcon Ⓡ costs around $150–$200 USD as of July 2015). Falcon Ⓡ devices provide only 3-DoF; the Phantom Omni Ⓡ provides 6-DoF, but only 3-DoFF. Additionally, the Phantom Omni Ⓡ costs around $2,200 USD, which makes it less accessible. Several authors have combined low-cost haptic devices with external modules to provide additional degrees of freedom [32, 34], or improved realism by adapting surgical tools to them [30, 47].

The haptic devices prevailing in stitching procedures are from the Phantom family. These haptics provide 6-DoF but only 3-DoFF, except for the Phantom Premium, which has all its DoF coupled with force feedback. A typical task in stitching simulators is the open-wound operation, which mainly focuses on the dermis layer. The nature of this procedure implies that torsion is needed when the needle pierces the skin; consequently, simulators need to provide all six DoF with force feedback. Notably, none of the authors in this area adapted haptic devices or created their own to provide forces in cases such as skin perforation by the needle.
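To make the DoF/DoFF distinction concrete, the following sketch (our own simplification, not code from any cited simulator) shows what happens on a 3-DoFF device: of the full 6-DoF wrench computed when the needle pierces the skin, only the force part can be rendered, and the torsional component is lost:

```cpp
#include <array>
#include <cstdio>

// A 6-DoF wrench: three force components plus three torque components.
struct Wrench {
    std::array<double, 3> force;   // N
    std::array<double, 3> torque;  // N*m, e.g., torsion while piercing skin
};

// On a 3-DoFF device (such as the Phantom Omni), only forces can be
// rendered; the torque channel is discarded.
std::array<double, 3> renderOn3DoFF(const Wrench& w) {
    return w.force;  // torque components cannot be displayed
}

int main() {
    // Hypothetical wrench computed when the needle pierces the dermis.
    Wrench w{{0.5, 0.0, 1.2}, {0.0, 0.08, 0.0}};
    auto f = renderOn3DoFF(w);
    std::printf("rendered force: (%.2f, %.2f, %.2f) N; "
                "torsion of %.2f N*m is lost\n",
                f[0], f[1], f[2], w.torque[1]);
    return 0;
}
```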

Palpation virtual environments

aim to reproduce the sensations felt in real procedures. Palpation is commonly performed in parallel with needle insertion. Authors usually adapt haptic devices with pads to provide realistic sensations. Others provide proper sensations of tumors by scanning existing tumor pads, translating the measured texture and force responses into better haptic feedback and rendering. Palpation may also benefit from new haptic devices that use piezoelectric sensors in gloves to enhance sensation and depth perception.
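As a hedged sketch of this data-driven idea (the grid resolution and stiffness values below are invented for illustration, not measured from any real pad), the stiffness scanned from a tumor pad can be stored in a 2D map and looked up under the probe, so that a stiffer reaction is rendered over the simulated tumor:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical 2D stiffness map (N/m) scanned from a tumor pad.
// Higher values in the middle represent a stiff inclusion (the tumor).
class StiffnessMap {
public:
    StiffnessMap(int w, int h) : w_(w), h_(h), k_(w * h, 400.0) {
        for (int y = h / 3; y < 2 * h / 3; ++y)
            for (int x = w / 3; x < 2 * w / 3; ++x)
                k_[y * w_ + x] = 1500.0;  // stiffer region
    }
    // Nearest-cell lookup; a real system would interpolate.
    double at(double u, double v) const {  // u, v in [0, 1]
        int x = std::min(w_ - 1, static_cast<int>(u * w_));
        int y = std::min(h_ - 1, static_cast<int>(v * h_));
        return k_[y * w_ + x];
    }
private:
    int w_, h_;
    std::vector<double> k_;
};

int main() {
    StiffnessMap map(30, 30);
    double depth = 0.004;  // 4 mm indentation under the probe
    // Probe over healthy tissue vs. over the inclusion:
    std::printf("edge:  F = %.2f N\n", map.at(0.1, 0.1) * depth);
    std::printf("tumor: F = %.2f N\n", map.at(0.5, 0.5) * depth);
    return 0;
}
```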

Dental simulators

focus on the most common actions in dental procedures, such as caries and calculus removal. Applications use diverse haptic devices; nevertheless, the main aim is the reproduction of realistic workstations. Tse et al. [34] adapted their workstation to provide these conditions and added an extension to a Falcon Ⓡ device in order to provide 6-DoFF. Additionally, undergraduate students used a common drill as the handle of the haptic device, which enhanced the learning procedure. Further work in this area could consider the addition of two haptic devices, because medical tasks are performed with two hands.

Endoscopy and laparoscopy environments

belong to the class of Minimally Invasive Surgeries (MIS). Because these procedures need to be performed with two hands, simulations are usually manipulated with two Phantom Omni Ⓡ haptic devices. Virtual environments in this area focus principally on endoscopy navigation [47, 49]. Laparoscopy simulations are commonly developed to provide practice in videolaparoscopic surgery [54, 61]. New alternatives, such as [53], provide low-cost simulations by coupling haptic devices from video games, such as Wiimotes Ⓡ.

Orthopaedics environments

usually focus on arthroscopic tasks, principally drilling. In this type of simulation, FEM should be considered, and researchers should focus on how to model bone properties and structure; this enables proper modeling of drilling interactions and force feedback. Miscellaneous procedures are also being developed; important cases in this area are biopsies and optometry. Both kinds of simulators use the Phantom Omni Ⓡ. However, manipulation of the tools requires 6-DoFF, as in the case of stitching. Finally, post-surgery rehabilitation has started to add haptic feedback as a complementary resource to audiovisual cues [72].

In short, simulators in these areas are developed with 3-DoFF due to their nature, where only 3-DoF are needed. Commonly used tools in medical procedures involve manipulation through triggers, so their addition to haptic devices does not require the incorporation of more DoF, as in [54].

Degrees of freedom and modifications

State-of-the-art articles on medical applications focus on developing simulations, which leaves the addition and modification of haptic devices as another field of study. Modifications of haptic devices can be found in robotics and robot-assisted surgeries [74]. Such modifications usually improve haptic sensations in order to bring them closer to real surgeries or procedures [30, 32, 34, 47].

In contrast, several authors decreased the physical realism of virtual simulators coupled with different haptic devices [53] to make the simulations more affordable. While using a Kinect Ⓡ and Wiimotes Ⓡ decreases the cost of a virtual simulator for undergraduate students, the type of force feedback and the form factor of the Wiimote Ⓡ reduce realism in skill practice.

Additionally, due to the technical properties of the force feedback, usually only one force in one direction can be applied; one exception is the work of [53], which provided vibrations as force feedback. Laparoscopic applications need to provide at least 3-DoFF; consequently, the Wiimote Ⓡ is not a feasible option because it only provides vibration as feedback. Twisting forces require 6-DoFF and are rarely found and used [34].

Visual realism

Another important aspect to consider in the development of simulators is the reproduction of a high level of visual realism. Recent progress in GPU processing and computer graphics in general allows an extremely high level of realism. Yet very few works [46, 54, 60, 61] focus on photorealistic rendering and high visual quality. Most works use only schematic rendering, with a shadow to show the location of the probe and simple Phong shading to depict the surfaces. Finally, only a limited number of works simulate gravity [57].

The most common visualization displays the position of the probe as a sphere, sometimes with a shadow, and in some cases 3D immersion is added. Some learning is usually required before users become familiar with the system, as they first need to orient themselves properly in the simulator. Current simulators let users experiment and move freely in the 3D scene before starting the medical procedures, as exemplified in the work of [26].

Users very often report difficulty in detecting the position of the probe in 3D space. It is extremely important to align the coordinate system of the virtual 3D space with the real one, so that when the probe moves to the right, the virtual one moves in the same direction. Scaling of the virtual environment is equally important: in many cases, neither the forces nor the sizes of the objects correspond to the real world. The minimal requirement for correct perception of the 3D location is the inclusion of shadows to locate the haptic probe.

Some works add 3D immersion to increase realism and 3D perception. The immersion is provided either by active 3D head-mounted displays [34, 36] or by 2D screens displaying in stereo [46]. Head-mounted displays allow free movement, and their position and orientation are coupled with the virtual camera, but they are usually bound to a single user. Passive 2D screens allow multiple users to view the scene, but the movement of the user is not transferred to the virtual camera. The most common solution is active 3D displays, where the position of the person performing the procedure is critical, as for example in the treatment of occlusions.
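As a minimal sketch of this alignment requirement (the values and the uniform scale below are our own assumptions, not taken from any cited work), mapping the device workspace into the virtual scene reduces to a consistent per-axis scale and offset; a mismatched axis or scale is exactly what makes the virtual probe appear to move in the wrong direction or at the wrong speed:

```cpp
#include <array>
#include <cstdio>

using Vec3 = std::array<double, 3>;

// Map a device-space position (metres) into the virtual scene.
// Axes are kept parallel to the real ones so that moving the stylus
// right moves the virtual probe right; only scale and origin change.
Vec3 deviceToVirtual(const Vec3& devicePos,
                     const Vec3& deviceCenter,
                     const Vec3& virtualCenter,
                     double scale) {   // uniform scale, illustrative
    Vec3 out;
    for (int i = 0; i < 3; ++i)
        out[i] = scale * (devicePos[i] - deviceCenter[i]) + virtualCenter[i];
    return out;
}

int main() {
    Vec3 deviceCenter{0.0, 0.0, 0.0};
    Vec3 virtualCenter{0.0, 1.0, 0.0};
    // A 5 cm move to the right on the device...
    Vec3 p = deviceToVirtual({0.05, 0.0, 0.0}, deviceCenter,
                             virtualCenter, 2.0);
    // ...becomes a 10 cm move to the right in the scene.
    std::printf("virtual probe: (%.2f, %.2f, %.2f)\n", p[0], p[1], p[2]);
    return 0;
}
```

Note that when positions are scaled up in this way, the rendered forces must not be scaled identically, or the perceived stiffness of the objects changes, which is consistent with the observation above that forces often fail to correspond to the real world.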

One of the main challenges is to improve the way students perceive the location of objects in space, especially along the z-axis (depth), as described in [70]. An alternative solution was proposed by Coles et al. [32], who used AR to ensure visual-optical alignment.

Physics

Computer graphics has seen unprecedented progress in the simulation of physics. Recent GPU-supported physics engines such as PhysX Ⓡ [75], Havok Ⓡ [76], Newton Ⓡ [77], or Bullet Ⓡ [31] allow the simulation of fluid dynamics, kinematics, FEM, and soft tissues. These simulations are physically precise, can handle very large scenes and meshes, and usually run in real time. This potential has not been fully exploited in haptic simulations for two reasons. First, users rely on existing haptic libraries and APIs that usually do not expose the most advanced and recent algorithms. Second, these libraries are usually implemented on the GPU and require advanced visualization techniques to be fully used; moreover, combining haptic feedback with these libraries is non-trivial. Nevertheless, it would be extremely beneficial for haptic simulations to fully exploit modern GPU-oriented physics-based approaches, as this could be the next quantum leap in the quality and realism of the simulators.
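One common way to bridge the two update rates, often called a virtual coupling, connects the fast haptic loop (around 1 kHz) to the slower physics engine (around 60 Hz) through a spring-damper. The sketch below is a generic illustration of this idea under assumed gains, not code from any of the engines cited above:

```cpp
#include <cstdio>

// Spring-damper (virtual coupling) between the haptic device position
// and the proxy object owned by the slower physics engine.
struct Coupling {
    double k = 500.0;   // spring stiffness, N/m (illustrative)
    double b = 5.0;     // damping, N*s/m (illustrative)
    double force(double devicePos, double deviceVel,
                 double proxyPos, double proxyVel) const {
        return k * (proxyPos - devicePos) + b * (proxyVel - deviceVel);
    }
};

int main() {
    Coupling c;
    double proxyPos = 0.0, proxyVel = 0.0;     // updated at ~60 Hz
    double devicePos = 0.0, deviceVel = 0.01;  // read at ~1 kHz

    // 1 kHz haptic loop: render the coupling force every millisecond,
    // even though the physics state only changes every ~16 ms.
    for (int ms = 0; ms < 5; ++ms) {
        devicePos += deviceVel * 0.001;  // stylus drifts forward
        double f = c.force(devicePos, deviceVel, proxyPos, proxyVel);
        std::printf("t=%d ms  force to device: %.4f N\n", ms, f);
    }
    return 0;
}
```

The damping term keeps the coupling stable when the physics state lags behind the device; tuning k and b trades perceived stiffness against stability.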

Feedback in training

An important aspect is to provide learning feedback and evaluation during training sessions. While most existing simulators provide visuo-haptic feedback, some also provide detailed feedback on the training session by recording the number of clicks, the motion of the haptic cursor, the forces applied in the environment, the actions made by the user, etc. The assessment of tasks and user performance is made by registering the interaction parameters of the environment. For instance, some systems quantify the number of operations carried out in a given time interval, such as the system described in [42], where the authors measured the number of calculi removed over a period of ten minutes. Other systems analyzed parameters and evaluation checklists from the users to check the reliability of the simulation [66, 70]. Nonetheless, these results and checklists were measured internally, so no evaluation or feedback was given to the user. Researchers have also considered increasing the adaptability of virtual environments; in the work of Delorme et al. [46], students can use different medical training scenarios by modifying the parameters of the models.
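A minimal sketch of such interaction-parameter registration (the structure and metrics below are our own illustration, not taken from the cited systems) records time-stamped cursor positions and applied forces and derives simple summary statistics, such as path length and peak force, for post-session review:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Sample {
    double t;           // seconds since session start
    double pos[3];      // haptic cursor position (m)
    double forceMag;    // magnitude of the applied force (N)
};

struct SessionLog {
    std::vector<Sample> samples;
    void record(double t, double x, double y, double z, double f) {
        samples.push_back({t, {x, y, z}, f});
    }
    double pathLength() const {            // total cursor travel (m)
        double len = 0.0;
        for (size_t i = 1; i < samples.size(); ++i) {
            double d = 0.0;
            for (int k = 0; k < 3; ++k) {
                double dx = samples[i].pos[k] - samples[i - 1].pos[k];
                d += dx * dx;
            }
            len += std::sqrt(d);
        }
        return len;
    }
    double peakForce() const {
        double m = 0.0;
        for (const auto& s : samples) m = std::max(m, s.forceMag);
        return m;
    }
};

int main() {
    SessionLog log;
    log.record(0.0, 0.00, 0.00, 0.0, 0.2);
    log.record(0.5, 0.02, 0.00, 0.0, 1.1);
    log.record(1.0, 0.02, 0.01, 0.0, 0.6);
    std::printf("path %.3f m, peak force %.1f N\n",
                log.pathLength(), log.peakForce());
    return 0;
}
```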

In contrast to closed systems that do not provide feedback to the user, the simulators introduced in [36, 47, 53] provide real-time feedback: they send immediate messages on how the user is performing and also provide tips during the task. Additionally, Jiang et al. [47] presented a system that includes an evaluation of the whole task by displaying appropriate assessment metrics.

Gaudina et al. [54] used an assessment method that evaluates learning by measuring user performance. Their simulator predicted the score obtained during the training session and provided students with feedback on the aspects that needed to be improved. Virtual fixtures (VF) have been used in the area as guidance during training simulations; however, they are not properly considered learning feedback. VF are used to keep students out of prohibited areas within the workspace; this type of assistance was used by Hernansanz et al. [61]. Nonetheless, VF have been shown to be generally ineffective in improving training results [78, 79]. Authors should consider other kinds of virtual assistance, such as Temporally Separated Assistance or Spatially Separated Assistance [80], to attain better results during training and avoid disturbing the user's movements. Another methodology used in virtual environments with haptic devices is collaborative learning. These environments are based on the interaction between users, or between a user and an expert, and principally follow a token-based approach, where one user cannot do anything until she holds the token [37, 65]. However, these environments demand a large amount of time from experts, so they are not easily accessible for training simulators.
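For concreteness, a forbidden-region fixture can be sketched as follows (a generic illustration under our own assumptions, not the implementation of Hernansanz et al. [61]): when the tool tip enters a spherical prohibited zone, a spring force proportional to the penetration pushes it back out:

```cpp
#include <cmath>
#include <cstdio>

// Forbidden-region virtual fixture: a repulsive spring that activates
// when the tool tip enters a spherical prohibited zone.
void fixtureForce(const double tip[3], const double center[3],
                  double radius, double stiffness, double out[3]) {
    double d[3], dist = 0.0;
    for (int i = 0; i < 3; ++i) {
        d[i] = tip[i] - center[i];
        dist += d[i] * d[i];
    }
    dist = std::sqrt(dist);
    if (dist >= radius || dist == 0.0) {       // outside the zone: no force
        out[0] = out[1] = out[2] = 0.0;
        return;
    }
    double depth = radius - dist;              // penetration depth
    for (int i = 0; i < 3; ++i)
        out[i] = stiffness * depth * (d[i] / dist);  // push outward
}

int main() {
    double center[3] = {0.0, 0.0, 0.0};
    double tip[3] = {0.008, 0.0, 0.0};          // 8 mm from the center
    double f[3];
    fixtureForce(tip, center, 0.01, 2000.0, f); // 1 cm zone (illustrative)
    std::printf("fixture force: (%.2f, %.2f, %.2f) N\n", f[0], f[1], f[2]);
    return 0;
}
```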

Several studies performed usability tests with students and experts in the field of medicine [26, 29, 30, 32, 37, 42, 66, 70]. Most researchers ask the medical staff to answer surveys and functionality questionnaires to assess the simulation fidelity; these evaluations have helped virtual environments with haptic devices gain acceptance in the medical community.

Even though the simulators presented above allow the assessment of user performance in the virtual environment, they do not usually provide assistance, guidance, or hints after the simulation. It is important to decide whether it is better to provide immediate, ongoing feedback while the user interacts with the simulation, or post-hoc feedback after the interaction. The use of expert human tutors could help train users appropriately; indeed, such systems have contributed to the training of professionals in various areas [1].

Considerations

By analyzing the state of the art of haptic simulators in medical training, we identified several aspects that are important to consider.

The target audience

It is extremely important to know the audience and the level of knowledge they hold, and to make an ex ante decision about the target group before the simulator is developed. The audience affects the choice of technology (DoF, DoFF, immersion, visual realism) as well as the appropriate learning feedback to provide.

The required level of visual realism

varies significantly across areas. While high visual quality is important in areas where the user can actually see the performed operation in daylight (dental, stitching), it may be less important in endo- and laparoscopy. However, a general tendency to increase the realism and quality of rendering is driven by the increasing quality of GPUs and modern APIs for computer graphics, such as OpenGL, and shading languages, such as GLSL.

Another challenge is the incorporation of models of medical tools into the virtual environment, which promotes realism in the simulations. Students tend to learn better when training stations provide the same resources as those they use in real life. On the other hand, modifications of current haptic devices may be important to increase the quality of haptic feedback, and they represent a lower-cost choice when a large number of users need to be trained. According to users and medical staff, differences between the virtual procedure and the real one represent a change in traditional operating habits, which makes performing surgical tasks more difficult. This aspect calls for simulators that add, adapt, or modify current haptic devices.

The feedback

provided by current simulators is often the most neglected part of the system. Future work should focus on providing in-depth feedback to users, evaluating the learning of an individual user over time, and including intelligent assistants or tutors that allow students to enhance dexterity and improve performance. These modules should measure users and guide them to perform the task properly, providing an enhanced learning process.

In addition to the previous remarks, it is important to mention that adaptive simulators, capable of modifying the degree of difficulty of surgical processes depending on user performance, would be highly suitable for training purposes. Indeed, these systems could provide appropriate challenges aimed at improving the acquisition of suturing skills. This kind of simulator should also provide modules to monitor the expertise curve reached by users and display new scenarios according to their progress.

Moreover, there are also challenges that must be resolved when researchers want to incorporate haptic devices into virtual training environments. One of them is the calibration of the force feedback that is sent to students when they interact with the simulated tissues and organs. Proper modeling of the forces should be ensured by applying appropriate mathematical models and constraints.

Finally, it is worth mentioning that simulators should benefit from the latest advances in GPU design and novel algorithms for physically based modeling.

Conclusion

In this work, we have presented a survey of medical training simulators that use haptic feedback. Existing works and simulators have been reviewed and compared from the viewpoint of surgical techniques modeled, the number of degrees of freedom and degrees of haptic feedback, perceived realism, immersion, and learning feedback provided to the user.

Visual and auditory channels have been privileged in the evolution of the interaction between humans and computers. Tactile sensations have been added gradually through devices intended for the hands, such as mice, keyboards, and touch screens, which are primarily used to input data. Recent advances in the development of haptic devices allow the perception of the environment through active examination by the sense of touch: feeling and palpating its shape and texture. Training virtual environments that incorporate haptic devices offer an important alternative for training and gaining hand-operated skills.

While haptic simulations are an interesting and low-cost alternative to training on real tissues, they are still hindered by the low realism of the visual environment or the high price of high-quality devices. Visual realism has been driven by the entertainment industry, whereas haptic feedback is usually developed in research laboratories.

One important aspect of medical simulators is the quality of the feedback provided to the user. Many approaches limit themselves to mimicking the real world. However, simulators should be able to log the entire session and provide detailed feedback, coupled with expert evaluation either in the form of human assessment or an intelligent tutor. This would certainly have the potential to increase the adoption of haptic training devices in medical practice.