
1 Introduction

Grounded force feedback haptic devices, such as Sensable’s Phantom, Force Dimension’s Omega, or Haption’s Virtuose, enable users to explore environments and feel the force feedback generated at the Haptic Interaction Point (HIP) or tool they control. The environment can be virtual, in which case the interaction forces are computed by means of a model. It can also be real, when the haptic interface is used to teleoperate a slave robot; in that case the interaction forces can either be estimated or obtained via a load cell, and then returned to the user.

Virtual environments are widely used in haptics for training purposes, as it has been pointed out in [1] that the presence of haptic feedback in conjunction with visual feedback improves the accuracy of force recall in motor skill learning over visual feedback alone. Likewise, [2] shows that haptic feedback is important in skill acquisition during image-guided surgical simulation training. The literature on teleoperation is itself extensive. A comprehensive survey on bilateral teleoperation (with force return to the user) can be found in [3].

In more recent years, the use of active constraints, or virtual fixtures, was introduced to assist users in their task, both in virtual reality and in teleoperation scenarios. Active constraints are virtual forces that can be used to prevent unwanted motions (forbidden-region virtual constraints), or to guide the user towards a certain goal. They have been used by themselves (without force return from the environment), as in [4], where a user teleoperates a robot to perform live-line maintenance and the active constraint keeps him close to a circular path, or in [5], where force fields prevent the user from approaching obstacles in teleoperation. In virtual reality, [6] uses active constraints to create force feedback in a minimally invasive beating-heart surgery simulation, where the environment normally does not provide force feedback due to the nature of the operation. Another example can be found in [7], where virtual fixtures are designed to recreate the force interaction at a catheter tip when it approaches a target area.

Active constraints can be used as the only source of haptic feedback, but they are frequently implemented on top of the force feedback made available by the environment. A typical example can be found in [8], where active constraints are used to define untouchable regions around tissue in teleoperated surgery with force return. In [9], a user handles a Phantom haptic device to palpate a soft viscoelastic deformable body simulating a breast, in which a virtual tumor is simulated and rendered through the haptic device, thereby augmenting the real scene.

Whenever active constraints and environment force feedback are used simultaneously, they have to be combined in some way to be rendered at the haptic interface. The usual implementation simply sums both forces to obtain the resulting force to be rendered. But what effect does this achieve? Will the constraint and environment “blend” so that we cannot distinguish where each force came from? Or will the user have a clear understanding of the source of the force? Either can happen, depending on the scenario. Is there a way we can ensure that the user will always be able to tell at the haptic interface whether he is feeling a force coming from a virtual fixture or from the environment? To the authors’ knowledge, this has not been investigated. In this paper, we propose a rendering technique that makes the source of the feedback force easily distinguishable.

The problem we analyze in this work is that of combining the force feedback from the environment and the virtual feedback in an unambiguous way. While the user is navigating through space and perceiving forces, how can we render the forces such that he knows at any time what is coming from the environment and what is generated by the superimposed virtual cues?

2 Combining Vibrotactile and Kinesthetic Cues

When forces are generated by both the environment and the active constraints, they have to be combined to be rendered at the force feedback haptic device. If these forces are orthogonal, they do not interfere with each other. Such is the case when the virtual fixtures constrain the user to stay on a specific surface, and the forces of interest in the environment are felt within that surface. In any other case, the virtual and environment forces will overlap to some degree.

In the case of guidance fixtures, the usual implementations keep the user on a given trajectory using a virtual spring, as shown in [10]. Other guidance forces are possible, as can be seen in [11], where a cooperatively controlled robot augments the tool in order to minimize certain forces applied during vitreoretinal surgery. These usually make it hard to distinguish the guidance from the environment feedback. When forbidden-region virtual fixtures are used, they can be distinguished from the environment if they have a very different stiffness. For example, a low-stiffness forbidden region is easily distinguishable when moving in free space, but might be confused with soft tissue in medical scenarios.

If the forbidden-region fixture is very stiff, it will generate a very large force, which is likely to overshadow the force returned by the environment. In many scenarios, this is a desired feature. However, if the environment contains objects with a similar stiffness, it will be hard for the operator to tell whether he is touching a hard object in the environment or a virtual constraint.

In [12], we proposed to render the virtual constraints by vibrating the haptic device when the constraint was reached, increasing the magnitude of the vibration proportionally to the penetration, so as to distinguish the constraints from the objects in the environment. We designed a blind experiment where virtual constraints and objects were close together. Using only the classic virtual-spring rendering for both object and constraint, users frequently did not feel the object next to the constraint. With our proposed vibration rendering for the virtual constraints, and a virtual spring for the objects, such errors consistently decreased, but the penetration into the virtual constraints increased. Another disadvantage of vibration was that it did not convey directionality, since humans are unable to distinguish the direction of high-frequency vibration [13]. This made it hard for the users to understand the direction of the constraint’s surface.

In this paper, we aim at combining the advantages of both rendering approaches. We want the kinesthetic component on the virtual constraints, which prevents the user from crossing into the forbidden region and conveys its directionality. We also want the vibration that conveys the “virtuality” of the constraint.

The way we accomplish our goal is straightforward. We propose to render the environment force as it is returned by the virtual scenario, force/torque sensor, or model. The virtual constraints, on the other hand, have a high-frequency (200 Hz) component added on top of them. The frequency is chosen in order to maximise sensitivity [14]. Moreover, we exploit the fact that we are unable to perceive the direction of the vibration, and make the vibratory feedback orthogonal to the normal of the virtual constraint, so as not to vibrate the haptic interface in the direction of movement.
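As a rough illustration of this combination rule (not the exact code of our implementation), the C sketch below adds a square-wave component, orthogonal to the constraint normal, to a spring-based constraint force, while passing the environment force through unchanged. The function names, data types, and the particular choice of orthogonal direction are our own assumptions.

```c
#include <math.h>

typedef struct { double x, y, z; } vec3;

static vec3 vadd(vec3 a, vec3 b)     { return (vec3){a.x + b.x, a.y + b.y, a.z + b.z}; }
static vec3 vscale(vec3 a, double s) { return (vec3){a.x * s, a.y * s, a.z * s}; }
static vec3 vcross(vec3 a, vec3 b) {
    return (vec3){a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double vnorm(vec3 a) { return sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

/* The environment force is passed through unchanged; the constraint force is a
 * stiff spring along the constraint normal plus a high-frequency square-wave
 * vibration in a direction orthogonal to that normal, scaled by penetration. */
vec3 render_force(vec3 f_env, vec3 n_unit, double penetration,
                  double K, double A, double f_vib, double t)
{
    if (penetration <= 0.0)           /* not inside the constraint */
        return f_env;

    /* Any direction orthogonal to the normal will do, since the direction of a
     * high-frequency vibration is not perceived; build one via a cross product. */
    vec3 ref   = fabs(n_unit.x) < 0.9 ? (vec3){1, 0, 0} : (vec3){0, 1, 0};
    vec3 ortho = vcross(n_unit, ref);
    ortho = vscale(ortho, 1.0 / vnorm(ortho));

    double square = (sin(2.0 * M_PI * f_vib * t) >= 0.0) ? 1.0 : -1.0;
    vec3 f_constraint = vadd(vscale(n_unit, K * penetration),
                             vscale(ortho, A * penetration * square));
    return vadd(f_env, f_constraint);
}
```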

In order to test the effectiveness of our approach, we conducted an experiment very similar to that in [12], but with our new paradigm.

3 Experimental Design

Users were asked to navigate blindly in a 2D environment (to simplify the experiment) with a haptic device, in which they would perceive virtual constraints in the form of a tunnel bounding the area of movement. They would also perceive a sphere protruding from the tunnel wall, simulating a “real” object. The scenario is appropriate for our hypothesis, since the forces generated by the virtual constraints and by the object in the environment have similar direction and magnitude. The users were asked to cross the tunnel from one side to the other, with the main task of avoiding the walls of the tunnel (in surgery, a forbidden region usually denotes a danger zone that has to be avoided). They were told that a sphere would be present on one side of the tunnel, and were asked to report whether they felt it, and if so, on which side. Figure 1 shows the details of the rendered environment. The haptic device used was Force Dimension’s Omega.3, and the virtual environment was implemented in C.

Fig. 1. Visual representation of the virtual environment, showing the center of the tunnel \(h_k\), the center of the sphere \(c_k\), the position of the HIP \(o(t)\), and its distances from the center of the tunnel, \(d_t(t)\), and from the center of the sphere, \(d_1(t)\).

The tunnel was rendered on a vertical plane facing the user, and its orientation was randomly changed at every trial, with angle \(\theta _k \in [-7\pi /16, 7\pi /16]\) (where \(\theta =0\) is vertical). This prevented users from getting used to the travel path. The length of the tunnel was 170 cm.
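Purely as an illustration (the text does not specify the random-number source used), the per-trial orientation could be drawn as in the following sketch:

```c
#include <stdlib.h>
#include <math.h>

/* Draw the tunnel orientation uniformly in [-7*pi/16, 7*pi/16],
 * with theta = 0 corresponding to a vertical tunnel.
 * (Seed the generator with srand() elsewhere.) */
double random_tunnel_angle(void)
{
    double u     = (double)rand() / (double)RAND_MAX;   /* uniform in [0, 1] */
    double limit = 7.0 * M_PI / 16.0;
    return -limit + 2.0 * limit * u;
}
```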

In this experiment, we considered the sphere to be the object present in the environment, and therefore rendered it stiffly. It was placed randomly at either side of the tunnel, and at a random distance along the travel path. The sphere has radius \(r\) and center \(c_k \in \mathfrak {R}^2\); let \(F_s\) be the force it contributes at the haptic device, defined as follows:

$$ F_s(t) = \begin{cases} K\,d_{1}(t) & \text{if } \Vert d_{1}(t)\Vert < r, \\ 0 & \text{if } \Vert d_{1}(t)\Vert \ge r. \end{cases} $$

Here, \(K = 3000\) N/m, and \(d_1(t)\) is the distance vector from the center of the sphere to the Haptic Interaction Point. It is defined by \(d_{1}(t) = o(t) - c_k \in \mathfrak {R}^2\), where \(o(t) \in \mathfrak {R}^2 \) is the position of the HIP. The sphere was designed to be big enough that it would not be consistently missed, yet could still be confused with the wall of the tunnel. A radius of 55 cm was used, and \(c_k\) was positioned so that the sphere would “block” half of the tunnel; in other words, the center of the tunnel \(h_k\) is always tangent to the sphere’s circumference.
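A minimal C sketch of this sphere force, under our own naming conventions (positions and radius in metres, \(K\) in N/m), could look as follows:

```c
#include <math.h>

typedef struct { double x, y; } vec2;

/* Spring force pushing the HIP away from the sphere centre: F_s = K * d1 when
 * the HIP is inside the sphere, and zero otherwise, with d1 = o - c_k the
 * vector from the sphere centre c_k to the HIP position o. */
vec2 sphere_force(vec2 o, vec2 c_k, double r, double K)
{
    vec2 d1 = { o.x - c_k.x, o.y - c_k.y };
    double dist = sqrt(d1.x * d1.x + d1.y * d1.y);
    if (dist >= r)
        return (vec2){ 0.0, 0.0 };
    return (vec2){ K * d1.x, K * d1.y };
}
```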

The virtual constraints, on the other hand, could be rendered in different ways, producing a force \(F_v\) at the haptic device. Classically, they can be rendered in the same way as the sphere, with a stiff virtual spring (task K, for kinesthetic). The same stiffness as that of the sphere was used, representing the most difficult situation that could be encountered in teleoperation (constraint and object cannot be distinguished by their stiffness). Alternatively, a vibration proportional to the penetration into the constraint can be used (task V, for vibrotactile), or a sum of both (task M, for mixed). Let \(d_2(t)\) be the distance vector from the haptic interaction point to the center of the tunnel \(h_k\). If \(\Vert d_2(t)\Vert <5\) mm, \(F_v=\bar{0}\). Otherwise,

$$ F_v(t) = \begin{cases} K\,d_{2}(t) & \text{for task K}, \\ A\,\mathrm{sgn}(\sin (\pi f t))\,(\Vert d_{2}(t)\Vert - 4.5\,\mathrm{mm}) & \text{for task V}, \\ K\,d_{2}(t) + A\,\mathrm{sgn}(\sin (\pi f t))\,(\Vert d_{2}(t)\Vert - 4.5\,\mathrm{mm}) & \text{for task M}. \end{cases} $$

Here, \(f=200\,\mathrm{Hz}\), and \(A\) is a vector that relates the vibration amplitude to the constraint penetration. It is orthogonal to the plane of our experiment (it only has an \(x\) component), and was empirically chosen with \(\Vert A\Vert =500\,\mathrm{N/m}\). Subtracting \(4.5\,\mathrm{mm}\) instead of \(5\,\mathrm{mm}\) adds a small offset to the vibration amplitude, so that it is already non-zero when the constraint boundary is reached. The total rendered force at the haptic device is therefore \(F=F_s+F_v\).
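Putting the pieces together, a hedged C sketch of the constraint force for the three tasks could look as follows. It assumes the experiment plane is the device’s y-z plane, so that the vibration vector \(A\) acts along x; the function and variable names are our own.

```c
#include <math.h>

typedef enum { TASK_K, TASK_V, TASK_M } task_t;
typedef struct { double x, y, z; } vec3;

/* Constraint force F_v for the three modalities.  d2 = (d2y, d2z) is the
 * in-plane vector from the HIP to the tunnel centre (metres), t is the current
 * time (seconds).  Gains as in the text: K = 3000 N/m, ||A|| = 500 N/m,
 * f = 200 Hz, 5 mm dead zone, 4.5 mm vibration offset. */
vec3 constraint_force(task_t task, double d2y, double d2z, double t)
{
    const double K = 3000.0, A = 500.0, f = 200.0;
    double dist = sqrt(d2y * d2y + d2z * d2z);
    vec3 F = { 0.0, 0.0, 0.0 };

    if (dist < 0.005)                       /* inside the 5 mm dead zone */
        return F;

    /* Square wave as written in the text, scaled by the distance beyond 4.5 mm. */
    double vib = A * ((sin(M_PI * f * t) >= 0.0) ? 1.0 : -1.0) * (dist - 0.0045);

    if (task == TASK_K || task == TASK_M) { /* kinesthetic spring toward the tunnel centre */
        F.y = K * d2y;
        F.z = K * d2z;
    }
    if (task == TASK_V || task == TASK_M)   /* out-of-plane vibrotactile component */
        F.x = vib;

    return F;
}
```

The in-plane sphere force from above would then simply be added to this vector to obtain the total rendered force \(F = F_s + F_v\).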

Participants were seated at a desk with only a haptic interface and a keyboard in front of them, and pink noise was played through headphones during the experiment. At each trial, the haptic device would automatically move to the top of the randomly oriented tunnel and wait for the user to press the space bar. The whole scenario was explained to the participants, and they were instructed to cross the tunnel from one extremity to the other, avoiding its walls as much as possible. They were told that the applied force and penetration would be measured, and that their primary goal was to minimize them. Secondarily, they would have to report whether they had felt the sphere, and if so, on which side. The different modalities were explained, and they were given all the time they needed to explore the environment and understand it. After that, they performed nine non-recorded trials, three with each feedback modality, as training for the actual task. After a pause, trials began to be recorded, with the different modalities presented in random order to cancel possible training effects, and the answers to the object-detection question were recorded by the experimenter.

Since we are interested in the difference in perception between active constraints and the environment, trials in which the sphere was not touched were discarded. Trials were presented until eight of each modality had been performed in which the sphere was touched. Eleven participants took part in the experiment, all right-handed, aged between 26 and 34.

4 Results

We start by analyzing how useful each modality turned out to be for correctly determining on which side of the tunnel the sphere was placed. Table 1a shows how many times (out of eight) each participant correctly identified on which side of the tunnel the sphere was placed, for each modality. Table 1b groups all subjects together and counts, for each modality, how many times the sphere was missed, correctly classified, or incorrectly classified.

Table 1. On the left, the number of correct identifications (hits) of the sphere by subject and modality, with the totals in the last column. On the right, the number of hits, incorrect identifications (errors), and misses (sphere not felt when actually touched), for all users grouped together.

It can be seen that the vibrotactile and mixed modalities fare very similarly, while the number of misses is much higher with the kinesthetic-only modality. Since normality cannot be assumed for the obtained data, the Friedman test (a non-parametric test) was used to compare the repeated measurements of Table 1a. The obtained p-value of \(p=0.0131\) indicates that there is a statistically significant difference between the modalities. Running post-hoc tests, we confirm what can be seen in the data: the kinesthetic-only modality is statistically different from the vibrotactile modality (\(p=0.0341\)) and from the mixed modality (\(p=0.0251\)). On the other hand, no difference can be found between the vibrotactile and mixed modalities (\(p=0.9930\)).
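For readers wishing to reproduce the analysis, the Friedman statistic can be computed as in the minimal C sketch below (our own illustration; it omits the tie correction and the \(\chi^2\) p-value lookup, both of which a statistics package would normally provide).

```c
#define N_SUBJ 11   /* blocks: subjects        */
#define N_MOD  3    /* treatments: tasks K, V, M */

/* Friedman statistic Q for hits[subject][modality].  Values are ranked within
 * each subject (average ranks for ties), then
 *   Q = 12/(n*k*(k+1)) * sum_j Rj^2 - 3*n*(k+1),
 * which is compared against a chi-square distribution with k-1 d.o.f. */
double friedman_statistic(const double hits[N_SUBJ][N_MOD])
{
    double rank_sum[N_MOD] = { 0.0 };
    for (int i = 0; i < N_SUBJ; i++) {
        for (int j = 0; j < N_MOD; j++) {
            double rank = 1.0;                 /* rank of modality j within subject i */
            for (int m = 0; m < N_MOD; m++) {
                if (m == j) continue;
                if (hits[i][m] < hits[i][j])        rank += 1.0;
                else if (hits[i][m] == hits[i][j])  rank += 0.5;   /* average ranks for ties */
            }
            rank_sum[j] += rank;
        }
    }
    double sum_sq = 0.0;
    for (int j = 0; j < N_MOD; j++)
        sum_sq += rank_sum[j] * rank_sum[j];
    double n = N_SUBJ, k = N_MOD;
    return 12.0 / (n * k * (k + 1.0)) * sum_sq - 3.0 * n * (k + 1.0);
}
```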

We now turn our attention to the constraint penetration data. We registered the maximum penetration into the tunnel wall for each trial and modality, and then computed the maximum penetration over all trials by subject and modality. Figure 2 shows a box plot summarizing the data, with the maximum penetration of each user by modality. The kinesthetic and mixed modalities exhibit much lower penetration into the constraints, due to the rendered force that prevents the user from crossing them. The vibrotactile modality, on the other hand, yields much higher penetrations, and is therefore shown on a different scale.

Fig. 2. Box plot showing the subjects’ maximum penetration into the constraints by modality. The box shows the median and the 25\(^{th}\) and 75\(^{th}\) percentiles, the whiskers show the extreme data points, and an outlier is marked with a + sign.

It is clear that the vibrotactile modality does little to prevent constraint penetration. The kinesthetic and mixed modalities, on the other hand, fare very similarly. Although Fig. 2 seems to suggest a slightly lower penetration with the mixed modality than with the kinesthetic one, a paired t-test did not reveal any statistically significant difference between them.

5 Conclusions and Future Work

We analyzed three different ways to present data coming from active constraints and objects in the environment to the user. The kinesthetic-only modality (task K) used the same method to render forces from both sources, using a stiff virtual spring between the object/constraint surface and the haptic interaction point. The vibrotactile modality (task V) used only vibrations to render active-constraint penetration, while the mixed modality (task M) used a combination of kinesthesia and vibration.

We designed and ran an experiment which showed that users were better at distinguishing objects from constraints with the vibrotactile and mixed modalities than with the kinesthetic-only modality. Moreover, the constraint violation was much higher with vibration only than with the other two modalities, as was expected, while the mixed and kinesthetic modalities were comparably good. This confirms our hypothesis that by combining vibrations and kinesthesia we combine the strengths of both the vibrotactile and kinesthetic modalities, without losing the discriminative power of using them separately. Until now, all of our experiments have been done in virtual scenarios, but we plan to implement them in a teleoperation setup with a KUKA LightWeight robot fitted with a force/torque sensor to manipulate soft tissue, as described in the ACTIVE project [15].

In this paper we have explored how to render forbidden-region virtual fixtures, where the forces felt at the boundary usually overwhelm those returned by the environment. In the case of guidance virtual fixtures, the forces coming from the guidance and from the environment occur simultaneously, presenting a different kind of problem. We are currently exploring different rendering techniques to provide directional cues at the haptic interface without the need for a continuous force. This would also allow us to render these cues on wearable tactile devices. By combining such guidance cues with the rendering method presented in this paper for forbidden regions, we hope to obtain a complete framework for rendering any kind of active constraint on top of the force feedback from the environment.