1 Introduction

The development of robot work cells without safety fences [6] and the emergence of human-robot cooperation and coexistence in shared workspaces make collision avoidance a very important issue for robots. When humans are present in the robot workspace, their safety can be guaranteed by monitoring the distance between robot and human and by reducing the velocity of the robot motion, both supervised by a safety controller. Another possibility is to limit the dynamic power of the robot to 80 W and its static force to 150 N.

However, collisions between robots and humans are only one hazard. Frequently, it seems, the human is the source of danger: he or she modifies the robot workspace by placing and removing objects. The result is a dynamic workspace, which increases the risk of collisions. For that reason, the focus of this paper is not the avoidance of collisions between robots and humans. Instead, collisions with inanimate objects are to be prevented by modifying the originally planned robot path. For such a scenario it is not necessary to equip the robot system with special safety controllers and safe sensors.

In this paper, obstacles are detected by the range image sensor that is part of the Microsoft Kinect RGBD camera. The Kinect has been a popular sensor in robotics for several years; an overview of its use can be found in [1]. To obtain a full 3D image and to handle occlusions and distortions, two cameras are used to observe the robot work cell. Each camera provides a high-definition color picture and a depth image with a resolution of \(512\times 424\), both at 30 fps. The measuring range of the depth image is between 0.5 and 4.5 m. Detailed investigations on the fusion of multiple Kinect images can be found in [4]; in this article, however, only a very simple algorithm is used. In [5], the 20-joint model of the human provided directly by multiple Kinect cameras is used for data fusion. An algorithm of collision detection for robots using multiple depth sensors is presented in [2]. In this article, the robot should not be stopped after the detection of obstacles and potential collisions; it should continue its task. To this end, the robot path has to be modified to avoid collisions.

Fig. 1. Robot work cell observed by two range image sensors.

For this purpose, the experimental setup shown in Fig. 1 was configured. Besides the Kinect cameras, the robot work cell consists of a low-payload six-axis articulated robot of type ABB IRB 120, controlled by an IRC5 Compact robot controller. On the table where the robot is mounted, some obstacles built from Lego bricks are placed. The RGBD cameras are connected to a PC, which performs the fusion of the images and the detection of the obstacles. Furthermore, the PC computes the positions of so-called artificial charges. These generate a virtual force which is used for robot path modification; the virtual force is also calculated on the PC.

This paper is organized as follows: First, the next section presents a simple algorithm that fuses the depth images provided by the two range image sensors; obstacles can then be detected in the resulting 3D image. In Sect. 3, an approach is worked out for placing virtual charges on and in the obstacles. For this purpose, the objects are sliced into layers and assigned to geometric primitives. In Sect. 4, collision avoidance is verified in a practical experiment using the method of the artificial force field generated by the virtual charges; this approach is also described briefly. Finally, the last section gives the conclusion.

2 Obstacle Detection Using Two Depth Images

As already mentioned, obstacles in the robot workspace are detected using two depth images collected from different perspectives. Compared with approaches that use only one camera, faulty detections resulting from occlusions and distortions should be reduced.

Fig. 2. Robot work cell with obstacles viewed from different perspectives. (Color figure online)

Figure 2 shows clippings of the color pictures of the two Kinect cameras, which are located at different positions. In the example, five objects are placed in the robot workspace. The corresponding depth images from the range sensors of the Kinect cameras can be seen in Fig. 3; both use the same coordinate frame. For verification of the algorithms, the contours of the obstacles were added manually to the depth images; they are represented by the red lines. As can be seen, the real obstacles are shifted with respect to the depth images. Additionally, some objects are in parts not detected completely by the range sensors, which may be a consequence of occlusions. The task is now to merge these two depth images into a full 3D image of the obstacles in the robot workspace.

Fig. 3. Depth images of the robot work cell from different perspectives. (Color figure online)

Fig. 4. Shifting of a two-dimensional point.

The received pictures are distorted as a result of the perspective of the camera. This distortion can be compensated using the following algorithm. Figure 4 shows the situation for a two-dimensional point represented by the vector \(\mathbf{P_0}=\left[ x_0~z_0\right] ^T\). It is rotated by the angle \(\alpha \); the result is the point \(\mathbf{P_1}\), whose coordinates can be calculated as follows:

$$\begin{aligned} \mathbf{P_1}=\left[ x_1~z_1\right] ^T=\left[ x_0+z_0\tan {\alpha }~~~\sqrt{z_0^2+\left( z_0\tan {\alpha }\right) ^2}\right] ^T \end{aligned}$$
(1)

If the point \(\mathbf{P_0}=\left[ x_0~y_0~z_0\right] ^T\) is given in three-dimensional space, the rotation has to be performed around the two angles \(\alpha \) and \(\beta \). Point \(\mathbf{P_1}\) then becomes:

$$\begin{aligned} \mathbf{P_1}=\left[ \begin{array}{c}x_1\\ y_1\\ z_1\end{array}\right] = \left[ \begin{array}{c} x_0+z_0\tan \alpha \\ y_0+h\tan \beta \\ \sqrt{h^2+\left( h\tan \beta \right) ^2} \end{array}\right] \text{, } \end{aligned}$$
(2)

where

$$\begin{aligned} h=\sqrt{z_0^2+\left( z_0\tan \alpha \right) ^2} \text{. } \end{aligned}$$
(3)
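As a minimal sketch, Eqs. (1)-(3) can be implemented as follows in Python/NumPy; the function name and the sample point are our own illustrative choices, and the angles used in the example are those of camera 1 given below:

```python
import numpy as np

def shift_point_3d(p0, alpha, beta):
    """Shift a 3D point according to Eqs. (2) and (3).

    p0:    point [x0, y0, z0] from the depth image
    alpha: first rotation angle in radians (camera-dependent)
    beta:  second rotation angle in radians (camera-dependent)
    """
    x0, y0, z0 = p0
    h = np.hypot(z0, z0 * np.tan(alpha))   # Eq. (3)
    x1 = x0 + z0 * np.tan(alpha)           # Eq. (2), first component
    y1 = y0 + h * np.tan(beta)             # Eq. (2), second component
    z1 = np.hypot(h, h * np.tan(beta))     # Eq. (2), third component
    return np.array([x1, y1, z1])

# Example: angles of camera 1 from the scenario in Fig. 1
alpha1, beta1 = np.radians(18.0), np.radians(57.0)
p1 = shift_point_3d([0.10, 0.20, 1.5], alpha1, beta1)
```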
Fig. 5. Adjusted depth images.

The angles \(\alpha \) and \(\beta \) depend on the positions of the cameras; their values were determined empirically. In the scenario shown in Fig. 1 the angles are \(\alpha _1=18^\circ \), \(\beta _1=57^\circ \) for camera 1 and \(\alpha _2=-50^\circ \), \(\beta _2=-25^\circ \) for camera 2. The results of the algorithm are shown in Fig. 5. As can be seen, the distortion is removed successfully; the edges visible to the cameras now conform with the objects.

Fig. 6. Merged depth image of the example.

The resulting depth images are represented by the matrices \(\mathbf {M}_1\) and \(\mathbf {M}_2\). They are merged into the matrix \(\mathbf {N}\) as follows:

$$\begin{aligned} \mathbf {N}=\gamma _1\mathbf {M}_1+\gamma _2\mathbf {M}_2 \text{, } \end{aligned}$$
(4)

where \(\gamma _1\) and \(\gamma _2\) are correction values. They are necessary to compute the proper height of the obstacles, and their values depend on the position of the respective camera. In our scenario these values are \(\gamma _1=1.1\) and \(\gamma _2=1.4\). The final depth image of the example can be seen in Fig. 6; here, the obstacles are detected successfully.
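Under the assumption that the two adjusted depth images are registered to the same pixel grid, Eq. (4) amounts to a weighted pixel-wise sum; a minimal sketch (the array sizes and contents are placeholders):

```python
import numpy as np

# Adjusted depth images of camera 1 and camera 2 (placeholder arrays;
# in practice these are the outputs of the shifting step above)
M1 = np.zeros((424, 512))
M2 = np.zeros((424, 512))

# Correction values from our scenario
gamma1, gamma2 = 1.1, 1.4

# Eq. (4): merged depth image
N = gamma1 * M1 + gamma2 * M2
```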

3 Placing of the Virtual Charges

In the next step, the detected obstacles in the robot workspace are supplied with virtual charges. In [8] an approach was presented which places the charges on the whole surface of the object. To decrease the computation time during real-time path generation, the number of virtual charges should be reduced. For this purpose a new algorithm is used.

Fig. 7. Geometric primitives with enveloping rectangles and charges.

Fig. 8. Layers of the obstacles with placed virtual charges.

The obstacles are sliced into layers along the z-axis of the robot world frame, so that for every obstacle we obtain several 2D binary images. Their number depends on the slicing distance, which is chosen here as 10 mm. The objects found in the binary images are then assigned to geometric primitives such as line, square, circle, etc. For this purpose, some characteristic features are determined from the binary objects. First, the object is completely enveloped by a rectangle whose width and height are denoted by w and h, respectively. The feature aspect ratio ar is the maximum of the two ratios of h and w:

$$\begin{aligned} ar=\text{ max }\left( \frac{h}{w},\,\frac{w}{h}\right) \text{. } \end{aligned}$$
(5)

Another feature is the pixel ratio pr:

$$\begin{aligned} pr=\frac{\text{ number } \text{ of } \text{ pixels } \text{ which } \text{ are } \text{ assigned } \text{ to } \text{ the } \text{ object }}{\text{ total } \text{ count } \text{ of } \text{ pixel } \text{ within } \text{ the } \text{ rectangle }} \text{. } \end{aligned}$$
(6)

For the subsequent placement of the virtual charges, the center of the rectangle and the extreme positions where the object touches the rectangle are also required. With the values of the features ar and pr, the objects can be classified into geometric primitives by comparing the features against different thresholds, as sketched below.
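The following sketch shows how the features of Eqs. (5) and (6) can be computed for one binary layer and used for a threshold-based classification; the concrete threshold values are illustrative assumptions, since the paper does not state them:

```python
import numpy as np

def layer_features(mask):
    """Features of Eqs. (5) and (6) for one binary layer.

    mask: 2D boolean array, True where a pixel belongs to the object
          (assumed to contain at least one object pixel).
    """
    rows, cols = np.nonzero(mask)
    top, bottom = rows.min(), rows.max()
    left, right = cols.min(), cols.max()
    h = bottom - top + 1                    # height of the enveloping rectangle
    w = right - left + 1                    # width of the enveloping rectangle
    ar = max(h / w, w / h)                  # aspect ratio, Eq. (5)
    pr = mask[top:bottom + 1, left:right + 1].mean()  # pixel ratio, Eq. (6)
    center = ((top + bottom) / 2.0, (left + right) / 2.0)
    return ar, pr, center

def classify_primitive(ar, pr):
    """Assign a layer to a geometric primitive (illustrative thresholds)."""
    if ar > 4.0:
        return "line"
    if pr > 0.9:
        return "square" if ar < 1.2 else "rectangle"
    if pr > 0.7:
        return "circle"      # filled circle has pr = pi/4 ~ 0.785
    return "irregular"
```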

Fig. 9. Virtual charges placed in and on the obstacles.

The placement of the artificial charges depends on the geometric primitive of the obstacle, or rather on the geometric primitives of its individual layers. Figure 7 shows the selection of geometric primitives used, the enveloping rectangles, and the positions of the charges.

As already mentioned, the distance between the layers of the obstacles is 10 mm in z-direction. The layers for the example used in this paper can be seen in Fig. 8. The algorithm assigns every layer to one geometric primitive; thereafter, virtual charges are placed according to the rules presented in Fig. 7. These charges are also shown for the individual layers in Fig. 8.

Finally, Fig. 9 shows the 3D positions of the charges in the robot workspace. The diameter of every charge illustrated in this figure is proportional to its potential \(Q_i\).

4 Collision Avoidance

The approach of artificial potential fields is a well-known method for path planning of mobile or stationary robots [3]. This idea can also be used for collision avoidance: the artificial force emitted by an obstacle influences the robot motion so as to move the manipulator arm away from it. For real-time generation of the virtual force field, the approach of artificial charges was proposed in [7].

Here, the force field is generated by the following algorithm: The number of detected obstacle layers is denoted by c. Every object layer gets a total potential value \(T_i\). It is a function of the area \(A_i\) of the enveloping rectangle, already explained in the previous section, and of the pixel ratio \(pr_i\) of the object layer with respect to the rectangle:

$$\begin{aligned} T_i=A_i\,pr_i^3=w_i h_i pr_i^3 \text{. } \end{aligned}$$
(7)

The value of \(T_i\) is split among all charges of the object layer, which leads to the charge potential \(Q_i\):

$$\begin{aligned} Q_i=\frac{T_i}{n_i} \text{, } \end{aligned}$$
(8)

where \(n_i\) is the number of charges on the corresponding obstacle layer. The force function \(\mathcal{{F}}_i\) for every obstacle layer is chosen as a reciprocal quadratic relationship between the force and the distance \(\left| \left| \mathbf{p}-\mathbf {e}_{ij}\right| \right| \):

$$\begin{aligned} \mathcal{{F}}_i\left( \left| \left| \mathbf{p}-\mathbf {e}_{ij}\right| \right| \right) = \frac{Q_i}{\lambda \left| \left| \mathbf{p}-\mathbf {e}_{ij}\right| \right| ^2} \text{, } \end{aligned}$$
(9)

where \(\left| \left| \mathbf{p}-\mathbf {e}_{ij}\right| \right| \) represents the distance between the robot end-effector and the corresponding charge. The force function can be tuned by the factor \(\lambda \). The virtual force vector \(\mathbf{F}\) of all charges acting on the robot tool located at \(\mathbf{p}\) can finally be computed using the principle of superposition:

$$\begin{aligned} \mathbf{F}=\frac{1}{\lambda }\sum _{i=1}^c \sum _{j=1}^{n_i}\left( \frac{Q_i}{\left| \left| \mathbf{p}-\mathbf {e}_{ij}\right| \right| ^2} \frac{\mathbf{p}-\mathbf {e}_{ij}}{\left| \left| \mathbf{p}-\mathbf {e}_{ij}\right| \right| }\right) \text{. } \end{aligned}$$
(10)

The positions of the virtual charges are represented by the vectors \(\mathbf{e}_{ij}\), where i is the index of the obstacle layer and j the index of the charge. For practical reasons, and to reduce the computation time, it may be suitable to limit the range of influence of a charge.
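A minimal sketch of Eqs. (7)-(10), including an optional cutoff radius for the limited range of influence; the function names, the flattened charge list, and the cutoff parameter are our own assumptions:

```python
import numpy as np

def layer_charges(w, h, pr, positions):
    """Eqs. (7) and (8): distribute the layer potential over its charges.

    positions: list of 3D charge positions placed on this layer.
    """
    T = w * h * pr**3          # total potential of the layer, Eq. (7)
    Q = T / len(positions)     # potential per charge, Eq. (8)
    return [(Q, np.asarray(e)) for e in positions]

def total_force(p, charges, lam=1.0, r_max=None):
    """Eq. (10): superposed repulsive force acting at tool position p.

    charges: list of (Q, e) pairs over all layers and charges
    lam:     tuning factor lambda of Eq. (9)
    r_max:   optional cutoff radius (limited range of influence)
    """
    F = np.zeros(3)
    for Q, e in charges:
        d = p - e
        r = np.linalg.norm(d)      # ||p - e_ij||; r == 0 would need guarding
        if r_max is not None and r > r_max:
            continue
        F += Q / r**2 * (d / r)    # magnitude per Eq. (9), unit direction
    return F / lam
```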

Fig. 10. Original and modified robot path as a result of the virtual force emitted by the obstacles.

Now an example of collision avoidance is presented: As its task, the robot moves along a programmed meander-shaped path which lies in the x- and y-direction of the robot world frame. The virtual force vector \(\mathbf{F}\) is calculated permanently from the current end-effector position \(\mathbf{p}\) and the positions of the virtual charges \(\mathbf{e}_{ij}\) according to (10). It modifies the robot path by the vector \({\mathbf {\Delta X}}\). For this purpose, e.g., a simple spring behavior can be used, which is a kind of mechanical impedance:

$$\begin{aligned} {\mathbf {\Delta X}}=\mathbf {k}\,\mathbf{F} \text{ with } {\mathbf {\Delta X}}=\left[ \varDelta x~~\varDelta y~~\varDelta z\right] ^T \text{, } \mathbf{F}=\left[ F_x~~F_y~~F_z\right] ^T \text{. } \end{aligned}$$
(11)

The parameters in the matrix \(\mathbf {k}\) can be understood as spring constants. The matrix \(\mathbf {k}\) has diagonal form:

$$\begin{aligned} \mathbf {k}=\text{ diag }\left( k_x,~k_y,~k_z\right) \text{. } \end{aligned}$$
(12)

Path correction is performed in the path coordinate frame. This function is supported by the ABB robot controller used.
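In code, Eqs. (11) and (12) reduce to a diagonal matrix-vector product; the spring constants below are illustrative placeholders, with \(k_z=0\) as in the experiment of Fig. 10:

```python
import numpy as np

# Eq. (12): diagonal spring matrix (constants are placeholders)
k = np.diag([0.02, 0.02, 0.0])   # k_z = 0 as in the test scenario

def path_offset(F):
    """Eq. (11): map the virtual force to the path correction Delta X."""
    return k @ F

# Control-loop sketch:
# F  = total_force(p, charges, lam)   # Eq. (10), see previous sketch
# dX = path_offset(F)                 # applied in the path coordinate frame
```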

For the selected test scenario, the original and the modified robot path can be seen in Fig. 10. It appears that obstacles located on the robot path are circumnavigated, and collisions are thus successfully prevented. For better presentation, the parameter \(k_z\) is set to zero here.

5 Conclusion

In this article, an approach was presented which computes a 3D image from the depth images of two Kinect cameras. The cameras are used to observe the work cell of an industrial robot from different perspectives in order to detect obstacles. With two cameras, faulty detections resulting from occlusions and distortions can be avoided.

Collisions between the robot and the obstacles are to be prevented. For this purpose, the method of the artificial force field was used. In this paper, an algorithm was proposed which classifies the sections of the objects into geometric primitives with the objective of effectively placing virtual charges in and on the obstacles. The charges are used to compute the artificial force field: a repulsive force acts on the robot and corrects its preliminary path to move the robot end-effector away from the obstacles. The absolute value and the direction of the force vector are computed in real time.

All presented approaches were verified by practical experiments. For this purpose a suitable test scenario was developed.