1 Introduction

In many industries, collaboration between humans and robots is becoming increasingly important. However, collaborative robots can harm the humans with whom they share a workspace, owing to human error or robot malfunction. Therefore, human–robot collision safety has recently grown in importance. Collision detection is one of the most practical solutions to ensure human safety [1]. Various collision detection algorithms have been proposed to ensure the safety of human workers, one of which is the disturbance observer-based approach.

A disturbance observer, which is widely used to implement robust controllers, has also been used to detect unwanted disturbances such as human–robot collisions [2, 3] or actuator faults [4, 5]. These schemes observe an external torque estimated from the motor current and the dynamic model of the robot, and detect a collision if the external torque exceeds a certain threshold. Further research has been conducted to avoid the use of joint acceleration when estimating the external torque [6,7,8]. To address model uncertainty, robust observers [9, 10] and neural network-based approaches [11, 12] have been proposed. Detection methods based on tactile sensors [13] or vision sensors [14] have also been proposed to avoid using the manipulator's dynamic model. However, several challenging issues remain for collision detection of collaborative robots, one of which is collision detection during physical robot–environment interaction.

Physical robot–environment interaction, such as handling of a payload and force control, is a fundamental function of collaborative robots, and the robots experience external forces resulting from such interaction. However, the aforementioned collision detection methods assume situations wherein the only source of external force is the collision force. Thus, these schemes fail to detect collisions that occur along with an external force intentionally applied to perform the task, because the two external forces cannot be distinguished from one another. To address this problem, a collision detection algorithm that rejects the disturbance generated by the interaction force using a band-pass filter was proposed [15,16,17]. It assumes that the disturbance due to collisions occurs at a relatively higher frequency than that due to intentional contact performed during tasks such as kinesthetic teaching. However, the threshold frequency used to distinguish collisions from other disturbances is not clearly defined, and there is no guarantee that the collision force is always at a high frequency. To overcome these limitations, a collision detection method using tactile sensors with a machine-learning algorithm was proposed [18]. However, this approach requires a vast amount of training data and training time. Therefore, a more practical detection scheme should be developed for safe human–robot collaboration.

In this study, we propose a collision detection method for a collaborative robot that physically interacts with the external environment (objects in its surrounding). To achieve this, we devised a collision detection index (CDI) that is not affected by external forces exerted on the robot while performing normal tasks, and thus is sensitive only to collisions. This index is based on linear space projection. We conducted several experiments to validate the effectiveness of the proposed collision detection method.

The main features of the proposed method can be summarized in three points. First, it is possible to reliably detect collisions, even though the environment associated with the tasks is uncertain or unknown because the proposed method does not require any information about the environment. Second, collisions can be detected regardless of the frequency and magnitude of the collision force since the proposed CDI is fundamentally irrelevant to the external force generated during normal operation. Finally, it does not require any training data and pre-processing, such as learning and adaptation schemes.

The remainder of this paper is organized as follows. Section 2 introduces a collision detection scheme for a collaborative robot. Section 3 proposes the CDIs, and Sect. 4 describes the collision detection algorithm based on them. The proposed method is verified through several experiments detailed in Sect. 5. Finally, Sect. 6 presents the concluding remarks.

2 Collision Detection for Collaborative Robots

2.1 Classification of Collaborative Tasks

Generally, tasks that are conducted by a robot arm can be classified into three cases. The first is a non-interactive task such as robotic painting. In this case, the robot does not physically interact with objects in its surroundings, or the force exerted on the robot owing to the interaction between the robot and its surroundings is negligible. Figure 1a shows the external forces that can be applied to the robot when it performs a non-interactive task, and the only source of external force is the collision force Fc.

Fig. 1

Classification of human–robot collaboration tasks: a non-interactive task, b payload handling, and c contact task

The second case is based on the handling of a payload; a pick-and-place operation is a typical example. As shown in Fig. 1b, the external force Fp resulting from the grasped object or tool acts on the robot when it handles the payload. Fp is the resultant force of the gravitational force Fg and the inertial force Fi. Thus, both Fc and Fp are applied to the robot when a collision occurs. It is important to note that Fp, Fg, and Fi are vectors, and Fp is composed of a force vector fp and a moment vector np. Likewise, Fi and Fg are denoted as (fi, ni) and (fg, ng), respectively.

The final case is a contact task, typical examples of which are deburring and assembly. In this case, Fc and the end-effector force Fe caused by the contact between the robot and the environment act on the robot, as shown in Fig. 1c. Therefore, Fc and Fe are exerted on the robot during collision.

2.2 External Torque-Based Collision Detection

The equation of motion can be described by using the external joint torque vector τext, as follows

$${\varvec{\uptau}}_{j} = {\mathbf{M}}({\mathbf{q}}){\mathbf{\ddot{q}}} + {\mathbf{C}}({\mathbf{q}},{\dot{\mathbf{q}}})\,{\dot{\mathbf{q}}} + {\mathbf{g}}({\mathbf{q}}) + {\varvec{\uptau}}_{ext}$$
(1)

where M(q) is the inertia matrix, C(q, q̇) the matrix containing the Coriolis and centrifugal terms, and g(q) the gravity vector. The variables q, q̇, and q̈ are the joint position, velocity, and acceleration vectors, respectively. τj is the joint torque vector applied to each link of the robot arm and can be measured by joint torque sensors mounted at each joint for torque control.
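
As an illustration, the rearrangement of (1) for τext can be sketched in a few lines of numpy. The toy 2-DOF matrices below are illustrative values only, not a real robot model; in practice M, C, and g are evaluated from the manipulator's dynamic model at the current state.

```python
import numpy as np

def external_torque_direct(tau_j, M, C, g_vec, q_dot, q_ddot):
    """Rearrange Eq. (1): tau_ext = tau_j - M(q) q'' - C(q, q') q' - g(q)."""
    return tau_j - M @ q_ddot - C @ q_dot - g_vec

# Toy 2-DOF example with constant matrices (assumed values for illustration)
M = np.diag([2.0, 1.0])              # inertia matrix
C = np.zeros((2, 2))                 # Coriolis/centrifugal terms (zero here)
g_vec = np.array([9.81, 0.0])        # gravity torque vector
tau_j = np.array([11.81, 1.0])       # measured joint torques
tau_ext = external_torque_direct(tau_j, M, C, g_vec,
                                 np.zeros(2), np.array([1.0, 1.0]))
# tau_ext is the zero vector: the measured torque is fully explained
# by the robot's own dynamics, i.e., no external force is acting
```

Note that this direct rearrangement needs the joint acceleration q̈, which is exactly the quantity the observer in Sect. 4 is designed to avoid.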

The features of collaborative tasks are summarized in Table 1 in terms of τext, which has been used to detect collisions in previous studies [2,3,4,5,6,7,8,9]. The only source of external force is the collision force Fc when the robot conducts a non-interactive task. In such a case, it is possible to establish τext = τc, where τc is the external joint torque vector generated by Fc. This means that τext = 0 if no collision occurs, and τext ≠ 0 otherwise. On the contrary, τext is affected by Fp or Fe when the robot handles a payload or conducts a contact task, and can be expressed as

$${\varvec{\uptau}}_{ext} = {\varvec{\uptau}}_{c} + {\varvec{\uptau}}_{p}$$
(2a)
$${\varvec{\uptau}}_{ext} = {\varvec{\uptau}}_{c} + {\varvec{\uptau}}_{e}$$
(2b)

where τp and τe are the joint torque vectors caused by Fp and Fe, respectively. As shown in (2), τc cannot be directly obtained from τext because τext ≠  τc. Therefore, a collision cannot be detected by observing τext if a robot arm interacts with objects in its surroundings (i.e., τp ≠ 0 and/or τe ≠ 0).

Table 1 Main features of collaborative tasks

The collision detection indexes (CDIs) for each task are introduced in Table 1. The CDIs dp and de are not affected by the external forces exerted on the robot during payload handling and contact tasks, respectively. Therefore, they are sensitive only to collisions and can be used to detect a collision even when a contact force between the manipulator and the environment exists. These CDIs are derived and explained in detail in Sect. 3.

If τp and τe are known, τc can be obtained from τext; one method of obtaining τp and τe is to measure Fp and Fe using an F/T sensor mounted on the wrist of the arm. However, it is not economically feasible to use an expensive F/T sensor for collision detection only. An alternative method is to predict Fp and Fe based on the specific dynamic model of the grasped object and the static model of the environment in contact, respectively. However, it is very difficult to accurately develop or estimate such models since they vary according to the shape and material of the grasped object and the contact surface. Therefore, a collision detection method that does not require the use of the F/T sensor and the model of the external environment should be developed for safe physical robot–environment interaction.

3 Collision Detection Index

3.1 Collision Detection Index for Handling of Payload

To achieve the purpose of this study, subspace projections are used in the joint torque space. This is based on the well-known fact that projecting a vector onto a subspace orthogonal to the space in which it lies always yields the zero vector. That is, if we project τext in (2) onto the vector space orthogonal to the space wherein τp lies, then τp is eliminated from τext. Using this principle, the CDI dp can be decoupled from Fp.

Projection of τext onto the vector space orthogonal to the space wherein τp lies requires a projection matrix, which is formulated from the linear mapping between vector spaces. Figure 2 shows the linear mapping between subspaces of the joint torque space Sτ for an n-DOF manipulator. Sτ is an n-dimensional joint torque space, and τext lies in this space. Sp is the space wherein τp lies, and it is a subspace of Sτ. ⊥Sp is the space orthogonal to Sp. The dimensions of Sp and ⊥Sp are m and n − m, respectively. PSp and P⊥Sp are the n × n projection matrices that project an arbitrary vector in Sτ onto Sp and ⊥Sp, respectively. The dashed arrows in Fig. 2 represent the linear mapping.

Fig. 2

Mapping between the subspaces of joint torque space

As is well known, the projection matrix PSp that projects τext onto ⊥Sp is given by

$${\mathbf{P}}_{ \bot Sp} = \,{\mathbf{I}} - {\mathbf{P}}_{Sp}$$
(3)

where I is the n × n identity matrix. The projection matrix PSp that projects an arbitrary vector onto Sp can be formulated by the well-known equation

$${\mathbf{P}}_{Sp} = {\mathbf{B}}_{p} \left( {{\mathbf{B}}_{p}^{T} {\mathbf{B}}_{p} } \right)^{ - 1} {\mathbf{B}}_{p}^{T}$$
(4)

where Bp is the basis matrix of Sp, and the columns of Bp form an orthogonal basis of Sp [19, 20]. Thus, Bp should be computed to derive PSp.
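
The projector construction in (3) and (4) is standard linear algebra, and a minimal numpy sketch (with an assumed toy basis, not from the paper) may help fix ideas:

```python
import numpy as np

def projection_matrix(B):
    # Eq. (4): P = B (B^T B)^{-1} B^T projects onto the column space of B
    return B @ np.linalg.inv(B.T @ B) @ B.T

# Example: basis of the xy-plane in R^3 (illustrative values)
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
P = projection_matrix(B)
P_perp = np.eye(3) - P   # Eq. (3): projector onto the orthogonal complement
# Any vector lying in the column space of B is annihilated by P_perp
```

Note that (4) does not require the columns of B to be orthonormal, only linearly independent; the factor (BᵀB)⁻¹ compensates for any non-orthonormality.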

To compute Bp, we use a basis of the space SFp, to which Fp belongs. τp and Fp are related by τp = JTFp, as shown in Fig. 3a. Note that a basis of SFp is the union of bases of the spaces Sfp and Snp, wherein fp and np lie, respectively, because Fp consists of fp and np, which are linearly independent of each other, as shown in Fig. 1b. Thus, we find bases of Sfp and Snp to compute Bp.

Fig. 3

Derivation of the projection matrix PSp: a mapping between the joint torque space and task force space, and b plane of force generated by the payload

To find bases of the spaces Sfp and Snp, Fp and its components fp and np are analyzed in terms of linear algebra. As shown in Fig. 3b, fp lies in the same plane as fg and fi because Fp is the resultant of Fg and Fi, as shown in Fig. 1b, where the unit vectors ui and ug point in the same directions as fi and fg, respectively. The set of unit vectors {ui, ug} can then be considered a basis of Sfp, the space wherein fp lies, because fp is a linear combination of ui and ug. Therefore, the matrix Bfp, whose columns form a basis of Sfp, can be established as follows:

$${\mathbf{B}}_{fp} = [\begin{array}{*{20}l} {{\mathbf{u}}_{i} } \hfill & {{\mathbf{u}}_{g} } \hfill \\ \end{array} ]$$
(5)

The magnitudes of ui and ug are always known because they are unit vectors, and their directions can be obtained from the desired acceleration of the end-effector and the gravitational force, assuming that the robot follows the desired trajectory well. That is, we can find ui and ug, which form Bfp, even though the exact magnitude and direction of fp are unknown. Therefore, Bfp can be computed without the dynamic model of the grasped object.

On the contrary, matrix Bnp, which is a basis of a space Snp wherein the vector np lies, cannot be found without the dynamic model of the grasped object, because the magnitude and direction of np are determined by the position of the center of mass of the grasped object, which is uncertain. This means that np can be any vector in the three-dimensional space. Therefore, Bnp is a set of three linearly independent vectors and can be established as follows:

$${\mathbf{B}}_{np} = \left[ {\begin{array}{*{20}l} 1 \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & 1 \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & 1 \hfill \\ \end{array} } \right]$$
(6)

Therefore, the 6 × 5 matrix BFp, whose columns form a basis of SFp, is assembled from Bfp and Bnp. As a result, BFp can be established as follows:

$${\mathbf{B}}_{Fp} = \left[ {\begin{array}{*{20}l} {{\mathbf{B}}_{fp} } \hfill & {{\mathbf{0}}_{3 \times 3} } \hfill \\ {{\mathbf{0}}_{3 \times 2} } \hfill & {{\mathbf{B}}_{np} } \hfill \\ \end{array} } \right]$$
(7)
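
The construction in (5)–(7) can be sketched directly in numpy. The desired acceleration a_des below is an assumed illustrative value; in practice it comes from the commanded trajectory:

```python
import numpy as np

g = np.array([0.0, 0.0, -9.81])      # gravitational acceleration (base frame)
a_des = np.array([0.5, 0.0, 0.2])    # desired end-effector acceleration (assumed)

u_g = g / np.linalg.norm(g)          # direction of f_g
u_i = a_des / np.linalg.norm(a_des)  # direction of f_i (robot tracks trajectory)

B_fp = np.column_stack([u_i, u_g])   # Eq. (5): 3x2 basis of S_fp
B_np = np.eye(3)                     # Eq. (6): n_p can be any 3-vector
B_Fp = np.block([[B_fp,             np.zeros((3, 3))],
                 [np.zeros((3, 2)), B_np]])   # Eq. (7): 6x5 basis of S_Fp
```

Any payload wrench Fp whose force part is a combination of u_i and u_g lies in the column space of B_Fp, which is what the projection in (8) exploits.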

Then, the 6 × 6 projection matrix PFp, which projects an arbitrary force vector F onto SFp, is given by substituting BFp into (4) as follows:

$${\mathbf{P}}_{Fp} = {\mathbf{B}}_{Fp} \left( {{\mathbf{B}}_{Fp}^{T} {\mathbf{B}}_{Fp} } \right)^{ - 1} {\mathbf{B}}_{Fp}^{T}$$
(8)

Therefore, as illustrated in Fig. 3a, τp can be represented by

$${\varvec{\uptau}}_{p} = {\mathbf{J}}^{T} {\mathbf{F}}_{p} = {\mathbf{J}}^{T} {\mathbf{P}}_{Fp} {\mathbf{F}} = {\mathbf{P}}'_{Sp} {\mathbf{F}}$$
(9)

where J is the 6 × n Jacobian matrix, which can be derived from the manipulator kinematics, and P′Sp = JTPFp is an n × 6 matrix. As can be seen from (9), τp is a linear combination of the column vectors of P′Sp. That is, the column vectors of P′Sp are a spanning set for Sp, wherein τp lies, and an orthogonal basis of Sp can be obtained from P′Sp because a basis of a space is a linearly independent spanning set for that space.

The singular value decomposition (SVD) is a reliable way to construct an orthogonal basis of a space because its numerical computation is relatively stable with respect to round-off errors compared with other orthogonalization methods. Therefore, the SVD is employed to construct the orthogonal basis of Sp, and P′Sp is decomposed as follows:

$${\mathbf{P}}'_{Sp} = {\mathbf{U}}{\varvec{\Sigma}}{\mathbf{V}}^{T}$$
(10)

where U and V are unitary matrices and Σ is the diagonal matrix of singular values. The column vectors of U and V are bases for the column and row spaces of P′Sp, respectively, and the diagonal entries of Σ are the singular values of P′Sp. Therefore, Bp, the orthogonal basis of Sp, can be constructed from the column vectors of U corresponding to the nonzero singular values.

By substituting Bp into (4), the projection matrix PSp can be derived as

$${\mathbf{P}}_{Sp} = {\mathbf{B}}_{p} ({\mathbf{B}}_{p}^{T} {\mathbf{B}}_{p} )^{ - 1} {\mathbf{B}}_{p}^{T} = {\mathbf{B}}_{p} {\mathbf{B}}_{p}^{ + }$$
(11)

where the superscript + denotes the pseudo-inverse operation defined by A+ = (ATA)−1AT. By substituting (11) into (3), the projection matrix \({\mathbf{P}}_{ \bot Sp}\), which projects an arbitrary torque vector onto ⊥Sp, can be derived as

$${\mathbf{P}}_{ \bot Sp} = {\mathbf{I}} - {\mathbf{B}}_{p} {\mathbf{B}}_{p}^{ + }$$
(12)

As mentioned, the two vector spaces Sp and ⊥Sp are orthogonal to each other, and τp always lies in Sp. Therefore, the projection of τp onto ⊥Sp is given by

$${\mathbf{0}} = ({\mathbf{I}} - {\mathbf{B}}_{p} {\mathbf{B}}_{p}^{ + } ){\varvec{\uptau}}_{p} \,$$
(13)

This means that τp cannot affect the projection of an arbitrary vector onto ⊥Sp. As a result, a collision detection index dp that is decoupled from τp can be developed by projecting τext onto ⊥Sp as follows:

$$\begin{aligned} {\mathbf{d}}_{p} & = ({\mathbf{I}} - {\mathbf{B}}_{p} {\mathbf{B}}_{p}^{ + } ){\varvec{\uptau}}_{ext} = ({\mathbf{I}} - {\mathbf{B}}_{p} {\mathbf{B}}_{p}^{ + } ){\varvec{\uptau}}_{c} + ({\mathbf{I}} - {\mathbf{B}}_{p} {\mathbf{B}}_{p}^{ + } ){\varvec{\uptau}}_{p} \\ & = ({\mathbf{I}} - {\mathbf{B}}_{p} {\mathbf{B}}_{p}^{ + } ){\varvec{\uptau}}_{c} \\ \end{aligned}$$
(14)

As shown in (14), dp is decoupled from τp and is a function of τc and Bp. Therefore, if τc is applied to the robot arm, dp increases or decreases with τc, whereas dp remains close to zero if no collision occurs (i.e., τc = 0). As a result, collisions can be detected by observing dp, even when the external force Fp is uncertain or unknown.
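
Putting (12)–(14) together, a minimal numpy sketch of dp might look as follows. The basis Bp and the torque vectors are assumed illustrative values for a 4-joint toy, not quantities from the experiments:

```python
import numpy as np

def cdi_payload(tau_ext, B_p):
    # Eqs. (12)-(14): project tau_ext onto the complement of S_p
    P_perp = np.eye(B_p.shape[0]) - B_p @ np.linalg.pinv(B_p)
    return P_perp @ tau_ext

# Toy example: S_p spanned by the first two joint-torque directions
B_p = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.0, 0.0],
                [0.0, 0.0]])
tau_p = B_p @ np.array([0.7, -1.2])       # payload torque lies in S_p
tau_c = np.array([0.0, 0.0, 0.0, 3.0])    # collision torque at joint 4

d_no_collision = cdi_payload(tau_p, B_p)          # zero: payload rejected
d_collision = cdi_payload(tau_p + tau_c, B_p)     # nonzero: collision visible
```

Regardless of the payload torque magnitude, d_no_collision is exactly zero, while any collision component outside Sp survives the projection.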

3.2 Collision Detection Index for Contact Task

Similar to the CDI for payload handling in the previous section, the CDI for the contact task results from projecting τext onto the space orthogonal to the space wherein τe lies. The required projection matrix is derived from the relation between the joint torque and the end-effector force as follows:

$${\varvec{\uptau}}_{e} = {\mathbf{J}}^{T} {\mathbf{F}}_{e}$$
(15)

In (15), τe is a linear combination of the column vectors of JT, which are linearly independent when the robot is not in a singular configuration. This means that a basis of the space Se, wherein τe lies, is the set of the column vectors of JT. Therefore, a projection matrix PSe that projects an arbitrary vector onto Se can be formulated by

$${\mathbf{P}}_{Se} = {\mathbf{J}}^{T} ({\mathbf{JJ}}^{T} )^{ - 1} {\mathbf{J}} = {\mathbf{J}}^{T} ({\mathbf{J}}^{T} )^{ + }$$
(16)

Then, the projection matrix PSe, which projects an arbitrary vector onto the space ⊥Se, which is orthogonal to Se, can be given by

$${\mathbf{P}}_{ \bot Se} = \,{\mathbf{I}} - {\mathbf{P}}_{Se}$$
(17)

Substituting (16) into (17) yields

$${\mathbf{P}}_{ \bot Se} = \,{\mathbf{I}} - {\mathbf{J}}^{T} ({\mathbf{J}}^{T} )^{ + }$$
(18)

As a result, we can project an arbitrary vector onto ⊥Se using the projection matrix P⊥Se in (18). The projection of τe onto ⊥Se is given by

$${\mathbf{0}} = (\,{\mathbf{I}} - {\mathbf{J}}^{T} ({\mathbf{J}}^{T} )^{ + } ){\varvec{\uptau}}_{e}$$
(19)

This means that τe cannot affect the projection of an arbitrary vector onto ⊥Se. Therefore, a collision detection index that is decoupled from τe can be developed by projecting τext onto ⊥Se as follows:

$$\begin{aligned} {\mathbf{d}}_{e} & = ({\mathbf{I}} - {\mathbf{J}}^{T} ({\mathbf{J}}^{T} )^{ + } ){\varvec{\uptau}}_{ext} \\ & = ({\mathbf{I}} - {\mathbf{J}}^{T} ({\mathbf{J}}^{T} )^{ + } ){\varvec{\uptau}}_{c} + ({\mathbf{I}} - {\mathbf{J}}^{T} ({\mathbf{J}}^{T} )^{ + } ){\varvec{\uptau}}_{e} \\ & = ({\mathbf{I}} - {\mathbf{J}}^{T} ({\mathbf{J}}^{T} )^{ + } ){\varvec{\uptau}}_{c} \\ \end{aligned}$$
(20)

As shown in (20), de is decoupled from τe, and becomes a function of τc and J. Therefore, collisions can be detected by observing de, even though the robot performs its task under the external force Fe.
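
A sketch of (18)–(20) in numpy follows; the random 6 × 7 matrix stands in for the Jacobian of a hypothetical 7-DOF arm away from singularities, and the wrench values are assumed for illustration:

```python
import numpy as np

def cdi_contact(tau_ext, J):
    # Eqs. (18)-(20): annihilate any torque of the form J^T F_e
    P_perp = np.eye(J.shape[1]) - J.T @ np.linalg.pinv(J.T)
    return P_perp @ tau_ext

# Stand-in 6x7 Jacobian (full rank with probability 1)
rng = np.random.default_rng(1)
J = rng.standard_normal((6, 7))
F_e = rng.standard_normal(6)     # intended contact wrench (assumed)
tau_e = J.T @ F_e                # Eq. (15)

d_e = cdi_contact(tau_e, J)      # zero up to round-off: contact rejected
```

Note that for a 7-DOF arm the complement ⊥Se is only one-dimensional (7 − 6 = 1), so only the collision-torque component along that direction is observable in de; this is the price of rejecting an arbitrary end-effector wrench without measuring it.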

4 Collision Detection Algorithm

Figure 4 shows the flowchart of the collision detection algorithm designed based on the proposed CDIs. The algorithm requires an estimate of τext and then computes dp and de. Ideally, every element of τext, dp, and de approaches zero when no collision exists, but this rarely happens in practice owing to uncertainty in the dynamic model and the friction model. Therefore, collisions are detected by comparing the absolute values of τext, dp, and de with the thresholds τth, dth,p, and dth,e, respectively. Suitable values of these thresholds are determined experimentally; a CDI exceeding its threshold is considered to indicate a collision. In Fig. 4, cases 1, 2, and 3 correspond to the non-interactive task, payload handling, and contact task in Fig. 1, respectively. The algorithm selects the CDI according to the type of task conducted by the robot. If a collision is detected, the robot stops its task and regenerates its trajectory.
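
The threshold logic of the flowchart can be mirrored in a short dispatch function. The task labels, signature, and threshold values below are hypothetical placeholders, not identifiers from the paper:

```python
import numpy as np

def detect_collision(task, tau_ext_hat, d_p=None, d_e=None,
                     tau_th=2.0, d_th_p=2.0, d_th_e=2.0):
    """Compare the task-appropriate index against its threshold (Fig. 4)."""
    if task == "non-interactive":                  # case 1: observe tau_ext
        return bool(np.any(np.abs(tau_ext_hat) > tau_th))
    if task == "payload":                          # case 2: observe d_p
        return bool(np.any(np.abs(d_p) > d_th_p))
    return bool(np.any(np.abs(d_e) > d_th_e))      # case 3: observe d_e
```

In a real controller this check would run at every control cycle, and a positive result would trigger the stop-and-regenerate behavior described above.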

Fig. 4

Flowchart of the collision detection algorithm

Detection of a collision using the algorithm shown in Fig. 4 requires the value of τext, which depends on the joint acceleration, as shown in (1). However, the acceleration calculated from encoder measurements is susceptible to noise owing to numerical differentiation. To avoid this problem, we estimate τext with a momentum-based disturbance observer built on the generalized momentum p, which can be expressed as p = M(q)q̇. Using this observer, τext can be estimated as follows:

$${\hat{\mathbf{\tau }}}_{ext} (t)\, = {\mathbf{K}}\left[ {\int_{0}^{t} {\left\{ {{\varvec{\uptau}}_{j} + {\mathbf{C}}^{T} ({\mathbf{q}},{\dot{\mathbf{q}}}){\dot{\mathbf{q}}} - {\mathbf{g}}({\mathbf{q}}) - {\hat{\mathbf{\tau }}}_{ext} (l)} \right\}} dl - {\mathbf{p}}} \right]$$
(21)

where \(\hat{\varvec{\tau }}_{ext}\) is the observed value of τext and K is the observer gain [6].
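
A discrete-time sketch of (21) for a single joint is given below. This is a toy scalar simulation with unit inertia and the Coriolis and gravity terms omitted; the gain K, time step, and torque values are assumed for illustration:

```python
import numpy as np

def momentum_observer(tau_j_seq, p_seq, K, dt):
    """Euler-discretized scalar version of Eq. (21) (C and g omitted)."""
    tau_hat = 0.0
    integral = 0.0
    out = []
    for tau_j, p in zip(tau_j_seq, p_seq):
        integral += (tau_j - tau_hat) * dt   # integrand of Eq. (21)
        tau_hat = K * (integral - p)         # observer output
        out.append(tau_hat)
    return np.array(out)

# Toy 1-DOF case: unit inertia, constant external torque of 1.5 Nm
dt, K, T = 1e-3, 50.0, 2.0
n = int(T / dt)
tau_ext_true = 1.5
tau_j = np.full(n, 2.0)                  # constant commanded joint torque
t = np.arange(1, n + 1) * dt
p = (2.0 - tau_ext_true) * t             # p(t) = integral of (tau_j - tau_ext)
tau_hat = momentum_observer(tau_j, p, K, dt)
# tau_hat converges to tau_ext_true with time constant ~1/K
```

The update behaves as a first-order filter driven by the true external torque, which is why no joint acceleration measurement is needed.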

5 Experimental Verification

In this section, the CDIs proposed in the previous section are verified through several experiments. The verification was conducted using a 7-DOF robot arm (KU-Dex) developed in our laboratory. Every joint of this robot arm is equipped with a joint torque sensor (JTS), and the specifications of the robot are listed in Table 2.

Table 2 Specifications of KU-Dex for experimental verification

The proposed CDI for payload handling was experimentally verified. An arbitrary trajectory of the end-effector through P1, P2, and P3 was given as shown in Fig. 5. This verification was conducted in three steps. Step 1 was conducted without the payload, and steps 2 and 3 were carried out with the payload.

Fig. 5

Trajectory of the end-effector for experimental verification

Step 1 of the experiment was conducted based on a situation in which the robot controlled its position along a given trajectory, as shown in Fig. 5. In this step, there was no physical interaction between the robot arm and the external environment; thus, Fp = 0. However, in step 2, the 1 kg mass shown in Fig. 6a was attached to the end-effector of the robot arm, as shown in Fig. 6b. In this step, the robot controlled its position in the same way as demonstrated in Fig. 5, and collisions between a human hand and the end-effector of the robot arm occurred while the robot was completing its task, as shown in Fig. 6b. Likewise, the 2 kg mass shown in Fig. 6a was attached to the end-effector in step 3, and the body of the robot collided with a human arm, as shown in Fig. 6c. Thus, both Fp and Fc were applied to the robot arm in steps 2 and 3 of this experiment.

Fig. 6

Experimental conditions for the verification of the CDI for handling of the payload: a two payloads, b collision with 1 kg payload, and c collision with 2 kg payload

Figure 7a, b show the values of \(\hat{\varvec{\tau }}_{ext}\) at joint 4, which is mainly affected by Fp, and the fourth element of dp during the experiment. Note that \(\hat{\varvec{\tau }}_{ext}\) is the external torque estimate. For reliable collision detection, every element of \(\hat{\varvec{\tau }}_{ext}\) should be close to zero when no collision occurs. However, τext is affected by the payload, as shown in Fig. 7a, because both τc and τp contribute to τext in (2a). Therefore, it is difficult to detect a collision based on whether a certain element of \(\hat{\varvec{\tau }}_{ext}\) exceeds the threshold. On the contrary, in Fig. 7b, the fourth element of the CDI dp exceeds the threshold dth,p (set to 2 Nm) only when a collision occurs, and dp is not affected by the change in payload because it is independent of τp, as shown in (14). Therefore, the proposed collision detection method can detect collisions even in situations where the robot arm handles an unknown payload.

Fig. 7

Experimental results: a external torque estimate \(\hat{\varvec{\tau }}_{ext}\) at joint 4, and b the fourth element of the CDI dp

The CDI for contact tasks was verified in the experiments associated with impedance control for hand guiding, which is a typical contact task. Figure 8 shows the procedure of this experiment. Figure 8a shows the manipulator conducting impedance control, and the end-effector maintaining its initial position. An intended contact force Fe for hand guiding was applied to the end-effector in step 1 as shown in Fig. 8b. In step 2, a human–robot collision occurred as shown in Fig. 8c. In this step, an intended contact force Fe and the collision force Fc were applied to the manipulator simultaneously.

Fig. 8

Experimental condition for collision detection: a initial conditions, b intended contact between a human and manipulator, and c collisions with intended contact

Figure 9a, b show the values of \(\hat{\varvec{\tau }}_{ext}\) at joint 2, which is mainly affected by Fe, and the second element of de during the experiment. Similar to the results in Fig. 7, Fig. 9 shows that τext is affected not only by τc generated by Fc, but also by τe generated by Fe. In contrast, the elements of de were close to zero when only Fe was applied to the end-effector with no collision, because de is independent of τe, whereas the second element of de exceeded dth,e when Fc was applied to the robot owing to the collision. Therefore, any collision that occurs on the body of the robot while it conducts a contact task at the end-effector can be detected using the proposed collision detection method.

Fig. 9

Experimental results: a external torque estimate \(\hat{\varvec{\tau }}_{ext}\) at joint 2, and b the second element of the CDI de

6 Conclusions

In this study, a collision detection index was proposed to detect collisions of a robot arm when the robot performs a task involving a physical human–environment interaction. The proposed collision detection index was verified through experiments, and the following conclusions are drawn from the results:

  1. The proposed scheme can be used to detect a collision, even in situations where the contact force between the manipulator and the object is unknown or uncertain. Therefore, it can be implemented in various collaborative tasks of a robot arm.

  2. Using the proposed CDIs, the robot can detect collisions regardless of the frequency and magnitude of the collision force. Moreover, the proposed CDIs can be computed without any training data for machine learning. Therefore, the proposed collision detection method is a practical solution to ensure the collision safety of collaborative robots.

One drawback of the proposed CDI is that its performance can degrade near the singular configurations of the robot arm, since the CDI is calculated based on the Jacobian matrix. However, such situations can be avoided because robot arms generally do not operate near singular configurations, so as to prevent deterioration of control performance.

In future work, accurate joint torque estimation based on the motor current and a friction model will be developed so that the CDIs can be computed without any extra sensors.