
1 Introduction

Manufacturers believe that the 4M (method, machine, materials, and man) are the most important aspects of assembly tasks [1]. Humans are essential resources for manufacturing systems, but they are affected by uncertain factors such as labor intensity, proficiency, or physical condition, which can increase the risks involved in the manufacturing field [2]. Industry 4.0 has facilitated factory automation and the realization of intelligent factories. However, this has led to major changes in human workload, as complex assembly lines still require manual labor and thus involve human errors. In addition to human issues, human–machine interaction errors result in operational problems. Human–machine interaction refers to communication and collaboration between humans and machines through a user interface [3]. Four elements are critical for a reliable and faultless human–machine interaction: machine behavior, operational goals (or task specifications), a model that describes machine behavior for the user (called the user model), and the user interface [4, 5]. As Industry 5.0 approaches, collaborative robots play an important role in the manufacturing industry, and manufacturers have been considering the use of collaborative robots to increase the flexibility and responsiveness of production processes in their facilities [6]. Accordingly, research on human–machine collaboration has attracted attention. On the actual shop floor, however, human operators still work separately from robots because the necessary collaboration capabilities are lacking. Robots have high physical strength and are sophisticated in terms of handling tasks, whereas humans are intelligent and have problem-solving skills [7]. Thus, humans and machines can enhance each other's operations and improve overall performance. However, there are some problems with human–machine interaction, which are discussed as follows:

  • In the past, robots were typically surrounded by a protective fence and separated from operators located in a different zone. However, in this scenario, much of the fenced space is not utilized, leading to an increase in cost [8]. Moreover, the separation may prevent the robot and operator from interacting, which also leads to efficiency issues.

  • The real-time monitoring capability of depth sensors can be utilized to measure and track the distance between human operators and robots [7]. However, obstacles may block the camera's view, making it difficult to create virtual models of workers and robots, which may result in an accident.

To solve the problems above, this paper presents a framework based on a wearable motion-capture and haptic-feedback device. The proposed framework enables seamless interaction and collaboration between humans and robots through several functionalities, such as real-time distance calculation and collision detection with a haptic feedback module. Thus, shop-floor workers can avoid collision accidents by maintaining an appropriate safe distance from a robot surrounded by a virtual fence. The application can further increase human safety and accessibility when a digital human is implemented in the virtual world.

2 Literature Review

2.1 Operator 5.0 in Smart Manufacturing

Recently, the term “digital twin” has been used to refer to digital representations of humans and objects that can be replicated, merged, and exchanged as well as saved and recorded, representing the advantages of digitalization [9]. Industry 4.0 focuses primarily on automation, whereas Industry 5.0 aims to combine the advantages of humans and robots through collaboration. Robotic co-workers will enable humans to work harmoniously with robots without fear; furthermore, the knowledge that robots understand them well and can collaborate effectively with them will help improve work efficiency [10]. Romero [11] stated that Operator 4.0 already addresses the social sustainability and human-centricity requirements of Industry 5.0, whereas Operator 5.0 completes the Industry 5.0 requirements by adding resilience. Resilience has two main dimensions: self-resilience and system resilience. The former focuses on the natural, physical, cognitive, and mental wellbeing and safety as well as the efficiency of each operator; the latter focuses on alternative methods for human–machine systems to continue functioning by sharing and trading control between humans and machines to ensure performance and system stability [11].

2.2 Motion Capture and Haptic Feedback

In motion capture (MOCAP) technology, the posture and movement of a human are measured in terms of position and orientation in 3D space, and the information is recorded in a form that can be used by a computer-configured digital human model (DHM) [12]. MOCAP technology is used to improve the working environment in terms of worker safety management, worker process design and operation, manual task productivity, and training of unskilled workers by collecting worker motion data and using worker DHMs in the manufacturing field [13]. Bortolini et al. digitalized human body movements during assembly manufacturing and analyzed the control volume for operator performance evaluation [14]. Nam et al. digitalized the motions of workers from the physical environment into the virtual environment and measured the working time and difficulty involved in the assembly process [15]. Geiselhart et al. utilized MOCAP systems to calculate the production performance of the actual process and compared it with the performance predicted by simulation [16]. Jun et al. developed an automated human modeling technology using Kinect sensors, which can reduce the cost and time required for DHM and engineering simulation [17]. The MOCAP systems used in existing research are advantageous because they collect the position and rotation values of each joint from skeleton data. However, they have a limitation in that they cannot provide direct feedback to the worker wearing the device. To overcome this limitation, the Teslasuit device is used for motion detection and haptic feedback, which enables the digital human to interact with other objects across the physical and virtual worlds [18].

3 Proposed Method and Application

3.1 Modules and Proposed Framework

The proposed framework is illustrated in Fig. 1 and is divided into several modules. An operator wearing a motion-capture haptic-feedback suit is connected to a virtual human through a wireless network. The operator's actual movement is reflected in the digital human in the virtual world, and collision detection is tested against the robot movements defined by the assembly sequences. The safety distance between the human and the robot's virtual fence is measured as the minimum Euclidean distance between the human body and the robot's end effector. When a collision event occurs in the virtual world, the operator feels a haptic response to their actions through electrostimulation. The movement of the robot or the actual human is first corrected to avoid the collision, and the movement is performed only if it is determined to be safe. During validation, the real-time distance and a warning message are displayed on the dashboard.
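As a rough illustration of this monitoring loop, the Python sketch below performs one cycle under simplified assumptions: the joint dictionary, the callback, and the fixed safety threshold are placeholders introduced for illustration and are not part of the actual implementation.

```python
import math

SAFETY_DISTANCE_MM = 240.0  # placeholder threshold; see Sect. 3.2 for how it is chosen

def monitoring_step(human_joints, end_effector, on_warning):
    """One monitoring cycle of the framework in Fig. 1 (illustrative sketch).

    human_joints : dict mapping a joint name to its (x, y, z) position in mm
    end_effector : (x, y, z) position of the robot end effector in mm
    on_warning   : callback standing in for the dashboard warning and haptic pulse
    """
    # minimum Euclidean distance from the tracked joints to the end effector
    d_min, closest = min(
        (math.dist(joint, end_effector), name) for name, joint in human_joints.items()
    )
    if d_min < SAFETY_DISTANCE_MM:
        on_warning(closest, d_min)  # mark the joint on the dashboard, trigger haptics
        return False                # the planned movement must be corrected first
    return True                     # the movement is safe to execute
```

In the framework, such a check runs continuously while the suit streams joint positions over the wireless network, so that a robot or human movement is executed only after it has been validated as safe.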

Fig. 1. Framework for application of human–robot interaction.

3.2 Virtual Fence of the Robot

To solve the existing problems mentioned above, this study proposes a solution that protects the operator. A virtual fence is installed at the robot joints and end effector. A collision detected during path execution refers to contact between the digital human and the virtual fence rather than with the robot itself. Polyhedral shapes are usually used for collision detection; however, in this study, a spherical shape was used to create the virtual fence because the minimum distance can be easily calculated for a sphere. Calculations involving polyhedral models become complicated because many cases, such as vertex–vertex, vertex–edge, edge–edge, and vertex–surface, must be considered to obtain the shortest distance [19]. The virtual fence follows the formula for the volume of a sphere. The radius is measured from the center of the robot's end effector to the robot's maximum reachable distance. Since the robot's end effector is usually the part located closest to the human, the virtual fence radius was measured from the center of the end effector.

$$Volume=\frac{4}{3}\pi {r}^{3}$$
(1)

Considering a scenario in which the robot moves unexpectedly at a speed of 30 cm/s or moves contrary to the controller's intention, the robot's virtual fence should cover the area up to the farthest point the robot arm can reach. Figure 2 shows the representations when (a) the robot is in the initial state and (b) the robot arm reaches the farthest point. Figure 2(c) shows the scenario in which a virtual fence is installed at the end effector (Fig. 3).

Fig. 2. (a) Robot’s initial state, (b) robot’s abnormal state, (c) robot with virtual fence.

The radius of the virtual fence is calculated as follows. The distance from Joint 2 to the top of the robot along the z axis is considered the radius of the virtual fence shield, which is 240 mm [20, 21].

$$\text{Radius of the virtual fence} = \text{safety distance}$$
(2)
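
A minimal sketch of the spherical fence test is given below, assuming the 240 mm radius stated above; the function names and example values are illustrative only, not part of the implementation.

```python
import math

FENCE_RADIUS_MM = 240.0  # safety distance from Joint 2 along the z axis (Eq. 2)

def fence_volume(radius_mm):
    """Volume of the spherical virtual fence, Eq. (1)."""
    return (4.0 / 3.0) * math.pi * radius_mm ** 3

def inside_fence(point, fence_center, radius_mm=FENCE_RADIUS_MM):
    """True if a tracked point lies inside the spherical fence.

    With a sphere, the test reduces to a single center-to-point distance,
    which is why a sphere was preferred over polyhedral models [19].
    """
    return math.dist(point, fence_center) <= radius_mm

# Example: fence centered at the end effector, human hand 200 mm away -> inside
print(inside_fence((200.0, 0.0, 0.0), (0.0, 0.0, 0.0)))  # True
```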
Fig. 3. (a) Directions of joint rotation. (b) Coordinate and mechanical dimensions of robot arm.

3.3 Calculation of Distance Between Human and Robot

The size of the virtual human is the same as that of the real human. For safe and efficient collaboration, the human and robot should maintain a safe distance for seamless interaction. The minimum Euclidean distance between two given convex sets \({H}_{i}\) and \({R}_{j}\) is used, where i = 1, …, p and j = 1, …, σ. Here, i indicates the upper-body joints of the virtual human shown in Fig. 4(a), and j indicates the sphere of the robot in Fig. 4(b) [21]. Since there are many cases in which the operator is likely to collide, nine body parts are designated for distance measurement with respect to the robot hand to prevent potential accidents. The nine points for the operator are on the head, neck, shoulders, elbows, and hands. The robot has one point corresponding to the end effector. Two vectors of values are taken at these points (one from the operator and one from the robot) to calculate the distance [22].

Fig. 4. (a) Upper body joints of humans, (b) joint in a robot.

The minimum distance is calculated using the following equation:

$${d}_{min}\left({H}_{i}, {R}_{j}\right)=\mathrm{min}\left\{\Vert {h}_{i}-{r}_{j}\Vert : \forall {h}_{i}\in {H}_{i}, \forall {r}_{j}\in {R}_{j}\right\}.$$
(3)

For example, the shortest distance is selected as follows, where (x, y, z) and (a, b, c) denote the coordinates of a human point and of the robot hand, respectively. Since Eq. (3) yields the shortest distance, it is used as the criterion for triggering a warning message.

$$\Vert {h}_{H}-{r}_{h}\Vert =\sqrt{{\left(x-a\right)}^{2}+{\left(y-b\right)}^{2}+{\left(z-c\right)}^{2}}=310\, \mathrm{mm},$$
$$\Vert {h}_{T}-{r}_{h}\Vert =\sqrt{{\left(x-a\right)}^{2}+{\left(y-b\right)}^{2}+{\left(z-c\right)}^{2}}=230\, \mathrm{mm},$$
$$\Vert {h}_{LH}-{r}_{h}\Vert =\sqrt{{\left(x-a\right)}^{2}+{\left(y-b\right)}^{2}+{\left(z-c\right)}^{2}}=140\, \mathrm{mm}.$$
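The following short sketch reproduces the selection in Eq. (3) numerically; the coordinates are illustrative assumptions chosen only to yield distances of the same magnitude as those above, and the labels mirror \({h}_{H}\), \({h}_{T}\), and \({h}_{LH}\).

```python
import math

# Illustrative coordinates in mm (not measured data); robot_hand stands for r_h.
robot_hand = (300.0, 140.0, 1000.0)
human_points = {
    "head":      (300.0, 140.0, 1310.0),  # ||h_H  - r_h|| = 310 mm
    "torso":     (300.0, 140.0,  770.0),  # ||h_T  - r_h|| = 230 mm
    "left_hand": (300.0,   0.0, 1000.0),  # ||h_LH - r_h|| = 140 mm
}

# Eq. (3): the minimum Euclidean distance over all tracked human-robot pairs
d_min, closest = min(
    (math.dist(h, robot_hand), name) for name, h in human_points.items()
)
print(closest, round(d_min))  # -> left_hand 140
```

The smallest of these distances (here, the left hand at 140 mm) is the value compared against the safety distance before a warning message appears.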

3.4 User Interface

The dashboard shown in Fig. 5 combines general information that is useful to a supervisor, such as the production line number, operator's name, and current job order, with real-time operation data during human–robot interaction. The real-time distance is calculated from the designated points of the human to the robot's end effector. The interface shows the closest distance between the human and robot in Fig. 5(a). A warning message in Fig. 5(a), together with a graphic, shows the position where the collision may occur, indicated in red in Fig. 5(b).

Fig. 5. Dashboard showing (a) useful information, including the distance between the human and robot and the number of collisions, and (b) collided spots on the body.

This study uses a Teslasuit as a wearable device with motion capture and haptic feedback. The Teslasuit's haptic feedback system is integrated within the suit and can be activated during actions, on demand, or in response to MOCAP comparison. The suit's electrostimulation system works by sending tiny pulses of different amplitudes, frequencies, and voltages to the electrodes. Its two electrostimulation modes, TENS and EMS, stimulate nerve endings; these stimuli are experienced on the skin surface. In the event that a virtual part of the body collides with a virtual object, the commutation unit delivers pulses from a pulse generator to the electrodes in the same vicinity on the wearer's real body [23]. There are 68 haptic points that produce sensation on certain areas of the body. Haptic mapping is designed to interact with the Teslasuit using a coordinate system that is independent of the suit configuration; in the API, this process is called target mapping. Hit events work with the default source mapping. When a target map is loaded, hit coordinates are processed in the target-mapping coordinate system and transformed into source-mapping coordinates. In the virtual world, when either the robot or the operator approaches within a certain distance, a warning message is generated on the dashboard. Haptic feedback is simultaneously delivered to the corresponding area of the body in the physical world, so the operator can identify an abnormal situation while interacting with the robot. Figure 6 shows the scenario in which the robot responds when an operator collides with the virtual fence.
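As a hypothetical sketch of this target mapping, the snippet below routes a hit point given in target-mapping coordinates to the nearest haptic channel; the channel layout, pulse parameters, and send_pulse() function are placeholders and do not correspond to the actual Teslasuit SDK.

```python
import math

# Hypothetical channel layout: channel name -> (x, y, z) in suit coordinates (m)
HAPTIC_CHANNELS = {
    "left_forearm":  (0.25, 0.00, 1.05),
    "right_forearm": (-0.25, 0.00, 1.05),
    "chest":         (0.00, 0.05, 1.30),
    # ... the real suit exposes 68 haptic points
}

def send_pulse(channel, amplitude_ma, frequency_hz):
    """Placeholder for the suit's pulse generator / commutation unit."""
    print(f"pulse -> {channel}: {amplitude_ma} mA @ {frequency_hz} Hz")

def on_virtual_collision(hit_point):
    """Route a hit in target-mapping coordinates to the nearest electrode group."""
    channel = min(
        HAPTIC_CHANNELS, key=lambda c: math.dist(HAPTIC_CHANNELS[c], hit_point)
    )
    send_pulse(channel, amplitude_ma=5, frequency_hz=60)

on_virtual_collision((0.24, 0.02, 1.04))  # -> pulse near the left forearm
```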

Fig. 6. Scenarios in which (a) there is no contact, and (b) the robot avoids the human.

4 Implementation and Case Study

4.1 Experiment

The proposed framework is implemented in a virtual testbed with an assembly process that produces fan filter units. Figure 7 shows a worker located near the manufacturing work-center, which is connected through a conveyor line. This work-center depicts a typical assembly process, in which the worker picks equipment and performs tasks with the upper body at a fixed location. The bolting and screwing processes for the product in this virtual testbed are basic assembly processes. To validate and evaluate the proposed application, we selected a testbed in a virtual factory production line in which fans are produced and human–robot collaboration processes exist. This virtual factory is a realistic shop-floor-like environment. The experimental setup aims to validate the proposed functionalities when an operator collides with the robot during production.

4.2 Results and Analysis

The virtual fence and haptic feedback are utilized in the application, and the supervisor can monitor real-time data, such as the calculated distance and the number of collisions, on the dashboard shown in Fig. 7. In addition, the current order information is shown while the human and robot are handling the job. During production hours, the operator and robot should execute a defined motion only after it has been tested in the validation process. When an operator does not maintain the safety distance, a collision is detected, and the area of collision is marked in red on the skeleton figure.

Fig. 7. Dashboard during production.

5 Conclusions

The paper presents a human–robot interaction framework using a wearable motion-capture and haptic-feedback device that tracks human body movement in the virtual world. The first aspect of the framework is to indicate the collided body part on the dashboard while the human is wearing the haptic suit. The second aspect is a method of using a virtual shield and calculating the distance in real time to protect humans during collaboration. This framework enables humans and robots to share the same workspace and protects operators from severe accidents while combining their complementary strengths to improve work efficiency. Through the proposed framework, human safety in proximate human–robot collaboration is ensured through real-time human body motion capture, installation of the robot's virtual fence, and measurement of the distance between the human and the virtual fence. A dashboard and haptic sensors are used to provide feedback to the user. The results show that this solution is feasible for improving safety in real applications. This study developed the human–robot interaction system by supplementing the limitations of existing approaches through the proposed framework. However, posture correction must be performed through calibration because the sensors in the Teslasuit can be heavily affected by ambient electromagnetic waves. Moreover, the equipped battery should not be exposed to heat or water vapor. Currently, after a collision occurs between a machine and a person, the person receives feedback and manually designs the next path; however, efficiency might be improved if an optimized path were created automatically. In the future, reinforcement learning will be applied to path optimization so that the path can be modified into an optimal route. For better visualization using an MR device, the HoloLens would be suitable for enabling an operator to work in entirely new ways; for example, the virtual fence of the robot can be shown in real time. Controlling the robot through ROS enables it to move simultaneously in the virtual and real worlds. Detour paths are currently designed manually, but training data for optimization with reinforcement learning algorithms will help to achieve automatic robot movements.