1 Introduction

Robots are now an indispensable part of modern human society. Among the many kinds of robots, industrial applications have driven the most rapid growth of the field; as a result, the installed base of industrial robots is today the largest and the most in demand.

An industrial robot (IR), or industrial manipulator (IM), is a mechanical arm with an actuating mechanism (end effector) at its end. The manipulator often copies the human arm, with a shoulder, elbow, wrist, and hand, although a few more links may be added for more complex technological operations.

Industrial robots are used to replace human labor in various major and minor technological operations. They also eliminate the possibility of injuries of varying severity when operations are performed under conditions hazardous to humans. Manipulators handle monotonous work extremely well and surpass human labor in quality. Automated production that uses robots can be adapted to a wide range of products simply by reprogramming [1].

The history of mechanics is rich in examples that testify to the constant human desire to create mechanisms and devices resembling living beings.

The actions of automatic manipulation robots can be controlled with or without the direct participation of a human. A manipulation robot consists of a manipulator, actuators, sensing devices intended to replace the human senses, and devices for communication with the operator and the computer. The manipulator simulates the movements of the human arm and is an open kinematic chain with rotary and translational kinematic pairs. The number of degrees of mobility of manipulators varies from 3 to 10. The function of the hand is performed by the so-called probe, whose design determines the type of objects it can manipulate.

This work is devoted to finding an object, recognizing it (its geometric shape), moving the manipulator to the corresponding coordinates, capturing the object with a probe (pincers), and moving it to given coordinates. Solving this problem makes it possible to implement a system for the automatic operation of industrial robotic manipulators [2].

In addition, a Kinect camera is used to implement operator-based control of the manipulator: when an operator is recognized, the manipulator stops its programmed actions and waits for instructions, which are transmitted to it using gesture recognition.

2 Main Focus of the Chapter

Human-Robot Interaction.

Nowadays, industrial manipulation robots are used in various fields of industry, where they handle a wide range of technological tasks. The main type of robotic manipulation system is the mechanical manipulator.

Human-robot interaction is the study of interactions between humans and robots. It is often referred to as HRI by researchers.

Although human workers have highly developed problem-solving skills and sensory-motor abilities, they have limited strength and accuracy [1]. Robotic systems, in contrast, offer high resistance to fatigue, speed, repeatability and better performance, but they are limited in flexibility [2]. HRI can free human workers from difficult tasks and enables communication channels between people and robots that increase their overall effectiveness [4].

Ideally, an HRI team should work just like a human collaborative team [3]. To create an effective HRI team, human-to-human groups can be analyzed as examples. Two theories describe human teamwork and collaboration: joint intention theory [4] and situated learning theory [3, 5].

Gesture Recognition for HRI.

Gestures are one method of communication. Facial expressions, hand gestures and body postures are effective communication channels in human-human collaboration [6]. Gestures can be divided into three types [7]:

  • body gestures: actions or movements of the whole body;

  • hand and arm gestures: arm posture, hand gestures;

  • head and facial gestures: nodding or shaking head, winking.

To recognize gestures in the context of HRI, we need to develop an information processing algorithm for our system [4].

Consider a generalized information processing algorithm consisting of four stages [8]:

  • Stage 1: Data collection.

  • Stage 2: Data analysis.

  • Stage 3: Decision making.

  • Stage 4: Response.

A fairly comprehensive review and analysis of data collection technologies is given in [9]; Table 1 summarizes the result:

Table 1. Advantages and disadvantages of different sensor technologies

Based on the generalized algorithm, a specific gesture recognition algorithm for HRI was proposed. As shown in Fig. 1, there are six basic steps involved in the process of gesture recognition for HRI (besides checking that the camera is functioning); a code sketch of these steps follows the list:

Fig. 1. The algorithm of gesture recognition in human-robot interaction

  • receiving data from the camera;

  • identification of found objects;

  • finding a gesture among them;

  • gesture classification;

  • determination of the distance to the object and its position;

  • the reaction of the robot manipulator.
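As an illustration of how these steps might be organized in code, the following is a minimal Python sketch; every helper function here is a placeholder introduced for this example, not part of the described system:

```python
# Minimal sketch of the six steps; all helpers below are placeholders.

def detect_objects(frame):
    """Step 2: identify objects found in the frame (placeholder)."""
    return []

def find_gesture_candidate(objects):
    """Step 3: pick the object that looks like a gesturing hand, if any."""
    return objects[0] if objects else None

def classify_gesture(candidate):
    """Step 4: assign a gesture label to the candidate (placeholder)."""
    return 'unknown'

def locate(candidate, frame):
    """Step 5: distance to the object and its position (placeholder)."""
    return 0.0, (0, 0)

def gesture_recognition_cycle(camera, robot):
    """One pass through steps 1-6 of the gesture recognition algorithm."""
    frame = camera.read()                        # step 1: data from the camera
    candidate = find_gesture_candidate(detect_objects(frame))
    if candidate is None:
        return                                   # no gesture in this frame
    gesture = classify_gesture(candidate)        # step 4
    distance, position = locate(candidate, frame)
    robot.react(gesture, distance, position)     # step 6: manipulator reaction
```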

Robots have been successfully used at different stages of production for decades. Robots replace people in routine, energy-intensive and dangerous operations. Machines do not get tired and need no breaks for rest, water or food. Robots do not demand wage increases and are not members of trade unions. As a rule, industrial robots do not possess artificial intelligence; typically, the manipulator repeats the same movements according to a fixed program. Great success has been achieved, for example, in the use of robots on the conveyors of automobile factories. There are already plans for automotive plants in which all car assembly and transport of semi-finished products will be carried out by robots, with people only supervising them. In 2016, 1.8 million industrial robots were in use worldwide, and it is forecast that by 2020 their number will exceed 3.5 million units [5].

Before writing software for a physical robot, it is recommended to try it on simulators such as Gazebo or V-REP with ROS, regardless of whether a physical robot is in stock [10, 11]. This allows the final version of the program to be written and tuned offline before trying it on a physical robot. One of the most popular robotics applications is 3D modeling of the robot itself and of its surroundings. Such programs allow the developer to program a virtual robot that is capable of reproducing the movements and logic of the physical robot in its working environment.
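For example, a minimal sketch of commanding a simulated arm from Python with rospy might look as follows; the topic name, joint names and target angles are assumptions made for illustration:

```python
#!/usr/bin/env python
# Sending one joint-space command to a simulated arm via ROS.
import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

def send_pose():
    rospy.init_node('arm_test')
    # Topic and joint names are assumptions for a typical simulated arm.
    pub = rospy.Publisher('/arm_controller/command', JointTrajectory,
                          queue_size=1)
    rospy.sleep(1.0)                      # give the simulator time to subscribe

    traj = JointTrajectory()
    traj.joint_names = ['joint_1', 'joint_2']
    point = JointTrajectoryPoint()
    point.positions = [0.5, -0.3]         # target joint angles, rad
    point.time_from_start = rospy.Duration(2.0)
    traj.points.append(point)
    pub.publish(traj)

if __name__ == '__main__':
    send_pose()
```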

Functional Structure of the System.

The main task of this work is to automate the joint work of the robot arm and the human: the arm is guided by the movement of the operator's hand, tracked with the help of a camera.

Figure 2 describes the functional structure of the system considering a risk-oriented design approach as described in [12], which consists of two levels of decomposition.

Fig. 2. Functional structure of the system

The functional structure of the system is considered in general terms, from the moment the readings from the camera enter the system until the system performs its calculations and synchronizes the data with the robot manipulator.

Automation of Manipulator Work.

Figure 3 describes the “Automation of robot manipulator” block.

Fig. 3. The “Automation of robot manipulator” block

In the “Automation of robot manipulator” subtask the developer must analyze the configuration of the machine, the manipulator and the recognition system, as well as the motherboard that will be responsible for processing the data. The developer analyzes the feasibility of implementing each program element. If the developer cannot implement some element programmatically, he consults the project manager, who decides whether and how the element can be changed. Optimizing the software for a particular motherboard is often a problematic element [6].

As a result of solving this problem, it is necessary to identify the main elements that affect the automation of the manipulator:

  • design and development of the manipulator construction;

  • recognition of the human hand movement;

  • adjustment of the manipulator position.

For the “Automation of robot manipulator” subtask, the input data are the readings obtained from the sensors and the camera, reduced to the required format. The output data are the commands and the coordinates of the operator's (person's) movement.

Automation of the Recognition System Operation.

Figure 4 describes the “Automation of the recognition system of the operator’s hand position in space” block.

Fig. 4. The “Automation of the recognition system operation” block

In the “Automation of the recognition system operation” subtask the developer must determine what type of robot manipulator will be used and what functionality it will perform in tandem with the recognition system.

Once the task that the recognition system will perform has been formulated, the developer chooses the algorithm the system will use to recognize a person and his position in space, as well as the algorithm of behavior in extreme situations (when the signal to the manipulator is lost, or when several objects (people) are detected). It is also necessary to determine how the recognition system will calculate the distance to objects.

As a result of the solution to this problem, it is necessary to analyze the functionality and type of the manipulator, and perform the following steps:

  • movement of the manipulator;

  • definition of the distance to the object.

For the “Automation of the recognition system operation” subtask, the input data are derived from the sensors and cameras and then reduced to the required format. The output data are the coordinates of the manipulator position.
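One common way to perform the distance calculation with a Kinect-class depth camera is to back-project a depth pixel through the pinhole camera model. The sketch below assumes the depth frame is stored as a NumPy array in millimetres; the intrinsic parameters are illustrative values and should be replaced by calibrated ones:

```python
import numpy as np

# Illustrative intrinsics of a Kinect-class depth camera; a real system
# should substitute calibrated values.
FX, FY = 594.2, 591.0   # focal lengths, px
CX, CY = 339.5, 242.7   # principal point, px

def pixel_to_point(u, v, depth_mm):
    """Back-project depth pixel (u, v) into a 3D camera-frame point, metres."""
    z = depth_mm / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return x, y, z

def distance_to(u, v, depth_frame):
    """Euclidean distance from the camera to the object seen at pixel (u, v)."""
    x, y, z = pixel_to_point(u, v, float(depth_frame[v, u]))
    return np.sqrt(x * x + y * y + z * z)
```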

2.1 The Process of Robot Manipulator Assembly

To begin with, the materials for the assembly were purchased, and a frame for the robot manipulator was printed on a 3D printer. First, the parts that form the basis of the turning mechanism were prepared. A bearing was inserted into one of the designated holes to allow the two parts to rotate relative to each other with correct load distribution and minimal friction. Next, these two parts were joined. Then the base plate was fixed to the rotary mechanism [7]. This base plate serves as a base for mounting a stepper motor. Once the platform for the stepper motor was ready, it was fastened to the rotary base.

We performed similar operations with the other platforms and stepper motors and obtained the result shown in Fig. 5.

Fig. 5. Ready-made base with fixed stepper motors

After the base had been assembled, the prepared metal tubes were fastened and fixed to the swivel mechanism, leading to the result shown in Fig. 6.

Fig. 6. Swivel mechanism with fixed metal tubes

Next, the probe for the robot manipulator was assembled; the final result can be seen in Fig. 7.

Fig. 7. Assembled probe for the robot manipulator

Once the probe is assembled, it can be fixed on the manipulator. After this step the assembly is finished: the robot manipulator is complete.

3 The Interaction of the Manipulator and Kinect

The interaction between the manipulator and the Kinect recognition camera is described next.

An important part of adaptive robots is their software, designed to process information coming from external and internal sensors and to modify the motion program on the fly. Due to the ability to perceive changes in the external environment and adapt to current conditions, an adaptive robot can manipulate non-oriented parts of arbitrary shape and perform assembly operations [8].

The copying control of biotechnical manipulation robots is carried out by a human operator who moves a master device, while the manipulator simultaneously repeats these movements in all degrees of mobility of the device.

The Kinect touchless gaming controller will be used to obtain information about the motion of the human hand. To transfer these movements to the manipulator, a software product will be developed in the Python programming language, because this language can interact with the ROS library and is compatible with Kinect.
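As a hedged sketch of the data-acquisition step, assuming the open-source libfreenect Python bindings (one of several ways to read Kinect data), the nearest point of the depth frame can serve as a crude stand-in for hand tracking:

```python
import freenect
import numpy as np

def get_hand_pixel():
    """Grab one Kinect depth frame and return its nearest valid pixel.

    A crude stand-in for real hand tracking: it assumes the operator's
    hand is the object closest to the camera.
    """
    depth, _ = freenect.sync_get_depth()   # 480x640 array of raw 11-bit depth
    depth = depth.astype(np.float32)
    depth[depth == 2047] = np.inf          # 2047 marks 'no reading' on Kinect v1
    v, u = np.unravel_index(np.argmin(depth), depth.shape)
    return u, v, depth[v, u]

if __name__ == '__main__':
    u, v, d = get_hand_pixel()
    print('Nearest object at pixel (%d, %d), raw depth %.0f' % (u, v, d))
```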

A Raspberry Pi 3 minicomputer is used as the electronic computer. The Robot Arm MK2, with some developments and modifications, is used as the robot arm of the manipulator.

The manipulator prototype control system is developed on an Arduino board, which connects to a personal computer via a COM port; through this port the board receives power and the control signals for the manipulator's servomotors [9].
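A minimal host-side sketch of such a serial link, assuming the pyserial package and a hypothetical text protocol in which 'S<index>:<angle>' sets one servo, could look like this:

```python
import serial  # the pyserial package

# Port name and baud rate are assumptions: typically 'COM3' on Windows,
# '/dev/ttyACM0' on Linux.
arduino = serial.Serial('COM3', 9600, timeout=1)

def set_servo(index, angle):
    """Send one command in the hypothetical 'S<index>:<angle>' protocol."""
    if not 0 <= angle <= 180:
        raise ValueError('standard servos accept angles from 0 to 180 degrees')
    arduino.write(('S%d:%d\n' % (index, angle)).encode('ascii'))

set_servo(0, 90)   # base joint to the middle position
set_servo(1, 45)   # shoulder joint
```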

Servo is a standard Arduino library that provides a set of functions for controlling servo drives. Standard servo drives can normally rotate to any angle from 0 to 180°; some servo drives allow continuous rotation at a given speed. The Servo library can simultaneously manage up to 12 servo drives on most Arduino boards and up to 48 on the Arduino Mega. On boards other than the Mega, using the library disables PWM (pulse-width modulation) on pins 9 and 10, even if no drive is connected to those pins. On the Mega, up to 12 servo drives can be used without losing PWM functionality; when controlling from 12 to 23 servo drives, pins 11 and 12 can no longer be used for PWM.

The Stepper library provides a convenient interface for managing bipolar and unipolar stepper motors. In order to control the stepper motor, depending on its type (bipolar or unipolar) and the chosen connection method, some additional electronic components will be needed.

In order to realize all the features of the proposed construction, it is necessary to solve a number of tasks, such as:

  1. Synthesis and analysis of the kinematic model of the manipulator, considering the peculiarities of the kinematic pairs formed by the pairs of segments.

  2. Synthesis and analysis of dynamic, complete and explicit models of the manipulator.

  3. Development of a real-time multichannel data collection and processing system for the information support of manipulator control tasks, and experimental study of the dynamic properties of the manipulator.

  4. Development of engineering techniques for selecting the optimal design parameters of the manipulator, taking into account the problems to be solved.

  5. Synthesis of the control system of the manipulator.

Consider the first task: the synthesis and analysis of the kinematic model of the manipulator.

The manipulator has a complex geometry, so we attach reference frames to its different mechanical components and describe the relationships between these frames.

To study the processes in the manipulator, a kinematic model should be built first. This model relates the movement of the links to the position of the center of the probe in space.

In three-dimensional space, the location of a point is fully specified by its coordinates in a fixed coordinate system. To describe the position of a rigid body, a body-fixed coordinate system is attached to it; the pose of the body is then given by six parameters: three parameters specifying the orientation of the axes of the body-fixed system relative to the fixed one (the Euler angles) and the three coordinates of the origin of the body-fixed system. Thus, the main task in describing the kinematic characteristics of a manipulator is the transformation between its own and the chosen working coordinate systems.
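This description is commonly condensed into a single homogeneous transformation; the following is the standard textbook form, stated here for reference:

```latex
% R is the 3x3 rotation matrix built from the Euler angles,
% p is the position of the body-fixed origin in the fixed frame.
T =
\begin{pmatrix}
  R_{3\times 3} & p_{3\times 1} \\
  0_{1\times 3} & 1
\end{pmatrix},
\qquad
\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix}
= T
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
```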

Solving the direct (forward) kinematics problem is an important first step in using a new robot, a way of becoming familiar with it. The forward kinematics problem of manipulators is formulated as follows: given the kinematic scheme of the manipulator and, at a certain moment of time, the values of the generalized coordinates that determine the positions of all links of the manipulator relative to one another, determine the position and orientation of the gripper on the last link of the manipulator in the reference frame associated with the base.

The investigated two-stage mechanism (Fig. 8) is a two-link mechanical manipulator with two rotating pairs.

Fig. 8. Two-stage mechanism
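For such a two-link mechanism with link lengths l1, l2 and joint angles θ1, θ2 (symbols introduced here for illustration), the forward kinematics of the probe center in the plane of motion reduces to the standard expressions:

```latex
x = l_1 \cos\theta_1 + l_2 \cos(\theta_1 + \theta_2), \qquad
y = l_1 \sin\theta_1 + l_2 \sin(\theta_1 + \theta_2)
```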

The result of solving all the above tasks is the system presented in Fig. 9. It includes the manipulator control system (whose program code is implemented in the Python programming language), the robot arm of the Robot Arm MK2 manipulator, and the Kinect touchless gaming controller; an Arduino Duo board is used as the microcontroller.

Fig. 9. The manipulator

The functionality of the developed robot manipulator includes movement in automatic mode, as well as in command mode, in which the coordinates of the manipulator position come from the system that recognizes the operator's hand movement. The wheel robot can be sent signals to move forward or backward and to turn left or right.
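A minimal sketch of dispatching such commands (the gesture labels, command strings and send callback are illustrative assumptions, reusing the serial link sketched earlier):

```python
# Hypothetical mapping from recognized hand gestures to wheel-robot commands.
COMMANDS = {
    'hand_up':    'forward',
    'hand_down':  'back',
    'hand_left':  'turn_left',
    'hand_right': 'turn_right',
}

def dispatch(gesture, send):
    """Translate a recognized gesture into a drive command and send it."""
    command = COMMANDS.get(gesture)
    if command is None:
        return                      # unknown gesture: keep the current state
    send(command)

# Example, reusing the serial link sketched earlier:
# dispatch('hand_up', lambda cmd: arduino.write((cmd + '\n').encode('ascii')))
```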

The motion recognition system uses the coordinates of the hand of the operator who passed the initial calibration. Other operators cannot influence the position calculation (Fig. 10).

Fig. 10. The calibrated manipulator operator

4 Conclusion

As a result of the work done, the main problems encountered when controlling the robot using the Kinect sensor were identified, such as:

  • overlapping of operator’s controls during operation;

  • loss of productivity when sending control actions;

  • inaccuracy of control due to the limited number of degrees of mobility;

  • the difficulty of determining the angle corresponding to the standard position of the servomotor.

The advantages of this type of control are:

  • the possibility of forming control commands for the robot in real time;

  • expansion of the possibilities of interaction of the robot with the external environment (manipulation with objects);

  • the ability to record certain movements and call them with voice commands;

  • low cost of necessary equipment for management.