1 Introduction

Robots provide many helpful functions in daily human life. To meet the growing expectations of robot abilities, an alternative robotic grasping gripper is proposed in this paper. The robot is equipped with a manipulator in which the gripper is used to grasp different types of objects. In general, robot manipulators are complex and highly coupled nonlinear systems with structured and unstructured uncertainties; thus, it is difficult to establish an appropriate mathematical model for the design of a model-based control system.

The grasping function is common and important in the robot industry. In a simple, known environment, the parameters and mathematical model of the object to grasp are easy to obtain without great effort. However, outside such a controlled environment, the task becomes much more complicated: there are unexpected movements, occlusions, vibrations, parameter variations, and disturbances. Thus, instant information gathered from sensor modules is necessary. For non-contacting objects, a robot usually uses images [1] to measure features such as length, width, area, center of mass, and degree of bend, and to reconstruct a three-dimensional model. The data or images then provide the grasping function with a suitable location to grasp [2–4]. In a dynamic environment, images suffer all kinds of interference from environmental factors, such as object occlusions, shadows, reflections, background color interference, etc. Among these factors, object occlusions significantly weaken the tracking performance when the grasping gripper moves and tracks the object. To detect and handle object occlusions, much research has been done in recent years applying various approaches in the image processing field. For example, the mixture of three distributions [5], outlier pixel examination [6, 7], motion characteristics comparison [8], the content-adaptive progressive occlusion analysis algorithm, and variant-mask template matching [9] are powerful approaches for handling occlusions robustly. However, images alone still cannot provide sufficiently accurate data for the robot to recognize a non-contact object.

As for contacting objects, robots mainly use pressure sensors and tactile sensors; the former are composed of piezoresistive material, and the latter of piezoelectric material. Piezoresistive material is much slower to reach steady state, but it can measure static forces. Piezoelectric material reaches steady state faster, but by its operating principle it cannot measure a static force. Both piezoresistive and piezoelectric materials can be used to monitor whether the gripper touches an item or not [10–12]. Through proper detection, the robot can discern the body, force feedback, and other useful information of the item when grasping. Because the non-contacting and contacting sensor modules have their own advantages and disadvantages, mixed or multi-sensor architectures have been proposed to make the grasping task more reliable [11, 13].

On the other hand, looking over published work on grasping gripper control techniques, conventional proportional-integral-derivative (PID) control techniques [14, 15] are used most often. However, due to the highly nonlinear and dynamic characteristics of the gripper mechanism, it is difficult for a PID controller to generate appropriate control effort to track the desired trajectory. The sliding-mode control (SMC) technique is one of the effective nonlinear robust control approaches, since it gives the system dynamics an invariance property to uncertainties once the system is controlled in the sliding mode [16–18]. However, the insensitivity of the controlled system to uncertainties exists only in the sliding mode, not during the reaching phase; thus, the system dynamics in the reaching phase are still influenced by uncertainties. From a practical point of view, these methods may not respond well to significant changes in operating points. Fuzzy systems, by contrast, have supplanted conventional technologies in many applications, especially in control systems. One major feature of fuzzy logic is its ability to express the amount of ambiguity in human thinking. Thus, when the mathematical model of the process does not exist, or exists but with uncertainties, fuzzy control (FC) is an alternative way to deal with the unknown process [19, 20]. Nowadays, much attention has focused on combining FC with other control techniques. Su [21] proposed a robust tracking control design method in 2012; Aloui et al. [22] and Faieghi et al. [23] proposed adaptive fuzzy sliding-mode-based tracking controllers for a class of nonlinear MIMO systems in 2011 and 2012, respectively; but chattering control efforts still existed under serious uncertainties. To tackle this problem, Pan et al. [24, 25] proposed the adaptive fuzzy H∞ tracking control and enhanced adaptive fuzzy control with optimal approximation error convergence for uncertain nonlinear systems in 2012 and 2013, respectively. Additionally, neural network approaches, such as neural-fuzzy [26–28], recurrent fuzzy neural network [29], robust wavelet-neural-network sliding-mode control [30], a self-tuning neural-fuzzy sliding-mode control scheme [31], neural network-based sliding-mode adaptive control [32], robust adaptive neural network control [33], and adaptive neural network control [34], have been proposed for electro-hydrostatic actuators, electrical servo drives, robot manipulators, and other dynamic systems. One major advantage of these schemes is that the adaptive laws are derived based on the Lyapunov synthesis method, and therefore the stability of the controlled systems can be guaranteed. However, some compensating components are necessary, and therefore the structures are complex. Furthermore, if a grasping gripper controller adopts the above neural network control architectures wholesale, slow response and chattering phenomena appear under instant external impact. It is therefore a pressing topic to develop a suitable control scheme for the grasping gripper.

To deal with these problems, a combination of a simple vision system, pressure modules, and a smart fuzzy controller is proposed in this research to avoid the complexity of image recognition and to facilitate better adaptability to the environment. The proposed architecture provides effective initial values to reduce the number of pressure modules and the groping time. Although the pressure module responds slowly, it can measure static forces, so it provides a more accurate and reliable gripping force. Moreover, when the pressure modules work together with the fuzzy inference engine, the stiffness index of the grasped object can be identified. This paper is organized as follows. The design procedures of the proposed fuzzy grasping controller are detailed in Sect. 2. The implementation and some experimental results of the robotic grasping gripper are stated in Sect. 3. Finally, conclusions are drawn in Sect. 4.

2 Design of Robotic Fuzzy Grasping Controller

In the robotic grasping gripper, forward kinematics refers to the use of the kinematic equations of the grasping gripper to calculate the position of the end-finger from the object location and the specified values of the joint parameters. The kinematic equations for the serial chain of a grasping gripper are obtained from the transformations that characterize the relative movement allowed at each joint. Robotic manipulators have great potential to perform useful work in the real world, e.g., industrial appliances, space exploration, etc. Picking up an object is an important action which can be decomposed into six distinct steps in the proposed robotic grasping gripper:

  • Step 1: Approach object.

  • Step 2: Grasp test.

  • Step 3: Identify and tune the joint angles.

  • Step 4: Lift test.

  • Step 5: If object slip down then release and go to Step 2, otherwise go to next step.

  • Step 6: Lift, move, and release object.
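The six steps above can be sketched as a simple control loop. This is a minimal illustration only; `approach_object`, `grasp_test`, and the other method names are hypothetical placeholders, not part of the original system:

```python
def pick_up_object(gripper):
    """Run the six-step grasping procedure as a simple loop.

    `gripper` is a hypothetical object exposing one action per step.
    """
    gripper.approach_object()             # Step 1: approach object
    while True:
        gripper.grasp_test()              # Step 2: grasp test
        gripper.tune_joint_angles()       # Step 3: identify object, tune joints
        gripper.lift_test()               # Step 4: lift test
        if not gripper.object_slipped():  # Step 5: on slip, release and retry
            break
        gripper.release()
    gripper.lift_move_release()           # Step 6: lift, move, release
```

The slip check in Step 5 is what makes the procedure a loop rather than a straight-line sequence: a failed lift returns control to the grasp test.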

In this study, the planar motion dynamics of the gripper are modeled as two parallel plates driven by RC servo motors, as illustrated in Fig. 1. Based on the Denavit–Hartenberg method, a Cartesian coordinate system is attached to the end-finger. The kinematics of the gripper can be roughly expressed by the following equation:

$$\mathrm{dX} = \mathrm{X}_{i} - \mathrm{X}_{i-1} = \mathrm{H}_{1}\left[\sin(\mathrm{PWM}_{i}) - \sin(\mathrm{PWM}_{i} + \Delta\mathrm{PWM})\right],$$
(1)

where dX is the width change of the gripper; X_i is the gripper width at the present iteration; X_{i−1} is the gripper width at the previous iteration; H_1 is the arm length; PWM_i represents the rotation angle at the present iteration; and ΔPWM represents the rotation angle change between two consecutive iterations.
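Eq. (1) can be evaluated directly. The sketch below assumes the rotation angles are given in degrees; the paper does not state the unit, so that choice is an assumption for illustration:

```python
import math

def width_change(pwm_i_deg, delta_pwm_deg, arm_length):
    """Eq. (1): dX = H1 * [sin(PWM_i) - sin(PWM_i + dPWM)].

    Angle arguments are in degrees (an assumption, not stated in the
    paper); arm_length is H1 in the same length unit as the result.
    """
    a = math.radians(pwm_i_deg)
    b = math.radians(pwm_i_deg + delta_pwm_deg)
    return arm_length * (math.sin(a) - math.sin(b))
```

With ΔPWM = 0 the width change is zero, and a positive ΔPWM below 90° yields a negative dX, i.e., the jaw width decreases as the servo advances.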

Fig. 1 Planar motion dynamic of the gripper

The flowchart for picking up an object is integrated in Fig. 2. Executing such a task requires hardware and a controller that are both capable and robust. Consequently, an alternative robotic grasping gripper is proposed. The hardware of the proposed grasping gripper is shown in Fig. 3, in which the gripper's actuators are two servo motors, each with a planetary gearbox and an encoder. Owing to the functional limitations of the proposed framework, which is implemented in our laboratory, we emphasize the following two topics: (1) the implementation of the combination of the vision system, machine fingers, pressure modules, and the fuzzy controller; (2) the application of the smart fuzzy controller. The relative location between the gripper and the object is therefore fixed in this study. To move to the correct position, the motors' rotary motions are converted to parallel jaw motions through an internal mechanism in the body of the gripper. However, the captured image may be distorted due to camera distortion; such errors may mislead the robot, so the coordinates in the field should be calibrated and recovered by a calibration function. The calibration function is obtained by a standalone calibration procedure. Once the procedure has been performed and the calibration function obtained, the function can be used continuously until the experimental framework is changed. The principle of the camera is shown in Fig. 4 [35]. According to this principle, the object size can be obtained by the following equation.

Fig. 2 Flowchart for picking up an object

Fig. 3 Hardware of the proposed grasping gripper

Fig. 4 Principle of camera

$$\frac{1}{u} + \frac{1}{v} = \frac{1}{f}$$
(2)
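Under the thin-lens relation (2), with object distance u and image distance v, the magnification is v/u, so a measured image size can be scaled back to a real object size. The sketch below is a minimal illustration; the variable names are ours, not from the paper:

```python
def object_size(image_size, object_distance, focal_length):
    """Estimate real object size from its image size via Eq. (2),
    1/u + 1/v = 1/f.

    u = object distance, v = image distance; magnification = v/u,
    so object size = image size * u / v.  Requires u > f.
    """
    u, f = object_distance, focal_length
    v = f * u / (u - f)     # solve Eq. (2) for the image distance
    return image_size * u / v
```

For example, at u = 2f the magnification is unity and the object is the same size as its image; at u = 3f the object is twice the image size.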

On the software side, a smart fuzzy grasping controller is proposed to implement the above grasping steps. The first function of the controller is to identify the stiffness and shape of different grasped objects; the fuzzy inference engine is embedded into this recognition process. The second function of the controller is to generate the joint angles of the servo motors. To designate the initial values of the membership functions of the fuzzy controller, some experiments under the proposed hardware architecture were carried out. The obtained results are listed in Table 1, which shows the different characteristics of hard and soft objects from the viewpoint of Hooke's law. The derivation of the proposed smart fuzzy grasping controller is discussed as follows.

Table 1 Measured characteristics of hard and soft objects

In this research, three equal-span triangular membership functions are adopted for the input linguistic variable and three singleton membership functions are used for the output linguistic variable. Let the sensed pressure value, PRE, be the input linguistic variable of the fuzzy logic, and the pulse width difference, ΔPWM, be the output linguistic variable; the associated fuzzy sets for PRE and ΔPWM are expressed as follows:

For PRE [in antecedent proposition]: L (Large), M (Medium), S (Small).

For ΔPWM [in consequent proposition]: P (Positive), N (Negative), Z (Zero).

The membership functions of input and output fuzzy sets are depicted in Fig. 5a, b. The fuzzy linguistic rule base involved in the fuzzy system is summarized as

  • Rule 1: If PRE is S then ΔPWM is N.

  • Rule 2: If PRE is M then ΔPWM is Z.

  • Rule 3: If PRE is L then ΔPWM is P.

Fig. 5 Membership functions of input and output fuzzy sets: a PRE, b ΔPWM

The triangular membership functions and the center average defuzzification method are adopted, as they are computationally simple, intuitively plausible, and most frequently used in the open literature. Then, the pulse width difference can be estimated by the fuzzy inference mechanism as follows:

$$\Delta\mathrm{PWM} = \left(-w_{1}a + w_{2}\cdot 0 + w_{3}a\right)\Big/\sum_{i=1}^{3} w_{i} = a(w_{3} - w_{1}),$$
(3)

where 0 ≤ w_1 ≤ 1, 0 ≤ w_2 ≤ 1, and 0 ≤ w_3 ≤ 1 are the firing strengths of rules 1, 2, and 3, respectively; −a and a are the centers of the membership functions N and P, respectively. In total, the ith grasping control effort can be represented as

$${\text{PWM}}_{i} = {\text{PWM}}_{i - 1} + \Delta {\text{PWM}}.$$
(4)
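Rules 1–3 together with the defuzzification in (3) and the update in (4) can be sketched as follows. This is a minimal sketch: the pressure range `p_max` and the output center `a` are illustrative values chosen by us, not taken from the paper:

```python
def tri(x, left, peak, right):
    """Triangular membership function value at x."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def delta_pwm(pre, p_max=100.0, a=10.0):
    """Eq. (3): center-average defuzzification over the rule base
    S -> N, M -> Z, L -> P for an equal-span partition of [0, p_max].

    p_max (pressure range) and a (center of P, with N at -a) are
    illustrative parameters, not values from the paper.
    """
    half = p_max / 2.0
    w1 = tri(pre, -half, 0.0, half)          # S, fires N (-a)
    w2 = tri(pre, 0.0, half, p_max)          # M, fires Z (0)
    w3 = tri(pre, half, p_max, 1.5 * p_max)  # L, fires P (+a)
    s = w1 + w2 + w3
    return a * (-w1 + 0.0 * w2 + w3) / s if s > 0 else 0.0
```

The PWM command is then accumulated iteratively as in (4): `pwm_i = pwm_prev + delta_pwm(pre)`. Note that for an equal-span triangular partition the firing strengths sum to one, which is why (3) reduces to a(w_3 − w_1).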

The changes of hard and soft objects under the grasping process are shown in Fig. 6. From Table 1, the criterion to identify the stiffness and shape of different grasped objects can be implemented through the computation of the proposed stiffness index, which is calculated online by the proposed fuzzy grasping controller.

Fig. 6 Changes of soft and hard objects under grasping

$$\text{Stiffness index} = \left(\frac{\sum\left|\Delta\mathrm{P}\right|}{\text{Iteration}}\right)\Bigg/\left(\frac{\sum\left|\Delta\mathrm{PWM}\right|}{\text{Iteration}}\right)$$
(5)
$${\text{Stiffness}}\,{\text{index}} \ge 1,{\text{ clustered as hard type}}$$
(6)
$${\text{Stiffness}}\,{\text{index}} < 1,{\text{ clustered as soft type}}$$
(7)
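The stiffness-index computation in (5)–(7) can be sketched as below, assuming the pressure-change and PWM-change sequences cover the same number of iterations:

```python
def stiffness_index(pressure_changes, pwm_changes):
    """Eqs. (5)-(7): ratio of mean |dP| to mean |dPWM| over the
    grasping iterations; >= 1 clusters the object as hard type,
    < 1 as soft type."""
    n = len(pressure_changes)
    mean_dp = sum(abs(p) for p in pressure_changes) / n
    mean_dpwm = sum(abs(w) for w in pwm_changes) / n
    idx = mean_dp / mean_dpwm
    return idx, ("hard" if idx >= 1 else "soft")
```

Intuitively, a hard object produces a large pressure change per unit of gripper motion (high index), while a soft object absorbs motion with little pressure change (low index), which matches the characteristics in Table 1.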

During Step 4 (lift test) and Step 5 (judging whether the object slips down or not), the stiffness index computed above classifies the grasped object online. If the grasped object slips down, the following fine-tuning algorithm for the membership functions of PRE is adopted:

$$m^{\prime} = \frac{\operatorname{sgn}(m)}{(1/\left|m\right|) + a},$$
(8)

where m′ is the slope of the tuned triangular membership function; m is the slope of the previous triangular membership function; sgn(·) is the sign function; |·| represents the absolute value; and a is the adaptation constant, i.e., the tuning step size. The tuning process of the triangular membership functions is depicted in Fig. 7.
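The fine-tuning law (8) can be sketched directly; with a positive step size a, each application shrinks the slope magnitude (widening the membership function) while preserving its sign:

```python
import math

def tune_slope(m, a):
    """Eq. (8): m' = sgn(m) / (1/|m| + a).

    m is the current (nonzero) slope of a triangular membership
    function; a > 0 is the adaptation step size.  |m'| < |m|, so the
    membership function widens on each slip-triggered tuning step.
    """
    return math.copysign(1.0, m) / (1.0 / abs(m) + a)
```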

Fig. 7 Tuning process of the triangular membership functions

According to the identification results, the membership functions of the smart fuzzy grasping controller are precisely tuned online to generate the joint angles of the servo motors. The overall flowchart is depicted in Fig. 8.

Fig. 8 Overall flowchart of the proposed smart fuzzy grasping controller

3 Implementation of Robotic Grasping Gripper

To assess the performance of the proposed control system, a home-made robotic grasping gripper was implemented in the laboratory, and a series of experiments was carried out in this study. The implemented grasping gripper and machine fingers are shown in Figs. 3 and 9, respectively. The whole framework shown in Fig. 3 includes a computer, an image processing unit, pressure modules, a motor controller, RC servo motors, and machine fingers. The joints and fingers are driven by five RC servo motors (MG996R) with a weight of 55.0 g, a stall torque of 9.4 kg·cm at 4.8 V and 11 kg·cm at 6 V, and dimensions of 40.7 × 19.7 × 42.9 mm. The motor controller is installed to transfer the PWM instructions from the computer. The pressure module (FSR402), whose sensing area is a 0.2″ circle, whose output signal is a passive variable resistance, and whose force-sensitive range is 0–10 kg, is embedded into the machine finger and connected to an Arduino board [36]. Moreover, the object location and its rough width estimation are realized by an image processing unit. In this study, a simple image processing unit is adopted as a preliminary development tool to speed up the testing procedure of the proposed smart fuzzy controller for the grasping gripper.

Fig. 9 Machine fingers

The control instruction set is shown as follows:

# <ch> P <pw> S <spd> ⋯ # <ch> P <pw> S <spd> T <time>

where <ch> is the servo motor number (0–31); <pw> is the pulse width in microseconds, in the range 500–2500; <spd> is the movement speed of a servo motor in microseconds of pulse width per second; and <time> is the number of milliseconds in which to complete the move (optional).

For example, the instruction "#5 P1600 S750" actuates the following action: the No. 5 RC servo motor moves to the pulse width 1600 µs at a rate of 750 µs per second.

For example, the instruction "#5 P1600 #10 P750 T2500" actuates the following actions: the No. 5 RC servo motor moves to the pulse width 1600 µs and the No. 10 RC servo motor moves to the pulse width 750 µs, both completing within 2500 ms. Unlike S, which applies to a single servo motor, T applies to all servo motors in the instruction.
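Instruction strings in this format can be composed programmatically. The helper below is purely illustrative, not part of the original controller firmware:

```python
def servo_command(moves, time_ms=None):
    """Build a motor-controller instruction string such as
    "#5 P1600 S750" or "#5 P1600 #10 P750 T2500".

    `moves` is a list of (channel, pulse_width_us, speed) tuples,
    where speed may be None to omit the per-servo S field; time_ms,
    if given, appends a T field that applies to all listed servos.
    """
    parts = []
    for ch, pw, spd in moves:
        part = f"#{ch} P{pw}"
        if spd is not None:
            part += f" S{spd}"
        parts.append(part)
    cmd = " ".join(parts)
    if time_ms is not None:
        cmd += f" T{time_ms}"
    return cmd
```

Both worked examples from the text can be reproduced: `servo_command([(5, 1600, 750)])` and `servo_command([(5, 1600, None), (10, 750, None)], time_ms=2500)`.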

Some experimental results are provided to demonstrate the effectiveness of the proposed robotic grasping gripper. Three experimental cases including soft, moderate, and hard objects are addressed as follows:

Case 1: Soft object; Case 2: Moderate object; Case 3: Hard object.

The experimental results of Case 1 are depicted in Figs. 10, 11, 12, and 13. Figure 10 is the original image of the grasped object. After the calibration procedure, the inverse-binarization image and its estimated width are shown in Fig. 11. From the scale, the estimated width is (391 − 51)/54 = 6.27 cm. Figure 12 shows the real width of the object, which is 6.30 cm; the estimation error is therefore 6.30 − 6.27 = 0.03 cm. Furthermore, the proposed smart fuzzy grasping controller under rules 1–3 and the control laws in (3)–(7) is applied to grasp the object. The instantaneous pressure values are shown in Fig. 13. From the experimental results, the groping time to pick up an unknown soft object is approximately 34 s, and the steady pressure value is 50 g.
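The width estimates in the experiments all use the same pixel-to-centimeter conversion, sketched below with the 54 pixels/cm scale from the text (small rounding differences from the reported values are possible):

```python
def estimated_width_cm(left_px, right_px, px_per_cm=54):
    """Convert the left/right edge pixel coordinates of the
    binarized object image to a width in cm.  The default scale of
    54 pixels/cm is the value used in the experiments."""
    return (right_px - left_px) / px_per_cm
```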

Fig. 10 Original image of the grasped object (Case 1)

Fig. 11 Inverse binarization processing image and estimated width (Case 1)

Fig. 12 Real width of the grasped object (Case 1)

Fig. 13 Instantaneous pressure values of proposed smart fuzzy grasping controller (Case 1)

The experimental results of Case 2 are depicted in Figs. 14, 15, 16, and 17. Figure 14 is the original image of the grasped object. After the calibration procedure, the inverse-binarization image and its estimated width are shown in Fig. 15. From the scale, the estimated width is (291 − 120)/54 = 3.17 cm. Figure 16 shows the real width of the object, which is 3.10 cm; the estimation error is therefore 3.10 − 3.17 = −0.07 cm. Furthermore, the instantaneous pressure values of the proposed smart fuzzy grasping controller are shown in Fig. 17. From the experimental results, the groping time to pick up an unknown moderate object is approximately 15 s, and the steady pressure value is 50 g.

The experimental results of Case 3 are depicted in Figs. 18, 19, 20, 21, and 22. Figure 18 is the original image of the grasped object. After the calibration procedure, the inverse-binarization image and its estimated width are shown in Fig. 19. From the scale, the estimated width is (317 − 160)/54 = 2.91 cm. Figure 20 shows the real width of the object, which is 2.90 cm; the estimation error is therefore 2.90 − 2.91 = −0.01 cm. Furthermore, the instantaneous pressure values of the proposed smart fuzzy grasping controller are shown in Fig. 21, and the final membership functions are depicted in Fig. 22. From the experimental results, the groping time to pick up an unknown hard object is approximately 5 s, and the steady pressure value is 50 g.

Fig. 14 Original image of the grasped object (Case 2)

Fig. 15 Inverse binarization processing image and estimated width (Case 2)

Fig. 16 Real width of the grasped object (Case 2)

Fig. 17 Instantaneous pressure values of proposed smart fuzzy grasping controller (Case 2)

Fig. 18 Original image of the grasped object (Case 3)

Fig. 19 Inverse binarization processing image and estimated width (Case 3)

Fig. 20 Real width of the grasped object (Case 3)

Fig. 21 Instantaneous pressure values of proposed smart fuzzy grasping controller (Case 3)

Fig. 22 Final membership functions (Case 3)

4 Conclusions

In this study, a simple image processing unit and a microprocessor are adopted as preliminary development tools to demonstrate the application of the proposed smart fuzzy controller for the robotic grasping gripper. First, the whole framework, including the computer, image processing unit, pressure modules, motor controller, RC servo motors, and machine fingers, was developed. Then, the smart fuzzy grasping controller was designed. In addition, its effectiveness was verified by experimental results, and the proposed architectures were implemented in the home-made robotic grasping gripper in the laboratory.

The major contributions of this study are (1) the fuzzy inference engine embedded into the recognition process, (2) the smart fuzzy controller used to generate the joint angles of the servo motors, and (3) the successful implementation of the home-made robotic grasping gripper. In a future research project, a high-performance image processing unit will be utilized to improve the image processing performance.