Abstract
A dual-camera assisted method for online assembly of cellphone batteries with a SCARA robot is proposed, to address the low success rate of SCARA robots in cellphone battery assembly. In this method, both the cellphone battery and the cellphone base can be precisely located with the help of the dual cameras, so the success rate of assembly can be significantly improved. The method mainly includes two steps: first, calibrating the hand-eye relationship between the SCARA robot and each of the two cameras; second, extracting features of the assembly targets and controlling the robot to rectify the error. The experimental results show that the proposed method meets the requirements of cellphone battery assembly well and improves the success rate.
1 Introduction
Due to the needs of mass production, robot technology is widely used in modern automated assembly work [1]. The SCARA (Selective Compliance Assembly Robot Arm) robot, a cylindrical-coordinate industrial robot invented by Makino in 1978 [2], has four degrees of freedom, namely translation along the X-, Y-, and Z-axes and rotation about the Z-axis, which makes it particularly suitable for assembly and handling work. Currently, it is widely used in the automotive, electronics, pharmaceutical, and food industries, among others [3,4,5,6,7,8].
During assembly with a SCARA robot, a key step is to identify and locate the workpieces. In early applications, this step was normally accomplished with the help of a dedicated carrier or fixture on the assembly line, while the SCARA robot followed pre-set parameters to execute the required procedure. Such methods are suitable for applications with low accuracy requirements, such as simple handling and sorting tasks. However, for applications where high precision is needed, such as cellphone battery assembly, these traditional locating methods are not applicable. The rise of machine vision technology provides an alternative for high-precision assembly applications of the SCARA robot [9].
A robot hand-eye system (HES) is a robot vision system that provides visual feedback for the robot. According to the position of the camera relative to the robot, an HES can be roughly divided into Eye-to-Hand (ETH) and Eye-in-Hand (EIH) systems. In the former, the camera is mounted in a fixed position; in the latter, the camera is fixed to the robot's end effector and moves with the robot [10]. Nowadays, the single HES (SHES) is widely used in the field of robotic automatic assembly. However, an obvious drawback of an SHES is that only one pose of the assembly targets can be precisely obtained with the camera, while the pose of the other assembly part remains insufficiently accurate. As a result, for assembly tasks that require very high accuracy, such as the assembly of cellphone batteries, a robot combined with an SHES may still fail. In order to realize the assembly of cellphone batteries, this paper presents a dual-camera assisted method of the SCARA robot for online assembly of cellphone batteries. Both EIH and ETH systems are used, providing the position and gesture information of the cellphone base and the cellphone battery, respectively. Once the pose information is obtained, the robot is manipulated to finish the highly accurate assembly work. The experimental results show that the assembly system assisted by dual cameras works better than the one assisted by only a single camera.
2 System Description
In this paper, a high-precision automatic assembly system is constructed and employed to realize the assembly task of the cellphone battery, as shown in Fig. 1. It consists of three modules: a hardware module, a control module, and a sensor module. The hardware module is composed of a conveyor belt, a vacuum suction device (VSD), and a SCARA robot; the control module is composed of an industrial personal computer (IPC), a PLC controller, and a servo controller; and the sensor module consists of a photoelectric switch and two industrial CCD cameras.
The workflow of the system includes the following steps: (1) First, the PLC commands the servo controller to drive the conveyor belt, transporting the battery and the base to a specific location; (2) the photoelectric switch is then triggered, and a signal for starting assembly is sent to the PLC; (3) once the starting signal is received, the robot moves above the cellphone base, and an image of the base is captured by camera 1 and sent to the IPC; (4) immediately after, the robot picks up the battery with the help of the VSD and moves it above camera 2; (5) an image of the battery is then acquired by camera 2 and sent to the IPC; (6) finally, the compensation values of the position and gesture components are calculated by processing the images acquired in steps 3 and 5, and are sent to the servo controller, which controls the SCARA robot to accurately complete the cellphone battery assembly.
3 Coordinate Transformation and Hand-Eye Calibration
3.1 Coordinate Transformation
The realization principle of this system is based on coordinate transformation: the position and gesture compensations of the two assembly parts are obtained by the two HESs to complete the high-precision assembly work. Therefore, the transformation between different coordinate systems is the focus of this paper. Assume that \( O_{w} \) is the world coordinate system, \( O_{t} \) is the SCARA robot tool coordinate system, \( O_{c1} \) is the CCD camera 1 coordinate system, \( O_{b1} \) is the cellphone base coordinate system, \( O_{c2} \) is the CCD camera 2 coordinate system, and \( O_{b2} \) is the cellphone battery coordinate system. The relationships among the coordinate systems are shown in Figs. 2 and 3, where Fig. 2 shows the EIH system and Fig. 3 the ETH system.
3.2 Camera Imaging Model
Using cameras to obtain the position and gesture of objects is a process of mapping three-dimensional coordinates to two-dimensional coordinates. The brightness at each point of the image reflects the intensity of the light reflected from a point on the surface of the object, and the position of that image point is related to the corresponding geometrical position on the object's surface. The relationship between these positions is determined by the camera imaging geometry model. The parameters of this geometric model are called the camera parameters, which must be determined by experiments and calculations; the process of obtaining the camera parameters is called camera calibration.
The camera imaging model is built by introducing the pinhole model, as shown in Fig. 4, where \( O_{w} \) is the world coordinate system, \( O_{c} \) is the camera coordinate system, \( Z_{c} \) is the camera optical axis, which is perpendicular to the image plane, and \( O_{f} \) is the image coordinate system.
Consider a point \( P_{w} (x_{w} ,y_{w} ,z_{w} ) \) in the world coordinate system; its coordinates in the camera coordinate system are \( P_{c} (x_{c} ,y_{c} ,z_{c} ) \), and \( P_{f} (u_{f} ,v_{f} ) \) is the corresponding image point of \( P_{w} \). The image on the imaging plane is magnified to obtain the digital image; accordingly, the imaging point on the plane is converted into an image pixel \( (u,v) \). Let \( (u_{0} ,v_{0} ) \) denote the image coordinates of the intersection between the optical axis and the imaging plane. The homogeneous-coordinate relationship between the image pixels and the points in the world coordinate system is:
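Formula (1) itself is not reproduced in this version of the text; one standard form of the pinhole projection relation, written with the symbols defined here (where \( f \) is the focal length), is:

\[ z_{c} \left[ \begin{matrix} u \\ v \\ 1 \end{matrix} \right] = \left[ \begin{matrix} fk_{x} & 0 & u_{0} & 0 \\ 0 & fk_{y} & v_{0} & 0 \\ 0 & 0 & 1 & 0 \end{matrix} \right] \left[ \begin{matrix} R & T \\ 0^{T} & 1 \end{matrix} \right] \left[ \begin{matrix} x_{w} \\ y_{w} \\ z_{w} \\ 1 \end{matrix} \right] = M_{1} M_{2} \left[ \begin{matrix} x_{w} \\ y_{w} \\ z_{w} \\ 1 \end{matrix} \right] \quad (1) \]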
In formula (1), \( k_{x} \) is the magnification in the X-axis direction and \( k_{y} \) is the magnification in the Y-axis direction. \( R \) is the \( 3 \times 3 \) rotation matrix and \( T \) is the translation matrix; together they represent the positional relationship between the camera coordinate system and the world coordinate system. \( M_{1} \) is called the camera intrinsic parameter matrix and \( M_{2} \) the camera extrinsic parameter matrix. There are many methods for camera calibration; Zhang's classical calibration method is adopted in this paper, in which the intrinsic and extrinsic parameter matrices can be obtained from more than four coplanar corner points on a planar calibration template [11].
3.3 EIH System Calibration
As discussed in the previous section, the positional relationship between the camera coordinate system and the workpiece coordinate system can be obtained by camera calibration. In order to obtain the positional relationship between the SCARA robot tool coordinate system and the workpiece coordinate system, it is necessary to know the relative position between the camera coordinate system and the SCARA robot tool coordinate system, which we represent by a rotation matrix \( R \) and a translation matrix \( T \); the process of solving for \( R \) and \( T \) is called hand-eye calibration. Since the camera is fixed at the end of the SCARA robot actuator, this calibration is also known as EIH system calibration.
The idea of EIH system calibration is to control the CCD camera, mounted on the SCARA robot end effector, to observe a known calibration reference from different locations, so as to deduce \( R \) and \( T \). Figure 5 shows the relative position of each coordinate system when the SCARA robot end effector moves from position \( P_{a} \) to position \( P_{b} \).
\( {}_{c1}T^{b} \) and \( {}_{c2}T^{b} \) can be obtained by camera calibration when the camera is at positions \( P_{a} \) and \( P_{b} \), respectively. The poses of the SCARA robot tool coordinate systems \( O_{t1} \) and \( O_{t2} \) can be read from the robot motion controller, so \( {}_{t1}T^{r} \) and \( {}_{t2}T^{r} \) can also be obtained. \( {}_{c}T^{t} \) is the positional relationship between the SCARA robot tool coordinate system and the camera coordinate system. The following relations can be obtained from the coordinate diagram above:
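Formulas (2) and (3) do not appear in this version of the text. A reconstruction consistent with the definitions of \( A \), \( B \), and \( X \) given below, with \( {}_{b}T^{r} \) denoting the fixed pose of the calibration board in the robot base frame, is:

\[ {}_{b}T^{r} = {}_{t1}T^{r} \cdot {}_{c}T^{t} \cdot ({}_{c1}T^{b})^{-1} \quad (2) \]
\[ {}_{b}T^{r} = {}_{t2}T^{r} \cdot {}_{c}T^{t} \cdot ({}_{c2}T^{b})^{-1} \quad (3) \]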
From formulas (2) and (3) we can get:
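Formula (4) itself is missing here; equating the two expressions for the fixed board pose and rearranging gives a relation of the form:

\[ ({}_{t2}T^{r})^{-1} \cdot {}_{t1}T^{r} \cdot {}_{c}T^{t} = {}_{c}T^{t} \cdot ({}_{c2}T^{b})^{-1} \cdot {}_{c1}T^{b} \quad (4) \]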
Let \( A = ({}_{t2}T^{r} )^{ - 1} \cdot {}_{t1}T^{r} \), \( B = ({}_{c2}T^{b} )^{ - 1} \cdot {}_{c1}T^{b} \), \( X = {}_{c}T^{t} \):
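Formula (5), not reproduced in the source, is the classical hand-eye equation:

\[ A \cdot X = X \cdot B \quad (5) \]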
Formula (5) is the basic equation of EIH system calibration. \( A \) and \( B \) can be derived from the known conditions, and \( X \) is the unknown to solve for. The principle relies on the fact that the relative pose between the camera and the SCARA robot end effector remains unchanged before and after the end effector moves. By having the SCARA robot end effector move through multiple points while the camera photographs the same calibration plane, \( X \) can be solved.
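As a consistency check of the relation \( A \cdot X = X \cdot B \), the hypothetical NumPy sketch below builds synthetic SCARA-like poses (rotations about Z only), derives \( A \) and \( B \) as defined above, and verifies the equation. All numeric values and helper names are illustrative assumptions, not from the paper:

```python
import numpy as np

def rot_z(angle):
    """Homogeneous 4x4 transform: rotation about Z by 'angle' (radians)."""
    c, s = np.cos(angle), np.sin(angle)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def trans(x, y, z):
    """Homogeneous 4x4 pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Synthetic ground truth (illustrative values):
X = rot_z(0.3) @ trans(0.02, -0.01, 0.05)     # camera in tool frame, cT^t
T_board = trans(0.4, 0.1, 0.0) @ rot_z(0.7)   # calibration board in robot base
T_t1 = trans(0.30, 0.20, 0.10) @ rot_z(0.5)   # tool pose 1, t1T^r
T_t2 = trans(0.35, 0.15, 0.10) @ rot_z(-0.4)  # tool pose 2, t2T^r

inv = np.linalg.inv
# Camera pose expressed in the board frame at each robot pose:
C1 = inv(T_board) @ T_t1 @ X
C2 = inv(T_board) @ T_t2 @ X

A = inv(T_t2) @ T_t1   # relative tool motion
B = inv(C2) @ C1       # relative camera motion seen via the board
assert np.allclose(A @ X, X @ B)   # basic EIH equation holds
```

In practice \( X \) is unknown and is solved from several such \( (A, B) \) pairs; the sketch only demonstrates that the equation is satisfied by the geometry.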
3.4 ETH System Calibration
In the ETH system, the camera is fixed at a certain position on the workbench. The idea of hand-eye calibration here is to move a known calibration object to different positions and use the calibration results to obtain \( R \) and \( T \). Figure 6 shows the relative position of each coordinate system when the SCARA robot end effector moves from position \( P_{a} \) to position \( P_{b} \).
\( {}_{c}T^{b1} \) and \( {}_{c}T^{b2} \) can be obtained by camera calibration when the calibration object is at positions \( P_{a} \) and \( P_{b} \), respectively. The poses of the SCARA robot tool coordinate systems \( O_{t1} \) and \( O_{t2} \) can be read from the robot motion controller, so \( {}_{t1}T^{r} \) and \( {}_{t2}T^{r} \) can also be obtained. \( {}_{c}T^{r} \) is the positional relationship between the SCARA robot base coordinate system and the camera coordinate system. The following relations can be obtained from the coordinate diagram above:
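Formulas (6) and (7) are missing from this version. A reconstruction consistent with the definitions of \( C \), \( D \), and \( X \) given below, with \( {}_{b}T^{t} \) denoting the fixed pose of the calibration object in the tool frame, is:

\[ {}_{b}T^{t} = ({}_{t1}T^{r})^{-1} \cdot {}_{c}T^{r} \cdot ({}_{c}T^{b1})^{-1} \quad (6) \]
\[ {}_{b}T^{t} = ({}_{t2}T^{r})^{-1} \cdot {}_{c}T^{r} \cdot ({}_{c}T^{b2})^{-1} \quad (7) \]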
From formula (6) and formula (7) we can get:
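Formula (8) itself is not shown; equating the two expressions for the fixed object pose and rearranging gives a relation of the form:

\[ {}_{t2}T^{r} \cdot ({}_{t1}T^{r})^{-1} \cdot {}_{c}T^{r} = {}_{c}T^{r} \cdot ({}_{c}T^{b2})^{-1} \cdot {}_{c}T^{b1} \quad (8) \]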
Let \( C = {}_{t2}T^{r} \cdot ({}_{t1}T^{r} )^{ - 1} \), \( D = ({}_{c}T^{b2} )^{ - 1} \cdot {}_{c}T^{b1} \), \( X = {}_{c}T^{r} \):
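Formula (9), not reproduced in the source, takes the same hand-eye form as in the EIH case:

\[ C \cdot X = X \cdot D \quad (9) \]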
Formula (9) is the basic equation of ETH system calibration. \( C \) and \( D \) can be derived from the known conditions, and \( X \) is the unknown to solve for. The principle relies on the fact that the relative pose between the calibration object and the SCARA robot end effector remains unchanged before and after the end effector moves. By fixing a calibration object on the SCARA robot end effector, moving it through a number of points, and photographing the calibration plane with the camera fixed on the workbench, \( X \) can be solved.
4 Image Processing and Posture Rectification
In order to obtain the position and gesture of the cellphone base and battery, it is necessary to extract specific mark points on them. By calculating the position deviations of several mark points, the position offset and angle deviation of the cellphone base and battery can be obtained, and the position and angle compensation can then be carried out during the assembly process.
Since the cellphone battery and base have many characteristic points, no additional mark points are needed. Figure 7 shows the cellphone camera on the base part; the center of the camera hole can serve as the mark point to detect the position deviation of the cellphone base, and the straight edge of the cellphone base can serve as the reference line used to detect the angle deviation. Figure 8 shows a corner of the cellphone battery; the corner point can serve as the mark point to detect its position deviation, and the edges of the battery can serve as the reference lines used to detect the angle deviation.
4.1 Line and Circle Detection
The key to completing the high-precision assembly of the cellphone battery is accurately finding the mark points and reference lines. The Hough transform is a classical algorithm for line and circle detection and one of the basic methods of image processing [12]. Its core principle is the mapping between the image space and the parameter space. In the standard parameterization, the expression of a line \( l \) in the image space is:
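Formula (10), the normal parameterization of the line referred to here, is:

\[ \rho = x\cos \theta + y\sin \theta \quad (10) \]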
As shown in Fig. 9, \( \rho \) represents the perpendicular distance from the origin to the line, and \( \theta \) represents the angle between this perpendicular and the \( X \) axis. According to formula (10), the points \( (x,y) \) on the line \( l \) are transformed into a set of sinusoidal curves that intersect at the same point \( p(\rho ,\theta ) \) in the parameter space. Obviously, if we can determine the point \( p \) in the parameter space, we can detect the line in the image space. The point \( p \) is found by cumulative voting in the parameter space: the peak of the accumulator corresponds to the line to be detected in the image space. Because there are many linear features on the cellphone base, the Hough transform may take a long time; to reduce it, a region of interest can be set and the Hough line detection performed only within it. The main steps are: (1) image binarization; (2) set the region of interest; (3) extract the edges of the region of interest with the Canny algorithm and let \( P \) be the set of edge points; (4) for \( \theta \in (0,180^{ \circ } ) \), use formula (10) to calculate the corresponding polar radius for each edge point; (5) accumulate votes in the units \( (\rho ,\theta ):H(\rho ,\theta ) = H(\rho ,\theta ) + 1 \); (6) the unit with the highest number of accumulated votes corresponds to the line to be found.
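The voting procedure in steps (4)–(6) can be sketched in plain NumPy. This is an illustrative implementation, not the authors' code, and the synthetic vertical-line input is an assumption for demonstration:

```python
import numpy as np

def hough_line_peak(edge_points, image_shape, n_theta=180):
    """Vote in (rho, theta) space using rho = x*cos(theta) + y*sin(theta)
    and return the strongest line as (rho in pixels, theta in degrees)."""
    h, w = image_shape
    diag = int(np.ceil(np.hypot(h, w)))        # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))    # theta in [0, 180) degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in edge_points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1   # step (5): vote
    rho_idx, theta_deg = np.unravel_index(np.argmax(acc), acc.shape)
    return rho_idx - diag, theta_deg           # step (6): peak -> line

# Synthetic "edge" of a vertical line x = 20 in a 50x50 region of interest:
edges = [(20, y) for y in range(50)]
rho, theta = hough_line_peak(edges, (50, 50))
# Expected peak: rho = 20, theta = 0 (normal parallel to the X axis)
```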
In the standard parameterization, the expression of a circle in the image space is:
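The circle equation referred to here, in terms of the parameters defined below, is:

\[ (x - x_{0})^{2} + (y - y_{0})^{2} = r^{2} \]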
As shown in Fig. 10, a circle is described by three parameters: \( (x_{0} ,y_{0} ) \) are the center coordinates and \( r \) is the radius. A point \( (x,y) \) in the image space is transformed into a three-dimensional cone in the parameter space, and the points on the same circle are transformed into a set of cones that intersect at one point in the parameter space. The three-dimensional search increases the computational complexity, but the center \( (x_{0} ,y_{0} ) \) can be restricted to a limited range, and since the machining accuracy of the cellphone camera hole guarantees its size, \( r \) can be set to a fixed value, reducing the three-dimensional problem to a two-dimensional one. The main steps of Hough circle detection are: (1) image binarization; (2) set the region of interest; (3) extract the edges of the region of interest with the Canny algorithm; (4) segment the region according to the edges and build the list of regional edge points; (5) find the centroid of the region to obtain a possible range \( D \) of circle centers; (6) calculate the distance \( r^{{\prime }} \) from the edge points to all points in \( D \); (7) accumulate votes for the candidate center coordinates that satisfy \( r^{{\prime }} = r \); (8) the unit with the highest number of votes is the circle to be detected.
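Steps (5)–(8), fixed-radius voting over a candidate center region \( D \), can be sketched as follows. The synthetic circle and the candidate region are illustrative assumptions, not data from the paper:

```python
import numpy as np

def hough_circle_center(edge_points, search_region, r):
    """For each candidate center in the region D, count edge points whose
    rounded distance equals the fixed radius r; return the best center."""
    xs = np.array([p[0] for p in edge_points], dtype=float)
    ys = np.array([p[1] for p in edge_points], dtype=float)
    best, best_votes = None, -1
    for cx, cy in search_region:
        d = np.hypot(xs - cx, ys - cy)           # step (6): distances r'
        votes = int(np.sum(np.round(d) == r))    # step (7): vote where r' = r
        if votes > best_votes:
            best, best_votes = (cx, cy), votes
    return best                                  # step (8): peak center

# Synthetic camera hole: center (25, 25), radius 10, rasterized edge points.
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
edges = [(int(round(25 + 10 * np.cos(a))), int(round(25 + 10 * np.sin(a))))
         for a in angles]
region = [(x, y) for x in range(20, 31) for y in range(20, 31)]  # region D
center = hough_circle_center(edges, region, r=10)
# The recovered center lands at or next to (25, 25)
```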
4.2 Position Compensation and Gesture Compensation
After we get the mark points and reference lines, we can use these features to calculate the position offset and angle offset of the cellphone battery and the cellphone base, and then make the appropriate compensation to complete the high precision assembly of cellphone batteries. Taking the position offset and angle offset of the cellphone base as an example, the principle is shown in Fig. 11.
Here \( (X_{r} ,Y_{r} ) \) is the rotation center, which is fixed and set manually. In the standard case, the line connecting the mark point and the rotation center is perpendicular to the reference line. \( (X,Y) \) is the mark point position in the actual case, \( l_{1} \) is the reference line in the actual case, and \( l_{2} \) is an auxiliary line that passes through the rotation center and is perpendicular to \( l_{1} \). \( \theta \) is the angle between the line connecting the mark point and the rotation center and the horizontal line. The compensation strategy for the cellphone base is to rotate about the rotation center by \( \Delta T \) degrees first, then move by \( \Delta X \) in the X-axis direction, and finally by \( \Delta Y \) in the Y-axis direction. Let \( (X^{{\prime }} ,Y^{{\prime }} ) \) be the coordinates of the mark point after rotation, \( k_{1} \) the slope of the line \( l_{1} \), and \( k_{2} \) the slope of the line \( l_{2} \). According to the geometric relationship in Fig. 11:
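The formulas referenced here are not reproduced in this version of the text. A plausible reconstruction from the stated geometry (rotate about \( (X_{r} ,Y_{r} ) \) so that the mark-center line becomes parallel to \( l_{2} \), then translate), with \( (X_{b} ,Y_{b} ) \) the mark point position in the standard case, is (sign conventions in the original may differ):

\[ k_{2} = - \frac{1}{k_{1}}, \qquad \Delta T = \arctan k_{2} - \theta \]
\[ X^{\prime} = X_{r} + (X - X_{r})\cos \Delta T - (Y - Y_{r})\sin \Delta T \]
\[ Y^{\prime} = Y_{r} + (X - X_{r})\sin \Delta T + (Y - Y_{r})\cos \Delta T \]
\[ \Delta X = X_{b} - X^{\prime}, \qquad \Delta Y = Y_{b} - Y^{\prime} \]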
Since \( (X_{r} ,Y_{r} ) \) and \( (X_{b} ,Y_{b} ) \) are known, and \( (X,Y) \) and \( k_{1} \) can be derived from the image processing, the position offsets \( \Delta X,\Delta Y \) and the angle offset \( \Delta T \) of the cellphone base can be obtained from the above formulas. Similarly, the position and angle offsets of the cellphone battery can be obtained. The offsets are fed back to the SCARA robot controller to make the corresponding compensation, so as to complete the high-precision assembly of the cellphone battery.
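The rotate-then-translate compensation strategy can be checked numerically. The sketch below uses the angle difference of the reference line directly instead of the slopes \( k_{1} \) and \( k_{2} \); all numbers and names are illustrative assumptions:

```python
import numpy as np

def rotate_about(p, c, ang):
    """Rotate point p about center c by ang (radians)."""
    d = np.asarray(p, float) - c
    R = np.array([[np.cos(ang), -np.sin(ang)],
                  [np.sin(ang),  np.cos(ang)]])
    return R @ d + c

# Nominal (standard-case) pose of the mark point and reference-line angle:
center = np.array([0.0, 0.0])      # rotation center (Xr, Yr)
mark0 = np.array([30.0, 40.0])     # standard mark point (Xb, Yb)
alpha0 = 0.2                       # standard reference-line angle (rad)

# Simulate an incoming part: rotated by phi about the center, then shifted.
phi, shift = 0.05, np.array([1.5, -0.8])
mark = rotate_about(mark0, center, phi) + shift  # detected mark point (X, Y)
alpha = alpha0 + phi                             # detected line angle

# Compensation: rotate by dT about the center, then translate by (dX, dY).
dT = alpha0 - alpha                              # angle offset
mark_rot = rotate_about(mark, center, dT)        # (X', Y')
dX, dY = mark0 - mark_rot                        # position offsets

corrected = rotate_about(mark, center, dT) + [dX, dY]
assert np.allclose(corrected, mark0)   # mark point restored to nominal pose
```

Because rotation is applied first, the translation offsets absorb whatever residual shift the rotation leaves, which is why the order of the two corrections matters.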
5 Experiment and Results
The experimental setup of the dual-camera assisted SCARA robot high-precision cellphone battery assembly system is shown in Fig. 12. In the experiment, the SCARA robot is a YK400XG multi-joint robot from Yamaha, with a repeat positioning accuracy of ±0.01 mm [13]. CCD camera 1 and CCD camera 2 are SCI-CM500-GL-01 5-megapixel monochrome cameras with a resolution of 2592 × 1944 pixels; both lenses are 5-megapixel telecentric lenses, model OPT-5M03-110, provided by OPT Company [14]. The cellphone battery is placed on a table, with a position accuracy in the X and Y directions of ±1 mm. The cellphone base is placed on a conveyor belt with a mobile carrier; its location accuracy is determined by the precision of the conveyor belt and the installation accuracy of the carrier, and is ±0.05 mm in the X direction and ±0.15 mm in the Y direction.
The dimensional tolerance of the cellphone battery and the battery compartment is ±0.08 mm, and the reserved gap on each side is 0.10 mm. In order to ensure that the cellphone battery can be installed in every case, the assembly accuracy should be at least 0.04 mm. To verify the effect of the dual-camera assistance on SCARA robot assembly, the online assembly of the cellphone battery was repeated 100 times with the dual-camera assisted system, and the same experiment was carried out with only CCD camera 2 assisting. The experimental results are shown in Table 1, and Table 2 lists ten position and angle offsets randomly selected from the experiments.
As can be seen from Table 1, the cellphone battery assembly success rate reaches 100% with dual-camera assistance, and the assembly accuracy reaches 0.04 mm, whereas the success rate is only 13% when the system is assisted by CCD camera 2 alone. The analysis shows that with dual-camera assistance, the position and angle offsets of both the cellphone base and the battery can be compensated, achieving higher-precision assembly. In the single-camera assisted system, only the cellphone battery can be compensated for position and angle offsets, while the positioning accuracy of the cellphone base is limited by the accuracy of the carrier and the conveyor belt, making assembly quality difficult to guarantee; hence the low success rate.
6 Summary
In this paper, we propose a dual-camera assisted method of the SCARA robot for online assembly of cellphone batteries. Both the hardware configuration of the assembly system and the main procedures for realizing the online assembly task are introduced. The experimental results verify that the proposed dual-camera assisted method is effective and practical for improving the success rate. Hence, this method can be an alternative for real industrial applications. Future work will focus on applying this scheme to accomplish more complex assembly tasks.
References
Mikkel, R.P., Lazaros, N., Rasmus, S.A., Casper, S., Volker, K., Ole, M.: Robot skills for manufacturing: from concept to industrial deployment. Robot. Comput.-Integr. Manuf. 37, 282–291 (2016)
Furuya, N., Soma, K., Chin, E., Makino, H.: Research and development of selective compliance assembly robot arm (2nd report): hardware and software of SCARA controller. J. Japan Soc. Precis. Eng. 49(7), 835–841 (1983)
Subhashini, P.V.S., Raju, N.V.S., Venkata, R.: Study on robotic deburring of machined components using a SCARA robot. In: International Conference on Robotics, pp. 91–103 (2015)
Nkomo, M., Collier, M.: A color-sorting SCARA robotic arm. In: International Conference on Consumer Electronic, pp. 763–768 (2012)
Kitahara, Y.: Development of compact assembly robot and its application. Robot Tokyo 110, 9–14 (1996)
Li, W.B., Cao, G.Z., Guo, X.Q., Huang, S.D.: Development of a 4-DOF SCARA robot with 3R1P for pick-and-place tasks. In: International Conference on Power Electronics Systems and Applications, pp. 110–121 (2015)
Yang, X.: Robotic assembly of automotive wire harnesses: New research suggests that six-axis robots can be used to install automotive wiring harnesses. Assembly 57(7), 7–13 (2014)
Gojin, M., Yanting, L., Zhong, L., Mingyu, G.: A machine vision based sealing rings automatic grabbing and putting system. In: IEEE International Conference on Industrial Informatics, pp. 202–206 (2016)
Lu, R., Wang, L., Mills, J.K., Sun, D.: 3-D automatic micro assembly by vision-based control. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2007, pp. 297–302 (2007)
Flandin, G., Chaumette, F., Marchand, E.: Eye-in-hand/eye-to-hand cooperation for visual servoing. In: IEEE International Conference on Robotics and Automation, vol. 3, pp. 2741–2746 (2000)
Zhang, Z.: A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000)
Illingworth, J., Kittler, J.: A survey of the Hough transform. Comput. Vis. Graph. Image Process. 43(1), 87–116 (1988)
The YAMAHA SCARA robot information. http://www.yamaha-motor.com.cn/robot/lineup/ykxg/small/yk400xr/index.html. Accessed 25 Apr 2017
The telecentric lens information. http://www.optmv.com/pro_listjt.aspx?ProductsCateId=175&CateId=175&page=2. Accessed 25 Apr 2017
Acknowledgement
This work was supported by the Scientific and Technological Research Project of Guangdong Province (2014B090922001).
© 2017 Springer International Publishing AG
Feng, K., Zhang, X., Li, H., Huang, Y. (2017). A Dual-Camera Assisted Method of the SCARA Robot for Online Assembly of Cellphone Batteries. In: Huang, Y., Wu, H., Liu, H., Yin, Z. (eds) Intelligent Robotics and Applications. ICIRA 2017. Lecture Notes in Computer Science(), vol 10463. Springer, Cham. https://doi.org/10.1007/978-3-319-65292-4_50