Abstract
A novel high-precision global calibration method for multiple vision sensor (MVS) measurement systems, based on structured light and a 2D target, is proposed. By capturing a series of feature points on the light plane simultaneously, the relative pose between each local coordinate system and the global coordinate system (GCS) can be acquired under the constraint that a world point is unique. The method is simple and low-cost, and is feasible for on-line measurement. An experiment is designed to verify the precision of the proposed method. The result shows that the average reprojection error is less than 0.5 pixel, which meets the accuracy requirement of multi-camera measurement.
1 Introduction
Because of its high accuracy, timeliness, and other advantages, vision sensor measurement has been widely used in industrial measurement in recent years [1, 2], and MVS measurement technology has obvious advantages there. In an MVS system, each sensor has an independent coordinate system, which makes it impossible for different sensors to describe the scene consistently unless a global calibration is performed.
Global calibration of MVS has been studied by many scholars, and various solutions have been proposed. Kumar et al. used a plane mirror to achieve global calibration [3]. This method imposes strict restrictions on the location of the mirrors, and the calibration precision is not stable. Zhang Guang-jun et al. suggested a global calibration based on a 1D target [4]. It is simple, but only one feature point can be obtained at each position of the target, and the extraction accuracy of the feature points on the target cannot be guaranteed. Hu Hao proposed a precise camera calibration method based on close-range photogrammetry [5], but this technique needs a large target that is not portable, so it is inconvenient to use.
This paper proposes a simple and practical global calibration method for MVS. The system only needs a checkerboard, which can be printed in the lab, and a structured light projector. Moreover, the system has several advantages: it can calibrate the relative poses of all sensors quickly, and the number of coordinate transformations in the proposed method is small. Its high precision has been confirmed by experiments.
2 Mathematical Model of Structured Light Vision Sensor
Figure 1 shows the geometrical model [6] of a structured light vision sensor. The world coordinate frame and the camera coordinate frame \( o_c\text{-}x_c y_c z_c \) coincide, and the image plane coordinate frame is defined as O-XY. The relation between the world coordinates and the image coordinates is given by (1):

\( \rho \left[\begin{array}{c} X \\ Y \\ 1 \end{array}\right] = A \left[\begin{array}{c} x \\ y \\ z \end{array}\right], \qquad ax + by + cz + d = 0 \qquad (1) \)

where \( A=\left[\begin{array}{ccc} \alpha_x & 0 & \mu_0 \\ 0 & \alpha_y & \nu_0 \\ 0 & 0 & 1 \end{array}\right] \) is the known intrinsic matrix of the camera, ρ is an unknown scale factor, and ax + by + cz + d = 0 is the equation of the light plane in the world coordinate frame.
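The projection model in (1) can be sketched in a few lines of Python; the intrinsic values below are illustrative placeholders, not calibrated parameters:

```python
import numpy as np

# Hypothetical intrinsic matrix (alpha_x, alpha_y, u0, v0 are illustrative
# values, not the paper's calibration results).
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(A, p_world):
    """Project a 3-D point (camera frame == world frame) to pixel coordinates.

    rho * [X, Y, 1]^T = A @ [x, y, z]^T, so the scale rho equals the depth z.
    """
    h = A @ p_world          # homogeneous image coordinates, scaled by rho
    return h[:2] / h[2]      # divide out the unknown scale

p = np.array([0.1, -0.05, 2.0])   # a point on the light plane, in metres
X, Y = project(A, p)              # → (360.0, 220.0)
```

Note that ρ drops out in the division, which is why only the direction of the ray is recoverable from a single image without the light-plane constraint.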
3 Global Calibration for Multi-vision Sensors
3.1 Calibration of Single Structured Light Vision Sensor
The process to calibrate a structured light vision sensor includes two parts: camera calibration and projector calibration.
For the camera calibration, we use Zhang's method [7] to obtain the camera intrinsic parameters and distortion coefficients [8]. In the experiment, image distortion is corrected before the images are processed.
For the projector calibration, we use the method in [6], which is introduced as follows.
Firstly, we determine the world coordinates of the target feature points. A planar chessboard is selected as the target. The corner points in each line of the chessboard can be regarded as a 1D target, such as A, C and D shown in Fig. 1. The laser stripe must intersect the target. The world coordinates of A, C and D are denoted by (x_A, y_A, z_A), (x_C, y_C, z_C) and (x_D, y_D, z_D). According to the relative positional relationship among the three points, we obtain the following equation:
where h is a known parameter. The image coordinates of A, C and D are denoted by (X_A, Y_A), (X_C, Y_C) and (X_D, Y_D); they can be obtained from the chessboard target. From the camera imaging model and (2), we get (3):
Because the distance L between A and D is known, we get (4):
According to (3) and (4), we can get the world coordinates of the target feature points A and D. Then the world coordinates of point C can be determined by (2).
Secondly, we calculate the world coordinates of the light plane feature point. The advantages of using the corner points in each line of the chessboard as a 1D target are as follows. First, at each position, more than one light plane feature point can be obtained. Second, the image coordinates of a feature point on the light plane can be determined as the intersection of two lines: one line is fitted to the structured light stripe, and the other is fitted through the corner points of each chessboard row, such as the corner points A, C and D shown in Fig. 1. This improves the accuracy of the image coordinates of the light plane feature points. The image coordinates of B are denoted by (X_B, Y_B). According to the camera imaging model, we get (5).
The relative positional relationship of A, B and D is described in (6):
where h_x is unknown. From (5) and (6), we can solve for h_x and then obtain the world coordinates of point B from (6).
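The line-intersection idea for locating B in the image can be sketched as follows; the pixel coordinates are hypothetical, and the line fit uses a standard total-least-squares approach rather than any specific routine from the paper:

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares fit of a 2-D line a*x + b*y + c = 0 through points."""
    pts = np.asarray(pts, float)
    centroid = pts.mean(axis=0)
    # Principal axis of the centred points gives the line direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    d = vt[0]                       # unit direction vector
    n = np.array([-d[1], d[0]])     # normal vector (a, b)
    return np.array([n[0], n[1], -n @ centroid])

def intersect(l1, l2):
    """Intersection of two lines in homogeneous form (cross product)."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

# Hypothetical pixel data: one chessboard-row of corners, and sampled
# laser-stripe points crossing that row.
row_pts   = [(100, 200), (200, 202), (300, 204), (400, 206)]
laser_pts = [(250, 100), (252, 200), (254, 300)]

B = intersect(fit_line(row_pts), fit_line(laser_pts))   # image coords of B
```

Using all corners of a row to fit the line (instead of detecting B directly) averages out per-corner localisation noise, which is the accuracy advantage the text describes.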
Thirdly, we compute the light plane coefficients. We freely place the target at n positions within the FOV; the light plane feature points, denoted (x_Bi, y_Bi, z_Bi) (i = 1, 2, …, kn), can then be obtained, where k is determined by the size of the chessboard. According to (1), the following objective function is created:
where F(x_Bi, y_Bi, z_Bi) = (ax_Bi + by_Bi + cz_Bi + d)². Using the least squares method, the light plane coefficients a, b, c and d can be obtained. The calibration of a single structured light vision sensor is then finished.
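Minimising the sum of F over all feature points is a standard plane fit; a minimal sketch (with synthetic points, and the common unit-normal constraint ‖(a, b, c)‖ = 1 to exclude the trivial solution) looks like this:

```python
import numpy as np

def fit_plane(points):
    """Fit a*x + b*y + c*z + d = 0 by minimising sum of (a*x+b*y+c*z+d)^2
    subject to ||(a, b, c)|| = 1 (total least squares via SVD)."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                    # direction of smallest variance
    return np.append(normal, -normal @ centroid)   # (a, b, c, d)

# Synthetic feature points near the plane z = 0.5*x + 1.0 (illustrative).
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(30, 2))
z = 0.5 * xy[:, 0] + 1.0 + rng.normal(0, 1e-3, 30)
pts = np.column_stack([xy, z])

a, b, c, d = fit_plane(pts)   # recovers the plane up to overall sign
```

The SVD solution is equivalent to the least-squares minimisation of the objective function in the text, with the scale of (a, b, c, d) fixed by the unit-normal constraint.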
3.2 Global Calibration for MVS
Figure 2 shows the arrangement of the MVS system. V_iCF (i = 1, 2, 3) represents the coordinate system of each camera. The GCS is established in V1CF, and V2CF and V3CF are transformed into V1CF to complete the global calibration of the MVS.
The key to the conversion between two coordinate systems is obtaining the [R, t] matrix that represents their relative pose [9]. We take the solving of [R_21, t_21] between the local coordinate system V2CF and the global coordinate system V1CF as an example to illustrate the idea of global calibration.
As shown in Fig. 2, the point P on the light plane has image coordinates (X_P1, Y_P1) in camera 1 and world coordinates (x_P2, y_P2, z_P2) in camera 2; the two sets of coordinates must satisfy (8):

\( \rho \left[\begin{array}{c} X_{P1} \\ Y_{P1} \\ 1 \end{array}\right] = A_1 \left[\begin{array}{cc} R_{21} & t_{21} \end{array}\right] \left[\begin{array}{c} x_{P2} \\ y_{P2} \\ z_{P2} \\ 1 \end{array}\right] \qquad (8) \)
The matrix A_1 is the intrinsic parameter matrix of camera 1, and [R_21, t_21] represents the transformation from V2CF to V1CF. According to (8), if we obtain a series of image points in camera 1 and the corresponding world points in camera 2, we can solve for the extrinsic matrix [R_21, t_21].
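The mapping in (8) can be sketched as follows; the intrinsics, pose, and point below are illustrative values, not the paper's calibration results:

```python
import numpy as np

def reproject(A1, R21, t21, p2):
    """Eq. (8): rho * [X, Y, 1]^T = A1 @ (R21 @ p2 + t21)."""
    h = A1 @ (R21 @ p2 + t21)   # homogeneous camera-1 image coordinates
    return h[:2] / h[2]

# Illustrative values only.
A1 = np.array([[900.0,   0.0, 512.0],
               [  0.0, 900.0, 384.0],
               [  0.0,   0.0,   1.0]])
R21 = np.eye(3)                    # parallel cameras assumed for this sketch
t21 = np.array([0.2, 0.0, 0.0])    # hypothetical 20 cm baseline along x
p2  = np.array([0.0, 0.1, 2.0])    # point P in camera-2 coordinates

X, Y = reproject(A1, R21, t21, p2)   # → (602.0, 429.0)
```

Given many such (image point, world point) pairs, [R_21, t_21] is the pose that makes all reprojections agree with the observed image coordinates.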
The following steps are used to complete global calibration:
1. According to Sect. 2, the positional relationship between camera 2 and the light plane, as shown in Fig. 2, can be determined.
2. Obtain the feature points on the light plane using the chessboard, with cameras 1 and 2 capturing images at the same time. Calculate the image coordinates (X_P1, Y_P1) of the feature point P in camera 1.
3. Calculate the image coordinates (X_P2, Y_P2) of the feature point P in camera 2.
4. According to the calibration results of step 1, calculate the world coordinates (x_P2, y_P2, z_P2) of the feature point P in camera 2.
5. According to the size of the chessboard, multiple image coordinates in camera 1 and world coordinates in camera 2 can be obtained at each position. By freely moving the target, we can collect enough image points in camera 1 and world points in camera 2. Substituting the coordinates into (8) and using the L-M optimization algorithm [10], we obtain [R_21, t_21]. The other V_iCF (i = 3, 4, …) coordinate systems can be transformed directly or indirectly by the above method. This is the proposed global calibration method.
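The pose-fitting step can be sketched with the Levenberg-Marquardt routine from SciPy; the intrinsics, ground-truth pose, and point sets below are synthetic placeholders, not the paper's data:

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(r):
    """Rotation matrix from an axis-angle vector r (Rodrigues formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(params, A1, pts2):
    """Map camera-2 points into camera-1 pixels via eq. (8)."""
    R, t = rodrigues(params[:3]), params[3:]
    h = (pts2 @ R.T + t) @ A1.T
    return h[:, :2] / h[:, 2:3]

def residuals(params, A1, pts2, obs1):
    """Reprojection residuals to be minimised by Levenberg-Marquardt."""
    return (project(params, A1, pts2) - obs1).ravel()

# Synthetic ground truth (illustrative only): axis-angle + translation.
A1 = np.array([[900.0, 0.0, 512.0], [0.0, 900.0, 384.0], [0.0, 0.0, 1.0]])
true = np.array([0.05, -0.1, 0.02, 0.2, -0.05, 0.1])
rng = np.random.default_rng(1)
pts2 = rng.uniform([-0.5, -0.5, 1.5], [0.5, 0.5, 2.5], size=(20, 3))
obs1 = project(true, A1, pts2)          # noise-free camera-1 observations

sol = least_squares(residuals, np.zeros(6), args=(A1, pts2, obs1), method="lm")
# sol.x recovers the axis-angle rotation and translation; rodrigues(sol.x[:3])
# and sol.x[3:] give [R_21, t_21].
```

The rotation is parameterised by an axis-angle vector so that the optimiser works over an unconstrained 6-vector; this is a common choice, not necessarily the parameterisation used in the paper.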
4 Experiment Result
The calibration results are shown as follows:
Camera 1: intrinsic matrix A_1.
The radial and tangential distortion coefficients:
k_1 = −9.77 × 10^−2, p_1 = 5.20 × 10^−2, k_2 = 1.29 × 10^−3, p_2 = −2.67 × 10^−3
Camera 2: intrinsic matrix A_2.
The radial and tangential distortion coefficients:
k_1 = −8.65 × 10^−2, p_1 = 4.13 × 10^−2, k_2 = −2.17 × 10^−3, p_2 = 1.16 × 10^−3
Camera 3: intrinsic matrix A_3.
The radial and tangential distortion coefficients:
k_1 = −8.67 × 10^−2, p_1 = 8.60 × 10^−2, k_2 = −9.17 × 10^−3, p_2 = −9.72 × 10^−3
The equation of light plane 1 in coordinate frame V2CF:
The equation of light plane 2 in coordinate frame V3CF:
The rotation and translation matrix [R_21, t_21] from V2CF to V1CF is:
The rotation and translation matrix [R_31, t_31] from V3CF to V1CF is:
An experiment is designed to verify the accuracy of the calibration results. A feature point P on the light plane is selected, and cameras 1 and 2 capture P at the same time. We obtain P's image coordinates (X_P1, Y_P1) in camera 1, and P's world coordinates (x_P2, y_P2, z_P2) in camera 2 can be calculated from the calibration results above. Equation (8) then maps the coordinates (x_P2, y_P2, z_P2) to camera 1's image coordinates (X_P1′, Y_P1′). The difference between (X_P1, Y_P1) and (X_P1′, Y_P1′) illustrates the precision of the calibration result. The results are shown in Tables 1 and 2.
Ten feature points are obtained from light planes 1 and 2. For the calibration between camera 1 and camera 2, the average reprojection error is 0.014 pixel in x and −0.07 pixel in y. For the calibration between camera 1 and camera 3, the average reprojection error is 0.398 pixel in x and 0.476 pixel in y. The experimental results show that the proposed method is both feasible and high-precision.
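The error metric reported above can be computed as a signed per-axis average; the pixel coordinates below are hypothetical stand-ins for the tabulated values:

```python
import numpy as np

# Hypothetical measured vs. reprojected pixel coordinates for a few points
# (the paper uses ten points per light plane; these values are illustrative).
measured    = np.array([[360.0, 220.0], [402.1, 233.5], [445.3, 247.2]])
reprojected = np.array([[360.2, 219.9], [402.0, 233.8], [445.1, 247.0]])

err = reprojected - measured       # signed difference per axis
mean_err = err.mean(axis=0)        # average reprojection error in x and y
```

A signed average close to zero in both axes indicates that the estimated [R, t] has no systematic bias, which is how the tables above are read.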
5 Conclusions
The novel global calibration method for MVS proposed in this paper, which uses a structured light projector and a planar target, provides good accuracy. The method exploits the uniqueness of each feature point to calculate the rotation and translation matrices between the sensors of the MVS. It is easy to use on site where space is limited, and the target is easy to make at low cost. Moreover, the real experimental results show good performance of the proposed method.
References
Senoh M, Kozawa F, Yamada M (2006) Development of shape measurement system using an omnidirectional sensor and light sectioning method with laser beam scanning for hume pipes. J Opt Eng 45(6):064301
Chen F, Brown GM, Song M (2000) Overview of the three-dimensional shape measurement using optical methods. J Opt Eng 39(1):10–22
Kumar RK, Ilie A, Frahm JM (2008) Simple calibration of non-overlapping cameras with a mirror. Comput Vis Pattern Recogn 2008:1–7
Huang BK, Liu Z, Zhang GJ (2011) Global calibration of multi-sensor vision measurement system based on line structured light. J Optoelect Laser 22(12):1816–1820
Hu H, Liang J, Tang ZZ, Shi BQ, Guo X (2012) Global calibration for multi-camera videogrammetric system with large-scale field of view. Optic Precis Eng 20(2):369–378
Wei Z, Cao L, Zhang G (2010) A novel 1D target-based calibration method with unknown orientation for structured light vision sensor. Optic Laser Tech 42(4):570–574
Zhang ZY (2000) A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 22(11):1330–1344
Helferty JP, Zhang C, McLennan G, Higgins WE (2001) Video endoscopic distortion correction and its application to virtual guidance of endoscopy. IEEE Trans Med Imaging 20(7):605–617
Liu Z, Zhang G, Wei Z (2011) Novel calibration method for non-overlapping multiple vision sensors based on 1D target. Opt Lasers Eng 49(4):570–577
More J (1978) The Levenberg-Marquardt algorithm, implementation and theory. Numer Anal Lect Notes Math 630:105–116
Li, X., Zhang, J., Chen, P. (2014). A Novel Global Calibration Method for Multi-vision Sensors. In: Zhang, B., Mu, J., Wang, W., Liang, Q., Pi, Y. (eds) The Proceedings of the Second International Conference on Communications, Signal Processing, and Systems. Lecture Notes in Electrical Engineering, vol 246. Springer, Cham. https://doi.org/10.1007/978-3-319-00536-2_11