
1 Introduction

With the rapid development of machine vision systems, different types of cameras play increasingly significant roles in industrial intelligence. Owing to its compact structure and low sensitivity to illumination, the time-of-flight (ToF) camera is widely applied in many fields, such as mobile robot obstacle avoidance and navigation [1], object detection and recognition [2, 3], 3D reconstruction and gesture recognition [4, 5].

The time-of-flight camera is an active sensor: it emits near-infrared (NIR) light, receives the reflected light [6], and calculates the distance from the optical center of the camera to a target object surface. Several kinds of noise and error exist in ToF cameras [7, 8], which negatively affect the accuracy of distance measurement [9]. It is therefore essential to correct the data errors of a ToF camera as an initial procedure. There are several sources of error in a ToF camera; generally, they can be classified as systematic and non-systematic errors [10]. Systematic errors can be predicted and corrected by different calibration methods, whereas non-systematic errors are complex and unpredictable and are generally removed by filtering.

Several deviations from the idealistic ToF camera model have been analyzed, but the effects of temperature, multi-path reflections and ambient light remain hard to resolve [11]. An error model has been used to estimate ground and wall planes, in which the acquisition geometry and noise characteristics are analyzed [12]. A 2.5D pattern board with holes has been used to capture both color and ToF depth images for feature detection, after which the transformation of the ToF camera is reset; to obtain an accurate bias after ray correction, k-means clustering and B-spline functions are employed [1].

A ToF camera measures the radial distance from the optical center to a target object surface, which is longer than the actual (vertical) distance. To remove this radial distance error, this paper divides the various errors of the ToF camera into errors caused by the non-imaging principle and errors caused by the imaging principle. When compensating the errors caused by the non-imaging principle, data errors caused by temperature changes and integration time are analyzed and corrected, multipath errors and other noises are compensated by applying offsets, and an evaluation function is proposed to determine the optimal reference distance. To tackle the radial error caused by the imaging principle, a Distance Overestimation Error Correction (DOEC) method based on the principle of pinhole imaging is proposed. By analyzing the positional relationship between the center pixel and the other pixels in the depth image, the angle between the two is obtained and the radial distance is converted to the vertical distance, so that the correct distance of the ToF camera is obtained.

The remainder of this paper is organized as follows. Section 2 corrects the errors caused by the non-imaging principle in the ToF camera. Section 3 corrects the radial error caused by the imaging principle. Section 4 presents the experimental results.

2 Errors Caused by Non-imaging Principle in ToF Camera

ToF cameras obtain distance information by measuring the time of flight of photons emitted as modulated infrared light. The measurement determines the phase shift \( \varphi \) between the reference signal and the reflected light. With \( c \) the speed of light and \( f \) the modulation frequency, the distance value \( d \) is formulated as:

$$ d = \frac{1}{2} \times c \times \frac{\varphi }{2\pi f} $$
(1)
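Equation (1) can be sketched directly in code. This is a minimal illustration, not the paper's implementation; the 20 MHz modulation frequency in the example is an assumed value, not a parameter reported for the EPC660 setup.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Eq. (1): d = (1/2) * c * phi / (2 * pi * f)."""
    return 0.5 * C * phase_shift_rad / (2.0 * math.pi * mod_freq_hz)

# Example with assumed values: a phase shift of pi/2 at 20 MHz modulation
d = tof_distance(math.pi / 2, 20e6)  # about 1.87 m
```

Note that the unambiguous range is reached when \( \varphi = 2\pi \), i.e. \( c/(2f) \), which is why the modulation frequency bounds the usable measurement range.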

The temperature of the chip changes with operating time, which causes drift in the depth data, especially just after the ToF camera is started, when the chip temperature changes significantly. To reduce the deviation caused by temperature changes, a warm-up period is required before measurement. Experimental results with the EPC660 chip show that about 15–20 min of warm-up are needed before acquiring depth images, after which the obtained depth data are more stable and accurate.

Integration Time (IT) is the time span over which the ToF camera accumulates the multiple phase measurements used to compute the distance, so it is important to the accuracy of the ToF data. In this paper, with the integration time fixed at 1500 μs, the effect of integration time on the depth data is studied by imaging flat walls at different distances. Figure 1(a)–(d) shows the depth images acquired at different distances; different colors represent different distances and black represents invalid points. It can be seen that, for a fixed integration time, the larger the measurement distance, the more invalid points appear and the more pixels jitter at the edge of the image, so the measurement accuracy decreases. It is therefore necessary to select an appropriate integration time.

Fig. 1.
figure 1

Depth image of different distance with integration time is 1500 μs (Color figure online)

Multipath errors and other noises can be eliminated through precise modeling, but the implementation is complex, and some nonlinear errors change with distance and are difficult to model. To simplify the correction procedure, this paper applies offsets to correct such errors.

Based on the above analysis of the error sources, this paper corrects the errors of the ToF camera over the range of 0–4 m. Since the nonlinear errors are related to distance, 1 m, 2 m, 3 m, and 4 m are used as the distance baselines for the experiment. The steps for correcting the errors caused by the non-imaging principle are as follows, with 1 m set as the distance baseline first.

  • Step 1: Warm up before starting. To reduce errors due to temperature changes, image acquisition should start after 20 min, so that the ToF camera has enough time to warm up and the data fluctuation range is small.

  • Step 2: Adjust the integration time. The integration time was adjusted so that the amplitude at the 1 m distance baseline reached about 1000 LSB, regarded as a suitable amplitude. The integration time was not changed during the subsequent experiment.

  • Step 3: Set the global offset. Since the data of the pixel at the optical center is less affected by the various errors than the other pixels in the depth image, its trueness is higher. Therefore, the depth measured at the optical center, denoted \( basemea_{i} \), is used to compute the global offset at the distance baseline of 1 m, where \( i \) indexes the distance baselines from 1 m to 4 m. The global \( offset_{i} \) is formulated as follows:

    $$ offset_{i} \, = \,distancebaseline_{i} - basemea_{i} \quad i = 1,\,2,\,3,\,4 $$
    (2)
  • Here \( distancebaseline_{1} \) is 1 m. Once the global \( offset_{1} \) is acquired, it is used to compensate the errors caused by multipath effects and other noises.

  • Step 4: Measure the depth data at other distances. To analyze the data accuracy at different distances after the integration time and global offset were set at the 1 m baseline, the camera was placed at actual distances of 2 m, 3 m, and 4 m, denoted \( act_{i} \). The distances measured at the optical center at these actual distances are denoted \( mea_{i} \).

  • Step 5: Calculate the evaluation function \( v \). To evaluate the effect on data accuracy of the integration time and global offset set at each distance baseline, the evaluation function \( v \) is formulated as follows:

    $$ v\, = \,\sum\limits_{i = 1}^{4} {(act_{i} - mea_{i} )^{2} } \quad i = 1,\,2,\,3,\,4 $$
    (3)
  • The smaller the difference between the actual and measured distances, the higher the accuracy of the measured data, and hence the smaller the value of \( v \).

  • Step 6: Repeat the above process with the distance baseline changed to 2 m, 3 m, and 4 m respectively.
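The baseline-selection loop of Steps 4–6 can be sketched as follows. The measured values below are invented placeholders for illustration only, not the data of Table 1; the candidate names and numbers are assumptions.

```python
# Eq. (3): v = sum_i (act_i - mea_i)^2; a smaller v means better accuracy.
actual = [1.0, 2.0, 3.0, 4.0]  # act_i, ground-truth distances in metres

def evaluate(measured):
    """Sum of squared deviations between actual and measured distances."""
    return sum((a - m) ** 2 for a, m in zip(actual, measured))

# Hypothetical optical-center measurements obtained with the integration
# time and global offset set at each candidate distance baseline:
candidates = {
    "1 m": [1.00, 2.03, 3.06, 4.09],
    "2 m": [0.99, 2.00, 3.02, 4.03],
}
best = min(candidates, key=lambda k: evaluate(candidates[k]))
# With these placeholder numbers, the "2 m" baseline gives the smaller v.
```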

The experimental results are shown in Table 1. When the distance baseline is 2 m, the value of the evaluation function \( v \) is smaller than that of the other distance baselines, indicating that the obtained distance data are more reliable and that the errors caused by the non-imaging principle have been reduced to some extent. Thus the integration time and global offset set at the 2 m distance baseline are chosen as the experimental parameters for the 0–4 m operating range of the ToF camera. The depth data acquired at the 2 m baseline are also saved as data samples, which are used to correct the error caused by the imaging principle in the next section.

Table 1. Experiment results and value of v

3 Error Caused by Imaging Principle

After correcting errors caused by the non-imaging principle, a more accurate depth image is obtained. In this section, we analyze the error caused by the imaging principle and propose a Distance Overestimation Error Correction method (DOEC) based on the pinhole imaging principle.

3.1 Distance Overestimation Error

The distance overestimation error means that the distance measured by the ToF camera is longer than the actual distance. Consider the measurement process shown in Fig. 2, with three test points \( A,\,B,\,C \) on the same object plane, for example on a smooth wall, where test point \( B \) is the intersection of the object plane with the line from the optic center \( O_{C} \). The straight-line distances from test points \( A \), \( B \), \( C \) to the optic center \( O_{C} \) are denoted \( r_{A} \), \( r_{B} \), \( r_{C} \), and the vertical distance between the object plane and the camera plane is denoted \( d \). Clearly \( d \) is equal to \( r_{B} \), while \( r_{A} \) and \( r_{C} \) are longer than the true distance \( r_{B} \); this phenomenon is called the distance overestimation error. The overestimated distances \( r_{A} \) and \( r_{C} \) can be corrected by the following formula:

Fig. 2.
figure 2

Distance overestimation error

$$ d\, = \,r_{B} \, = \,r_{A} \, \times \,\sin \theta_{A} \, = \,r_{C} \, \times \,\sin \theta_{C} $$
(4)

Where \( \theta_{A} \) is the angle between the straight line \( r_{A} \) and the camera plane, and \( \theta_{C} \) is the angle between the straight line \( r_{C} \) and the camera plane. This kind of distance overestimation has a great impact on 3D image reconstruction.

An experiment was conducted in front of a smooth wall, with a vertical distance of 1 m between the wall and the camera optical center \( O_{c} \). Figure 3 is the scatter diagram of the wall plotted from the depth data collected in the experiment. It can be seen that the farther a test point is from the optic center \( O_{c} \), the more obvious the depth overestimation in the 3D image: the depth values of points near the optical center are close to 1 m, while those near the edges are greater, even though all the points lie in the same plane. It is therefore necessary to compensate such errors for accurate measurement.

Fig. 3.
figure 3

Distance data at 1 m of wall

3.2 Distance Overestimation Error Correction Method (DOEC)

To remove the distance overestimation error, this paper proposes the Distance Overestimation Error Correction (DOEC) method, which exploits the principle of pinhole imaging. The correction model is shown in Fig. 4, where the imaging plane, camera plane and object plane are depicted. \( O_{b} \) is the center point of the imaging plane, \( O_{c} \) is the optic center of the camera plane, and \( O_{a} \) is the center point of the object plane. \( p \) is an arbitrary point on the object plane and \( p^{\prime } \) is its corresponding point on the imaging plane. \( f \) is the focal length, \( d \) is the vertical distance from the object plane to the depth camera plane, and \( r \) is the distance actually measured by the depth camera. \( \theta \) is the angle between the ray to \( p \) and the optical axis through the optic center \( O_{c} \).

Fig. 4.
figure 4

Imaging model of ToF

According to the principle of linear propagation of light, \( \theta = \theta ' \). The actual distance between the camera plane and the object plane is \( d \); however, due to distance overestimation, the distance from point \( p \) to the optic center \( O_{c} \) is \( r \), which is longer than \( d \). The geometric relationship between the vertical distance \( d \) and the ToF camera measurement \( r \) can be expressed as (5):

$$ d\, = \,r\, \times \,\cos \theta $$
(5)

The distance of each point can be corrected if \( \theta \) is known. \( \theta^{{\prime }} \) can be computed as:

$$ \theta^{{\prime }} \, = \,\arctan (\frac{{O_{b} p^{{\prime }} }}{f}) $$
(6)

where \( O_{b} p^{{\prime }} \) is the distance between \( p^{\prime } \) and the image plane center \( O_{b} \). The depth map exists as a two-dimensional pixel matrix in the depth camera. Since there is a conversion relationship between the continuous imaging plane and the discrete image pixel coordinate system, \( O_{b} p^{{\prime }} \) actually represents the true distance of the pixel point in the discrete image pixel coordinate system, and the position of \( p^{{\prime }} \) in the continuous imaging plane coordinate system is \( (x,\,y) \). The coordinate transformation between the continuous imaging plane and the discrete image pixel coordinates is shown in Fig. 5.

Here \( u - v \) denotes the discrete pixel coordinate system and \( x - y \) the continuous image coordinate system; passing from the continuous imaging plane to the discrete image pixel coordinate system, \( p^{{\prime }} (x,\,y) \) is transformed into \( (u_{{p^{{\prime }} }} ,\,v_{{p^{{\prime }} }} ) \), as shown in (7):

Fig. 5.
figure 5

Coordinate transformation

$$ u_{{p^{{\prime }} }} \, = \,\frac{x}{dx}\, + \,u_{0} \, \quad \, v_{{p^{{\prime }} }} \, = \,\frac{y}{dy}\, + \,v_{0} $$
(7)

where \( dx \) represents the true physical width of one pixel in the x-axis direction, and \( dy \) represents the same in the y-axis direction. These two widths are intrinsic parameters of the imaging chip and can be found by consulting the chip manual. If the chip is stable, \( dx \) and \( dy \) will not change. \( (u_{0} ,v_{0} ) \) is the center point of the image plane, which is the projection point of the camera aperture center on the image plane and can be obtained by calibration. So \( O_{b} p^{{\prime }} \) can be calculated by (8).

$$ O_{b} p^{{\prime }} = \sqrt {(u_{{p^{{\prime }} }} - u_{0} )^{2} + (v_{{p^{{\prime }} }} - v_{0} )^{2} } $$
(8)

Then the final depth vertical distance can be computed as

$$ \begin{array}{*{20}l} {d\, = \,r\, \times \,\cos (\arctan (\frac{{\sqrt {(u_{{p^{{\prime }} }} \, - \,u_{0} )^{2} \, + \,(v_{{p^{{\prime }} }} \, - \,v_{0} )^{2} } }}{f}))} \hfill \\ {d\, = \,r\, \times \,\cos (\arctan (\frac{{\sqrt {(x/dx)^{2} \, + \,(y/dy)^{2} } }}{f}))} \hfill \\ \end{array} $$
(9)
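Combining (5), (6) and (8), the per-pixel DOEC correction can be sketched as below. The intrinsic parameters (principal point \( (u_{0}, v_{0}) \) and focal length expressed in pixel units) are assumed example values, not calibration results from the paper.

```python
import math

def doec_correct(r, u, v, u0=160.0, v0=120.0, f_px=200.0):
    """Eq. (9): d = r * cos(arctan(|Ob p'| / f)).

    r       -- radial distance measured by the ToF camera at pixel (u, v)
    u0, v0  -- principal point (assumed example values)
    f_px    -- focal length in pixel units (assumed example value)
    """
    ob_p = math.hypot(u - u0, v - v0)  # Eq. (8): offset from image centre
    theta = math.atan(ob_p / f_px)     # Eq. (6): ray angle theta' = theta
    return r * math.cos(theta)         # Eq. (5): radial -> vertical distance

# The central pixel is unchanged; off-centre pixels are shortened toward
# the true vertical distance of the plane.
d_center = doec_correct(1.0, 160, 120)  # stays 1.0
d_corner = doec_correct(1.05, 300, 220)
```

In practice the correction is applied to every pixel of the depth map, so the two square-root and trigonometric terms can be precomputed once per pixel position and reused for every frame.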

Through the above steps, the distance overestimation is corrected; the final result is shown in Fig. 6. It can be seen that the depth data of points far from the optical center are corrected, and the measured data fluctuate within about \( \pm 2 \) cm of 1 m. This proves the effectiveness of the DOEC.

Fig. 6.
figure 6

Experiment result of DOEC

4 Experiment Results

By correcting the errors caused by the non-imaging principle and the error caused by the imaging principle, the accuracy of the depth data is greatly improved, and the depth data in the same plane are kept within a small fluctuation range. To verify the effectiveness of the above methods, a desktop and the objects on it are taken as the experimental subjects. By analyzing the range of the depth data, the plane of the desktop and the planes of the objects are separated.

Figure 7(b) is a grayscale image, and Fig. 7(a) is a depth map after correction of the non-imaging-principle errors. The overall image data after this correction are relatively stable, but due to the error caused by the imaging principle, the color radiates outward from the center of the depth image, gradually darkening from yellow to blue. This indicates that the depth data differ at different locations even though they lie in the same plane: the accuracy of the depth data is not high, the fluctuation range is large, and it is difficult to distinguish the target object from the background. To further improve the accuracy of the depth data, it is therefore necessary to correct the depth overestimation error caused by the imaging principle.

Fig. 7.
figure 7

Experiment process and results of errors correction (Color figure online)

After correcting the errors caused by the non-imaging principle and the error caused by the imaging principle, the corrected depth image is finally obtained, as shown in Fig. 7(c)–(e). To facilitate analysis and processing, this paper maps the data after the overestimation error correction to the range of 0–255, so that the corrected depth image is represented as a grayscale image, as shown in Fig. 7(c); different depth values correspond to different gray values, and the greater the depth, the greater the gray value. As seen in Fig. 7(c), the gray values on the same plane are relatively uniform after correction, indicating that the fluctuation range of the depth data on the same plane is small, so the background and the target object can be separated according to the depth range of each plane. Figure 7(d) shows the segmented background and Fig. 7(e) the segmented target object. Because the distance from the camera center to the desktop is greater than the distance to the object surface, the corrected gray value of the desktop is greater than that of the target object's surface.
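The 0–255 mapping described above can be sketched as a simple linear scaling; the depth range used in the example is an assumed illustration, not the measured desktop range from the experiment.

```python
def depth_to_gray(depth_m, d_min, d_max):
    """Linearly map a depth value (metres) to a gray level in [0, 255].

    Larger depth maps to a larger gray value, matching the paper's
    convention; values outside [d_min, d_max] are clamped.
    """
    depth_m = min(max(depth_m, d_min), d_max)
    return round(255 * (depth_m - d_min) / (d_max - d_min))

# Example with an assumed working range of 0.8-1.2 m: a desktop plane at
# the far end of the range maps brighter than an object surface above it.
gray_desktop = depth_to_gray(1.2, 0.8, 1.2)  # 255 (farthest, brightest)
gray_object = depth_to_gray(1.1, 0.8, 1.2)   # darker than the desktop
```

Segmentation then reduces to thresholding the gray image at values between the two planes' depth ranges.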

After changing the number and shape of the target objects, the final results are shown in Fig. 8. The background planes and the target objects are well separated, which proves that the two error correction methods are valid.

Fig. 8.
figure 8

Experiment process and results of errors correction

5 Conclusion

Due to the imaging characteristics and various external environmental factors, many kinds of errors exist in the depth data of a ToF camera. Starting from the imaging principle, this paper classifies the errors into two categories: errors caused by the non-imaging principle and errors caused by the imaging principle. To improve the efficiency of correcting the errors caused by the non-imaging principle, temperature changes, amplitude intensities and integration time are analyzed and corrected, multipath errors and other noises are compensated by applying offsets, and an evaluation function is proposed to determine the optimal reference distance, so that an appropriate integration time and global offset can be selected. For the error caused by the imaging principle, the Distance Overestimation Error Correction (DOEC) method based on the pinhole imaging principle is proposed to remove the radial distance error of the ToF depth camera, which helps limit the data on the same plane to within a certain range. Finally, objects of different heights in different planes were taken as experimental subjects, and the planes were segmented by depth data range. Even when the shapes of the objects are complicated, the background plane can be separated, which demonstrates that the correction methods proposed in this paper are effective.