
1 Introduction

Optical three-dimensional (3D) measurement methods, such as the structured light method [1] and phase measuring deflectometry [2], are playing an increasingly important role in modern manufacturing. In these methods, the phase-shifting technique [3] is used to determine the phase information, and the accuracy of the phase information directly determines the measurement accuracy. Generally, the quality of the phase-shift images (affected by noise, non-linear intensity and surface reflectivity), the number of phase-shift steps and the intensity modulation parameter are the main factors that affect phase accuracy. Increasing the number of phase-shift steps greatly reduces the measurement speed, and the adjustment range of the intensity modulation parameter is generally limited. Therefore, suppressing the phase error caused by low-quality fringe images is an effective way to improve the accuracy of phase recovery.

Filtering methods are used to suppress the noise in the captured fringe images. For instance, Gaussian filtering [4] is used to suppress the phase errors caused by Gaussian noise in the captured fringe images. In contrast to literature [4], a fuzzy quotient space-oriented partial differential equation filtering method is proposed in literature [5] to suppress Gaussian noise. Median filtering [6] is used to preprocess the captured images and filter out invalid data by masking. A wavelet denoising method and a Savitzky-Golay method are proposed in literature [7, 8] and literature [4], respectively. The captured images can also be converted to the frequency domain for filtering [9]; thereby the influence of noise is suppressed and the accuracy of phase recovery is improved. The gamma value of the light source [10, 11] is calibrated to correct the non-linear intensity. In literature [12], the measurement error caused by gamma is suppressed by a gamma calibration method expressed with Fourier series and the binomial theorem. A robust gamma calibration method based on a generic distorted fringe model is proposed in literature [13]. In literature [14], a gamma model is established to suppress the phase error by deriving the corresponding expression.

Multi-exposure and polarization techniques are applied to measure objects with changing reflectivity. For instance, the multi-exposure technique [15] is proposed to measure objects with high reflectivity; a reference image with middle exposure is selected and used for slight adjustment of the primary fused image. High signal-to-noise-ratio fringe images [16, 17] are fused from raw fringe images with different exposures by selecting the pixels with the highest modulated fringe brightness. In literature [18], high-dynamic-range fringe images are acquired by recursively controlling the intensity of the projection pattern at the pixel level based on feedback from the reflected images captured by the camera; the absolute phase is then recovered from the captured high-dynamic-range fringe images by the multi-exposure technique. A spatially distributed polarization state [19] is proposed to measure objects with high-contrast reflectivity: the degree of linear polarization (DOLP) is estimated, the target is selected by DOLP, and finally the selected target is reconstructed. Polarization coding can also be applied to target-enhanced depth sensing in ambient light [20]. However, the polarization technique is generally not suitable for the measurement of complex objects.

In general, the factors (noise, non-linear intensity and surface reflectance changes) that affect the quality of the captured phase-shift fringe images act together and are not isolated. When the noise is suppressed by a filtering method, the filtering result is distorted by the interference of surface reflectivity. And the multi-exposure techniques [15,16,17,18] are limited in measurement speed by the large number of projected images they require. In order to improve the phase accuracy recovered from low-quality fringe images (affected by noise, non-linear intensity and surface reflectance changes), an iterative Gaussian filter method is proposed. The main approach is to regenerate the fringe images from the wrapped phase and perform iterative Gaussian filtering. The proposed iterative Gaussian filter method can filter the noise without interference from reflectivity, improve the measurement accuracy and recover the wrapped phase information from low-quality fringe images.

2 Principle of Iterative Phase Correction Method

For optical 3D measurement methods, the standard phase-shift fringe technique is widely used because of its good information fidelity, simple calculation and high accuracy of information restoration. A standard \( N \)-step phase-shift algorithm [21] with a phase shift of \( 2\pi /N \) is expressed as

$$ I_{n} (x,y) = A(x,y) + B(x,y)\cos [\phi (x,y) + \frac{(n - 1)}{N}2\pi ],\;n = 1,2, \cdots N, $$
(1)

where \( A(x,y) \) is the average intensity, \( B(x,y) \) is the intensity modulation, \( N \) is the number of phase-shift steps, and \( \phi (x,y) \) is the wrapped phase to be solved for. \( \phi (x,y) \) can be calculated from Eq. (1) as

$$ \phi (x,y) = \arctan \left\{ {\frac{{\sum\limits_{n = 1}^{N} {I_{n} \sin \left( {\frac{2\pi (n - 1)}{N}} \right)} }}{{\sum\limits_{n = 1}^{N} {I_{n} \cos \left( {\frac{2\pi (n - 1)}{N}} \right)} }}} \right\}. $$
(2)
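As a concrete illustration, Eqs. (1) and (2) can be sketched with NumPy. This is a minimal sketch, not the authors' code; the image size and intensity values are illustrative, `np.arctan2` is used so the arctangent is quadrant-correct, and the synthetic fringes use the \( \cos (\phi - 2\pi (n - 1)/N) \) sign convention, under which the decoder of Eq. (2) recovers \( \phi \) directly.

```python
import numpy as np

def wrapped_phase(fringes):
    # Eq. (2): arctangent of the sine- and cosine-weighted intensity sums.
    N = fringes.shape[0]
    delta = 2.0 * np.pi * np.arange(N) / N
    s = np.tensordot(np.sin(delta), fringes, axes=1)   # sum_n I_n sin(delta_n)
    c = np.tensordot(np.cos(delta), fringes, axes=1)   # sum_n I_n cos(delta_n)
    return np.arctan2(s, c)                            # quadrant-correct arctan

# Ideal 4-step fringes with a known phase ramp (cos(phi - delta_n) convention).
H, W, N = 8, 16, 4
phi = np.linspace(-3.0, 3.0, W) * np.ones((H, 1))      # true wrapped phase
delta = 2.0 * np.pi * np.arange(N) / N
I = 128.0 + 100.0 * np.cos(phi[None] - delta[:, None, None])
phi_hat = wrapped_phase(I)                             # recovers phi
```

On noise-free fringes the recovered `phi_hat` matches the true phase to floating-point precision.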

The phase error in the wrapped phase (\( \phi \)) is propagated from the intensity (\( I_{n} \)). The relationship between the intensity standard deviation \( \sigma_{{I_{n} }} \) and the phase standard deviation \( \sigma_{\phi } \) is obtained from the principle of error propagation and described as

$$ \sigma_{\phi }^{2} = \sum\limits_{n = 1}^{N} {\left[ {\left( {\frac{\partial \phi }{{\partial I_{n} }}} \right)^{2} \sigma_{{I_{n} }}^{2} } \right]} . $$
(3)

The partial derivative \( \frac{\partial \phi }{{\partial I_{n} }} \) of \( \phi \) with respect to \( I_{n} \) is derived from Eq. (2):

$$ \begin{aligned} \frac{\partial \phi }{{\partial I_{n} }} & = \frac{1}{{1 + \left( {\frac{\sin (\phi )}{\cos (\phi )}} \right)^{2} }}\left\{ {\frac{{\left[ {\sum\limits_{n = 1}^{N} {I_{n} \cos \left( {\frac{2\pi (n - 1)}{N}} \right)} } \right]\sin \left( {\frac{2\pi (n - 1)}{N}} \right) - \left[ {\sum\limits_{n = 1}^{N} {I_{n} \sin \left( {\frac{2\pi (n - 1)}{N}} \right)} } \right]\cos \left( {\frac{2\pi (n - 1)}{N}} \right)}}{{\left[ {\sum\limits_{n = 1}^{N} {I_{n} \cos \left( {\frac{2\pi (n - 1)}{N}} \right)} } \right]^{2} }}} \right\} \\ & = \frac{1}{{1 + \left( {\frac{\sin (\phi )}{\cos (\phi )}} \right)^{2} }}\left\{ {\frac{{2\left[ {\cos (\phi )\sin \left( {\frac{2\pi (n - 1)}{N}} \right) - \sin (\phi )\cos \left( {\frac{2\pi (n - 1)}{N}} \right)} \right]}}{{NB\cos^{2} (\phi )}}} \right\} \\ & = - \frac{2}{NB}\sin \left( {\phi - \frac{2\pi (n - 1)}{N}} \right). \\ \end{aligned} $$
(4)

The relationship between the wrapped phase and the intensity is expressed as follows.

$$ \left\{ {\begin{array}{*{20}c} {\sum\limits_{n = 1}^{N} {I_{n} \sin \left( {\frac{2\pi (n - 1)}{N}} \right)} = \frac{NB}{2}\sin (\phi ),} \\ {\sum\limits_{n = 1}^{N} {I_{n} \cos \left( {\frac{2\pi (n - 1)}{N}} \right)} = \frac{NB}{2}\cos (\phi ).} \\ \end{array} } \right. $$
(5)

In addition, the influence of the error sources is assumed to be the same for all \( N \) images, therefore:

$$ \sigma_{{I_{1} }} = \sigma_{{I_{2} }} = \cdots = \sigma_{{I_{N} }} = \sigma_{I} . $$
(6)

In summary, substituting Eqs. (4) and (6) into Eq. (3), the relationship between \( \sigma_{I} \) and \( \sigma_{\phi } \) is obtained as

$$ \sigma_{\phi } = \sqrt {\frac{2}{N}} \cdot \frac{{\sigma_{I} }}{B}. $$
(7)
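Eq. (7) can be sanity-checked with a quick Monte Carlo simulation. The parameter values below are illustrative, not from the paper, and the fringes again use the \( \cos (\phi - 2\pi (n - 1)/N) \) convention so that the decoder of Eq. (2) applies directly.

```python
import numpy as np

rng = np.random.default_rng(0)
N, A, B, sigma_I = 4, 128.0, 100.0, 2.0     # illustrative parameters
phi = 1.0                                    # a fixed true phase value (rad)
delta = 2.0 * np.pi * np.arange(N) / N
trials = 200_000

# Noisy intensities I_n = A + B cos(phi - delta_n) + Gaussian noise.
I = A + B * np.cos(phi - delta) + rng.normal(0.0, sigma_I, (trials, N))
s = I @ np.sin(delta)
c = I @ np.cos(delta)
phi_hat = np.arctan2(s, c)

measured = phi_hat.std()                     # empirical sigma_phi
predicted = np.sqrt(2.0 / N) * sigma_I / B   # Eq. (7)
```

With these values Eq. (7) predicts \( \sigma_{\phi } \approx 0.0141 \) rad, and the empirical standard deviation agrees to within a few percent.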

As can be seen from the above, reducing the errors in the phase-shift images, increasing the number of phase-shift steps and improving the intensity modulation \( B \) are the main ways to reduce the phase error in the wrapped phase. Increasing the number of phase-shift steps reduces the measurement speed, and the intensity modulation parameter is limited by the measurement system (such as the camera and projector) and the properties of the measured objects (such as reflectivity). Therefore, suppressing the phase error caused by image errors is a feasible way to improve the phase accuracy. Image noise (\( noise_{n} (x,y) \)), the non-linear intensity of the light source (gamma, \( \gamma \)) and surface reflectivity changes (\( r(x,y) \)) are the main factors that degrade the quality of the captured fringe images:

$$ I_{n}^{c} (x,y) = r(x,y)\left[ {I_{n} (x,y)} \right]^{\gamma } + noise_{n} (x,y). $$
(8)

The noise of the fringe images generally follows a Gaussian distribution, so the noise in the captured fringe images can be suppressed by Gaussian filtering, and the gamma value can be calibrated to suppress the phase errors caused by the non-linear intensity. However, because the reflectivity of the object surface is non-uniform, the effects of noise and non-linear intensity on the pixels in the imaging area are also non-uniform (Fig. 1). Gaussian filtering performs a convolution over the filter area, so for objects with uneven reflectivity the filtering effect is degraded. Therefore, the conventional Gaussian filter method is limited in improving the wrapped-phase accuracy for objects with uneven reflectivity (Fig. 1).
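The degradation model of Eq. (8) can be simulated directly. This is a minimal sketch; the gamma value, reflectivity range, noise level and image size below are illustrative assumptions, with intensities normalized to [0, 1] so that the power law is well behaved.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, N = 32, 32, 4
A, B, gamma = 0.5, 0.4, 2.2                  # normalized intensity; assumed gamma
phi = np.linspace(0.0, 4.0 * np.pi, W) * np.ones((H, 1))
delta = 2.0 * np.pi * np.arange(N) / N

ideal = A + B * np.cos(phi[None] + delta[:, None, None])   # Eq. (1), in [0.1, 0.9]
r = 0.2 + 0.8 * rng.random((H, W))                         # uneven reflectivity r(x, y)
noise = rng.normal(0.0, 0.01, (N, H, W))                   # Gaussian image noise
captured = r[None] * ideal**gamma + noise                  # Eq. (8)
```

The stack `captured` then plays the role of \( I_{n}^{c} (x,y) \): each ideal fringe is gamma-distorted, scaled by the pixel-wise reflectivity and corrupted by noise.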

Fig. 1.
figure 1

Influence of uneven reflectivity on filtering effect

Hence, an iterative Gaussian filter method is proposed to improve the accuracy of the wrapped phase recovered from the phase-shift images (Fig. 2).

Fig. 2.
figure 2

Iterative Gaussian filter method

Step 1: when the projected fringe pattern is \( I_{n} \), the image captured by the camera is \( I_{n}^{c} \); the initial wrapped phase is calculated as \( \phi^{c} = \arctan \left\{ {\frac{{\sum\limits_{n = 1}^{N} {I_{n}^{c} \sin \left( {\frac{2\pi (n - 1)}{N}} \right)} }}{{\sum\limits_{n = 1}^{N} {I_{n}^{c} \cos \left( {\frac{2\pi (n - 1)}{N}} \right)} }}} \right\},n = 1,2, \cdots N \);

Step 2: the wrapped phase \( \phi^{c} \) is projected back to the phase-shift fringe image space, and \( I_{n}^{g} \) is generated: \( I_{n}^{g} (x,y) = A(x,y) + B(x,y)\cos [\phi^{c} (x,y) + \frac{(n - 1)}{N}2\pi ] \);

Step 3: \( {}^{F}I_{n}^{g} \) is obtained by filtering the images \( I_{n}^{g} \) with a Gaussian filter, and the wrapped phase \( {}^{F}\phi^{c} \) is recalculated according to step 1;

Step 4: repeat steps 2 and 3 until the phase difference between \( {}^{F}\phi^{c} \) and \( \phi^{c} \) is less than the set threshold \( T \).

It is worth noting that for objects with uniform reflectivity, Gaussian filtering improves the quality of the fringes without changing their sinusoidal form. This is the theoretical premise that allows step 3 to improve the accuracy of phase recovery by projecting the phase (with errors) into the fringe image space and performing Gaussian filtering.

3 Application-Binocular Structured Light

Phase shift is a key technique in optical measurement methods and is applied in phase measuring deflectometry and structured light. The accuracy of phase recovery directly determines the accuracy of 3D measurement. The performance of the proposed iterative Gaussian filter method is verified on a binocular structured light system [22], as shown in Fig. 3.

Fig. 3.
figure 3

The measurement principle of binocular structured light

3.1 Measurement Principle of Binocular Structured Light

The binocular structured light system is composed of two cameras (camera 1, camera 2) and a projector. The light projected by the projector is reflected by the object point \( w \) and imaged on pixel \( p_{1} \) of camera 1 and pixel \( p_{2} \) of camera 2. \( {}^{{c_{1} }}{\mathbf{R}}_{{c_{2} }} \), \( {}^{{c_{1} }}{\mathbf{T}}_{{c_{2} }} \) describe the pose relationship between the two cameras. \( {\mathbf{K}}_{1} \) and \( {\mathbf{K}}_{2} \) are the intrinsic parameters of camera 1 and camera 2, respectively. The coordinates of the object point \( w \) are \( {}^{{c_{1} }}X_{w} \) and \( {}^{{c_{2} }}X_{w} \) in the coordinate systems of camera 1 and camera 2, respectively. The coordinates of the object point \( w \) are determined from the reflected ray \( r_{1} \) defined by \( p_{1} \) and the reflected ray \( r_{2} \) defined by \( p_{2} \). The correspondence between \( p_{1} \) and \( p_{2} \) is determined from the absolute phases calculated from the captured fringe images.

$$ \left\{ {\begin{array}{*{20}c} {\begin{array}{*{20}c} {{}^{{c_{1} }}X_{w} = k_{1} \cdot r_{1} ,} & {r_{1} = {\mathbf{K}}_{1}^{ - 1} \cdot \left[ {\begin{array}{*{20}c} {p_{1} } & 1 \\ \end{array} } \right]^{T} ,} \\ \end{array} } \\ {\begin{array}{*{20}c} {{}^{{c_{2} }}X_{w} = k_{2} \cdot r_{2} ,} & {r_{2} = {\mathbf{K}}_{2}^{ - 1} \cdot \left[ {\begin{array}{*{20}c} {p_{2} } & 1 \\ \end{array} } \right]^{T} ,} \\ \end{array} } \\ {{}^{{c_{1} }}X_{w} = {}^{{c_{1} }}{\mathbf{R}}_{{c_{2} }} \cdot {}^{{c_{2} }}X_{w} + {}^{{c_{1} }}{\mathbf{T}}_{{c_{2} }} .} \\ \end{array} } \right. $$
(9)

The coefficient parameters \( k_{1} \) and \( k_{2} \) are determined from Eq. (9). The camera intrinsic parameters \( {\mathbf{K}}_{1} \), \( {\mathbf{K}}_{2} \) and the pose relationship \( {}^{{c_{1} }}{\mathbf{R}}_{{c_{2} }} \), \( {}^{{c_{1} }}{\mathbf{T}}_{{c_{2} }} \) are calibrated by the iterative calibration method [23].
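A least-squares solution of Eq. (9) can be sketched as follows. This is an illustration under an assumed pinhole model with made-up intrinsics and pose (not the calibrated values of Table 1); stacking the two ray equations gives \( [r_{1} , - {\mathbf{R}}r_{2} ][k_{1} ,k_{2} ]^{T} = {\mathbf{T}} \), and averaging the two ray estimates is a design choice of this sketch.

```python
import numpy as np

def triangulate(p1, p2, K1, K2, R, T):
    """Solve Eq. (9) for k1, k2 in the least-squares sense; return c1Xw."""
    r1 = np.linalg.inv(K1) @ np.array([p1[0], p1[1], 1.0])
    r2 = np.linalg.inv(K2) @ np.array([p2[0], p2[1], 1.0])
    A = np.column_stack([r1, -(R @ r2)])        # [r1, -R r2] [k1, k2]^T = T
    k, *_ = np.linalg.lstsq(A, T, rcond=None)
    X1 = k[0] * r1                              # point from the camera-1 ray
    X2 = R @ (k[1] * r2) + T                    # camera-2 ray mapped to frame 1
    return 0.5 * (X1 + X2)

# Synthetic sanity check with made-up intrinsics and pose.
K1 = K2 = np.array([[1000.0, 0.0, 640.0],
                    [0.0, 1000.0, 512.0],
                    [0.0, 0.0, 1.0]])
R = np.eye(3)
T = np.array([-100.0, 0.0, 0.0])
Xw = np.array([50.0, 30.0, 800.0])              # point in the camera-1 frame
proj = lambda K, X: (K @ X / X[2])[:2]          # pinhole projection
X2cam = R.T @ (Xw - T)                          # same point in the camera-2 frame
p1, p2 = proj(K1, Xw), proj(K2, X2cam)
X_hat = triangulate(p1, p2, K1, K2, R, T)
```

With consistent (noise-free) projections the least-squares system is exact and the reconstructed point matches the ground truth.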

3.2 Measurement Experiments

The binocular structured light system is constructed with two cameras (resolution: 1280 × 1024 pixels) with 12-mm lenses and a laser projector (resolution: 1280 × 1024 pixels). The multi-frequency heterodyne method is used to determine the phase order of the wrapped phase, with periods chosen as (28, 26, 24). Before measurement, the system parameters of the binocular structured light system are calibrated. During the calibration process, a calibration board (chessboard) at different positions is captured by the two cameras. The iterative calibration method is applied to determine the intrinsic parameters of the two cameras and their pose relationship, as shown in Table 1. The pose relationship (camera 2 relative to camera 1) is \( {}^{{c_{1} }}{\mathbf{R}}_{{c_{2} }} \), \( {}^{{c_{1} }}{\mathbf{T}}_{{c_{2} }} \). To verify the performance of the proposed iterative phase correction technique, the non-linear parameter (gamma) of the projector is not calibrated.

Table 1. The intrinsic parameters of camera 1 and camera 2
$$ \left\{ {\begin{array}{*{20}c} {{}^{{c_{1} }}{\mathbf{R}}_{{c_{2} }} = \left[ {\begin{array}{*{20}c} {0.9044} & { - 0.0120} & {0.4263} \\ {0.0106} & {0.9999} & {0.0056} \\ { - 0.4264} & { - 0.0005} & {0.9045} \\ \end{array} } \right]} \\ {{}^{{c_{1} }}{\mathbf{T}}_{{c_{2} }} = \left[ {\begin{array}{*{20}c} { - 118.6958} & { - 0.7826} & {26.1722} \\ \end{array} } \right]} \\ \end{array} } \right. $$

A Φ38.092-mm standard sphere with a surface accuracy of 0.5 μm is measured to verify the proposed iterative Gaussian filter method. As shown in Fig. 4, the absolute phase obtained by direct Gaussian filtering is less accurate than the absolute phase recovered by the proposed iterative Gaussian filter method.

Fig. 4.
figure 4

The absolute phase decoded from the captured image with Gaussian filtering and iterative phase correction. (a): The phase shift fringe image captured by camera 2; (b): image obtained by applying the Gaussian filter to (a); (c): \( {}^{F}I_{n}^{g} (x,y) \) determined from the absolute phase; (d): the absolute phase calculated from (a); (e): the absolute phase calculated by the iterative Gaussian filter method from (c); (f): the absolute phases along the blue line in (d) and the red line in (e), respectively; (g): the enlarged view of the red box in (f). (Color figure online)

Figure 5 shows the measurement results of the standard sphere. The error of the point cloud reconstructed with direct Gaussian filtering is shown in Fig. 5(c): the maximum reconstruction error is 0.13 mm and the RMS deviation is 0.0247 mm. The error of the point cloud reconstructed with the proposed iterative Gaussian filter method is shown in Fig. 5(d): the largest errors are around 0.04 mm and the RMS deviation is 0.0094 mm.

Fig. 5.
figure 5

The measurement results of standard spherical. (a): the point cloud determined from the absolute phase calculated from the captured images with direct Gaussian filtering; (b): the point cloud determined from the absolute phase calculated from the captured images with iterative phase correction; (c): the reconstruction error of (a); (d): the reconstruction error of (b).

The conventional direct filtering method fails when reconstructing the 3D information of objects whose reflectivity changes drastically (for instance, reflectivity that is too low or too high), as shown in Figs. 6(b), 7(b). The proposed iterative Gaussian filter method is not sensitive to surface reflectivity; therefore, effective measurement results can still be obtained on surfaces whose reflectance changes drastically, as shown in Figs. 6(d), 7(c). The error of the point cloud reconstructed with direct Gaussian filtering is shown in Fig. 6(c): the maximum reconstruction error is 0.48 mm and the root-mean-square (RMS) deviation is 0.042 mm. The error of the point cloud reconstructed with the proposed iterative Gaussian filter method is shown in Fig. 6(e): the largest errors are around 0.13 mm and the RMS deviation is 0.024 mm. When the surface reflectivity of the object decreases, the intensity of the captured phase-shift fringe images also decreases; when the reflectivity is very low, it is difficult to recover an effective phase from the captured fringe images by conventional methods.

Fig. 6.
figure 6

The measurement results of table tennis (low reflectivity in localized areas). (a): the captured phase shift fringes; (b): the surface determined from (a) with direct Gaussian filtering; (c): the reconstruction error of (b); (d): the surface determined from (a) with iterative phase correction; (e): the reconstruction error of (d).

Fig. 7.
figure 7

The measurement results of box (low reflectivity in localized areas). (a): the captured phase shift fringes; (b): the surface determined from (a) with direct Gaussian filtering; (c): the surface determined from (a) with iterative phase correction

The experimental results (Fig. 7) show that the wrapped phase information can be recovered from low-quality phase-shift fringe images by the iterative Gaussian filter method, and the measurement accuracy is thereby improved. Compared with conventional methods, the proposed iterative Gaussian filter method can filter the noise without interference from reflectivity, improve the measurement accuracy and recover the wrapped phase information from low-quality fringe images (which is difficult with conventional methods). Moreover, compared with the multi-exposure technique, objects with drastically changing surface reflectivity can be reconstructed without projecting additional phase-shift fringe images.

4 Conclusions

An iterative Gaussian filter method is proposed to recover the wrapped phase information. The overall approach is to regenerate the fringe images from the wrapped phase and perform iterative Gaussian filtering, so that the phase errors caused by low-quality fringe images are effectively suppressed. Therefore, the proposed iterative Gaussian filter method can be applied to the structured light method to improve the measurement accuracy. In particular, the proposed method can recover the phase of objects with large changes in reflectivity, which is very difficult with conventional methods. The effectiveness of the proposed method is verified by experiments.