In many areas of technology, measurements of linear dimensions and displacements account for as much as 95% of all measurements of controlled parameters [1]. The results of two-dimensional (2D) and three-dimensional (3D) measurements are widely used in physical and engineering modeling of spatially complex objects.

The methods considered in [2, 3] are the most promising for contactless measurement of the surface profile of three-dimensional objects. These methods are based on illumination by structurized light and observation of the object under investigation from a direction that differs from the direction of illumination. The observed 2D image, representing the spatial distribution of the intensity of the light scattered by the object, contains distortions that encode information about the third coordinate [4–6]. The error of 3D measurements depends on the precision with which the structurized illumination and the light-scattering properties of the surfaces of the object are recorded [7]. The algorithms used to reconstruct the profile are generally adapted to a particular class of objects with known light-scattering properties of the surface.

An image of parallel half-tone bands whose intensity varies in the transverse direction according to a periodic law is used as the structurized illumination in [2]. In the recording process, the parallel half-tone bands acquire spatial distortions caused by the surface relief of the test object. Information about the relief is obtained by analyzing the phase of the intensity distribution function of the recorded image; the analyzed images of structurized illumination are therefore referred to as “phase images.” The basic advantage of the method of phase triangulation is its resistance to defocusing of the projected and detected optical images: a high degree of precision is achieved in measurements of the surface profile of a three-dimensional object even when the relief exceeds the depth of field of the optical system. Since phase images are similar to interference patterns, they are often processed by means of well-known methods used to process interference patterns.

A substantial increase in the quality and precision of 3D measurements may be achieved with the use of the method of phase steps [8]. The method is based on successive generation of several realizations of structurized illumination and analysis of the resulting set of images. Phase information is recorded by successively introducing a specified shift between neighboring phase images. A number of different approaches are used to reconstruct the relief of a surface by means of this method. If the phase shift varies according to a linear law, the method of expansion in a Fourier series [9] or orthogonality relationships between trigonometric functions [7] is used. A generalized algorithm is known for identification of phase images with arbitrary step shifts on the basis of a vector representation of the resulting system of equations [10]. The desired phase distribution may be found by means of this algorithm, but the algorithm does not assure minimization of the error when noise is present in the phase patterns. There also exists a noise-resistant method of identification of phase images with arbitrary step shift [11], based on the method of least squares, which may be used to obtain the desired phase distribution with minimal error [12].
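
For the linear-step case, the orthogonality relationships reduce phase extraction to two weighted sums over the recorded frames. The following Python sketch of this classical N-step estimator is illustrative only (it assumes uniform shifts δ_i = 2πi/N, N ≥ 3, and frames stacked in a NumPy array; it is not code from [7, 9]):

```python
import numpy as np

def phase_from_uniform_steps(frames):
    """Classical N-step phase extraction for uniform shifts d_i = 2*pi*i/N.

    frames: array of shape (N, H, W) holding the recorded images I_i;
    requires N >= 3. Returns the wrapped phase in (-pi, pi] at every pixel.
    """
    n = frames.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n             # linear phase steps
    s = np.tensordot(np.sin(deltas), frames, axes=1)  # sum_i I_i sin d_i
    c = np.tensordot(np.cos(deltas), frames, axes=1)  # sum_i I_i cos d_i
    return -np.arctan2(s, c)  # for I_i ~ A(1 + V cos(phi + d_i))
```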

The advantage of methods based on structurized illumination lies in their scalability. If the source of structurized illumination is sufficiently stable, an increase in the geometric dimensions of the measurement space is achieved by moving the elements of the measurement system apart to distances that make it possible to illuminate and record the entire required volume. A similar effect may be achieved by means of wide-angle optical systems for the source and detector of the radiation. The use of methods of structurized illumination for large objects with linear dimensions that far exceed the dimensions of the optical elements of the measurement system is nevertheless limited by difficulties with calibration. At great distances, the optical elements of the detector and source of the radiation unavoidably produce substantial nonlinear distortions in the optical system which are difficult to formalize [13]. The classical solution of the calibration problem is based on compensation of the nonlinear distortions of the optical elements through measurement of a calibration target with well-known geometric parameters. However, for large objects a calibration target with linear dimensions greater than several meters either possesses very great mass or has an insufficiently rigid construction, which complicates the calibration procedure and makes it difficult to implement under industrial conditions.

The objective of the present article is to develop methods of measurement of three-dimensional geometry on the basis of structurized illumination that achieve high-precision measurements of large complexly shaped objects. The novelty of the study lies in a universal method of calibration of the measurement system (with the use of neural networks), which ensures compensation of nonlinear distortions of the optical elements and, ultimately, a low error at a level of 0.1% of the range of measurements. An analytical estimate of the error of measurements performed by the phase-triangulation method, taking into account all stages of the measurement process, is also derived.

Method of Measurements. The optoelectronic method of contactless measurement is based on phase triangulation. The device that implements the method (Fig. 1) comprises an illuminator (a source of structurized illumination) and a photodetector spatially displaced relative to the illuminator. The source and detector are connected to a computer which controls the measurements and processes the data.

Fig. 1. Functional diagram of the measurement complex: 1, 3) source and detector of illumination; 2) test object; PC – personal computer; DB – database.

The surface profile of large complexly shaped three-dimensional objects is measured in the following way. The object is successively illuminated by projected structurized images (a series of frames with images of parallel half-tone bands). The intensity of the projected bands varies in the transverse direction according to a periodic law, with a linear shift of the initial phase from frame to frame.

Images of the object recorded from a direction differing from the direction of illumination contain distortions of the structurized images that encode information about the surface profile. The dependence of the intensity on the ordinal number of the frame is reconstructed for each point in the image of the controlled object, and the initial phase of the projected periodic signal is calculated from this dependence. The Cartesian coordinates of the point in space are determined on the basis of the local value of the phase, the coordinates of the point in the image, and the calibration coefficients. The set of coordinates of all the points corresponds to the desired surface profile of the three-dimensional object.

Processing of Phase Images. This step begins with a determination of the distribution of intensity in the structurized (phase) images:

$$ I_i(x, y) = A(x, y)\left(1 + V(x, y)\cos\left(\varphi(x, y) + \delta_i\right)\right), \qquad i = 0, \dots, N - 1, $$
(1)

where I_i(x, y) is the distribution of the intensity in the ith image of the controlled object; A(x, y), the distribution of the background intensity; V(x, y), the visibility (contrast) of the bands; φ(x, y), the distribution of the phase difference, which encodes information about the distance to the object; δ_i, the introduced phase shift between neighboring realizations of the structurized illumination; and N, the number of shifts.
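
For illustration, model (1) is straightforward to simulate; in the following sketch the values of A, V, and the test phase distribution are arbitrary assumptions, not parameters from the article:

```python
import numpy as np

def simulate_phase_images(phi, n_frames=8, a=128.0, v=0.8):
    """Simulate the stack of phase images of Eq. (1).

    phi: (H, W) phase distribution encoding the relief (illustrative).
    Returns I of shape (n_frames, H, W) with linear steps d_i = 2*pi*i/N.
    """
    deltas = 2 * np.pi * np.arange(n_frames) / n_frames
    return a * (1 + v * np.cos(phi[None] + deltas[:, None, None]))

# Example: a tilted-plane "relief" observed through 8 phase images
phi = np.fromfunction(lambda y, x: 0.02 * x, (240, 320))
frames = simulate_phase_images(phi)
```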

To reconstruct the distribution φ(x, y), we use a stable method of identification of phase images with arbitrary step shift that assures minimization of the error in the determination of the phase in the presence of noise. We represent relationship (1) in the form

$$ I_i = A + B\sin\delta_i + C\cos\delta_i; \qquad \varphi = -\arctan(B/C); \qquad V = \sqrt{B^2 + C^2}\,/A, $$
(2)

where the coefficients A, B, and C are determined from a condition that requires minimization of the discrepancy functional between the experimental and theoretical data:

$$ S(A, B, C) = \sum_{i=0}^{N-1} \left( I_i - A - B\sin\delta_i - C\cos\delta_i \right)^2, $$

i.e., with all partial derivatives equal to zero,

$$ \partial S/\partial A=\partial S/\partial B=\partial S/\partial C=0. $$

Thus, a system of linear equations is obtained, and the coefficients A, B, and C are found once the system is solved. The desired value of the phase φ is determined from (2).
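
A compact NumPy sketch of this least-squares identification follows; it assumes the shifts δ_i are known but not necessarily uniform, and the vectorized lstsq call is an implementation choice equivalent to solving the normal equations above:

```python
import numpy as np

def phase_least_squares(frames, deltas):
    """Identify phase images with arbitrary known step shifts.

    frames: (N, H, W) stack of images I_i; deltas: (N,) shifts d_i
    (at least 3 distinct shifts are needed for a full-rank fit).
    Fits I_i = A + B sin(d_i) + C cos(d_i) per pixel and returns the
    phase and visibility of Eq. (2).
    """
    n, h, w = frames.shape
    m = np.column_stack([np.ones(n), np.sin(deltas), np.cos(deltas)])
    coef, *_ = np.linalg.lstsq(m, frames.reshape(n, -1), rcond=None)
    a, b, c = coef                                  # rows: A, B, C
    phi = -np.arctan2(b, c).reshape(h, w)           # phi = -arctan(B/C)
    vis = (np.sqrt(b**2 + c**2) / a).reshape(h, w)  # V = sqrt(B^2+C^2)/A
    return phi, vis
```

Applied to the noise-free simulated frames above, this recovers the wrapped test phase to within floating-point precision.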

The method of identification of phase images is stable in the presence of substantial variations in the light-scattering properties of the surface of a measured object with complex 3D geometry. The method is contactless and may be effectively implemented on the basis of a detecting camera with limited dynamic range of the intensity.

Calibration in Measuring Circuits with Structurized Illumination. To a significant extent, such calibration defines the attainable precision. When measuring the profile of a surface, it is necessary to know the optical parameters of the detector and the source of the radiation in the global coordinate system. The process of calibration consists in calculating the parameters of the mapping of the 2D coordinates of the optical source into the spatial 3D coordinates of the measured object (world coordinates), and the parameters of the mapping of the world 3D coordinates of the measured object into the 2D coordinates of the optical radiation detector. Once the coordinates of a point of the measured object in the image of the optical radiation detector and its coordinates in a plane perpendicular to the optical axis of the source of the optical radiation have been obtained, the 3D coordinates of the point in the world coordinate system may be uniquely determined [14]. Unlike calibration using robotized arms or automatic trolleys [15], this type of calibration is a static process. It is performed only once in order to determine the relative position of the source and the detector of the optical radiation.

Calibration was performed by means of a calibration target situated at different points of the measurement space. Measurements of the phase shift and the coordinates of the target in images of the optical radiation detector were performed. A regression function of the dependence of the 3D coordinates (x, y, z) in the global coordinate system on the phase shift and coordinates of the target in the image was created on the basis of the obtained data:

$$ (x, y, z) = F(x_{\mathrm{d}}, y_{\mathrm{d}}, P_{\mathrm{s}}), $$
(3)

where x_d, y_d are the coordinates of a point in an image obtained by the optical radiation detector, and P_s is the phase shift of the signal generated by the source of optical radiation.

We used a neural network with topology 3–X–3, i.e., three inputs, three outputs, and a controlled number of perceptrons in the intermediate layer. The neural network was trained by the error backpropagation algorithm. Figure 2 presents the dependence of the standard deviation of the measured value of the coordinates on the specified number of perceptrons M in the intermediate layer of the network. The training sample comprised data obtained from 45 measurements of the calibration target. It was established that the use of 10 perceptrons in the intermediate layer (neural network topology 3–10–3) is optimal. If there are fewer perceptrons, the network does not have the capacity needed to adapt to the input data; if there are more, overfitting appears: the excessive number of perceptrons yields additional degrees of freedom in the course of training and, as a consequence, leads to an increase in the total error in the operation of the neural network.
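
A sketch of such a calibration regression is given below, using scikit-learn's MLPRegressor as a stand-in for the backpropagation-trained 3–10–3 network described above; the calibration arrays and file names are placeholders, not data from the article:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder calibration data: rows (x_d, y_d, P_s) measured for the
# target and the corresponding known world coordinates (x, y, z).
inputs = np.load("calibration_inputs.npy")   # shape (n_points, 3)
world = np.load("calibration_world.npy")     # shape (n_points, 3)

# One hidden layer of 10 perceptrons reproduces the 3-10-3 topology.
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=5000)
net.fit(inputs, world)

# net.predict now acts as the regression function F of Eq. (3).
rmse = np.sqrt(np.mean((net.predict(inputs) - world) ** 2, axis=0))
```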

Fig. 2. Standard deviation as a function of the number of perceptrons M in the neural network: 1, 2, 3) on the X, Y, and Z axes, respectively.

Measurement of the profile of the controlled object was performed in two stages: in the first stage, the phase shift was determined for each point in the image of the controlled object, while in the second, the Cartesian coordinates in the global coordinate system were calculated for each point using the regression calibration function that had been obtained.
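
Combining the sketches above, the two-stage computation might look as follows; phase unwrapping and the conversion of the wrapped phase to the projector phase shift P_s are omitted for brevity, which is a simplifying assumption rather than the article's full procedure:

```python
import numpy as np

def reconstruct_profile(frames, deltas, net):
    """Two-stage reconstruction: phase per pixel, then calibrated (x, y, z)."""
    phi, _ = phase_least_squares(frames, deltas)  # stage 1: phase shift
    h, w = phi.shape
    yd, xd = np.mgrid[0:h, 0:w]                   # detector coordinates
    feats = np.column_stack([xd.ravel(), yd.ravel(), phi.ravel()])
    return net.predict(feats).reshape(h, w, 3)    # stage 2: world coords
```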

Analytical Estimate of Measurement Error. An analytical estimate of the error in the determination of the phase φ(x, y) in the images is presented in [12]. The estimate was obtained under the assumption that the inaccuracy of measurements of the intensity I at a point is the basic source of the error. Moreover, it was assumed that the error in the determination of the phase φ(x, y) by the stable method of identification of phase images is no greater than in the case of a linear distribution of the initial phase shifts δ_i over the interval [0, 2π]. The relative error in the measurement of the phase φ(x, y) is estimated as

$$ \theta = \Delta I/\left(I\sqrt{N}\right), $$

where ΔI/I is the relative error in the measurement of the intensity of optical radiation detected by the detector, ΔI/I = 2^{1−b}; and b is the number of bits by means of which the intensity of the color of the source of optical radiation is encoded.
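For example, with the parameter values used in the experiments described below (b = 8 and N = 200), θ = 2^{−7}/√200 ≈ 5.5·10^{−4}, i.e., about 0.055%.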

The Cartesian coordinates of points on the surface of the measured object are determined with the use of a regression of the form of (3). The total relative error in measurements of each Cartesian coordinate is composed of the following components:

$$ \Omega = \alpha\theta + \beta\Delta + \lambda, $$

where θ is the relative error in the determination of the phase by the stable method of processing phase images with arbitrary step shift; Δ, the error in the determination of the coordinate of a point in the images of the optical radiation detector; λ, the relative error introduced by the calibration; and α and β, positive coefficients less than 1 that determine the contribution of each component to the total error. The values of the coefficients α and β for each Cartesian coordinate depend on the relative position of the source and detector of the optical radiation and on the orientation of the Cartesian axes in the laboratory coordinate system. To estimate the maximal possible error of the measurements, we consider the case in which α = 1 and β = 1.

We estimate the error in the determination of the coordinate of a point in the images of the optical radiation detector as Δ = S_c^{−1}, where S_c is the linear resolution of the camera array. The relative error induced by the calibration procedure is λ = δ/√L, where δ is the error in measurements of the Cartesian coordinates of the calibration object and L is the number of measurements with distinct Cartesian coordinates. The total relative error is then given by

$$ \Omega = 2^{1-b}/\sqrt{N} + S_{\mathrm{c}}^{-1} + \delta/\sqrt{L}. $$
(4)

The latter expression is a maximal estimate. In the experiments, the source and detector of the optical radiation were situated so that the X and Y axes of the laboratory coordinate system were parallel to the corresponding coordinate axes in the images of the optical radiation detector, and the Z axis was perpendicular to them. In this case, α_Z ≈ 1, β_Z ≈ 0 and α_{X,Y} ≈ 0, β_{X,Y} ≈ 1.
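
A small helper evaluates the error budget of (4); the example call uses the parameter values quoted in the experimental section below, and is a sketch for orientation rather than the article's own computation:

```python
import math

def total_relative_error(b, n, s_c, delta, l, alpha=1.0, beta=1.0):
    """Error budget of Eq. (4) with weighting coefficients alpha, beta."""
    theta = 2 ** (1 - b) / math.sqrt(n)  # phase-identification term
    d = 1.0 / s_c                        # detector-resolution term
    lam = delta / math.sqrt(l)           # calibration term
    return alpha * theta + beta * d + lam

# Maximal estimate (alpha = beta = 1) with b = 8, N = 200, S_c = 240,
# delta = 5/2000, L = 200
omega_max = total_relative_error(b=8, n=200, s_c=240, delta=5/2000, l=200)
```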

Experimental Demonstration. An experimental demonstration of the performance of the proposed approaches was carried out with the use of a setup corresponding to the design in Fig. 1. An NEC VT570 digital LCD projector (NEC Display Solutions, Germany) with resolution 1024 × 768 pixels was used as the source of structurized optical radiation. A KC-383C digital CCD camera (Kampro, Taiwan) with a Tamron 13VM2818AS lens (Tamron Co. Ltd., Japan), by means of which an image measuring 320 × 240 pixels could be obtained, was used as the optical radiation detector. The source and detector were connected to a computer that implements the measurement procedure, data processing, and display of the results. The choice of inexpensive devices with low resolution and a simple lens was motivated by the wish to demonstrate the operational integrity and high stability of the proposed approaches.

An experiment designed to measure a plane surface with unevenness of around 500 μm was performed in order to determine the measurement precision. Flatness was monitored by direct measurements with a flat metal ruler (a series of 10 measurements); it was established that the maximal deviation of the surface from an ideal plane did not exceed ±500 μm. The positions of the source and detector of the optical radiation were selected so as to support measurements in a 2 × 2 × 2 m space. The distance to the measured object was around 4 m, and that between the source and detector, 3 m. Measurements were performed with the use of a series of structurized images (number of frames N = 200).

In the calibration, measurements were performed of 250 positions of the calibration target situated at different points of the space, with a coordinate-setting error of less than 5 mm. In order to estimate the errors, the measurements were performed for a plane surface situated parallel to the XZ plane. The maximal scatter of the measured points in the plane did not exceed 3 mm, while the standard deviation over the entire measurement space was estimated as 1.14 mm, i.e., the relative error amounted to 0.057% of the measurement range. Analytically, the error may be estimated using (4). Setting b = 8, N = 200, S_c = 240, δ = 5/2000, L = 200, α = 1, and β = 0, we obtain Ω = 0.062%, which is entirely in agreement with the experimental results.

As an example, the geometry of a deformed wood-fiber slab measuring 1200 × 1000 × 500 mm was measured. The measurement space amounted to around 2000 × 2000 × 2000 mm. The result of the measurements is presented in Fig. 3.

Fig. 3. 3D image of the reconstructed surface of the deformed wood-fiber slab in the ZY projection.

The experimental results confirmed the operational integrity and reliability of the proposed optoelectronic method of 3D measurements of large objects on the basis of space-time modulation of a source of optical radiation. The method is contactless and resistant to substantial variations in the light-scattering properties of the surface of the measured object, and may be implemented on the basis of a detecting camera with limited dynamic range. The new method of calibration supports compensation of optical distortions of the elements of the measurement system.

Conclusion. A development of the method of phase triangulation was proposed for contactless reconstruction of the surface profile of large complexly shaped objects that is especially promising for mechanical-engineering technologies.

A universal method of calibration of a measurement system based on the use of a three-layer neural network with topology 3–10–3 was demonstrated. Compensation of nonlinear distortions of the optical elements is achieved in the calibration. Experimental and analytical estimation of the measurement error was performed; the computed analytical estimate of the error coincided with the experimental estimate. The relative measurement error was around 0.06%.

The proposed methods are distinguished by a high degree of reliability and by resistance to variations over a broad range of the optical properties of the surface of objects, as well as to additive noise in the images (flares, noise of the optoelectronic path, etc.). Reliable results may be obtained with the use of inexpensive optoelectronic circuitry consisting of a photodetector with low resolution, an ordinary lens, and a low-power source of structurized illumination. The results obtained demonstrate the operational integrity of the optoelectronic method of contactless measurement of the surface profile of large complexly shaped objects and the promise of further development of the method.

This study was supported by the Russian Science Foundation (Grant No. 14-29-00093).