Abstract
A model to predict and characterize the impact of out-of-focus blurring on the range uncertainty of measurements made by a phase-shift 3D scanner is presented. First, the reduction of the sine wave magnitude introduced by the projector and camera lenses is considered. Then, the noise reduction effects related to the camera image blurring introduced by the camera lenses are also included in the model. The main results of this study indicate that the uncertainty for a high-resolution system varies with range and exhibits a slanted “W” shape, which significantly differs from the inverse square of the range expected from the triangulation equation, or from the slanted “U” shape that may be intuitively expected when combining blurring caused by a limited depth of field with the triangulation equation. We provide a comprehensive experimental characterization of a purposely constructed 3D scanner designed to isolate the performance degradation caused by out-of-focus projection and acquisition lenses. This scanner is designed to mimic the characteristics of a high-resolution scanner that can be employed for demanding quality control applications. In the tested configurations, the predicted depth-of-field values were within 16.3% of the corresponding measured values. This concordance between theoretical and experimental results suggests that the proposed model can be used to assist the design of phase-shift scanners.
1 Introduction
Phase-shift 3D scanners are used to perform non-contact three-dimensional measurements on objects. In order to do so, they project periodic pattern(s) of light on the object of interest and record its appearance(s) using a camera. The 3D positions associated with camera pixels are then calculated based on their recovered phases and the geometric configuration of the system. This approach is employed in many commercial systems [1,2,3,4,5] and by an extensive range of users for many practical applications. For instance, this approach is popular in quality control applications, where it can replace or complement traditional measurement instruments such as callipers and tactile coordinate measurement machines. A considerable amount of research has been dedicated to the development of the technology for these systems [6,7,8]. However, predicting the expected error of a high-accuracy system before building a prototype is difficult. This difficulty increases development costs and leads to suboptimal design choices. Improving the understanding and modelling of the different error sources in triangulation systems is an important step towards establishing best practices, standards and measurement traceability.
In this manuscript, we examine a very important determinant of the error in phase-shift systems: the blurring effect of the camera and projector lenses. This blurring affects both the range uncertainty of the measurement and the lateral resolution of the scanner. In the remainder of this manuscript, we focus on the effect of the blurring on the range uncertainty. The lateral resolution of triangulation systems is briefly discussed in Sects. 2.1 and 5.2. The root of the analysis of the phase uncertainty can be traced to interferometry [9,10,11,12]. However, uncertainty models designed for interferometry systems do not consider the spatially varying blurring effects inherent to structured-light technology.
We therefore propose in Sect. 3 a model that predicts the performance degradation induced by optical blurring in phase-shift 3D scanners. The model is based on first-order geometric optics and only requires the values of the parameters that are available when selecting the component for assembling a 3D scanner (i.e. focal length, f-number, standoff distance).
In Sect. 4, we also provide a comprehensive experimental characterization of a purposely constructed 3D scanner designed to isolate the performance degradation caused by the out-of-focus projection and acquisition lenses. This scanner is designed to mimic the characteristics of a high-resolution and short-range scanner that can be employed in demanding quality control applications. The data obtained from the experimental characterization of this scanner agree with the predictions of the model.
We provide in Sect. 5 two examples of applications that are enabled by the proposed model. We show that the proposed model can be integrated into 3D scanner reconfigurable software. This type of software enables a user to reconfigure a scanner to satisfy the requirements of a particular application or better understand accuracy variations for a given configuration. The second application improves the uniformity of the performance of the 3D scanner across its reconstruction volume. This type of capability can be useful for non-expert users and situations in which a given scanner has to match a given specification.
2 State of the art: characterization of 3D imaging systems
The different approaches for characterizing 3D imaging systems can be classified into two different categories. The first category contains methods that are technologically agnostic. These approaches consider the 3D scanner as a black box. Their objective is to provide users with a method for evaluating the fitness of a given scanner for performing a specific task. They do not attempt to predict how the performance of a system is affected when some characteristics, such as the baseline, focal length and standoff distance of the scanner, are modified. The second category focuses on the evaluation of a particular type of 3D scanner. They sometimes provide a low-level characterization of only the sub-systems of a 3D scanner, which enables engineers to locate performance bottlenecks and make better design choices. These approaches enable scanners to be used in an optimal manner.
2.1 Technologically agnostic
The characterization of active 3D imaging systems is an active research field [13,14,15,16,17,18,19,20,21,22]. An artefact for the characterization of short-range 3D scanners (primarily laser triangulation and fringe projection) is presented in [23, 24]. The main particularity of this artefact is that it employs geometric dimensioning and tolerancing (GD&T), which is actively employed by mechanical engineers in the manufacturing industry. An automatic methodology to evaluate short-range 3D imaging systems is presented in [21]. This comparative study evaluated eight 3D scanners for robotic applications. The same methodology was employed in [25]. Some researchers adapt methods used to evaluate the performance of 2D cameras to 3D sensors by introducing a slanted-edge modulation transfer function, which is similar to the 2D modulation transfer function (MTF) [13]. Other researchers have also investigated the lateral resolution of triangulation systems [26,27,28]. Other artefacts, quality metrics and methodologies for short-range scanners are presented in [15,16,17,18,19,20]. Techniques for the characterization of medium- to long-range scanners have also been investigated [29,30,31,32,33,34,35].
National and international standardization organizations have been working on the characterization of active 3D imaging systems with the objective of producing guidelines and standards. The document designated E2544 from the American Society for Testing and Materials (ASTM) provides the definition and description of terms for 3D imaging systems [36]. The VDI 2634 is a document that addresses the characterization of optical distance sensors [37]. The two authoritative texts on the matter of uncertainty and vocabulary related to metrology are the Guide to the Expression of Uncertainty in Measurement (GUM) and the International Vocabulary of Metrology (VIM) [38, 39]. In [40], an overview of the progress of the development of standards for 3D imaging systems is presented.
2.2 Technology specific
An important sub-system for many laser-based triangulation scanners is the peak detector, which is responsible for finding the position of a laser spot in the detector image. Two peak detector comparative studies have been conducted [41, 42]. Some structured-light systems require algorithms that are similar to those for the peak detector. These algorithms need to detect the position in the image of fringe transitions; comparative studies of these algorithms have been conducted [8, 43]. The effect of blurring on the performance of a structured-light system based on binary code has also been explored [44].
The impact of speckle on the performance of laser triangulation scanners has been investigated, and models have been proposed [45,46,47]. A depth-of-field model for a laser-based 3D scanner has also been proposed [48, 49]. Many analyses of the impact of noise on phase-shift algorithms have been conducted [9,10,11,12, 50]. The phase uncertainty of a phase-shift algorithm depends on the different phase offsets, the number of patterns and the magnitude of the sine wave, and it may vary depending on the value of the phase [12]. In [51], a specific phase-shift triangulation scanner is presented, and its range uncertainty is reported for different aperture diameters. However, no model includes the combined effects of projector and camera defocalization.
3 Defocalization model for phase-shift 3D scanners
3.1 Projector and camera-induced attenuation
Assuming a negligible lens thickness, an out-of-focus point will be imaged as a circular region, which is referred to as the circle of confusion [52]. The diameter \(C_\mathrm{p}\) of this circle of confusion in the object plane of the projector lens is defined as
where \(\varPhi _\mathrm{p}\) is the aperture diameter of the projection lens, \(Z_\mathrm{p}\) is the distance from the object plane to the projector lens, and \(Z_\mathrm{p}'\) is the on-focus distance from the object plane to the projector lens [52]. Similarly, the circle of confusion in the object plane of the camera lens is defined as
where \(Z_\mathrm{c}\) is the distance from the object to the camera lens, \(Z'_\mathrm{c}\) is the corresponding on-focus distance, and \(\varPhi _\mathrm{c}\) is the aperture of the camera lens [52]. The magnification of the optical system is the scaling between the size of an object and the size of its image. The magnification \(m_\mathrm{p}\) associated with the projector lens is defined as
where \(f_\mathrm{p}\) is the focal length of the projector lens [52]. Using this magnification, the period \(\omega _{\mathrm{o}}(Z_\mathrm{p})\) of the projected sine wave pattern at distance \(Z_\mathrm{p}\) from the projector can be computed as
where \(\omega \) is the period of the sine wave pattern in the projector image. When the light is non-coherent, the degradation of the image induced by blurring can be modelled by the convolution of the image with a kernel, whose size depends on the circle of confusion. Using a thin lens approximation, the blurring kernel is a disc D defined as
where c is the diameter of the circle of confusion [52]. The images projected by a fringe projection system on the object plane assume the form
where \(\theta \) is a non-specified phase offset, p is the period of the sine wave on the object plane [i.e. \(p = \omega / m_\mathrm{p}(Z_\mathrm{p}) \)], \( \alpha (x,y)\) is the constant illumination for point (x, y), and \(\beta (x,y)\) is the sine wave magnitude for point (x, y). The spatially varying values of \(\alpha \) and \(\beta \) are induced by the non-uniformity of the projector light source, the vignetting of the projector lenses and the texture on the object. However, we will assume that \(\alpha \) and \(\beta \) are spatially smooth. More explicitly, we assume that
Using Eqs. 5, 6 and 7, the degraded image \(I_c\) can be computed as
where \(J_1\) is the Bessel function of the first kind. By examining Eqs. 6 and 11, we can verify that the attenuation factor A affecting the sine wave can be computed as
where \(r = c/p\). Refer to Fig. 1 for a graphical representation of A(r). Using Eq. 12, the attenuations of the sine wave for the camera and the projector are \( A[C_\mathrm{c}( Z_\mathrm{c}) / \omega _{\mathrm{o}}(Z_\mathrm{p})] \) and \( A[C_\mathrm{p}( Z_\mathrm{p}) / \omega _{\mathrm{o}}(Z_\mathrm{p}) ]\), respectively. In our experiments, the proposed model provides a suitable approximation as long as the ratios \(C_\mathrm{c}( Z_\mathrm{c}) / \omega _{\mathrm{o}}(Z_\mathrm{p}) \) and \(C_\mathrm{p}( Z_\mathrm{p}) / \omega _{\mathrm{o}}(Z_\mathrm{p})\) are smaller than \(\frac{1}{2}\), which corresponds to a blurring kernel that is half the sine wave period. To improve the approximation for ratios larger than \(\frac{1}{2}\), the modelling of the point spread function by a disc in Eq. 5 would have to be replaced by a more physically plausible model.
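The derivation above lends itself to a short numeric sketch. The equation images of this section did not survive extraction, so the fragment below uses the standard first-order forms as plausible stand-ins, not the published formulas: an object-plane circle of confusion \(C(Z) = \varPhi |Z - Z'| / Z'\), and the disc-kernel attenuation \(A(r) = 2J_1(\pi r)/(\pi r)\) with \(r = c/p\).

```python
import math

# Sketch of Sect. 3.1 under stated assumptions (hypothetical reconstructions
# of the missing equations, using standard thin-lens / disc-MTF forms):
#   circle of confusion (object plane): C(Z) = Phi * |Z - Z_focus| / Z_focus
#   attenuation of a sine wave by a disc kernel: A(r) = 2 J1(pi r) / (pi r)

def bessel_j1(x: float, terms: int = 30) -> float:
    """Bessel function of the first kind J1 via its power series."""
    total = 0.0
    for k in range(terms):
        total += ((-1) ** k / (math.factorial(k) * math.factorial(k + 1))) \
                 * (x / 2.0) ** (2 * k + 1)
    return total

def circle_of_confusion(phi: float, z: float, z_focus: float) -> float:
    """Object-plane blur diameter for a thin lens (assumed standard form)."""
    return phi * abs(z - z_focus) / z_focus

def attenuation(c: float, period: float) -> float:
    """Magnitude attenuation of a sine wave of given period blurred by a
    disc kernel of diameter c (the MTF of the disc at frequency 1/period)."""
    r = c / period
    if r == 0.0:
        return 1.0
    return 2.0 * bessel_j1(math.pi * r) / (math.pi * r)
```

As expected from Fig. 1, the attenuation is 1 when the blur diameter is zero and decreases as the circle of confusion grows relative to the sine wave period.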
3.2 Camera-induced filtering
A camera pixel integrates the incoming light over its area, which defines the minimum amount of blurring introduced by the collection system. When out of focus, the collection optics blur the incoming light into circles of confusion. The area in pixel units of the resulting camera-induced blurring \(b_\mathrm{c}\), which depends on the distance \(Z_\mathrm{c}\), is computed as
where \(s_\mathrm{c}\) is the size of a camera pixel and \(c_\mathrm{c}\) is the diameter of the confusion circle in the camera image, defined as
where \(f_\mathrm{c}\) is the focal length of the camera lens and \(C_\mathrm{c}( Z_\mathrm{c})\) is defined in Eq. 2. In our experiment, the blurring induced by diffraction was modelled using the Airy disc [52]. The diameter \(d_\mathrm{c}\) of the Airy disc in the camera plane image is defined as
where \(\lambda \) is the wavelength of the light and N is the f-number of the lens. For our system, \(\lambda =450\) nm. The diffraction introduces some extra filtering, and the area in pixel units of this filter is
When analysing the noise in the range images of high-resolution fringe projector systems, we observed that the noise is spatially dependent. The phase uncertainty is reduced through successive filtering by \(b_\mathrm{c}\) and \(b_\mathrm{d}\) by a factor of
where \(\gamma \) controls the amount of spatial correlation of the noise. The value of \(\gamma \) should range between 1 and \(\sqrt{b_\mathrm{c}(Z_\mathrm{c}) b_\mathrm{d}}\). The value of \(\gamma \) depends on many factors, including the surface properties of the scanned artefact and the frequency spectrum of the projector light source. When the noise is spatially independent, \(\gamma =1\). The value of \(\gamma \) is experimentally determined as shown in Sect. 4.
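The diffraction term can be checked numerically. This sketch uses the standard Airy-disc diameter \(d = 2.44\,\lambda N\) with N the f-number; the conversion of a blur diameter to an area in pixel units (\(\pi/4 \cdot (d/s)^2\), floored at one pixel) is an illustrative assumption, not the paper's exact Eq. 16.

```python
import math

# Sketch of the diffraction blur in Sect. 3.2 under stated assumptions.

def airy_diameter(wavelength_m: float, f_number: float) -> float:
    """Standard Airy-disc diameter in the image plane: d = 2.44 * lambda * N."""
    return 2.44 * wavelength_m * f_number

def blur_area_pixels(diameter_m: float, pixel_m: float) -> float:
    """Blur-spot area expressed in pixel units, never below one pixel
    (illustrative conversion; the paper's Eq. 16 may differ)."""
    return max(1.0, math.pi / 4.0 * (diameter_m / pixel_m) ** 2)

wavelength = 450e-9          # blue peak of the projector source (assumed)
for n in (8, 11, 16):        # apertures used in the experiments of Sect. 4
    d = airy_diameter(wavelength, n)
    print(f"f/{n}: Airy diameter = {d * 1e6:.1f} um")
```

At f/16 the Airy diameter roughly doubles compared with f/8, which is why diffraction filtering cannot be neglected for the smallest apertures tested.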
3.3 The complete model
Although the projector-induced blurring always degrades the phase uncertainty and the range uncertainty (by reducing the SNR), the effect of the camera-induced blurring is more complex: the filtering reduces the range uncertainty, while the attenuation of the sine wave increases it. This creates a non-intuitive effect on the range uncertainty. By combining Eqs. 12 and 17 with the uncertainty model presented in [53], the standard deviation \(\sigma ({Q_\mathrm{W}})\) for 3D point \(Q_\mathrm{W}\) can be computed as
where \(s_\mathrm{T}(Q_\mathrm{W})\) is the scaling factor derived from the triangulation equation [53], \(\sigma _\mathrm{p}\) is the phase uncertainty and \(s_\mathrm{O}(Q_\mathrm{W})\) is the proposed optically induced blurring scaling factor, which is defined as
where \(Q_\mathrm{c} = (X_\mathrm{c},Y_\mathrm{c},Z_\mathrm{c})^T\) and \(Q_\mathrm{p} = (X_\mathrm{p},Y_\mathrm{p},Z_\mathrm{p})^T\) are the point \(Q_\mathrm{W}\) expressed in the camera and projector reference frames, respectively. Note that \(s_\mathrm{O}(Q_\mathrm{W})\) is responsible for the “W” shape of the uncertainty as the range varies. \( s_\mathrm{T}(Q_\mathrm{W})\) varies with the range following an inverse-squared relation. For high-resolution fringe projection systems, which have a limited depth of field, the inverse-squared relation can be approximated by a linear relation over the in-focus region, and \( s_\mathrm{T}(Q_\mathrm{W})\) is responsible for the slanting of the “W”-shaped uncertainty curve. Depending on the parameters of the collection optics and the camera pixel size, the effect of \( F_\gamma (Z_\mathrm{c}) \) can be attenuated, and the uncertainty curve can be made to resemble a slanted “U” shape.
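The published equations did not survive the text extraction. Purely as a hedged reconstruction consistent with the surrounding description (each attenuation \(A\) divides the usable sine-wave magnitude, and the filtering factor \(F_\gamma \) divides the noise), the combined model would take a form such as:

```latex
% Hedged reconstruction, not the published Eqs. 18-19 verbatim:
\sigma(Q_\mathrm{W}) \approx
  s_\mathrm{T}(Q_\mathrm{W})\, s_\mathrm{O}(Q_\mathrm{W})\, \sigma_\mathrm{p},
\qquad
s_\mathrm{O}(Q_\mathrm{W}) =
  \frac{1}{F_\gamma(Z_\mathrm{c})\,
  A\!\left[\frac{C_\mathrm{c}(Z_\mathrm{c})}{\omega_\mathrm{o}(Z_\mathrm{p})}\right]
  A\!\left[\frac{C_\mathrm{p}(Z_\mathrm{p})}{\omega_\mathrm{o}(Z_\mathrm{p})}\right]}
```

Under this inferred form, the two attenuation terms inflate the uncertainty away from focus while \(F_\gamma \) deflates it, producing the central bump of the “W”.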
4 Experimental characterization
To validate the model using empirical data, a purposely constructed 3D scanner that isolates the different blurring sources was developed. A picture of the system is shown in Fig. 2. The camera and the projection system are independently mounted on two translation stages. The translation stage on which the camera is mounted enables the displacement of the camera along its optical axis. The projector can also be moved along its optical axis using the second translation stage. This set-up enables the first optical system to be kept in focus while varying the focus position of the second optical system. This set-up enables the performance degradation individually induced by the projector and camera to be examined. The projection system is based on the design presented by Drouin et al. [54]. It uses a white LED light source, and its lens has a focal length of 25 mm. A fixed aperture of 10 mm (f-number 1.5) was employed. The camera has a focal length of 100 mm. The camera aperture was set to either f / 8, f / 11, or f / 16 during the experiments. The standoff distance is 450 mm. The pixel size is 8/8 \(\upmu \)m, and the camera lens magnification is 1 / 2 at the on-focus standoff distance. We used a programmable projector, and thus, the number of patterns and the period of the patterns can be varied. We report the results for periods of 5, 7 and 10 projector pixels.
In the remainder of this section, we compare the proposed model and actual data. First, we examine the attenuation induced by the camera. Second, the attenuation related to the projector is examined. Last, we provide a comparison between the prediction of our model and the results obtained while scanning gauge blocks of various heights with the scanner in a fixed calibrated configuration (translation stages were not moved).
4.1 Camera-induced attenuation
For this experiment, a coated optical flat is positioned in front of the scanner. The camera is translated in front of the optical flat, while the projector remains at the on-focus position with respect to the optical flat. The axis of translation is aligned with the optical axis of the camera. Every time the camera is moved by one millimetre, a scan of the optical flat is performed, and the phase-shift images recovered by the scanner are analysed. Two types of information are extracted from the images. First, the magnitude of the sine wave is calculated for each camera pixel that views the optical flat [55]. By combining the average magnitudes and the translation stage position of every scan, these experimental results are compared with the prediction made by the proposed model. Note that the attenuation induced by the projection optics is constant during the experiment because its relative position with respect to the optical flat is not changed. The right side of Fig. 3 contains the graph of the measured attenuation that results from the blurring induced by the camera for a period of five projector pixels (a typical value for this projection system) and values predicted from the proposed model. This experiment was conducted using apertures of f/8, f/11 and f/16, and both the measured values and the predicted ones are very similar.
When neglecting lens distortion, a projector-camera system can be modelled using two-view geometry, and the surface of the optical flat can be modelled as a plane in the unwrapped-phase image [56]. The unwrapped-phase image is simply an image having the size of the camera image, where each camera pixel is associated with a projector pixel [55]. When considering lens distortions, we expect that the optical flat surface can be modelled as a low-order polynomial surface in the unwrapped-phase image and that the residual of the fit of the polynomial surface can be employed as a form error (footnote 1). The left side of Fig. 3 shows the standard deviations of the form error in pixels and their predicted values. The predicted and measured attenuations are an excellent fit. For the predicted form error, a match is obtained in the regions where the attenuation ranges between 0.49 and 1, and we slightly underestimate it outside these regions, where the blurring kernels become too large. A standard deviation of 0.02 pixel at the on-focus position corresponds to a standard deviation of 15 \(\upmu \)m in range for our system configuration. Note that Fig. 3 shows the random component of the range uncertainty. It does not quantify the systematic errors that can be induced by the blurring. Systematic errors in a high-resolution 3D imaging system have various causes; optically induced blurring is one of them, and lens distortion, aberrations and imperfect mechanical assembly are a few of the others. As separating one source from the others is difficult, they should be handled by the use of a nonparametric calibration method, such as the method presented by Bumbaca and Blais [57]. This procedure can handle complex spatially varying systematic errors at the cost of a careful calibration sequence. Section 4.3 presents the results using a residual 3D error obtained with a calibrated configuration of the scanner and evaluates the presence of systematic error.
Another aspect of the model that was tested is its capability to predict the depth of field (DOF) of the scanner. For a laser-based triangulation system, the DOF is computed using the Rayleigh criterion and the spot size of the laser [48, 49]. The authors are not aware of a similar criterion for a phase-shift system. We define the DOF as the interval around the in-focus position for which the sine wave magnitude is at least 0.49 times the in-focus value. For this attenuation, the maximum deviation between the predicted values and the measured values is \(0.4\%\). Table 1 contains the measured and predicted values for sine waves with periods of 5, 7 and 10 pixels. We also provide the depth-of-field intervals for other attenuation factors.
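The DOF definition above (the interval over which the attenuation stays at or above 0.49) can be located numerically by bisection. The sketch below reuses the assumed thin-lens blur and disc-MTF forms discussed in Sect. 3.1; the parameter values (standoff, aperture, object-plane period) are hypothetical stand-ins, not the calibrated values of the test scanner.

```python
import math

# Sketch: numerically locate the DOF limits where the disc-kernel
# attenuation crosses 0.49. All formulas are assumed standard forms.

def bessel_j1(x, terms=30):
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + 1))
               * (x / 2.0) ** (2 * k + 1) for k in range(terms))

def attenuation_at(z, z_focus, aperture, period_obj):
    """Attenuation at distance z: disc MTF of the circle of confusion."""
    c = aperture * abs(z - z_focus) / z_focus   # assumed object-plane blur
    r = c / period_obj
    return 1.0 if r == 0 else 2.0 * bessel_j1(math.pi * r) / (math.pi * r)

def dof_limit(z_focus, aperture, period_obj, threshold=0.49, direction=+1):
    """Bisect for the distance where the attenuation drops to the threshold."""
    lo, hi = z_focus, z_focus + direction * z_focus  # bracket (assumed wide)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if attenuation_at(mid, z_focus, aperture, period_obj) >= threshold:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical values loosely inspired by the test scanner (mm):
z0, phi, p = 450.0, 10.0, 1.0
near = dof_limit(z0, phi, p, direction=-1)
far = dof_limit(z0, phi, p, direction=+1)
print(f"DOF for 0.49 attenuation: [{near:.1f}, {far:.1f}] mm")
```

The same routine, swept over the sine-wave period, reproduces the qualitative trend of Table 1: longer periods tolerate larger blur circles and therefore yield a wider DOF.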
4.2 Projector-induced attenuation
The projector-induced attenuation can be isolated using a procedure similar to that used for the camera-induced attenuation. The camera is translated to its in-focus position and remains fixed during the experiment. The projector is translated to various distances to evaluate the effect of its progressive defocalization. The results are shown in Fig. 4. Table 2 contains both the predicted and measured DOF of the projection system for different attenuation factors. The model predictions and measured values remain similar in front of the focal plane. However, they start to differ significantly when the attenuation reaches 0.49, for both the attenuation curve and the uncertainty curve. For an attenuation of 0.49, the maximum deviation between the predicted value and the measured value is \(14.8\%\). The differences between the model predictions and the measured values are larger than those observed for the camera-induced attenuation. An element that may explain this different behaviour is the optical assembly of the projection system: this assembly contains a cylindrical lens [54] for which the thin lens approximation may be imperfect.
4.3 Evaluation of the complete model
We examine the performance of the complete system and compare the predictions with the experimental results obtained by scanning a series of gauge blocks. Gauge blocks are interesting because they enable the use of the structured-light system in a fixed configuration with multiple known planar surfaces on which to perform measurements. This set-up makes it feasible to calibrate the system, use predictions in metric space and evaluate the presence of systematic errors in the measurements. For this test, the baseline of the system was 140 mm, and the standoff distance was 450 mm. The in-focus position for the camera and projector was located between gauge block number two and gauge block number three. The system was calibrated using a nonparametric method adapted from Bumbaca and Blais [57], which enabled the production of 3D measurements in metric units (Euclidean space) instead of projector pixels (projective space). The surfaces of the gauge blocks (shown in Fig. 5) were altered to render them less specular using a vapour blasting treatment. This process has been extensively employed for producing artefacts dedicated to the characterization of 3D imaging systems [16, 23, 24, 40]. Table 3 presents the predicted and measured standard deviations for the different blocks. To compute the predicted uncertainty, we employed the complete model of Eq. 18. We also provide results for a commercial 3D scanner for comparison purposes. Based on the manufacturer’s recommendations, the standoff distance was set to 125 mm. The depth of field was 175 mm, and the field of view varied from 100 to 200 mm. According to the manufacturer, the lateral resolution varied from 5 \(\upmu \)m in the front of the working volume to 10 \(\upmu \)m in the back.
Note that the predicted value for gauge block number one significantly differs from the measured value, which is not surprising because our model overestimates the standard deviation in this part of the working volume, where the attenuation of the sine wave is large. For the other gauge blocks, the predicted values are similar to the measured values. The proposed model predicts the standard deviation of the uncertainty in a structured-light scanner caused by the defocalization of the projector and camera. It does not model the expectation of the error, and it does not claim to model systematic error. We measured the height differences between pairs of gauge blocks with the structured-light system using commercial industrial inspection software (PolyWorks from InnovMetric). The height differences were compared with the nominal values for the relevant gauge blocks. The height differences are computed using multiple 3D points, and thus, the effect of the standard deviation of the individual points is reduced, and the measurement biases become the dominant factor. The height differences are subject to the effect of varying biases on two surfaces. Table 4 contains the height difference between successive gauge blocks. The errors presented in Table 4 seem to indicate that the biases for this system remain below the standard deviation of individual points, which was the objective when the system was calibrated. Although the calibration procedure is nonparametric, it can benefit from a prior approximation of the standard deviation of the noise to properly weight the measurements aggregated in the calibration tables. Therefore, the proposed model can be integrated into future calibration procedures.
5 Application examples
In this section, two applications that employ the proposed model are briefly presented. The first application is computer-assisted reconfigurability software for a fringe projection system aimed at end users. It enables a non-expert to reconfigure a structured-light system for different volumes or to better understand the uncertainty variations for a given configuration. The second application is the digital filtering of a phase-shift image, which flattens the characteristic uncertainty curve of the scanner. This flattening enables non-expert users to produce repeatable results when performing multiple 3D scans.
5.1 Reconfigurable system
Changing the relative position, orientation and focus of a projector and camera pair is easy. Therefore, we expect structured-light scanners to be easily reconfigurable. However, as demonstrated in the previous section, the uncertainty can significantly change when the system configuration is changed. The total reconstruction volume, which is defined as the intersection of the view pyramid of the projection and the view pyramid of the camera, is significantly larger than the usable reconstruction volume. The usable reconstruction volume is the subset of the total reconstruction volume where the range uncertainty remains below a certain threshold.
The main factor that contributes to the difference between the total reconstruction volume and the usable reconstruction volume is optically induced blurring. Thus, with a proper model, the size of the usable reconstruction volume can be predicted for a given scanner configuration. For example, Fig. 6 shows both the total reconstruction and the usable reconstruction volumes for the system in our experiments. Proper modelling of the usable reconstruction volume can be applied in several ways. It can be employed by expert users to determine the best scanner configuration for a given application.
Alternatively, it can be employed by less advanced users to assess uncertainty at different working distances and can be integrated in more complex modelling applications to optimize scanner configurations while avoiding interferences with other equipment (manipulator robots, for instance).
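The usable-volume computation described above reduces, along the depth axis, to thresholding a predicted uncertainty curve. In the sketch below, `sigma_profile` is a hypothetical range-uncertainty curve (not the measured one from Sect. 4), and the usable interval is where the prediction stays below a user-supplied threshold.

```python
# Sketch of the "usable reconstruction volume" idea of Sect. 5.1,
# restricted to the depth axis, under stated assumptions.

def sigma_profile(z: float, z_focus: float = 450.0) -> float:
    """Hypothetical range-uncertainty curve (mm of standard deviation)."""
    return 0.015 + 0.0004 * abs(z - z_focus)

def usable_interval(threshold: float, z_lo: float, z_hi: float,
                    samples: int = 201):
    """Return the (min, max) sampled range where the predicted
    uncertainty is below the threshold, or None if nowhere."""
    step = (z_hi - z_lo) / (samples - 1)
    usable = [z_lo + i * step for i in range(samples)
              if sigma_profile(z_lo + i * step) <= threshold]
    return (min(usable), max(usable)) if usable else None

print(usable_interval(0.0255, 400.0, 500.0))
```

The same thresholding, applied over a 3D grid of the total reconstruction volume with the full model of Eq. 18, yields the usable volume shown in Fig. 6.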
5.2 Performance uniformity
One application of the proposed model is obtaining a more uniform uncertainty curve by digitally filtering the images using a kernel whose size depends on the range of a 3D point. The objective is to ensure that the performance of the system is as uniform as possible over the entire useful reconstruction volume. This uniformity is desirable for some categories of end users.
Two distinct aspects of a 3D imaging system are affected by this filtering: the first aspect is the range uncertainty, and the second aspect is the lateral resolution. The lateral resolution of the system is intuitively defined as the capability of a system to discriminate between two adjacent structures. At the on-focus position, the proposed digital filtering will reduce both the range uncertainty and the lateral resolution. This depth-dependent filtering enables the reduction of two types of human errors when non-experts have to perform multiple scans of different instances of an object.
The first type is related to the lateral resolution. A given defect may only be detectable by a 3D imaging system in a small region of its useful reconstruction volume. This becomes critical when a user develops a misplaced confidence that the scanner will always detect this type of defect: due to the positioning of the scanner, the defect will not be detected on some of the test objects.
The second type of human error is related to the non-uniformity of the range uncertainty. In the on-focus region of the scanner, the uncertainty of the measurement is higher than in a slightly off-focus region, which can have an impact when measuring the flatness of an object. Objects scanned while positioned in the focus region of the reconstruction volume will have a larger deviation from the nominal values than objects scanned slightly out of focus. This can cause the rejection of samples based on the improper use of the scanner rather than an actual manufacturing defect. The left part of Fig. 7 shows the optical filtering diameter in pixels as a function of range. This diameter can be computed based on Eq. 17. We also display the diameter of the digital filter required to flatten the uncertainty curve. The right side of Fig. 7 shows the actual uncertainty curve and the flattened curve.
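The flattening strategy can be sketched as follows, under an explicit assumption that the paper does not state: effective blur diameters are taken to combine in quadrature (as they would for Gaussian kernels), and `d_optical` is a hypothetical profile of the optical filtering diameter versus range, not the curve of Fig. 7.

```python
import math

# Sketch of the uncertainty-flattening filter of Sect. 5.2 under
# stated assumptions (quadrature combination; hypothetical optics profile).

def d_optical(z, z_focus=450.0, gain=0.05, d_min=1.0):
    """Hypothetical optical filtering diameter (pixels) versus range (mm)."""
    return max(d_min, gain * abs(z - z_focus))

def d_digital(z, z_lo=420.0, z_hi=480.0, samples=121):
    """Digital kernel diameter that equalizes the total blur over [z_lo, z_hi]."""
    step = (z_hi - z_lo) / (samples - 1)
    d_max = max(d_optical(z_lo + i * step) for i in range(samples))
    return math.sqrt(max(0.0, d_max ** 2 - d_optical(z) ** 2))

# After digital filtering, the total blur is constant across the range:
for z in (420.0, 450.0, 480.0):
    total = math.sqrt(d_optical(z) ** 2 + d_digital(z) ** 2)
    print(f"z = {z:.0f} mm: digital kernel {d_digital(z):.2f} px, "
          f"total {total:.2f} px")
```

The largest digital kernel is applied at the on-focus range, where the optics filter the least, which is exactly the behaviour shown on the left side of Fig. 7.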
6 Conclusion
We presented a simple geometric model that can be employed to predict the impact of optically induced blurring on the performance of a structured-light 3D scanner based on phase shifts. The model predicts the effect of changing the focal length, aperture, standoff distance and triangulation angle on the range uncertainty of the measurement for all 3D points in the reconstruction volume, rather than only for the 3D points that are in focus. Since the model is based on first-order geometric optics, it has a simple algebraic formulation that enables it to be easily integrated into computer-assisted reconfigurability software. The model is composed of three elements: one takes into account the noise reduction induced by the collection optics being out of focus and by diffraction blurring, and the remaining two consider the reduction of contrast of the sine wave patterns, which increases the range uncertainty. To validate the model using empirical data, a purposely constructed 3D scanner was developed to isolate the different blurring sources. We show a match between the predicted performance degradations induced by the optical system and the actual measurements obtained during the characterization of this purposely constructed 3D scanner.
In future work, we plan to extend the current study to cover other structured-light coding methods that are based on the detection of fringe transitions. Moreover, we would like to develop a model for predicting the lateral resolution of a system.
Notes
The form error is a typical quality factor used in mechanical engineering.
References
InSpeck: Optional 3D digitizer, system and method for digitizing an object. U.S. Patent 6493095 (2002)
Breuckmann: Projector for an arrangement for three-dimensional optical measurement of objects. U.S. Patent 7532332 (2009)
Numetrix: Device and method for obtaining three-dimensional object surface data. Canadian Patent CA 2771727 (2013)
Steinbichler: Apparatus and method for determining the 3D coordinates of an object and for calibrating an industrial robot. U.S. Patent Application 13/397,056 (2013)
Sansoni, G., Patrioli, A.: Noncontact 3D sensing of free-form complex surfaces. In: Proc. SPIE, Videometrics and Optical Methods for 3D Shape Measurement, vol. 4309 (2001)
Will, P.M., Pennington, K.S.: Grid coding: A novel technique for image processing. Proc. IEEE 60(6), 669–680 (1972)
Benoit, P., Mathieu, E., Hormire, J., Thomas, A.: Characterization and control of three-dimensional objects using fringe projection techniques. Nouvelle Revue d’Optique 6(2), 67–86 (1975)
Salvi, J., Pages, J., Batlle, J.: Pattern codification strategies in structured light systems. Pattern Recognit. 37, 827–849 (2004)
Goldberg, K.A., Bokor, J.: Fourier-transform method of phase-shift determination. Appl. Opt. 40(17), 2886–2894 (2001)
Surrel, Y.: Additive noise effect in digital phase detection. Appl. Opt. 36(1), 271–276 (1997)
Hibino, K.: Susceptibility of systematic error-compensating algorithms to random noise in phase-shifting interferometry. Appl. Opt. 36(10), 2084–2093 (1997)
Rathjen, C.: Statistical properties of phase-shift algorithms. J. Opt. Soc. Am. A 12(9), 1997–2008 (1995)
Goesele, M., Fuchs, C., Seidel, H.-P.: Accuracy of 3d range scanners by measurement of the slanted edge modulation transfer function. In: International Conference on 3D Digital Imaging and Modeling, p. 37 (2003)
Boehler, W., Marbs, A.: Investigating scanner accuracy, tech. rep., German University FH Mainz, (2003)
Brownhill, A., Brade, R., Robson, S.: Performance study of non-contact surface measurement technology for use in an experimental fusion device. In: 21st Annual IS&T/SPIE Symposium on Electronic Imaging, (2009)
Robson, S., Beraldin, A., Brownhill, A., MacDonald, L.: Artefacts for optical surface measurement. In: Society of Photo-Optical Instrumentation & Electronics & Society for Imaging Science and Technology, in Videometrics, Range Imaging, and Applications XI, (2011)
Luhmann, T., Bethmann, F., Herd, B., Ohm, J.: Comparison and verification of optical 3-d surface measurement systems. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 37. Part B5. Beijing, (2008)
Bridges, R.E.: Ways to verify performance of 3d imaging instruments. In: International Society for Optics and Photonics, IS&T/SPIE Electronic Imaging, pp. 72390R–72390R, (2009)
Forbes, A.B., Hughes, B., Sun, W.: Comparison of measurements in co-ordinate metrology, IMEKO TC8: traceability to support CIPM MRA and other international arrangements. Measurement 42(10), 1473–1477 (2009)
Boehler, W., Vicent, M.B., Marbs, A.: Investigating laser scanner accuracy. Int. Arch. Photogrammetry Remote Sens. Spat. Inf. Sci. 34(Part 5), 696–701 (2003)
Møller, B., Balslev, I., Krüger, N.: An automatic evaluation procedure for 3d scanners in robotics applications. IEEE Sens. J. 13(2), 870–878 (2013)
Cheok, G.S., Saidi, K.S., Franaszek, M., Filliben, J., Scott, N.A.: Characterization of the range performance of a 3d imaging system, Tech. Rep. NIST TN-1695 (2011)
Carrier, B., Mackinnon, D., Cournoyer, L., Beraldin, J.-A.: Proposed NRC portable target case for short-range triangulation-based 3-D imaging systems characterization. In: 23rd Annual IS&T/SPIE Symposium on Electronic Imaging (2011)
MacKinnon, D., Carrier, B., Beraldin, J.-A., Cournoyer, L.: GD&T-based characterization of short-range non-contact 3D imaging systems. Int. J. Comput. Vis. 102(1–3), 56–72 (2013)
Hansen, K., Pedersen, J., Solund, T., Aanaes, H., Kraft, D.: A structured light scanner for hyper flexible industrial automation. In: 2nd International Conference on 3D Vision (3DV), vol. 1, pp. 401–408. (2014)
MacKinnon, D., Beraldin, J.-A., Cournoyer, L., Blais, F.: Evaluation of laser spot range scanner lateral resolution in 3D metrology. In: 21st Annual IS&T/SPIE Symposium on Electronic Imaging (2009)
MacKinnon, D., Beraldin, J.-A., Cournoyer, L., Carrier, B., Blais, F.: Proposed traceable structural resolution protocols for 3d imaging systems. In: Proceedings on SPIE, vol. 7447, (2009)
MacKinnon, D., Beraldin, J.-A., Cournoyer, L., Picard, M., Blais, F.: Lateral resolution challenges for triangulation-based three-dimensional imaging systems. Opt. Eng. 51(2), 021111 (2012)
Okouneva, G., McTavish, D., Okunev, O.G.: Selection of regions on a 3D surface for efficient lidar-based pose estimation. In: VMV, pp. 387–388 (2009)
McTavish, D., Okouneva, G.: A new approach to geometrical feature assessment for ICP-based pose measurement: continuum shape constraint analysis. In: IEEE International Conference on Machine Vision and Image Processing (IMVIP 2007), pp. 23–32 (2007)
Mechelke, K., Kersten, T.P., Lindstaedt, M.: Comparative investigations into the accuracy behaviour of the new generation of terrestrial laser scanning systems. In: Optical 3-D Measurement Techniques VIII, pp. 319–327 (2007)
Nitzan, D., Brain, A.E., Duda, R.O.: The measurement and use of registered reflectance and range data in scene analysis. Proc. IEEE. 65(2), 206–220 (1977)
Mak, N., Beraldin, J.-A., Cournoyer, L., Picard, M., et al.: A distance protocol for mid-range TLS in support of ASTM E57 standards activities. Proc. ISPRS Comm. V Mid-Term Symp. Close Range Image Meas. Tech. 38, 428–433 (2010)
Hebert, M., Krotkov, E.: 3-D measurements from imaging laser radars: how good are they? In: IEEE/RSJ International Workshop on Intelligent Robots and Systems (IROS '91), pp. 359–364 (1991)
Estler, W., Edmundson, K., Peggs, G., Parker, D.: Large-scale metrology-an update. CIRP Ann. Manuf. Tech. 51(2), 587–609 (2002)
ASTM E2544-10: Standard Terminology for Three-Dimensional (3D) Imaging Systems (2010)
VDI/VDE 2634 Part 2: Optical 3-D Measuring Systems, Optical Systems Based on Area Scanning (2002)
International Standards Organization (ISO), ISO Guide 98-3, Uncertainty of Measurement Part 3: Guide to the Expression of Uncertainty in Measurement (GUM: 1995). (1995)
JCGM/WG-2, JCGM 200:2008: International Vocabulary of Metrology, Basic and General Concepts and Associated Terms (VIM) (2008)
Beraldin, J.-A., MacKinnon, D., Cournoyer, L.: Metrological characterization of 3D imaging systems: progress report on standards developments. In: International Congress of Metrology, p. 13003 (2015)
Fisher, R.B., Naidu, D.K.: A comparison of algorithms for subpixel peak detection. Image Technology. Advances in Image Processing, Multimedia and Machine Vision, pp. 385–404. Springer, Berlin (1996)
Naidu, K., Fisher, R.B.: A comparative analysis of algorithms for determining the peak position of a stripe to sub-pixel accuracy. In: Proceedings of British Machine Vision Conference, (1991)
Trobina, M.: Error model of a coded-light range sensor, Tech. Rep. BIWI-TR-164, ETH-Zentrum, (1995)
Rashidizad, H., Rahimi, A.: Effect of scanning depth of field on the measurement noises of developed fringe projection 3d scanning system. Appl. Mech. Mater. 624, 322–326 (2014)
Baribeau, R., Rioux, M.: Influence of speckle on laser range finders. Appl. Opt. 30(20), 2873–2878 (1991)
Dorsch, R.G., Häusler, G., Herrmann, J.M.: Laser triangulation: fundamental uncertainty in distance measurement. Appl. Opt. 33(7), 1306–1314 (1994)
Leach, R. (ed.): Optical Measurement of Surface Topography. Springer, Berlin (2011)
Rioux, M., Taylor, D., Duggan, M.: Design of a large depth of view three-dimensional camera for robot vision. Opt. Eng. 26(12), 1245–1250 (1987)
Blais, F.: Review of 20 years of range sensor development. J. Electron. Imaging 13(1), 231–243 (2004)
Ohyama, N., Kinoshita, S., Cornejo-Rodriguez, A., Tsujiuchi, J.: Accuracy of phase determination with unequal reference phase shift. J. Opt. Soc. Am. A 12(9), 1997–2008 (1995)
Guidi, G., Cioci, A., Atzeni, C., Beraldin, J.-A.: Accuracy verification and enhancement in 3D modeling: application to Donatello's Maddalena. In: Fourth International Conference on 3-D Digital Imaging and Modeling (3DIM 2003), pp. 334–341 (2003)
Smith, W.J.: Modern Optical Engineering, 3rd edn. McGraw-Hill, New York (2000)
Drouin, M.-A., Jodoin, P.-M., Premont, J.: Camera-projector matching using unstructured video. Mach. Vis. Appl. 23(5), 887–902 (2012)
Drouin, M.-A., Blais, F., Godin, G.: High-resolution projector for 3D imaging. In: 2nd International Conference on 3D Vision (3DV 2014), Tokyo, Japan, December 8–11, pp. 337–344 (2014)
Pears, N., Liu, Y., Bunting, P.: 3D Imaging, Analysis and Applications. Springer, London (2012)
Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2004)
Bumbaca, F., Blais, F.: Real-time correction of three-dimensional non-linearities for a laser range finder. Opt. Eng. 25(4), 254561 (1986)
Drouin, MA., Blais, F., Picard, M. et al. Characterizing the impact of optically induced blurring of a high-resolution phase-shift 3D scanner. Machine Vision and Applications 28, 903–915 (2017). https://doi.org/10.1007/s00138-017-0866-y