Abstract
This chapter covers extensively the methods used to determine the flow velocity from recordings of particle images. After an introduction to the concepts of spatial correlation and Fourier methods, an overview of the different PIV evaluation methods is given. Ample discussion is devoted to explaining the details of the discrete spatial correlation operator used for PIV interrogation. The main features associated with the FFT implementation (aliasing, displacement range limit and bias error) are discussed. Methods that enhance the correlation signal, either in terms of robustness or of accuracy, are surveyed. The discussion of ensemble correlation techniques and the use of single-pixel correlation in micro-PIV and macroscopic experiments is a novel addition to the present edition. A detailed description is given of the standard image interrogation based on multigrid image deformation, where the advantages in the treatment of complex flows are discussed as well as the issues in terms of resolution and numerical stability. Another new feature introduced in this chapter is the discussion of recent developments of algorithms for PIV time series as obtained by high-speed PIV systems. Namely, the algorithms to perform Multi-Frame PIV, Pyramid Correlation and Fluid Trajectory Correlation and Ensemble Evaluation are treated. Furthermore, a new section that discusses the methods used for individual particle tracking is introduced. The discussion describes the working principles of PTV for planar PIV. The potential of the latter techniques in terms of spatial resolution, as well as their limits of applicability in terms of image density, are presented.
An overview of the Digital Content to this chapter can be found at [DC5.1].
This chapter treats the fundamental techniques for the evaluation of PIV recordings. In order to extract the displacement information from a PIV recording some sort of interrogation scheme is required. Initially, this interrogation was performed manually on selected images with relatively sparse seeding which allowed the tracking of individual particles [2, 20]. With computers and image processing becoming more commonplace in the laboratory environment it became possible to automate the interrogation process of the particle track images [25, 27, 28, 98]. The application of tracking methods, that is to follow the images of an individual tracer particle from exposure to exposure, is best practicable in the low image density case, see Fig. 1.10a. The low image density case often appears in strongly three-dimensional high-speed flows (e.g. turbomachinery) where it is not possible to provide enough tracer particles or in two phase flows, where the transport of the particles themselves is investigated. Additionally, the Lagrangian motion of a fluid element can be determined by applying tracking methods [14, 19].
In principle, however, a high data density is desired on the PIV vector fields, especially if strong spatial flow variations need to be resolved or for the comparison of experimental data with the results of numerical calculations. This demand requires a medium concentration of the images of the tracer particles in the PIV recording. Medium image concentration is characterized by the fact that matching pairs of particle images – due to subsequent illuminations – cannot be detected by visual inspection of the PIV recordings, see Fig. 1.10b. Hence, statistical approaches, which will be described in the next sections, were developed. After a statistical evaluation has been performed first, tracking algorithms can be applied additionally in order to achieve sub-window spatial resolution of the measurement, which is known as super resolution PIV [45].
Tracking algorithms have continuously been improved during the past decade. Today, particle tracking is an interesting alternative to statistical PIV evaluation methods as we will see in more detail in Sect. 5.4.
Comparisons between cross-correlation methods and particle tracking techniques together with an assessment of their performance have been performed in the framework of the International PIV Challenge [40, 88,89,90].
5.1 Correlation and Fourier Transform
5.1.1 Correlation
The main objective of the statistical evaluation of PIV recordings at medium image density is to determine the displacement between two patterns of randomly distributed particle images, which are stored as a 2D distribution of gray levels. Looking around in other areas of metrology (e.g. radars), it is common practice in signal analysis to determine, for example, the shift in time between two (nearly) identical time signals by means of correlation techniques. Details about the mathematical principles of the correlation technique, the basic relations for correlated and uncorrelated signals and the application of correlation techniques in the investigation of time signals can be found in many textbooks [8, 63, 64]. The theory of correlation is easily extended from the one dimensional (1D time signal) to the two- and three-dimensional (gray level spatial distribution) case. In Chap. 4 the use of auto- and cross-correlation techniques for statistical PIV evaluation has already been explained. Analogously to spectral time signal representations, for a 2D spatial signal I(x, y) the power spectrum \(| \hat{I}(r_x,r_y)|^2\) can be determined where \(r_x,r_y\) are spatial frequencies in orthogonal directions. The basic theorems for correlation and Fourier transform known from the theory of time signals are also valid for the 2D case (with appropriate modifications) [10].
For the calculation of the auto-correlation function two possibilities exist: either direct numerical calculation or indirect calculation (numerical or optical) using the Wiener-Khinchin theorem [8, 10]. This theorem states that the auto-correlation function \(R_{\text {I}}\) and the power spectrum \(| \hat{I}(r_x,r_y)|^2\) of an intensity field I(x, y) are Fourier transforms of each other.
The direct numerical calculation is computationally more intensive and has barely been used for 2D PIV. It becomes a viable approach for recordings with a sparse distribution of particles, such as those encountered in 3D PIV (see Sect. 9.3).
Figure 5.1 illustrates that the auto-correlation function can either be determined directly in the spatial domain (upper half of the figure) or indirectly by Fourier transform FT (left hand side), multiplication, that is the calculation of the squared modulus, in the frequency plane (lower half of the figure), and by inverse Fourier transform FT\(^{-1}\) (right hand side).
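The equivalence sketched in Fig. 5.1 is easy to verify numerically. The following sketch (Python with NumPy; the periodic boundary treatment implicit in the discrete Fourier transform is assumed, and the sample size is arbitrary) compares one value of the auto-correlation function obtained via the Wiener-Khinchin route with a direct spatial computation:

```python
import numpy as np

rng = np.random.default_rng(0)
I = rng.random((32, 32))                    # synthetic gray-value sample

# indirect route: inverse FFT of the power spectrum (Wiener-Khinchin)
R_fft = np.fft.ifft2(np.abs(np.fft.fft2(I)) ** 2).real

# direct route: circular spatial auto-correlation for one shift (dy, dx)
dy, dx = 5, 3
R_direct = np.sum(I * np.roll(I, shift=(-dy, -dx), axis=(0, 1)))

print(np.isclose(R_fft[dy, dx], R_direct))  # True (up to rounding)
```

Both routes agree to floating-point precision; the FFT route computes the values for all shifts at once, which is the basis of its efficiency.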
5.1.2 Optical Fourier Transform
As already mentioned in Sect. 2.5 the far field diffraction pattern of an aperture transmissivity distribution is represented by its Fourier transform [26, 47, 68]. A lens can be used to transfer the image from the far field close to the aperture. For a mathematical derivation of this result some assumptions have to be made, which are described by the Fraunhofer approximation. These assumptions (large distance between object and image plane, phase factors) can be fulfilled in practical optical setups for Fourier transforms.
Figure 5.2 shows two different configurations for such optical Fourier processors. In the arrangement on the left hand side the object, which would consist of a transparency to be Fourier transformed (e.g. the photographic PIV recording), is placed in front of the so-called Fourier lens (usually at a distance f in front of it, that is, in its front focal plane). In the second setup (right hand side) the object is placed behind the lens. As derived in the book of Goodman [26] both arrangements differ only by the phase factors of the complex spectrum and a scale factor. Light sensors (photographic film as well as CCD sensors) are only sensitive to the light intensity. The intensity corresponds to the squared modulus of the complex distribution of the electromagnetic field; hence phase differences in the light wave cannot be detected. Therefore, both arrangements shown in Fig. 5.2 can be used for PIV evaluation. The result of the optical Fourier transform (OFT, dashed line in Fig. 5.1) is the power spectrum of the gray value distribution of the transparency.
In the following this will be illustrated for the case of a pair of two particle images. White (transparent) images of a tracer particle on a black (opaque) background will form a double aperture on the photographic PIV recording. With good lens systems the diameter of an image of a tracer particle on the recording is of the order of 20 to \(30\,\upmu \mathrm{m}\). The spacing between the two images of a tracer particle should be approximately 150–250 \(\upmu \mathrm{m}\), in order to obtain optimum conditions for optical evaluation (compare Sect. 3.2). Figure 5.3 shows a cross-sectional cut through the diffraction pattern of a double aperture (parameters are similar to those of the PIV experiment). The figure at the left side shows several peaks of the light intensity distribution under an envelope. The envelope represents the diffraction pattern of a single aperture with the same diameter (i.e. the Airy pattern, see Sect. 2.5). In the 2D presentation the intensity distribution will extend in the vertical direction, thus forming a fringe pattern, that is the Young’s fringes. The fringes are oriented normal to the direction of the displacement of the apertures (tracer images). The spacing between the fringes is inversely proportional to the displacement of the apertures (tracer images). If the distance between the apertures (tracer images) is decreased, the distance between the fringes will increase inversely. This is illustrated in the center of Fig. 5.3, where the distance between the two apertures is only half that of the example on the left side. It can be seen that the distance between the fringes is increased by a factor of two. The same inverse relation, which is due to the scaling theorem of the Fourier transform, is valid for the envelope of the diffraction pattern: if the diameter of the aperture (particle images) decreases, the extension of the Airy pattern will increase inversely (see Fig. 5.3, right side).
As a consequence, more fringes can be detected in those fringe patterns which are generated by smaller apertures (particle images). This is one reason to explain why small and well focused particle images will increase the quality and detection probability in the evaluation of PIV recordings. Due to another property of the Fourier transform, that is the shift theorem, the characteristic shape of the intensity pattern does not change if the position of the particle image pairs is changed inside the interrogation spot. Increasing the number of particle image pairs also does not change the Young’s fringe pattern significantly. Of course this is not true for the case of just two image pairs: two fringe systems of equal intensity will overlap, allowing no unambiguous evaluation.
5.1.3 Digital Fourier Transform
The digital Fourier transform is the basic tool of modern signal and image processing. A number of textbooks describe the details [8, 10, 36, 109]. The breakthrough of the digital Fourier transform is due to the development of fast digital computers and to the development of efficient algorithms for its calculation (Fast Fourier Transformation, FFT) [8, 10, 12, 109]. Those aspects of the digital Fourier transform relevant for the understanding of digital PIV evaluation will be described in Sect. 5.3.
5.2 Overview of PIV Evaluation Methods
In the following the different methods for the evaluation of PIV recordings by means of correlation and Fourier techniques will be summarized.
Figure 5.4 presents a flow chart of the fully digital auto-correlation method, which can be implemented in a straight-forward manner following the equations given in Chap. 4. The PIV recording is sampled with comparatively small interrogation windows (typically 30–250 samples in each direction). For each window the auto-correlation function is calculated and the position of the displacement peak in the correlation plane is determined. The calculation of the auto-correlation function is carried out either in the spatial domain (upper part of Fig. 5.1) or – in most cases – through the use of FFT algorithms.
If the PIV recording system allows the employment of the double frame/single exposure recording technique (see Fig. 3.2) the evaluation of the PIV recordings is performed by cross-correlation (Fig. 5.5). In this case, the cross-correlation between two interrogation windows sampled from the two recordings is calculated. As will be explained later in Sect. 5.3, it is advantageous to offset both these samples according to the mean displacement of the tracer particles between the two illuminations. This reduces the in-plane loss of correlation and therefore increases the correlation peak strength. The calculation of the cross-correlation function is generally computed numerically by means of efficient FFT algorithms.
Single frame/double exposure recordings may also be evaluated by a cross-correlation approach instead of auto-correlation by choosing interrogation windows slightly displaced with respect to each other in order to compensate for the in-plane displacement of particle images. Depending on the different parameters, auto-correlation peaks may also appear in the correlation plane in addition to the cross-correlation peak.
With computer memory and computation speed being limited at the beginning of the 1980s, PIV work was strongly promoted by the existence of optical evaluation methods. The most widely used method was the Young’s fringes method [DC5.2], which in fact is an optical-digital method, employing optical as well as digital Fourier transforms for the calculation of the correlation function.
In the next section the most commonly used and very flexible digital evaluation methods will be discussed in more detail.
5.3 PIV Evaluation
With the wide-spread introduction of digital imaging and improved computing capabilities the optical evaluation approaches used for the analysis of photographic PIV recordings quickly became obsolete. Initially a desktop slide scanner for the digitization of the photographic recording replaced the complex opto-mechanical interrogation assemblies with computer-based interrogation algorithms. Then rapid advances in electronic imaging further allowed for a replacement of the rather cumbersome photographic recording process. In the following we describe the necessary steps in the fully digital analysis of PIV recordings using statistical methods. Initially, the focus is on the analysis of single exposed image pairs, that is single exposure/double frame PIV, by means of cross-correlation. The analysis of multiple exposure/single frame PIV recordings can be viewed as a special case of the former.
5.3.1 Discrete Spatial Correlation in PIV Evaluation
Before introducing the cross-correlation method in the evaluation of a PIV image, the task at hand should be defined from the point of view of linear signal or image processing. First of all let us assume we are given a pair of images containing particle images as recorded from a light sheet in a traditional PIV recording geometry. The particles are illuminated stroboscopically so that they do not produce streaks in the images. The second image is recorded a short time later during which the particles will have moved according to the underlying flow (for the time being ignoring effects such as particle lag, three-dimensional motion, etc.). Given this pair of images, the most we can hope for is to measure the straight-line displacement of the particle images since the curvature or acceleration information cannot be obtained from a single image pair. Further, the seeding density is generally too high and too homogeneous for discrete particles to be matched up individually. However, the spatial translation of groups of particles can be observed. The image pair can yield a field of linear displacement vectors where each vector is formed by analyzing the movement of localized groups of particles. In practice, this is accomplished by extracting small samples or interrogation windows and analyzing them statistically (Fig. 5.6).
From a signal (image) processing point of view, the first image may be considered the input to a system whose output produces the second image of the pair (Fig. 5.7). The system’s transfer function, \(\varvec{H}\), converts the input image I to the output image \(I'\) and is comprised of the displacement function \(\varvec{d}\) and an additive noise process, N. The function of interest is a shift by the vector \(\mathbf {d}\) as it is responsible for displacing the particle images from one image to the next. This function can be described, for instance, by a convolution with \(\delta ({\varvec{x}}-{\varvec{d}})\). The additive noise process, N, in Fig. 5.7 models effects due to recording noise and three-dimensional flow among other things. If both \(\varvec{d}\) and N are known, it should then be possible to use them as transfer functions for the input image I to produce the output image \(I'\). With both images I and \(I'\) known the aim is to estimate the displacement field \(\varvec{d}\) while excluding the effects of the noise process N. The fact that the signals (i.e. images) are not continuous – the dark background cannot provide any displacement information – makes it necessary to estimate the displacement function \(\varvec{d}\) using a statistical approach based on localized interrogation windows (or samples).
One possible scheme to recover the local displacement function would be to deconvolve the image pair. In principle this can be accomplished by dividing the respective Fourier transforms by each other. This method works when the noise in the signals is insignificant. However, the noise associated with realistic recording conditions quickly degrades the data yield. Also the signal peak is generally too sharp to allow for a reliable sub-pixel estimation of the displacement.
Rather than estimating the displacement function \(\varvec{d}\) analytically, the method of choice is to locally find the best match between the images in a statistical sense. This is accomplished through the use of the discrete cross-correlation function, whose integral formulation was already described in Chap. 4:

$$R_{\text{II}}(x,y) = \sum _{i=-K}^{K} \sum _{j=-L}^{L} I(i,j)\, I'(i+x, j+y) \qquad \text{(5.1)}$$
The variables I and \(I'\) are the samples (e.g. intensity values) as extracted from the images where \(I'\) can be taken larger than the template I. Essentially the template I is linearly ‘shifted’ around in the sample \(I'\) without extending over edges of \(I'\). For each choice of sample shift (x, y), the sum of the products of all overlapping pixel intensities produces one cross-correlation value \(R_{{\text {II}}}(x,y)\). By applying this operation for a range of shifts \((-M\le x\le +M, -N\le y \le +N)\), a correlation plane the size of \((2 M + 1) \times (2 N + 1)\) is formed. This is shown graphically in Fig. 5.8. For shift values at which the samples’ particle images align with each other, the sum of the products of pixel intensities will be larger than elsewhere, resulting in a high cross-correlation value \(R_{\text {II}}\) at this position (see also Fig. 5.9). Essentially the cross-correlation function statistically measures the degree of match between the two samples for a given shift. The highest value in the correlation plane can then be used as a direct estimate of the particle image displacement which will be discussed in detail in Sect. 5.3.5.
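This direct evaluation can be sketched as follows (Python with NumPy; the function name and the single-particle test images are illustrative only). The template I is shifted over the sample and the products of the overlapping pixel intensities are summed for each shift:

```python
import numpy as np

def cross_correlation_direct(I, Ip, M, N):
    """Direct evaluation of the discrete cross-correlation: template I is
    shifted over sample Ip; products outside the overlap are dropped."""
    R = np.zeros((2 * N + 1, 2 * M + 1))     # correlation plane
    for y in range(-N, N + 1):
        for x in range(-M, M + 1):
            s = 0.0
            for i in range(I.shape[0]):
                for j in range(I.shape[1]):
                    # sum only over pixels that overlap for this shift
                    if 0 <= i + y < Ip.shape[0] and 0 <= j + x < Ip.shape[1]:
                        s += I[i, j] * Ip[i + y, j + x]
            R[y + N, x + M] = s
    return R

I = np.zeros((8, 8)); I[3, 3] = 1.0          # one particle image ...
Ip = np.zeros((8, 8)); Ip[5, 4] = 1.0        # ... displaced by (y, x) = (2, 1)
R = cross_correlation_direct(I, Ip, M=3, N=3)
peak = np.unravel_index(np.argmax(R), R.shape)
print(peak[0] - 3, peak[1] - 3)  # 2 1  (the recovered displacement)
```

The quadruple loop makes the quadratic growth of the computational effort with sample size directly visible.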
Upon examination of this direct implementation of the cross-correlation function two things are obvious: first, the number of multiplications per correlation value increases in proportion to the interrogation window (or sample) size, and second, the cross-correlation method inherently recovers linear shifts only. No rotations or deformations can be recovered by this first order method. Therefore, the cross-correlation between two particle image samples will only yield the displacement vector to first order, that is, the average linear shift of the particles within the interrogation window. This means that the interrogation window size should be chosen sufficiently small such that the higher-order effects can be neglected.
The first observation concerning the quadratic increase in multiplications with sample size imposes a quite substantial computational effort. In a typical PIV interrogation the sampling windows cover of the order of several thousand pixel while the dynamic range in the displacement may be as large as \(\pm 10\) to \(\pm 20\,\text {pixel}\) which would require up to one million multiplications and summations to form only one correlation plane. Clearly, taking into account that several thousand displacement vectors can be obtained from a single PIV recording, a more efficient means of computing the correlation function is required.
5.3.1.1 Frequency Domain Based Calculation of Correlation
The alternative to calculating the cross-correlation directly using Eq. (5.1) is to take advantage of the correlation theorem which states that the cross-correlation of two functions is equivalent to a complex conjugate multiplication of their Fourier transforms:

$$R_{\text{II}} \;\Longleftrightarrow\; \hat{I}^{*} \cdot \hat{I'} \qquad \text{(5.2)}$$
where \(\hat{I}\) and \(\hat{I'}\) are the Fourier transforms of the functions I and \(I'\), respectively. In practice the Fourier transform is efficiently implemented for discrete data using the fast Fourier transform or FFT which reduces the computation from \(\text {O}[N^2]\) operations to \(\text {O}[N \log _2 N]\) operations [12, 36, 66]. The tedious two-dimensional correlation process of Eq. (5.1) can be reduced to computing two two-dimensional FFT’s on equal sized samples of the image followed by a complex-conjugate multiplication of the resulting Fourier coefficients. These are then inversely Fourier transformed to produce the actual cross-correlation plane which has the same spatial dimensions, \(N\times N\), as the two input samples. Compared to \(\text {O}[N^4]\) for the direct computation of the two-dimensional correlation the process is reduced to \(\text {O}[N^2 \log _2 N]\) operations. The computational efficiency of this implementation can be increased even further by observing the symmetry properties between real valued functions and their Fourier transform, namely the real part of the transform is symmetric: \(\text {Re}(\hat{I}_i) = \text {Re}(\hat{I}_{-i})\), while the imaginary part is antisymmetric: \(\text {Im}(\hat{I}_i) = -\text {Im}(\hat{I}_{-i})\). In practice two real-to-complex, two-dimensional FFTs and one complex-to-real inverse, two-dimensional FFT are needed, each of which require approximately half the computation time of standard FFTs (Fig. 5.10). A further increase in computation speed can of course be achieved by optimizing the FFT routines such as using lookup tables for the required data, reordering and weighting coefficients and/or fine tuning the machine level code [22, 23].
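The scheme of Fig. 5.10 can be sketched in a few lines (Python with NumPy, whose `rfft2`/`irfft2` provide exactly the real-to-complex and complex-to-real transforms mentioned above; the function name and test data are illustrative):

```python
import numpy as np

def cross_correlation_fft(I, Ip):
    """Circular cross-correlation of two equal-sized samples via two
    real-to-complex FFTs and one complex-to-real inverse FFT."""
    F = np.fft.rfft2(I)                      # real-to-complex 2D FFT
    Fp = np.fft.rfft2(Ip)
    # complex-conjugate multiplication, then inverse transform
    return np.fft.irfft2(np.conj(F) * Fp, s=I.shape)

# one particle image displaced by (dy, dx) = (3, 2):
I = np.zeros((32, 32)); I[8, 8] = 1.0
Ip = np.zeros((32, 32)); Ip[11, 10] = 1.0
R = cross_correlation_fft(I, Ip)
peak = np.unravel_index(np.argmax(R), R.shape)
print(peak[0], peak[1])  # 3 2
```

The resulting correlation plane has the same spatial dimensions as the input samples and is circular, with the consequences discussed below.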
The use of two-dimensional FFT’s for the computation of the cross-correlation plane has a number of properties whose effects have to be dealt with.
Fixed sample sizes: The FFT’s computational efficiency is mainly derived by recursively implementing a symmetry property between the even and odd coefficients of the discrete Fourier transform (the Danielson–Lanczos lemma [12, 66]). The most common FFT implementation requires the input data to have a base-2 dimension (i.e. \(16\times 16\,\text {pixel}\) or \(32\times 32\,\text {pixel}\) samples). For reasons explained below it generally is not possible to simply pad a sample with zeroes to make it a base-2 sized sample.
Periodicity of data: By definition, the Fourier transform is an integral (or sum) over a domain extending from negative infinity to positive infinity. In practice however, the integrals (or sums) are computed over finite domains which is justified by assuming the data to be periodic, that is, the signal (i.e. image sample) continually repeats itself in all directions. While for spectral estimation there exist a variety of methods to deal with the associated artifacts, such as windowing, their use in the computation of the cross-correlation will introduce systematic errors or will even hide the correlation signal in noise.
One of these methods, zero padding, which entails extending the sample size to four times the original size by filling in zeroes, will perform poorly because the data (i.e. image sample) generally consists of a nonzero (noisy) background on which the signal (i.e. particle images) is overlaid. The edge discontinuity brought about in the zero padding process contaminates the spectra of the data with high frequency noise which in turn deteriorates the cross-correlation signal. The slightly more advanced technique of FFT data windowing removes the effects of the edge discontinuity, but leads to a nonuniform weighting of the data in the correlation plane and to a bias of the recovered displacement vector. The treatment of this systematic error is described in more detail below.
Aliasing: Since the input data sets to the FFT-based correlation algorithm are assumed to be periodic, the correlation data itself is also periodic. If the data of length N contains a signal (i.e. displacements) exceeding half the sample size N / 2, then the correlation peak will be folded back into the correlation plane to appear on the opposite side. For a displacement \(d_{\text {x,true}} > N/2\), the measured value will be \(d_{\text {x,meas.}} = d_{\text {x,true}} - N\). In this case the sampling criterion (Nyquist-Shannon sampling theorem) has been violated causing the measurement to be aliased. The proper solution to this problem is to either increase the interrogation window size, or, if possible, reduce the laser pulse delay, \(\varDelta t\).
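The fold-back can be reproduced directly with a circular FFT-based correlation (NumPy sketch; a single particle image in a \(32 \times 32\) sample is used for illustration):

```python
import numpy as np

N = 32
I = np.zeros((N, N)); I[4, 4] = 1.0
Ip = np.roll(I, shift=(0, 20), axis=(0, 1))   # true shift of 20 pixel > N/2

R = np.fft.irfft2(np.conj(np.fft.rfft2(I)) * np.fft.rfft2(Ip), s=(N, N))
peak_y, peak_x = np.unravel_index(np.argmax(R), R.shape)

# peak indices beyond N/2 correspond to negative (folded-back) displacements
d_x = peak_x - N if peak_x > N // 2 else peak_x
print(d_x)  # -12, i.e. 20 - 32: the measurement is aliased
```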
Displacement range limitation: As mentioned before the sample size N limits the maximum recoverable displacement range to \(\pm N/2\). In practice however, the signal strength of the correlation peak will decrease with increasing displacements, due to the proportional decrease in possible particle matches. Earlier literature reports a value of N / 3 to be an adequate limit for the recoverability of the displacement vector [107]. A more conservative, but widely adopted limit is N / 4, sometimes referred to as the one-quarter rule [43]. However, by using iterative evaluation techniques with window-shifting techniques these requirements are only essential for the first pass and obsolete for all other passes as we will see later.
Bias error: Another side effect of the periodicity of the correlation data is that the correlation estimates are biased. With increasing shift, less data are actually correlated with each other, since the periodically continued part of the correlation template makes no contribution to the actual correlation value. Values on the edge of the correlation plane, for example, are computed from only the overlapping half of the data. Unless the correlation values are weighted to compensate for this, the displacement estimate will be biased toward lower values (Fig. 5.11). This error decreases with increasing sample size. Larger particle images, and with them wider correlation peaks, are associated with larger bias errors. The proper weighting function to account for this biasing effect will be described in Sect. 5.3.5.
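One possible form of such an overlap weighting for an \(N \times N\) circular correlation is sketched below (NumPy; this illustrates the overlap argument only, the exact weighting function is the subject of Sect. 5.3.5):

```python
import numpy as np

def bias_weight(N):
    """Per-axis overlap fraction of an N x N circular correlation;
    dividing the correlation plane by this weight compensates the
    reduced overlap at large shifts (illustrative sketch)."""
    shifts = np.abs(np.fft.fftfreq(N) * N)   # |shift| per FFT output index
    overlap = (N - shifts) / N               # fraction of data still overlapping
    return np.outer(overlap, overlap)        # separable 2D weight

W = bias_weight(32)
print(W[0, 0])  # 1.0 : zero shift, full overlap
```

Dividing the correlation plane by W restores the correct relative peak heights; note that at a shift of N/2 only half of the data overlap, so the raw correlation value there is halved.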
If all of the above points are properly handled, an FFT-based interrogation algorithm as shown in Fig. 5.10 will reliably provide the necessary correlation data from which the displacement data can be retrieved. For the reasons given above, this implementation of the cross-correlation function is sometimes referred to as circular cross-correlation compared to the linear cross-correlation of Eq. (5.1).
5.3.1.2 Calculation of the Correlation Coefficient
For a number of cases it may be useful to quantify the degree of correlation between the two image samples. The standard cross-correlation function Eq. (5.1) will yield different maximum correlation values for the same degree of matching because the function is not normalized. For instance, samples with many (or brighter) particle images will produce much higher correlation values than interrogation windows with fewer (or weaker) particle images. This makes a comparison of the degree of correlation between the individual interrogation windows impossible. The cross-correlation coefficient function normalizes the cross-correlation function Eq. (5.1) properly:

$$c_{\text{II}}(x,y) = \frac{\sum _{i,j} \left[ I(i,j) - \mu _{\text{I}} \right] \left[ I'(i+x, j+y) - \mu _{\text{I}'}(x,y) \right]}{\sqrt{\sum _{i,j} \left[ I(i,j) - \mu _{\text{I}} \right]^2 \; \sum _{i,j} \left[ I'(i+x, j+y) - \mu _{\text{I}'}(x,y) \right]^2}} \qquad \text{(5.3)}$$

where \(\mu _I\) is the average of the template, which is computed only once, while \(\mu _{{\text {I}}'}(x,y)\) is the average of \(I'\) coincident with the template I at position (x, y), which has to be computed for every position (x, y). Equation (5.3) is considerably more difficult to implement using an FFT-based approach and is usually computed directly in the spatial domain. In spite of its computational complexity, the equation does permit the samples to be of unequal size which can be very useful in matching up small groups of particles. Nevertheless a first order approximation to the proper normalization is possible if the interrogation windows are of equal size and are not zero-padded:
- Step 1: Sample the images at the desired locations and compute the mean and standard deviations of each.
- Step 2: Subtract the mean from each of the samples.
- Step 3: Compute the cross-correlation function using 2D-FFTs as displayed in Fig. 5.10.
- Step 4: Divide the cross-correlation values by the standard deviations of the original samples. Due to this normalization the resulting values will fall in the range \(-1 \le c_{\text {II}}\le 1\).
- Step 5: Proceed with the correlation peak detection taking into account all artifacts present in FFT-based cross-correlation.
5.3.2 Correlation Signal Enhancement
5.3.2.1 Image Pre-processing
The correlation signal is strongly affected by variations of the image intensity. The correlation peak is dominated by brighter particle images with weaker particle images having a reduced influence. Also the non-uniform illumination of particle image intensity, due to light-sheet non-uniformities or pulse-to-pulse variations, as well as irregular particle shape, out-of-plane motion, etc. introduce noise in the correlation plane. For this reason image enhancement prior to processing the image is oftentimes advantageous. The main goal of the applied filters is to enhance particle image contrast and to bring particle image intensities to a similar signal level such that all particle images have a similar contribution in the correlation function [85, 102, 108].
Among the image enhancement methods, background subtraction from the PIV recordings reduces the effects of laser flare and other stationary image features. This background image can either be recorded in the absence of seeding, or, if this is not possible, through computation of an average or minimum intensity image from a sufficiently large number of raw PIV recordings (at least \(20-50\)). These images can also be used to extract areas to be masked.
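A minimal sketch of this background estimation (NumPy; the function name and the toy image stack are illustrative):

```python
import numpy as np

def minimum_background(recordings):
    """Per-pixel minimum over a stack of raw recordings: stationary flare
    survives the minimum, moving particle images do not."""
    return np.min(np.asarray(recordings, dtype=float), axis=0)

stack = np.array([[[5., 2.], [3., 9.]],
                  [[4., 7.], [1., 8.]]])    # two tiny 2 x 2 "recordings"
background = minimum_background(stack)
# subtraction then yields the cleaned image: I_clean = I - background
```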
A filter-based approach to image enhancement is to high-pass filter the images such that the background variations with low spatial frequency are removed leaving the particle images unaffected. In practice this is realized by calculating a low-passed version of the original image and subtracting it from the original. Here the filter kernel width should be larger than the diameter of the particle images: \(k_{\text {smooth}} > d_\tau \).
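A possible implementation of this high-pass filtering (Python with scipy.ndimage; here a uniform box filter stands in for the low-pass stage, and the function name is illustrative):

```python
import numpy as np
from scipy import ndimage

def highpass(I, kernel_width):
    """Subtract a low-pass (here: uniform box) filtered version of the
    image; kernel_width should exceed the particle image diameter."""
    low = ndimage.uniform_filter(np.asarray(I, dtype=float), size=kernel_width)
    return I - low

I = np.full((16, 16), 10.0)   # slowly varying background ...
I[8, 8] += 5.0                # ... with one bright "particle"
H = highpass(I, kernel_width=5)
# the uniform background is removed while the particle image survives
```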
Thresholding or image binarization, possibly in combination with prior high-pass filtering, results in images where all particles have the same intensity and thus have equal contribution to the correlation function. This binarization, however, is associated with an increase in the measurement uncertainty.
The application of a narrow-width, low-pass filter may be suitable to remove high-frequency noise (e.g. camera shot noise, pixel anomalies, digitization artifacts, etc.) from the images. It also results in widening of the correlation peaks, thus allowing a better performance of the sub-pixel peak fitting algorithm (see Sect. 6.2.2). In cases where images are under-sampled (\(d_\tau < 2\)) it reduces the so-called peak-locking effect, but also increases the measurement uncertainty.
Range clipping is another method of improving the data yield. The intensity capping technique [85], which was found to be both very effective and easy to implement, relies on setting intensities exceeding a certain threshold to the threshold value. Although optimal threshold values may vary with the image content, a suitable threshold may be calculated for the entire image from the grayscale median image intensity, \(I_{\mathrm {median}}\), and its standard deviation, \(\sigma _I\): \(I_{\text {clip}} = I_{\text {median}} + n \sigma _I\). The scaling factor n is user defined and generally positive in the range \(0.5< n < 2\).
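A sketch of intensity capping under these assumptions (numpy; the default n = 1 is just one value from the stated range):

```python
import numpy as np

def intensity_cap(img, n=1.0):
    """Cap pixel intensities at I_clip = median + n * std of the image
    (intensity capping; typically 0.5 < n < 2)."""
    clip = np.median(img) + n * np.std(img)
    return np.minimum(img, clip)
```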
A similar approach to intensity capping is to perform dynamic histogram stretching, in which the intensity range of the output image is limited by upper and lower thresholds. These thresholds are calculated from the image histogram by excluding a certain percentage of pixels from the upper and lower ends of the histogram, respectively.
While the previous two methods provide contrast normalization in a global sense, the min/max filter approach suggested by Westerweel [102] also adjusts to variations of image contrast throughout the image. The method relies on computing envelopes of the local minimum and maximum intensities using a given tile size. Each pixel intensity is then stretched (normalized) using the local values of the envelopes. In order not to affect the statistics of the image the tile size should be larger than the particle image diameter, yet small enough to eliminate spatial variations in the background [102]. Sizes of \(7 \times 7\) to \(15 \times 15\) are generally suitable.
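One possible reading of the min/max filter idea, sketched with scipy's rank filters (the Gaussian smoothing of the envelopes and the small regularization constant are implementation choices of this sketch):

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter, gaussian_filter

def minmax_normalize(img, tile=7, eps=1e-6):
    """Stretch each pixel between smoothed envelopes of the local
    minimum and maximum intensity (tile x tile neighborhoods);
    eps guards against division by zero in featureless regions."""
    img = img.astype(float)
    lo = gaussian_filter(minimum_filter(img, size=tile), tile / 2)
    hi = gaussian_filter(maximum_filter(img, size=tile), tile / 2)
    return (img - lo) / np.maximum(hi - lo, eps)
```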
A very robust method of image contrast enhancement capable of dealing with non-uniform image background intensity is to normalize the image using the local mean and local standard deviation, as proposed in [74]. First a Gaussian filtered version of the image, \(M_I(x,y)\), is subtracted from the original image I(x, y), and the result is then divided by an image of the local standard deviation \(\sigma _I(x,y)\):
$$ I_{\text {n}}(x,y) = \frac{I(x,y) - M_I(x,y)}{\sigma _I(x,y)} $$
where \(\sigma _1\) and \(\sigma _2\) are user defined kernel sizes for the Gaussian filter operations with typically \(\sigma _1 < \sigma _2\).
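A sketch of this normalization, assuming Gaussian filtering with kernels \(\sigma _1\) (local mean) and \(\sigma _2\) (local standard deviation); the eps regularization is an addition of this sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_normalize(img, sigma1=4.0, sigma2=8.0, eps=1e-6):
    """I_n = (I - M_I) / sigma_I, with M_I a Gaussian local mean
    (kernel sigma1) and sigma_I a Gaussian estimate of the local
    standard deviation (kernel sigma2), typically sigma1 < sigma2."""
    img = img.astype(float)
    mean = gaussian_filter(img, sigma1)
    resid = img - mean
    var = gaussian_filter(resid ** 2, sigma2)
    return resid / np.sqrt(var + eps)
```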
Normalization by the time-average image intensity is also an effective technique to remove stationary background with time-varying intensity. By decomposing a given raw image sequence into its POD modes, Mendez et al. [56] have shown that the PIV particle pattern can be recovered by filtering out a few of the first POD modes, thus rejecting unwanted features such as wall reflections, noise and non-uniform illumination.
When PIV time series are available, unwanted light reflections can be removed via a high-pass filter in the frequency domain [83]. A robust approach to suppress background reflections is based on variants of the local normalization method [55]. The specific problem of near-wall measurements contaminated by surface reflections requires dedicated approaches, including for instance the treatment of the solid object region as discussed in [111].
When applying any of the previously described contrast enhancement methods, it should be remembered that modifications of the image intensities may also affect the image statistics, which in turn can result in increased measurement uncertainties. This has to be balanced against the increase in data yield; the selective application of contrast-enhancing filters in areas of low data yield would be the logical consequence. It should also be made clear that strong reflections near walls can only be alleviated through processing as long as the image is not overexposed (saturated) in these areas. Care should be taken to minimize these reflections while acquiring the PIV image data.
5.3.2.2 Phase-Only Correlation
Improvement of the correlation signal may be achieved through adequate filters applied in the spectral domain (Fig. 5.12). Since most PIV correlation implementations rely on FFT based processing, spectral filtering is easily accomplished with very little computational overhead. A processing technique proposed by Wernet [101] called symmetric phase only filtering (SPOF) is based on phase-only filtering techniques which are commonly found in optical correlator systems. SPOF has been shown to improve the signal-to-noise ratio in PIV cross-correlation. In practice these filters also normalize the contribution of all sampled particle images, thus providing contrast normalization. In addition the influence of wall reflections (streaks or lines) and other undesired artifacts can be reduced. According to [85] SPOF yields more accurate results in the presence of DC-background noise, but is not as well suited as the intensity capping technique in reducing the displacement bias influence of bright spots with high spatial frequencies.
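A hedged sketch of such a spectrally filtered cross-correlation: here the cross-spectrum is normalized by \(\sqrt{|F||G|}\), i.e. the phase-only filter is split symmetrically between the two spectra, following the SPOF idea (the eps regularization is a choice of this sketch):

```python
import numpy as np

def spof_correlate(f, g, eps=1e-10):
    """Cross-correlate f against g with a symmetric phase-only filter:
    the cross-spectrum F.conj(G) is normalized by sqrt(|F||G|), a
    compromise between plain correlation and pure phase correlation.
    The peak index gives the shift of f relative to g, modulo the
    array size."""
    F = np.fft.rfft2(f)
    G = np.fft.rfft2(g)
    spec = F * np.conj(G)
    spec /= np.sqrt(np.abs(F) * np.abs(G)) + eps
    return np.fft.irfft2(spec, s=f.shape)
```

Because the filter whitens the spectrum, the correlation peak becomes sharper and the contribution of individual bright particle images is normalized.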
5.3.2.3 Correlation-Based Correction
Another form of improving the signal-to-noise ratio in the correlation plane (e.g. displacement peak detection rate) was proposed by Hart [32]. The technique involves the multiplication of at least two correlation planes calculated from samples located close-by, typically offset by one quarter to half the correlation sample width. Provided that the displacement gradient between the samples is not significant, the multiplication of the correlation planes will enhance the main signal correlation peak while removing noise peaks that generally have a more random occurrence. Correlation plane averaging, that is, summation of the correlation planes instead of multiplication, is more robust when the number of combined correlation planes increases. The method, however, is less effective in removing spurious correlation peaks [53].
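The multiplication of neighboring correlation planes can be sketched as follows (numpy; the window placement and the non-negativity clipping are illustrative choices of this sketch):

```python
import numpy as np

def cross_corr(a, b):
    """Circular FFT cross-correlation; the peak index gives the
    integer shift of pattern `a` relative to `b` (modulo the
    window size)."""
    spec = np.fft.rfft2(a) * np.conj(np.fft.rfft2(b))
    return np.fft.irfft2(spec, s=a.shape)

def hart_correction(planes):
    """Element-wise product of correlation planes from neighboring,
    overlapping samples: random noise peaks rarely coincide, so the
    true displacement peak survives the multiplication."""
    out = np.ones_like(planes[0])
    for p in planes:
        out *= np.clip(p, 0.0, None)  # guard against negative lobes
    return out
```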
5.3.2.4 Ensemble Correlation Techniques
While the previous method is applied within a given PIV image pair, it can also be applied to a sequence of images. This PIV processing approach, also known as ensemble correlation or correlation averaging, was developed in the framework of micro-PIV applications in an effort to reduce the influence of Brownian motion, which introduces significant noise in data obtained from a single PIV recording. Rather than obtaining displacement data for each individual image pair, the technique relies on averaging the correlation planes obtained from a sequence of images. With increasing frame count a single correlation peak accumulates in the averaged correlation plane, reflecting the mean displacement of the flow [42, 54, 104]. Although computationally very efficient, the main drawback of this approach is that all information pertaining to the unsteadiness of the flow is lost (e.g. no RMS values). Its use with conventional (macroscopic) PIV recordings has been verified by the authors [38, 39]. Among the main benefits of this method are the fast calculation of the mean flow and the potentially high spatial resolution. Since this processing is very fast, it has potential as a quasi-online diagnostic tool. To demonstrate the effectiveness of the ensemble correlation technique, Meinhart et al. compared three different averaging algorithms applied to a series of images acquired from a steady flow through a microchannel [54]. The signal-to-noise ratio for measurements generated from a single pair of images was low due to an average of only 2.5 particle images in each \(16 \times 64\,\text {pixel}\) interrogation window. As a result, the velocity measurements were noisy and approximately 20% appeared to be erroneous. Three types of average are compared:
- Image averaging: a time average of the first-exposure image series and one of the second-exposure image series are produced and then correlated (Fig. 5.13). This approach is very efficient and effective when working with low image density (LID) recordings. A variant of this technique consists of producing a maximum-intensity image from a given series of recordings (image overlapping); an example is given in Fig. 5.14. Especially in \(\mu \)PIV, low image density recordings are otherwise evaluated with particle-tracking algorithms, whereby each velocity vector is determined from only one particle, and hence the accuracy and reliability of the technique are limited. In addition, interpolation procedures are necessary to obtain velocity vectors on the desired regular grid from the randomly distributed particle-tracking data (Fig. 5.15), which brings additional uncertainty to the final results. The ensemble image permits evaluation of the flow at a higher resolution than that allowed by cross-correlation analysis of individual image pairs. The underlying hypothesis for this approach is that the flow is stationary and laminar: a condition often met in microfluidics, but rarely applicable to macroscopic flows.
- Correlation field averaging: the correlation function from each image pair is calculated and averaged along the sequence of recordings. This method is computationally more demanding than the image averaging technique. The benefit is that it is effective at both low and high levels of seeding density (Fig. 5.16).
- Velocity vector averaging: the velocity measurement is calculated for each image pair and then averaged along the sequence.
The performance comparison based on synthetic images is illustrated in Fig. 5.17. The fraction of valid mean velocity vectors is displayed as a function of the number of image pairs considered for the analysis. Clearly the combination of even a few recordings is very beneficial to the yield of valid vectors. The ensemble average correlation technique shows the fastest rise towards a totality of valid vectors. Averaging the velocity vectors gives a similar result, however with slower convergence. Finally, averaging the images and then calculating the correlation shows in this case a similar trend. It should be kept in mind that image averaging reaches an optimum, beyond which the number of valid vectors declines as a result of the drop in image contrast over long averages. The rms uncertainty of these three techniques indicates again that cross-correlation averaging is more accurate than vector averaging, followed by image averaging. An important advantage of correlation averaging and image averaging is that the interrogation window can be reduced when the number of recordings is increased. Therefore correlation averaging and image averaging offer a higher spatial resolution compared to vector averaging, where the interrogation window needs to contain a minimum number of particle image pairs in each recording.
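Correlation averaging over a sequence can be sketched as follows (numpy FFT correlation; the use of circular correlation without windowing is a simplification of this sketch):

```python
import numpy as np

def ensemble_correlation(pairs):
    """Average the correlation planes of a sequence of image pairs
    (correlation averaging): the mean-displacement peak accumulates
    while random coincidence peaks average out."""
    acc = None
    for a, b in pairs:
        spec = np.fft.rfft2(b) * np.conj(np.fft.rfft2(a))
        plane = np.fft.irfft2(spec, s=a.shape)
        acc = plane if acc is None else acc + plane
    return acc / len(pairs)
```

Even with only a few particles per window (where a single pair would yield an unreliable peak), the averaged plane produces a clear maximum at the mean displacement.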
5.3.2.5 Single-Pixel Ensemble-Correlation
Following the discussion in Sect. 5.3.1, the discrete correlation can be performed between a small template (interrogation template or window) and a larger region of the second exposure in which it is searched for (search window). The limiting case is an interrogation template consisting of a single pixel. In this case the interrogation of a single pair of images cannot yield any reliable information, as the multiplication of the search window by a single value merely returns a scaled replica of it. When the operation is repeated over a large number of recordings (typically \(10^4\)–\(10^5\)), an ensemble correlation signal emerges with a distinct peak and an acceptable signal-to-noise ratio.
This evaluation method was first applied by Westerweel et al. [104] for stationary laminar flows in micro-fluidics and by Billy et al. [9] for the analysis of periodic laminar flows. Later, the approach was extended by Kähler and co-workers for the analysis of macroscopic laminar, transitional, and turbulent flows [42], and for the analysis of compressible flows at large Mach numbers [38, 80]. Furthermore, the technique was extended for the analysis of stereoscopic PIV images by Scholz & Kähler [81]. The product of the pixel intensity in the first frame and the intensity distribution over a search region in the second frame is the instantaneous contribution to the ensemble correlation signal. Applications of single-pixel correlation both in micro-PIV and in aerodynamic flows (combined with long-range microscopy) have demonstrated the ability to measure the time-averaged flow velocity at unprecedented spatial resolution (on the order of a micron or less). Further discussion is given in Chap. 10.
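A sketch of single-pixel ensemble correlation for one pixel and a square search region (numpy; the normalization by the number of pairs and the region shape are illustrative choices):

```python
import numpy as np

def single_pixel_correlation(pairs, px, py, radius):
    """Ensemble correlation for an interrogation 'window' that has
    shrunk to the single pixel (py, px): accumulate, over many image
    pairs, the product of that pixel's first-exposure intensity with
    the second-exposure intensities in a (2*radius+1)^2 search region."""
    n = 2 * radius + 1
    acc = np.zeros((n, n))
    for a, b in pairs:
        acc += a[py, px] * b[py - radius:py + radius + 1,
                             px - radius:px + radius + 1]
    return acc / len(pairs)
```

The peak offset from the center of the returned plane estimates the mean displacement at that pixel; in practice \(10^4\)–\(10^5\) recordings are needed, far more than in the small synthetic check below.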
Figure 5.18 summarizes the input and output expected by applying the techniques described above. Figure 5.19 illustrates schematically the properties of the correlation signal resulting from single pair analysis, ensemble cross-correlation and single-pixel ensemble correlation. The latter typically requires two to three orders of magnitude more samples to achieve the same signal-to-noise ratio as ensemble correlation when, for instance, a window of \(32\times 32\) pixels is adopted. The position of the maximum in the ensemble correlation map captures only the mean velocity vector. It is possible to extract the velocity probability density function from the analysis of the shape of the correlation peak, if a sufficiently large number of PIV image pairs is available [1, 3, 38, 80, 87, 106].
The correlation function can be regarded as the convolution of the velocity probability density function (pdf) and the image auto-correlation function. De-convolving the correlation signal from the auto-correlation function gives an estimate of the velocity pdf and allows for the estimation of the Reynolds stresses at high spatial resolution, when the single-pixel ensemble correlation is used.
Figure 5.20 shows an example from experiments in a transonic backward-facing step flow where single-pixel correlation is used. The very steep velocity profile is better captured with the single-pixel analysis.
5.3.3 Evaluation of Doubly Exposed PIV Images
Although the current technology allows PIV recording in single exposure/multiple frame mode, in some experiments multiple exposed particle images may still be utilized. This is especially the case when photographic recordings with high spatial resolution or digital SLR cameras are used. In the early days of PIV, optical techniques for extracting the displacement information from the photographs were utilized. However, desktop slide scanners also make it possible to digitize the photographic negatives and thus enable a purely digital evaluation. Analysis of double-exposed recordings also arises when frame separation is no longer possible for digital cameras, for instance, for the combination of very high-speed flows with high image magnification.
Essentially the same algorithms utilized in the digital evaluation of PIV image pairs described before can be used with minor modifications to extract the displacement field from multiple exposed recordings. The major difference between the evaluation modes arises from the fact that all information is contained on a single frame – in the trivial case a single sample is extracted from the image for the displacement estimation (Fig. 5.21, case I). From this sample, the auto-correlation function is computed by the same FFT method described earlier. In fact, the auto-correlation can be considered as a special case of the cross-correlation where both samples are identical. Unlike the cross-correlation function computed from different samples the auto-correlation function will always have a self-correlation peak located at the origin (see also the mathematical description in Sect. 4.5). Located symmetrically around the origin, two peaks with less than one fourth the intensity describe the mean displacement of the particle images in the interrogation area. The two peaks arise as a direct consequence of the directional ambiguity in the double (or multiple) exposed/single frame recording method.
To extract the displacement information in the auto-correlation function the peak detection algorithm has to ignore the self-correlation peak, \(R_{\text {P}}\), located at the origin, and concentrate on the two displacement peaks, \(R_{{\text {D}}^{+}}\) and \(R_{{\text {D}}^{-}}\). If a preferential displacement direction exists, either from the nature of the flow or through the application of displacement biasing methods (e.g. image shifting), then the general search area for the peak detection can be predefined. Alternatively a given number of peak locations can be saved from which the correct displacement information can be extracted using a global histogram operator (Sect. 7.15).
The digital evaluation of multiple exposed PIV recordings can be significantly improved by sampling the image at two positions which are offset with respect to each other according to the mean displacement vector. This offers the advantage of increasing the number of paired particle images while decreasing the number of unpaired particle images. This minimization of the in-plane loss-of-pairs increases the signal-to-noise ratio, and hence the detection of the principal displacement peak \(R_{{\text {D}}^{+}}\). However, the interrogation window offset also shifts the location of the self-correlation peak, \(R_{\text {P}}\), away from the origin as illustrated in Fig. 5.21 (Case II – Case V).
The use of FFTs for the calculation of the correlation plane introduces a few additional aliasing artifacts that have to be dealt with. As the offset of the interrogation window is increased, first the negative correlation peak, \(R_{{\text {D}}^{-}}\), and then the self-correlation peak, \(R_{\text {P}}\), will be aliased, that is, folded back into the correlation plane (Fig. 5.21 Case III – Case V). In practice, detection of the two strongest correlation peaks by the procedure described in Sect. 5.3.5.1 is generally sufficient to recover both the positive displacement peak, \(R_{{\text {D}}^{+}}\), and the self-correlation, \(R_{\text {P}}\). The algorithm can be designed to automatically detect the self-correlation peak because it generally falls within a one pixel radius of the interrogation window offset vector.
5.3.4 Advanced Digital Interrogation Techniques
The transition of PIV from the analog (photographic) recording to digital imaging along with improved computing resources prompted significant improvements of interrogation algorithms. The various schemes can roughly be categorized into five groups:
- single pass interrogation schemes such as presented in Willert & Gharib [107]
- multiple pass interrogation schemes with integer sampling window offset [99, 103]
- coarse-to-fine interrogation schemes (resolution pyramid [33, 86, 105]) or (flow-)adaptive resolution schemes
- schemes relying on the deformation of the interrogation samples according to the local velocity gradient [76]
- super-resolution schemes and single particle tracking [7, 45, 91]
Especially the combination of grid-refining schemes with image deformation has recently found widespread use, as it combines significantly improved data yield with higher accuracy compared to first-order schemes (rigid offset of the interrogation sample). The following sections give a brief overview of each of these schemes.
5.3.4.1 Multiple Pass Interrogation
The data yield in the interrogation process can be significantly increased by using a window offset equal to the local integer displacement in a second interrogation pass [103]. By offsetting the interrogation windows according to the mean displacement, the fraction of matched to unmatched particle images is increased, thereby increasing the signal-to-noise ratio of the correlation peak (see Sect. 6.2.4). Also, the measurement noise or uncertainty in the displacement, \(\varepsilon \), reduces significantly when the particle image displacement is less than half a pixel (i.e. \(|\varvec{d}| < 0.5\,\text {pixel}\)), where it scales proportionally with the displacement [103]. The interrogation window offset can be implemented relatively easily in existing digital interrogation software for both single exposure/double frame PIV recordings and the multiple exposure/single frame PIV recordings described in the previous section. The interrogation procedure could take the following form:
- Step 1: Perform a standard digital interrogation with an interrogation window offset close to the mean displacement in the data.
- Step 2: Scan the data for outliers using a predefined validation criterion as described in Sect. 7.1. Replace outlier data by interpolating from the valid neighbors.
- Step 3: Use the displacement estimates to adjust the interrogation window offset locally to the nearest integer.
- Step 4: Repeat the interrogation until the integer offset vectors converge to \(\pm 1\,\text {pixel}\). Typically three passes are required.
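The steps above can be sketched for a single interrogation window as follows (numpy; this is a forward-difference variant without the neighbor validation of Step 2, so the function names and simplifications are illustrative):

```python
import numpy as np

def correlate(a, b):
    """Integer displacement of the particle pattern from window `a`
    (first exposure) to window `b` (second exposure), via circular
    FFT cross-correlation, wrapped to the range (-N/2, N/2]."""
    spec = np.fft.rfft2(b) * np.conj(np.fft.rfft2(a))
    plane = np.fft.irfft2(spec, s=a.shape)
    iy, ix = np.unravel_index(np.argmax(plane), plane.shape)
    n = a.shape[0]
    wrap = lambda v: v - n if v > n // 2 else v
    return wrap(iy), wrap(ix)

def multipass(img_a, img_b, y0, x0, w=32, passes=3):
    """Multiple-pass interrogation of one window anchored at (y0, x0):
    each pass re-samples the second-exposure window offset by the
    current integer displacement estimate, increasing the fraction of
    matched particle images. Sub-pixel peak fitting is omitted here."""
    dy, dx = 0, 0
    for _ in range(passes):
        a = img_a[y0:y0 + w, x0:x0 + w]
        b = img_b[y0 + dy:y0 + dy + w, x0 + dx:x0 + dx + w]
        ddy, ddx = correlate(a, b)
        if (ddy, ddx) == (0, 0):
            break                      # offset has converged
        dy, dx = dy + ddy, dx + ddx
    return dy, dx
```

Note that in the synthetic check below the displacement exceeds one quarter of the window size, which the iterative offset recovers correctly.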
The speed of this multiple pass interrogation can be increased significantly by comparing the new integer window offset to the previous value allowing unnecessary correlation calculations to be skipped. The data yield can be further increased by limiting the correlation peak search area in the last interrogation pass.
As pointed out by Wereley & Meinhart [99] a symmetric offset of the interrogation samples with respect to the point of interrogation corresponds to a central difference interrogation which is second-order accurate in time in contrast to a simple forward differencing scheme that simply adds the offset to the interrogation point (see Fig. 5.22).
5.3.4.2 Grid Refining Schemes
The multiple pass interrogation algorithm can be further improved by using a hierarchical approach in which the sampling grid is continually refined while the interrogation window size is reduced simultaneously. This procedure, originally introduced by Soria [86] and Willert [105], has the added capability of successfully utilizing interrogation window sizes smaller than the particle image displacement, which increases the dynamic spatial range (DSR). This is especially useful in PIV recordings with both a high image density and a high dynamic range in the displacements. In such cases standard evaluation schemes cannot use small interrogation windows without losing the correlation signal due to the larger displacements (one-quarter rule, [44]). Instead, the offset of the interrogation window at subsequent iterations makes it possible to circumvent the above rule and obtain a good correlation signal even for interrogation windows smaller than the particle image displacement. However, a hierarchical grid refinement algorithm is more difficult to implement than a standard interrogation algorithm. Such an algorithm may look as follows:
- Step 1: Start with a large interrogation sample that is known to capture the full dynamic range of the displacements within the field of view by observing the one-quarter rule (p. 156).
- Step 2: Perform a standard interrogation using windows that are large enough to obey the one-quarter rule.
- Step 3: Scan for outliers and replace by interpolation. As the recovered displacements serve as estimates for the next higher resolution level, the outlier detection criterion should be more stringent than that for single-step analysis to prevent possible divergence of the new estimate from the true value [82].
- Step 4: Project the estimated displacement data onto the next higher resolution level. Use this displacement data to offset the interrogation windows with respect to each other.
- Step 5: Increment the resolution level (e.g. by halving the size of the interrogation window) and repeat steps 1 through 4 until the actual image resolution is reached.
- Step 6: Finally, perform an interrogation at the desired interrogation window size and sampling distance (without outlier removal and smoothing). By limiting the search area for the correlation peak, the final data yield may be further increased because spurious, potentially stronger correlation peaks can be excluded if they fall outside of the search domain.
In the final interrogation pass the window offset vectors have generally converged to within \(\pm 0.5\,\text {pixel}\) of the measured displacement, thereby guaranteeing that the PIV image was optimally evaluated. The choice of the final interrogation window size depends on the particle image density. Below a certain number of matched pairs in the interrogation area (typically \(\mathcal {N}_{\text {I}} < 4\)) the detection rate decreases rapidly (see Chap. 9). Figure 5.23 shows the displacement data at each step of the grid and interrogation refinement.
The processing speed may be significantly increased by down-sampling the images during the coarser interrogation passes. This can be achieved by consolidating neighboring pixels, placing the sum of an \(N\times N\) block into a single pixel (pixel binning) [105]. This allows the use of smaller interrogation samples that are evaluated much faster. In fact, a constant size of the correlation kernel can be used regardless of the image resolution (e.g. a \(4\times \) down-sampled image interrogated by a \(32 \times 32\,\text {pixel}\) sampling window corresponds to a \(128\times 128\,\text {pixel}\) sample at the initial image resolution).
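Pixel binning itself reduces to a reshape-and-sum (numpy sketch; cropping the image to a multiple of N is an implementation choice):

```python
import numpy as np

def bin_image(img, n):
    """N x N pixel binning: sum each n x n block into one pixel so
    coarse interrogation passes can run on a down-sampled image with
    a constant correlation-kernel size."""
    h, w = img.shape
    h, w = h - h % n, w - w % n           # crop to a multiple of n
    return img[:h, :w].reshape(h // n, n, w // n, n).sum(axis=(1, 3))
```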
5.3.4.3 Image Deformation Schemes
The particle image pattern displacement is measured by cross-correlation under the assumption that the motion of the particle images within the interrogation window is approximately uniform. In practice this condition is often violated, as most flows of interest produce a velocity distribution with significant velocity gradients due to shear layers and vortices. In these cases the cross-correlation peak broadens and, for large velocity differences across the window, can split into multiple peaks (Fig. 5.24).
As a result, the measurement of velocity in the presence of a large velocity gradient (e.g. in the core of a vortex, Fig. 5.23) is affected by a larger uncertainty and suffers from a higher vector rejection rate. The window deformation technique is meant to compensate for the in-plane velocity gradient and the associated peak broadening. Both can be largely reduced when the two PIV recordings are deformed according to an estimate of the velocity field. The technique can be implemented within the multi-grid method [86, 105] described before (see Sect. 5.3.4.2). The main advantage with respect to the multi-grid window shifting method is increased robustness and accuracy in shear flows such as boundary layers, vortices and turbulent flows in general. The basic principle is illustrated in Fig. 5.25, where a continuous image deformation progressively transforms the images towards two hypothetical recordings taken simultaneously at time \(t+\varDelta t/2\).
In analogy with the discrete window shift technique, this method is referred to as window deformation; however, the efficient implementation of the procedure is based on the deformation of the entire PIV recordings, which is sometimes referred to as image deformation (Fig. 5.26). The two terms therefore refer to the same concept. The image deformation technique can be summarized as follows:
- Step 1: Standard digital interrogation with an interrogation window complying with the one quarter rule (p. 156).
- Step 2: Low-pass filtering of the velocity vector field. A filter kernel equivalent to the window size is sufficient to smooth spurious fluctuations and suppress fluctuations at sub-window wavelength. Moving average filters or spatial regression with a 2nd order least squares regression are suitable choices [82].
- Step 3: Deformation of the PIV recordings according to the filtered velocity vector field with a central difference scheme. The image resampling scheme influences the accuracy of the procedure [5]. For typical PIV images with particle image diameter of 2 to \(3\,\text {pixel}\), high order schemes (cardinal interpolation, B-splines) yield better results than low order interpolators (bilinear interpolation) [95, 97].
- Step 4: Further interrogation passes on the deformed images with an interrogation window reduced in size, yet containing at least 4 image pairs.
- Step 5: Add the result of the correlation to the filtered velocity field.
- Step 6: Scan the velocity vector field for outliers and replace these by interpolation.
- Step 7: Repeat steps 2 to 6 two or three times. With each iteration the incremental change in the displacement field will decrease.
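A single predictor step of the central-difference deformation (Step 3) can be sketched with scipy's spline resampling (the use of map_coordinates and its boundary mode are choices of this sketch, not prescribed by the text):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def deform_pair(img_a, img_b, ds_y, ds_x, order=3):
    """Central-difference image deformation: each image is resampled
    half-way along the predicted per-pixel displacement field
    (ds_y, ds_x), so that matched particle images in the deformed
    pair nearly coincide. order=3 selects cubic B-spline
    interpolation in map_coordinates."""
    yy, xx = np.mgrid[0:img_a.shape[0], 0:img_a.shape[1]].astype(float)
    a = map_coordinates(img_a, [yy - ds_y / 2, xx - ds_x / 2],
                        order=order, mode='nearest')
    b = map_coordinates(img_b, [yy + ds_y / 2, xx + ds_x / 2],
                        order=order, mode='nearest')
    return a, b
```

When the predicted field matches the true displacement, the deformed images coincide up to out-of-plane motion and intensity variations, and the residual correlation peak sits at the origin.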
In analogy to the standard cross-correlation function given in Eq. (5.1), the equation used for the spatial cross-correlation of deformed images reads as follows:
$$ R_{\text {II}}(x, y) = \sum _{i} \sum _{j} \tilde{I}(i,j)\, \tilde{I}'(i+x, j+y) $$
where \(\tilde{I}(i,j)\) and \(\tilde{I}'(i,j)\) are the image intensities reconstructed after deformation using the predicted deformation field \(\varDelta \varvec{s}(\varvec{x})\) in a central difference scheme:
$$ \tilde{I}(\varvec{x}) = I\left( \varvec{x} - \frac{\varDelta \varvec{s}(\varvec{x})}{2}\right) , \qquad \tilde{I}'(\varvec{x}) = I'\left( \varvec{x} + \frac{\varDelta \varvec{s}(\varvec{x})}{2}\right) $$
The deformation field \(\varDelta \varvec{s}(\varvec{x})\) is a spatial distribution which generally is not uniform and therefore requires interpolation at each pixel in the image. Here a truncation of the Taylor series at the first order term is commonly sufficient for the reconstruction of the local displacement:
$$ \varDelta \varvec{s}(\varvec{x}) \approx \varDelta \varvec{s}(\varvec{x}_0) + \left. \frac{\partial \varDelta \varvec{s}}{\partial \varvec{x}}\right| _{\varvec{x}_0} \cdot (\varvec{x} - \varvec{x}_0) $$
Here \(\varvec{x_0}\) denotes the position of the center of the interrogation window. Since the size of the interrogation window normally exceeds the spacing of the displacement vectors (overlap factor 50–75%) the displacement distribution within the window is a piecewise linear function resulting in a higher order approximation of the velocity distribution within the window. Dedicated literature on the performance of velocity field interpolators indicates that B-splines are an optimal choice in terms of accuracy and computational cost [4].
When steps 2–6 are repeated a number of times, the distance between particle image pairs is minimized and \(\tilde{I}(i,j)\) and \(\tilde{I}'(i,j)\) tend to coincide except for out-of-plane particle motion and shot-to-shot image intensity variation. As a consequence, the correlation function returns a peak located at the center of the correlation plane.
The procedure has the additional benefits of yielding a more symmetrical correlation peak, approximately at the origin of the correlation plane, reducing uncertainties due to distorted peak shape or inaccurate peak reconstruction (see Figs. 5.27 and 5.28). Another advantage is that the spatial resolution is approximately doubled with respect to a rigid window interrogation [82], with the caveat of a selective amplification of wavelengths smaller than the interrogation window (see Fig. 5.29). The latter needs to be compensated for by low-pass filtering the intermediate result [82] (see Fig. 5.30).
5.3.4.4 Image Interpolation for PIV
Due to the continuously varying displacement field, the intensity of the deformed images needs to be resampled (e.g. by interpolation) at non-integer pixel locations, which increases the computational load of the technique. Depending on the choice of interpolator, significant bias errors may be introduced, as shown in Fig. 5.31. The data was obtained from synthetic images with constant particle image displacement using the Monte Carlo methods described in Sect. 6.2.1. Polynomial interpolation produces a significant bias error of up to one fifth of a pixel and is not particularly suited for this purpose.
A review of the topic against the background of medical imaging, along with a performance comparison, is given by Thévenaz et al. [95]. They suggest the use of generalized interpolation with non-interpolating basis functions, such as B-splines, or shifted bi-linear interpolation, in preference to the more commonly used polynomial or band-limited sinc-based interpolation.
In contrast to many other imaging applications, a properly recorded PIV image generally contains almost discontinuous data with significant signal strength in the shortest wavelengths close to the sampling limit (i.e. strong intensity gradients). Because of this the image interpolator should primarily be capable of properly reconstructing these steep intensity gradients. A concise comparison of various advanced image interpolators for use in PIV image deformation is provided by Astarita & Cardone [5]. In accordance to the findings reported by Thévenaz et al. [95], they suggest the use of B-splines for an optimum balance between computational cost and performance. The bias error for B-splines of third and fifth order shown in Fig. 5.31 clearly supports this. If higher accuracy is required then sinc-based interpolation, such as Whittaker reconstruction [79], or FFT-based interpolation schemes [110] with a large number of points should be used. However, processing time may increase an order of magnitude.
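The difference between interpolator orders can be illustrated by shifting a band-limited test image and comparing against a spectral (Fourier) shift used as reference (all function names here are illustrative; scipy's map_coordinates order parameter selects the B-spline degree):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fourier_shift(img, dy, dx):
    """Spectral (sinc) shift of a periodic image; exact for
    band-limited content and used here as the reference."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.fft.ifft2(np.fft.fft2(img) * phase).real

def spline_shift(img, dy, dx, order):
    """Shift by resampling with a B-spline of the given degree
    (order=1 is bilinear, order=5 a quintic B-spline)."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    return map_coordinates(img, [yy - dy, xx - dx], order=order, mode='nearest')
```

On smooth content the higher-order B-spline reproduces the sub-pixel shift with a much smaller residual than bilinear interpolation, consistent with the trend reported for the bias error.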
5.3.4.5 Iterative PIV Interrogation and Its Stability
Multi-step analysis of PIV recordings can be seen as comprising two procedures:
(1) Multi-grid analysis where the interrogation window size is progressively decreased. This process eliminates the one-quarter rule constraint and usually terminates when the required window size (the smallest) is applied.
(2) Iterative analysis at a fixed sampling rate (grid spacing) and spatial resolution (window size). This process further improves the accuracy of the image deformation and enhances the spatial resolution.
In essence the iterative analysis can be described by a predictor-corrector process governed by the following equation:
\(\varDelta \varvec{s}_{k+1} = \varDelta \varvec{s}_{k} + \varDelta \varvec{s}_{\text {corr}}\)
where \(\varDelta \varvec{s}_{k+1}\) indicates the result of the evaluation at the \(k{\text {th}}\) iteration. The correction term \(\varDelta \varvec{s}_{\text {corr}}\) can be viewed as a residual and is obtained by interrogating the deformed images as calculated by the central difference expression Eq. (5.8). The procedure can be repeated several times; however, two to three iterations are already sufficient to achieve a converged result with most of the in-plane particle image motion compensated through the image deformation.
The iterative scheme introduced above appears very logical and its simplicity makes it straightforward to implement, which probably explains why it has been so broadly adopted in the PIV community [21, 35, 78, 79, 99]. However, when such iterative interrogation is performed without any spatial filtering of the velocity field, the process produces spurious oscillations of small amplitude that grow with the number of iterations. The instability arises from the sign reversal in the sinc shape of the response function associated with the top-hat function of the interrogation window. For instance, taking two almost identical images except for artificial pixel noise, the measured displacement field after some iterations begins to oscillate at a spatial wavelength \(\lambda _{\text {unst}} \approx 2/3 D_{\text {I}}\) and yields a wavy pattern [82].
The above result is consistent with the response function of a top-hat weighted interrogation window being \(r_s = \sin (x/{D_{\text {I}}})/(x/{D_{\text {I}}})\). Therefore wavelengths in the range with negative values of the sinc function are systematically amplified. The iterative process requires stabilization by means of a low-pass filter applied to the updated result (Fig. 5.30), which damps the growth of fluctuations at wavelengths smaller than the window size. A moving average filter with a kernel size corresponding to that of the interrogation window is more than sufficient to stabilize the process.
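A minimal sketch of one stabilized predictor-corrector iteration, assuming a 1D displacement field and a correction term already supplied by interrogation of the deformed images; `scipy.ndimage.uniform_filter` plays the role of the moving-average filter:

```python
import numpy as np
from scipy import ndimage

def stabilized_update(s_k, s_corr, window_size):
    """One predictor-corrector iteration with low-pass stabilization:
    a moving-average kernel of size ~ D_I damps the short wavelengths
    for which the sinc response of the top-hat window becomes negative."""
    return ndimage.uniform_filter(s_k + s_corr, size=window_size,
                                  mode='nearest')

# demo: a spurious oscillation near the unstable wavelength (~ 2/3 D_I)
D_I = 32
x = np.arange(256, dtype=float)
wiggle = 0.1 * np.sin(2 * np.pi * x / (2 * D_I / 3))
s1 = stabilized_update(np.zeros_like(x), wiggle, D_I)
```

After the filter, the amplitude of the wavelength-\(2/3 D_{\text {I}}\) component is reduced by roughly a factor of five, which prevents its growth over subsequent iterations.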
However, filtering with a second-order least-squares spatial regression makes it possible both to maximize the spatial resolution and to minimize the noise. Other means of stabilization are based on interrogation window weighting techniques (e.g. Gaussian or LFC [58]). A numerical simulation using a sine-modulated shear flow shows that the single-pass cross-correlation amplitude modulation (empty squares in Fig. 5.29) follows closely that of a sinc function, with a 10% cut-off occurring when the window size is about one quarter of the spatial wavelength (\(D_{\text {I}}/ \lambda = 0.25\)). The iterative interrogation (full circles in Fig. 5.29) delays the cut-off to \(D_{\text {I}}/ \lambda = 0.65\). This implies that with a window size of \(32\,\text {pixel}\) the single-step cross-correlation is only capable of accurately recovering fluctuations with a wavelength larger than \(120\,\text {pixel}\). The minimum wavelength reduces to \(50\,\text {pixel}\) with the iterative deformation interrogation. The higher response of the iterative interrogation without filter (empty circles in Fig. 5.29) is only hypothetical, because the process is unstable and the error is dominated by the amplified wavy fluctuations.
In conclusion, the spatial resolution achieved with iterative deformation is about twice that of the single-step or window-shift procedure. It should be kept in mind that the increase in resolution only becomes effective when the velocity field is sampled spatially at a higher rate, that is, with an overlap factor of 75% between adjacent windows instead of 50%. Otherwise, the error committed when evaluating, for instance, the velocity spatial derivatives is dominated by numerical truncation due to the large distance between neighboring vectors.
5.3.4.6 Adaptive Interrogation Techniques
The iterative multi-grid interrogation may help in increasing the spatial resolution by decreasing the final window size. However, in several cases the flow and the seeding distribution are not homogeneous over the observed domain. In this case, the optimization rules for interrogation can only be satisfied in an average sense and locally non-optimal conditions may occur, such as a poor correlation signal or a too low flow sampling rate. Moreover, the window filtering effect can be minimized when the flow exhibits variations along a preferential direction. For instance, in the case of stationary interfaces an adaptive choice of the interrogation volume shape and orientation may contribute to further improvements, especially when dealing with shear layers [93] or shock waves [94]. The main rationale behind the adaptive choice of the interrogation window is that, maintaining its overall size, one can reduce its length in one direction and compensate by enlarging the window in the orthogonal direction. The parameter governing this choice can be the velocity gradient or a higher spatial derivative [77], but also more simply the direction of a nearby solid surface. Spatial adaptivity is particularly attractive for 3D PIV, where the interrogation volume can be shaped as flat as a coin or elongated as a cigar depending on the velocity gradient topology [59] (Fig. 5.32).
5.3.4.7 Non-correlation-Based Interrogation Techniques
Other interrogation algorithms exist that do not rely on cross-correlation. Most work has been devoted to image motion estimation, also referred to as optical flow, which is a fundamental issue in low-level vision and has been the subject of substantial research for application in robot navigation, object tracking, image coding or structure reconstruction [6, 34]. These applications are commonly confronted with the problem of fragmented occlusion (i.e. looking through branches of a tree) or depth discontinuities in space, which are analogous to shocks within fluid flows.
Optical flow for the analysis of PIV images was first reported by Quénot et al., who investigated a thermally driven flow of water [67]. Further implementations of optical flow adapted to the evaluation of PIV images have been reported by Ruhnau et al. [72, 73]. The potential of optical flow for achieving high spatial resolution and accurate results in high-gradient regions was demonstrated within the scope of the “International PIV-Challenge” [88,89,90]. One known deficiency of established optical flow techniques available in computer vision is their instability in the presence of out-of-plane motion of particles, which is associated with a loss of image correspondence. Therefore, additional constraints have to be implemented in the algorithms to apply them successfully to PIV recordings [72, 73]. This limitation can potentially be resolved by the application of optical flow techniques to 3D experiments, which remains to date a topic of research.
The degree of image matching can be evaluated not only by intensity multiplication, as in the case of cross-correlation, but also by calculating the difference between two patterns. The method of minimum quadratic differences (MQD) is based on the difference between the reference and the search template. Its application to PIV has been investigated by Gui & Merzkirch [29, 30], among others. The method performs similarly to cross-correlation analysis in terms of accuracy of the displacement estimation. The potential advantage of the method is that the difference operation can be more easily accelerated than the multiplication with dedicated microprocessors. One of the shortcomings of MQD is its higher sensitivity to variations in the illumination between the two exposures, which needs to be accounted for with intensity equalization.
The least-squares matching algorithm is based on MQD operations, but the function to be minimized is not defined in the physical space of spatial shift, but in that of the coefficients of affine transformations (shift, rotation, dilation, shear) [46]. The method is therefore also very well suited to the analysis of flows with a variety of length scales where the deformation in between the two recordings cannot be neglected.
Earlier work by Tokumaru & Dimotakis [96] presented image correlation velocimetry (CIV), followed by a number of optimizations by Fincham & Delerce [21], leading to an algorithm performing iterative window refinement, including deformation, comparable to that based on cross-correlation.
5.3.5 Cross-Correlation Peak Detection
One of the most crucial, but not necessarily easily understood, features of digital PIV evaluation is that the position of the correlation peak can be estimated to subpixel accuracy. Estimation accuracies of the order of 1/10th to 1/20th of a pixel are realistic for \(32 \times 32\,\text {pixel}\) samples from 8-bit digital images. Simulation data such as those presented in Sect. 6.1 can be used to quantify the achievable accuracy for a given imaging setup.
Since the input data itself is discretized, the correlation values exist only for integral shifts. The highest correlation value would then permit the displacement to be determined only with an uncertainty of \(\pm 1/2\,\text {pixel}\). However, with the cross-correlation function being a statistical measure of best match, the correlation values themselves also contain useful information. For example, if an interrogation sample contains ten particle image pairs with a slightly varying shift of \(2.5\,\text {pixel}\) on average, then from a statistical point of view, five particle image pairs will, for instance, contribute to the correlation value associated with a shift of \(2\,\text {pixel}\), while the other five will indicate a shift of \(3\,\text {pixel}\). As a result, the correlation values for the \(2\,\text {pixel}\) and \(3\,\text {pixel}\) shifts will have the same value. An average of the two shifts will yield an estimated shift of \(2.5\,\text {pixel}\). Although rather crude, this example illustrates that the information hidden in the correlation values can be effectively used to estimate the mean particle image shift within the interrogation window.
A variety of methods of estimating the location of the correlation peak have been utilized in the past. Centroiding, which is defined as the ratio between the first order moment and zeroth order moment, is frequently used, but requires a method of defining the region that comprises the correlation peak. Generally, this is done by assigning a minimum threshold value separating the correlation peak signal from the background noise. The method works best with broad correlation peaks where many values contribute in the moment calculation. Nevertheless, separating the signal from the background noise is not always straightforward.
A more robust method is to fit the correlation data to some function. Especially for narrow correlation peaks, the approach of using only three adjoining values to estimate a component of the displacement has become widespread. The most common of these three-point estimators are listed in Table 5.1, with the Gaussian peak fit implemented most frequently. A plausible explanation for this is that the particle images themselves, if properly focused, have Airy intensity distributions, which are approximated very well by a Gaussian intensity distribution (see Sect. 2.5.1). The correlation between two Gaussian functions can be shown also to result in a Gaussian function.
The three-point estimators typically work best for rather narrow correlation peaks formed from particle images in the 2–\(3 \,\text {pixel}\) diameter range. Simulations such as those shown in Fig. 6.12 indicate that for larger particle images the achievable measurement uncertainty increases. This can be explained by the fact that, while the noise level on each correlation value stays nearly the same, the differences between the three adjoining correlation values become too small to provide a reliable shift estimate. In other words, the noise level becomes increasingly significant while the differences between the neighboring correlation values decrease. In this case, a centroiding approach may be more adequate, since it makes use of more values around the peak than a three-point estimator. If, in turn, the particle images become too small (\(d_\tau < 1.5\,\text {pixel}\)), the three-point estimators will also perform poorly, mainly because the values adjoining the peak are hidden in noise, see Chap. 6.
In the remainder we describe the use and implementation of the three-point estimators, which were used for almost all the data sets presented as examples in this book. The following procedure can be used to detect a correlation peak and obtain a subpixel accurate displacement estimate of its location:
- Step 1: Scan the correlation plane \(R = R_{\text {II}}\) for the maximum correlation value \(R_{(i,j)}\) and store its integer coordinates (i, j).
- Step 2: Extract the adjoining four correlation values: \(R_{(i-1,j)}\), \(R_{(i+1,j)}\), \(R_{(i,j-1)}\) and \(R_{(i,j+1)}\).
- Step 3: Use the three points in each direction to apply the three-point estimator, generally a Gaussian curve. The formulas for each function are given in Table 5.1.
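For reference, the Gaussian three-point estimator of Table 5.1 applied along one direction can be sketched as follows; the peak values here are synthetic, and for an exactly Gaussian peak the estimator recovers the sub-pixel offset exactly:

```python
import numpy as np

def gauss_three_point(R_m, R_0, R_p):
    """Gaussian three-point estimator: sub-pixel offset of the peak,
    given the correlation values at shifts -1, 0 and +1 around the maximum."""
    lnm, ln0, lnp = np.log(R_m), np.log(R_0), np.log(R_p)
    return (lnm - lnp) / (2.0 * (lnm - 2.0 * ln0 + lnp))

# synthetic Gaussian correlation peak centered at +0.30 pixel
x0, w = 0.30, 1.2
R = np.exp(-((np.array([-1.0, 0.0, 1.0]) - x0) / w) ** 2)
dx = gauss_three_point(R[0], R[1], R[2])   # recovers x0 for a Gaussian peak
```

The same formula is applied independently along the row and the column intersecting the maximum to obtain both displacement components.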
Two alternative peak location estimators also deserve mention in this context, as they provide even higher accuracy than the previously mentioned methods. First, the fit to a two-dimensional Gaussian, as introduced by Ronneberger et al. [70], is capable of using more than the immediate neighbors of the correlation maximum and also recovers the aspect ratio and skew of the correlation peak. It is therefore well suited to estimating the position of non-symmetric (e.g. elliptic) correlation peaks.
The expression given in Eq. (5.12) contains a total of six coefficients that need to be solved for: \({d_\tau }_x\) and \({d_\tau }_y\) define the correlation peak widths along x and y respectively, while \(k_{xy}\) describes the peak’s ellipticity. The correlation peak maximum is located at position coordinates \(x_0\) and \(y_0\) and has maximum peak height of \(I_0\). The solution of Eq. (5.12) can usually only be achieved by nonlinear regression methods, utilizing for instance a Levenberg-Marquardt least-squares minimization algorithm [66]. If only \(3 \times 3\) points are used the coefficients in Eq. (5.12) can also be solved for explicitly in a least squares sense [57].
The second estimator is based on signal reconstruction theory and is often referred to as Whittaker or cardinal reconstruction [49, 69]. The underlying function is a superposition of shifted sinc functions whose zeroes coincide with the sample points. Values between the sample points (i.e. correlation values) are formed from the sum of the sinc functions. Since the reconstructed function is continuous, the position of the peak value between the sample points has to be determined iteratively using for instance Brent’s method [66]. In principle all correlation values of the correlation plane could be used for the estimation, but in practice it suffices to perform one-dimensional fits on the row and column intersecting the maximum correlation value.
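A sketch of the cardinal reconstruction along one line of the correlation plane, with the continuous maximum located iteratively via Brent's method (`scipy.optimize.minimize_scalar`); the sampled peak shape is a synthetic, band-limited example:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittaker(x, samples, offsets):
    """Cardinal (Whittaker) reconstruction from integer-spaced samples:
    a superposition of sinc functions whose zeroes fall on the sample points."""
    return float(np.sum(samples * np.sinc(x - offsets)))

# one row of correlation values around the integer maximum (synthetic)
offsets = np.arange(-3.0, 4.0)
true_peak = 0.27
samples = np.sinc((offsets - true_peak) / 2.0)   # band-limited peak shape

# locate the continuous maximum iteratively with Brent's method
res = minimize_scalar(lambda x: -whittaker(x, samples, offsets),
                      bracket=(-1.0, 0.0, 1.0), method='brent')
x_peak = res.x
```

In practice the same one-dimensional fit is performed on the row and on the column intersecting the maximum correlation value, as noted above.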
5.3.5.1 Multiple Peak Detection
To detect a given number of peaks, n, within the same correlation plane, a different search algorithm is necessary, one which sorts out only the highest peaks. In this case it is necessary to extract local maxima based on a neighborhood analysis. This procedure is especially useful for correlation data obtained from single-frame/multiple-exposure PIV recordings. Multiple-peak information is also useful in cases where the strongest peak is associated with an outlier vector. An easily implemented recipe based on examining the adjoining five or nine (\(3\times 3\)) correlation values is given here:
- Step 1: Allocate a list to store the pixel coordinates and values of the n highest correlation peaks.
- Step 2: Scan through the correlation plane and look for values which define a local maximum based on the local neighborhood, that is, the adjoining 4 or 8 correlation values.
- Step 3: If a detected maximum can be placed into the list, reshuffle the list accordingly, such that the detected peaks are sorted in order of intensity. Continue with Step 2 until the scan through the correlation plane has been completed.
- Step 4: Apply the desired three-point peak estimator of Table 5.1 to each of the detected n highest correlation peaks, thereby providing n displacement estimates.
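The recipe above can be sketched as follows, here with an exhaustive scan of the interior of the plane and a \(3\times 3\) neighborhood test; the correlation plane is synthetic and the function name is an illustrative assumption:

```python
import numpy as np

def highest_peaks(R, n):
    """Return (i, j) of the n highest local maxima of the correlation
    plane R, testing each interior value against its 3x3 neighborhood."""
    found = []
    for i in range(1, R.shape[0] - 1):
        for j in range(1, R.shape[1] - 1):
            patch = R[i - 1:i + 2, j - 1:j + 2]
            # strict local maximum: unique largest value in the 3x3 patch
            if R[i, j] == patch.max() and (patch == R[i, j]).sum() == 1:
                found.append((R[i, j], i, j))
    found.sort(reverse=True)                  # sort peaks by intensity
    return [(i, j) for _, i, j in found[:n]]

# synthetic plane with a strong peak at (10, 12) and a weaker one at (25, 20)
yy, xx = np.mgrid[0:32, 0:32]
R = (np.exp(-((xx - 12)**2 + (yy - 10)**2) / 3.0)
     + 0.6 * np.exp(-((xx - 20)**2 + (yy - 25)**2) / 3.0))
top2 = highest_peaks(R, 2)
```

Each returned integer position would then be refined with a three-point estimator as in Step 4.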
5.3.5.2 Displacement Peak Estimation in FFT-Based Correlation Data
As already described in Sect. 5.3.1.1 the assumption of periodicity of both the data samples and resulting correlation plane brings in a variety of artifacts that need to be dealt with properly.
The most important of these is that the correlation plane, due to the method of calculation, does not contain unbiased correlation values, which causes the displacement to be biased toward lower magnitudes (i.e. bias error, p. 156). This displacement bias can be determined easily by convolving the sampling weighting functions, generally unity over the size of the interrogation windows, with each other. For example, the circular cross-correlation between two equally sized, uniformly weighted interrogation windows results in a triangular weighting distribution in the correlation plane. This is illustrated in Fig. 5.33 for the one-dimensional case.
The central correlation value will always have unity weight. For a shift value of N/2, only half of the interrogation windows' data actually contribute to the correlation value, such that it carries only a weight of 1/2. When a three-point estimator is applied to the data, the correlation value closer to the origin is weighted more than the value further out and hence the magnitude of the estimated displacement will be too small. The solution to this problem is very straightforward: before applying the three-point estimator, the correlation values \(R_{\text {II}}\) have to be adjusted by dividing out the corresponding weighting factors. The weight factors can be obtained by convolving the image sampling function with itself – generally a unity weight, rectangular function – as illustrated in Fig. 5.33a. In the case where the two interrogation windows are of unequal size, a convolution between these two sampling functions will yield a weighting function with unity weighting near the center (Fig. 5.33b). The extension of the method to nonuniform interrogation windows is of course also possible.
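The weighting factors can be sketched numerically by convolving the 1D sampling function with itself, as in Fig. 5.33a; this is a minimal illustration under the assumption of two equal, uniformly weighted windows:

```python
import numpy as np

# 1D illustration: convolving the uniform sampling function of an N pixel
# interrogation window with itself gives the weighting of the correlation
# values as a function of shift (a triangle with its maximum at zero shift)
N = 32
window = np.ones(N)
weights = np.convolve(window, window)        # covers shifts -(N-1) ... +(N-1)
shifts = np.arange(-(N - 1), N)

# before the three-point fit, the measured correlation values would be
# divided by the normalized weights: R_corrected = R_measured / (weights / N)
w_zero = weights[shifts == 0][0]             # full weight N at zero shift
w_half = weights[shifts == N // 2][0]        # weight N/2 at a shift of N/2
```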
On a related note it should be mentioned that many FFT implementations return the output data in shuffled order. Often the DC component is found at index (0) with increasing frequencies up to index (\(N/2-1\)). The next index, (N/2), represents both the highest positive and the highest negative frequency. The following indices represent the negative frequencies in descending order, such that index (\(N-1\)) is the lowest negative frequency component. By periodicity, the DC component reappears at index (N). In order to obtain a frequency spectrum with the DC component in the middle, the entire data set has to be rotated by (N/2) indices. As illustrated in Fig. 5.34, two-dimensional FFT data has to be unfolded in a corresponding manner.
For correlation planes calculated by means of a two-dimensional FFT, the zero-shift value (i.e. the origin) initially appears in the lower left corner, which makes a similar unfolding of the resulting (periodic) correlation data necessary. Without unfolding, negative displacement peaks will actually appear on the opposite side of the plane. However, a careful implementation of the peak finding algorithm allows proper peak detection and shift estimation without having to unscramble the correlation plane first.
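A minimal example of this unfolding with `numpy.fft.fftshift`, using synthetic unit-intensity particle images displaced by a known shift:

```python
import numpy as np

# synthetic single-particle images: b is a displaced by (+1, +2) pixel
a = np.zeros((8, 8)); a[3, 3] = 1.0
b = np.zeros((8, 8)); b[4, 5] = 1.0

# circular cross-correlation via FFT; zero shift sits at index (0, 0)
R = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))

# unfold so that zero shift moves to the center of the plane
R_unfolded = np.fft.fftshift(R)
peak = np.unravel_index(np.argmax(R_unfolded), R_unfolded.shape)
shift = (peak[0] - 4, peak[1] - 4)           # relative to center index N/2
```

Without the `fftshift` call, the same peak would appear at index (1, 2) in `R`, and negative shifts would wrap around to the far side of the plane.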
5.3.6 Interrogation Techniques for PIV Time-Series
The availability of high-speed PIV hardware (see Sect. 3.1.5) makes it possible to record several subsequent images with short time separation, such that the motion of the tracers can be analyzed over more than just two recordings. The interrogation of time-series takes advantage of two fundamental aspects: first, the time between exposures can be varied locally, so as to avoid too small or too large displacements [31]; secondly, the availability of more time samples enables a time-accurate analysis that takes into account the time-varying behavior of the particle velocity [50]. Some techniques that exploit these advantages are briefly described below.
Multi-frame cross-correlation: The time separation between the images used for cross-correlation can be varied, for instance to obtain an almost constant displacement even in flows with large velocity variations, as outlined in Fig. 5.35. With this technique, the time separation between frames is chosen freely, such that the particle image displacement remains within a favorable range (typically between 5 and \(10\,\text {pixel}\)) [31]. As a result, regions of low velocity are also represented with a sufficiently large displacement and the overall dynamic range of the technique is increased. The separation between frames must also obey additional constraints, most notably those imposed by out-of-plane motion and in-plane velocity gradients [31].
Sliding-average cross-correlation: When multiple frames are used for cross-correlation, the random error can be reduced if the correlation maps are combined (see Fig. 5.36). If a constant time separation is applied, the method is referred to as sliding-average correlation. Averaging several correlation maps has a beneficial effect similar to the correlation-based correction method [32] or the ensemble correlation [75], with a significant increase of the signal-to-noise ratio and a reduction of the random error component. A simple criterion to select the maximum number of frames is that the fluid motion during the overall time should not exceed the length of the interrogation window, otherwise spatio-temporal averaging effects will become significant.
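A sketch of sliding-average correlation on a synthetic sequence; the window bookkeeping, noise level and function name are illustrative assumptions:

```python
import numpy as np

def sliding_average_correlation(frames, n):
    """Average the correlation maps of an n x n window taken from
    successive frame pairs (constant time separation)."""
    maps = []
    for f1, f2 in zip(frames[:-1], frames[1:]):
        w1 = f1[:n, :n] - f1[:n, :n].mean()
        w2 = f2[:n, :n] - f2[:n, :n].mean()
        R = np.real(np.fft.ifft2(np.conj(np.fft.fft2(w1)) * np.fft.fft2(w2)))
        maps.append(np.fft.fftshift(R))
    return np.mean(maps, axis=0)

# synthetic sequence: one particle moving 1 pixel/frame in x, plus noise
rng = np.random.default_rng(0)
frames = []
for t in range(4):
    f = 0.05 * rng.standard_normal((16, 16))
    f[8, 4 + t] += 1.0
    frames.append(f)

R_avg = sliding_average_correlation(frames, 16)
peak = np.unravel_index(np.argmax(R_avg), R_avg.shape)   # center is (8, 8)
```

Averaging the individual correlation maps raises the displacement peak above the random correlation noise of any single pair, provided the displacement per pair is (nearly) constant.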
Pyramid correlation algorithm: The beneficial aspects of multi-frame analysis and correlation averaging are combined in the pyramid correlation algorithm [84]. In this case, cross-correlation is evaluated between all possible pairs within a chosen group of subsequent frames. First, the correlation maps obtained at fixed time separation are averaged. Subsequently, the combination of correlation maps obtained at different heights of the pyramid (i.e. with different time separation) requires a rescaling (homothetic transformation) before being averaged again. As a result, the method offers an increased signal-to-noise ratio along with a higher dynamic range.
The above methods can be implemented making use of the image deformation technique after a first evaluation step is made to estimate the in-plane displacement and deformation field.
The fluid trajectory correlation (FTC) technique [50] is based on multi-frame analysis. The technique takes a short sequence of recordings. Cross-correlation is applied from a given frame (typically in the middle of the series) to the other frames. With an iterative (predictor-corrector) procedure, the path of the fluid element corresponding to the tracers inside the window is built with cross-correlation. This algorithm offers the advantage of a high velocity dynamic range, owing to the longer duration of the sequence. Since FTC is based on a Lagrangian cross-correlation approach, it allows a longer time kernel than the above techniques, which are based on a Eulerian (local) analysis scheme. As a result, the measurement precision error scales as \(N^{-3/2}\), where N is the number of considered frames (Fig. 5.37).
A further improvement on this approach is given by the fluid trajectory ensemble evaluation (FTEE) [37] which realizes the objectives of FTC with a more robust ensemble cross-correlation based on the pyramid scheme.
5.4 Particle Tracking Velocimetry
The spatial resolution in PIV evaluation can be increased even further by tracking the individual particle images, a procedure referred to as super-resolution PIV by Keane et al. [44], who applied the technique to double-exposure images. A similar procedure was also implemented for image pairs by Cowen & Monismith [18] for the study of a flat-plate turbulent boundary layer.
The working principle of PTV can be briefly summarized by the following operations:
1. Detection of particle images from each recording. This is usually done by eliminating background intensity and analyzing the images in search of a local maximum.
2. Forming a vector of particle positions at each of the two time instants with sub-pixel accuracy. The same procedure as discussed above for the correlation peak fit is usually followed.
3. Pairing particle images corresponding to the same physical tracer. Here the most common criterion for a simple analysis is that the image pair with the closest distance corresponds to the correct pair.
Although the above approach works on a pair of images, it generally becomes much more reliable when working with image sequences, because the probability of spurious pairing over a sequence becomes much smaller than that for a single pair of images. PTV schemes applied to single image pairs can only rely on additional information and constraints, such as the regularity in time of the particle image intensity.
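The closest-distance pairing criterion can be sketched with a k-d tree; the positions, displacement and the `max_dist` rejection threshold below are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_pairs(pos1, pos2, max_dist):
    """Pair each particle position of frame 1 with its nearest neighbor
    in frame 2; pairs further apart than max_dist are rejected."""
    tree = cKDTree(pos2)
    dist, idx = tree.query(pos1)
    return [(i, int(j)) for i, (d, j) in enumerate(zip(dist, idx))
            if d <= max_dist]

# three particle images displaced by roughly (+2, 0) pixel
pos1 = np.array([[10.0, 10.0], [30.0, 12.0], [20.0, 40.0]])
pos2 = pos1 + [2.0, 0.0] + 0.1 * np.array([[1, -1], [0, 1], [-1, 0]])
pairs = nearest_neighbor_pairs(pos1, pos2, max_dist=5.0)
```

This simple criterion is only reliable while the displacement is small compared to the spacing between neighboring particle images; denser seeding calls for the predictor-based or relaxation schemes discussed below.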
Predictor-corrector schemes, where a priori knowledge of the velocity field is imposed to match particle images, have been applied in numerous versions and can be used to make the technique more robust. These are often based on the availability of a more robust PIV analysis for the large-scale motions. While many implementations rely on the detection and position estimation of particle images prior to matching, other schemes prefer to use cross-correlation of small samples (typically \(8\times 8\) pixel) centered on the detected individual particle image. The existence of a matched pair is confirmed by applying the procedure in reverse, starting from the second particle image. The main advantage of the correlation-based approach is its increased robustness in the presence of overlapping images, which makes it more suitable for data with high particle image density.
The PTV analysis offers a number of advantages. First, the velocity information is obtained with higher spatial localization, as a velocity vector pertains to a single particle image, which is significantly smaller than the interrogation window used for cross-correlation. The measurements are therefore not affected by bias errors due to spatial averaging [39]. Consequently, the technique is well suited to analyzing flows with strong velocity gradients, such as turbulent boundary layers. Second, PTV is less prone to bias errors in the case of inhomogeneous seeding distributions. This is important for near-wall flow measurements, for instance, as the seeding concentration drops towards the wall due to the Saffman effect (see Sect. 2.1.3).
Finally, the spatial resolution of the mean velocity distribution can virtually achieve sub-pixel level. This is made possible by determining the particle location with sub-pixel precision. This holds true until the resolution becomes comparable to the particle image diameter. The final resolution of the measurement is ultimately determined by the diameter of the tracer particle, and therefore flow structures smaller than this diameter cannot be resolved. However, the latter limit is only reached for high-magnification imaging in microfluidics or by using a long-range microscope.
A further interesting feature of the PTV technique is that it can be applied for measurements at low seeding concentration, thereby reducing the contamination of flow facilities due to excessive seeding. On the other hand, the analysis of particle images with the PTV technique is notably less robust than that based on cross-correlation, due to the possibility of spurious pairing between particle images.
5.4.1 Particle Image Detection and Position Estimation
Generally, the particle tracking technique is well suited for accurate flow field measurements at any magnification, provided the seeding concentration is sufficiently low for a reliable particle image matching between subsequent frames. At high seeding concentrations two major sources of errors can occur. First, the likelihood of matching non-corresponding particle images increases. This problem can be solved by using sophisticated particle tracking approaches or by evaluating time-resolved data. However, both strategies require sufficiently smooth flow variations in space or time, such that spatial homogeneity or temporal smoothness assumptions can be applied locally. Second, the random error increases, as the likelihood of overlapping particle images rises with increasing particle image density. This is caused by larger uncertainties in the determination of the particle image location, since the model used for the sub-pixel position estimation, normally a Gaussian intensity distribution, is not appropriate in the case of overlapping particle image patterns [17]. Furthermore, the correct identification of two slightly overlapping particle images becomes increasingly difficult with increasing overlap between particle images. Finally, even if the positions of both overlapping particle images can be determined, the correct particle track identification becomes ambiguous, yielding larger uncertainties. Maas [51] derived an expression connecting the number of individual particle images \(N_P\) with the number of overlapping particle images \(N_0\) for circular particle images that are randomly distributed on a sensor with size A.
\(A_{\text {crit}}\) is the critical area in which a particle image starts to overlap with the boundaries of another particle image. The boundaries of the particle images are defined to be at the radial location where the intensity has decreased to \(e^{-2}\) of the peak intensity value. Thus, the critical area is \(A_{\text {crit}} = \pi D^2\), since particle images share the same boundary if the centers have a distance of D. If a critical particle image overlap of 50% corresponding to \(L = D/2\) is assumed, the critical area reduces to \(A_{\text {crit}} = \pi L^2\), with L being the distance of particle image centers that can be separated. Figure 5.38 illustrates the ratio of overlapping particle images as a function of L for different particle image densities \(N_{\mathrm {ppp}}\).
For particle image diameters of \(2.5\,\text {pixel}\), this results in a fraction of about 20% overlapping particle images. At a diameter of \(5\,\text {pixel}\), the overlap ratio would reach 80% by comparison. Compared to PIV processing, the particle image density can be reduced by a factor of 6–10 while obtaining almost the same number of vectors. In this case, only 5% of the particle images overlap at a diameter of \(2.5\,\text {pixel}\) and \(0.005\,\mathrm {ppp}\). One can either accept these 5% as a loss of information, by detecting and rejecting them, or use special algorithms such as that of Lei et al. [48], which showed reliable position determination for particle images that overlap by up to 50% (see Cierpka et al. [16] for details).
5.4.2 Particle Pairing and Displacement Estimation
The probability of correct particle pairing \(R_{12}\) (Fig. 5.39-left, right axis) decreases rapidly with the particle image density, whereas the RMS uncertainty (left axis) reaches a stable value beyond an image density of \(A_\mathrm {p} / A_\mathrm {i} = 0.4\) [40]. The uncertainty of the displacement measurement for two different particle image detection methods is compared to single-pixel ensemble correlation [40], showing that for high-quality imaging conditions the uncertainty of the particle tracking is rather low and depends little on the particle image diameter as long as \(D > 2\) pixel.
5.4.3 Spatial Resolution
One of the commonly adopted tests for spatial resolution is the step-response. Figure 5.40(left) shows that strong flow gradients can be nicely resolved using particle tracking even in case of large particle images, which are typical in microfluidics. Although the rising uncertainty with increasing particle image diameter is visible, bias errors due to spatial averaging do not occur, unlike for PIV methods. The right plot illustrates that correct measurements can be obtained over a large range of particle image diameters. In the case of single pixel ensemble correlation the spatial resolution is limited by the size of the particle image, and in the case of spatial cross-correlation analysis by the size of the interrogation window.
It is evident that the precise determination of the particle image location is an important aspect of all PTV techniques. To achieve sub-pixel accuracy the discrete particle image distribution is typically approximated by a continuous Gaussian fit function, whose maximum marks the particle image center. This approach is well suited for macroscopic imaging. However, in microscopic domains with large magnifications different models might be more suitable [71]. A comparison of different center determination methods indicated that a Gaussian fit provides the best trade-off between accuracy and processing time. For a signal-to-noise ratio of SNR \(=\) 10, the center determination yielded an uncertainty of about 0.05 pixel [15]. Peak locking is largely avoided when the particle image diameter is larger than \(2\,\text {pixel}\) (see Sect. 6.3 for further details).
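A common form of the Gaussian fit mentioned above is the three-point estimator, which computes the sub-pixel peak position from the brightest pixel and its two neighbors in each direction; for a noise-free Gaussian particle image the estimate is exact. A minimal sketch (function names are illustrative, not the implementation assessed in [15]):

```python
import numpy as np

def gauss3_subpixel(i_m, i_0, i_p):
    """Three-point Gaussian fit: sub-pixel offset of the peak relative to
    the brightest pixel, from the intensities at (peak-1, peak, peak+1)."""
    lm, l0, lp = np.log(i_m), np.log(i_0), np.log(i_p)
    return (lm - lp) / (2.0 * (lm + lp - 2.0 * l0))

# synthetic Gaussian particle image centered at (12.30, 7.65) pixel
x0, y0, sigma = 12.30, 7.65, 1.0
y, x = np.mgrid[0:16, 0:24]
img = np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))

py, px = np.unravel_index(np.argmax(img), img.shape)   # brightest pixel
xc = px + gauss3_subpixel(img[py, px - 1], img[py, px], img[py, px + 1])
yc = py + gauss3_subpixel(img[py - 1, px], img[py, px], img[py + 1, px])
print(xc, yc)   # recovers (12.30, 7.65) for this noise-free image
```

Since the estimator works on the logarithm of the intensities, any constant intensity scale factor cancels; with image noise the fit is no longer exact, which is where the 0.05 pixel uncertainty quoted above comes from.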
5.4.4 Performance of Particle Tracking
Once the positions of the particle images are determined, the challenge of identifying the correct partners in subsequent frames has to be solved. The most straightforward method to match corresponding particle images is a nearest neighbor PTV algorithm [52]. This algorithm is only suited for very low particle image densities, since the particle displacements must be smaller than the distance between neighboring particle images. More elaborate approaches allow for larger particle image densities and in turn a higher information density. These methods comprise artificial neural networks or relaxation methods that minimize a local or global cost function [65]. Alternatively, Okamoto et al. [61] presented a spring force model, where particle pairs are identified by searching for the smallest spring force calculated over particles in a certain neighborhood. Probabilistic approaches that take the motion of neighboring particles into account show a very high vector yield at larger particle image densities, however at the expense of spatial resolution, as the motion of neighboring particles must be correlated. Another method to improve the detection of corresponding particle pairs is the use of a predictor for the displacement. A predictor can significantly decrease the search area in the second frame and thus improve the match probability. In general, such predictors can be based on theoretically known velocity distributions or on experimentally obtained PIV evaluations [13, 18, 45, 92]. Brevis et al. [11] combined a PIV predictor with a relaxation PTV algorithm to further enhance the performance. However, in comparison with PIV, the gain in resolution is only minor and does not justify the effort in many cases. A fully PTV-based algorithm was presented by Ohmi & Li [60], where a case-dependent search radius in the second frame is used to identify possibly matching particles. This is done for all particles detected reliably in the first frame.
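The nearest neighbor rule mentioned above can be stated in a few lines. A minimal sketch (the function name and the search radius parameter are illustrative; a real implementation would additionally enforce one-to-one matching):

```python
import numpy as np

def nearest_neighbor_pairs(p1, p2, r_max):
    """Match each particle in frame 1 (p1, shape (n, 2)) to its nearest
    neighbor in frame 2 (p2, shape (m, 2)) within radius r_max.
    Returns a list of (index_frame1, index_frame2, displacement) tuples."""
    pairs = []
    for i, p in enumerate(p1):
        d = np.linalg.norm(p2 - p, axis=1)   # distances to all candidates
        j = int(np.argmin(d))
        if d[j] <= r_max:
            pairs.append((i, j, p2[j] - p))
    return pairs

# two well-separated particles, both displaced by (+1, 0) pixel
p1 = np.array([[5.0, 5.0], [20.0, 10.0]])
p2 = np.array([[6.0, 5.0], [21.0, 10.0]])
for i, j, dx in nearest_neighbor_pairs(p1, p2, r_max=2.0):
    print(i, j, dx)
```

The sketch makes the failure mode visible: as soon as a displacement exceeds the distance to another frame-2 particle image, the argmin picks the wrong partner, which is exactly why the method is restricted to very low image densities.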
For each possible match, the algorithm of Ohmi & Li iteratively updates the probabilities of similar neighbor vectors. The threshold for the common motion of the neighboring particles is another parameter that needs to be specified. To address this drawback, Fuchs et al. [24] introduced a robust and user-friendly tracking algorithm, in which only the displacement limits need to be specified, while all other parameters require no adjustment.
For the identification of the correct displacement of a given particle, the histograms of all possible displacements of the particle of interest and of its neighbors lying within the specified displacement range are analyzed. The candidate displacement to the subsequent frame that shows the lowest deviation from the histogram maxima in each spatial direction is considered the most probable displacement for the particle of interest. The algorithm is computationally efficient, since it does not need to update probabilities iteratively.
Moreover, this non-iterative tracking (NIT) method is capable of yielding reliable tracking results for large particle image densities. The method is therefore assessed by means of the first image pair of the synthetic data set 301 (\(\mathrm {ppp} = 0.06\)) provided by Okamoto et al. [62]. Figure 5.41 gives an overview of the displacement field obtained with the NIT method; clearly, only a few outliers appear, even in regions where the displacements are significantly larger than the distance between the particles. A closer look at the tracking performance is given in Table 5.2, comparing a nearest neighbor (NN) algorithm, the NIT algorithm, and the iterative tracking (IT) algorithm of Ohmi & Li [60]. The nearest neighbor algorithm fails to provide reliable tracking results at this high particle image density, as it can only identify 868 out of 4042 actual tracks. Furthermore, the number of invalid and undetected tracks is large. Both the IT and the NIT algorithm show good tracking performance with a large number of valid vectors (3846 and 3940, respectively) and only a low number of invalid and undetected tracks. Thus, the NIT algorithm comes close to the IT algorithm in terms of performance, while offering the advantage of being less user dependent, which is an important feature for new and inexperienced users.
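The histogram-based selection described above can be illustrated with a much simplified sketch. This is an illustration of the idea only, not the algorithm of Fuchs et al. [24]: here every particle is treated as a neighbor of the particle of interest, and all function names and parameters are illustrative.

```python
import numpy as np

def candidate_displacements(k, p1, p2, d_max):
    """All displacements from particle k in frame 1 to frame-2 particles
    lying within the user-specified displacement limits d_max."""
    d = p2 - p1[k]
    return d[np.all(np.abs(d) <= d_max, axis=1)]

def nit_displacement(i, p1, p2, d_max, n_bins=16):
    """Pick the candidate displacement of particle i closest to the
    histogram maxima of all pooled candidate displacements."""
    pool = np.vstack([candidate_displacements(k, p1, p2, d_max)
                      for k in range(len(p1))])
    mode = np.empty(2)
    for ax in range(2):                       # histogram maximum per direction
        hist, edges = np.histogram(pool[:, ax], bins=n_bins,
                                   range=(-d_max, d_max))
        b = np.argmax(hist)
        mode[ax] = 0.5 * (edges[b] + edges[b + 1])   # bin center of the maximum
    own = candidate_displacements(i, p1, p2, d_max)
    if len(own) == 0:
        return None
    return own[np.argmin(np.sum((own - mode)**2, axis=1))]

# five well-separated particles, all displaced by (+2, -1) pixel
p1 = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0],
               [10.0, 10.0], [20.0, 20.0]])
p2 = p1 + np.array([2.0, -1.0])
print(nit_displacement(0, p1, p2, d_max=3.0))
```

Because only histograms are built and queried, no probabilities need to be updated iteratively, which is what makes the approach computationally efficient.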
5.4.5 Multi-frame Particle Tracking
The accuracy and the robustness of double-frame methods are limited by the fact that only two recordings are available. Approaches to further enhance the precision of the flow velocity estimate are based on multi-pulse or multi-frame techniques, which rely upon the temporal smoothness of the particle trajectory and the regularity of the image signal. Multi-pulse or multi-frame PTV techniques improve the probability of correct particle matching by tracking particles over more than two successive frames [16]. Furthermore, trajectory curvature can be accounted for by fitting a curved particle path to the particle image positions (see Fig. 5.42). The same holds for the tangential acceleration along the particle trajectory. The accurate determination of the local acceleration of the flow is important for unsteady flows and for pressure estimation from the velocity field (see Sect. 7.6.4).
In a multi-frame algorithm the displacement between the first two particle images can be reliably obtained by making the time interval between frames one and two small enough to keep the displacement moderate (for instance 4–8\(\,\text {pixel}\)). The estimated velocity vector is then used as a predictor for the particle position in frame three and, similarly, in frame four. The approach can be extended to more frames when the hardware allows recording longer sequences. The large observation time interval, and thus large total displacement, yields a high dynamic velocity range.
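Fitting a low-order polynomial to the tracked positions, as sketched in Fig. 5.42, directly yields the velocity and acceleration along the trajectory. A minimal sketch with a quadratic fit over four frames (function name and sample values are illustrative):

```python
import numpy as np

def fit_trajectory(t, pos):
    """Least-squares quadratic fit x(t) = a t^2 + b t + c per coordinate;
    returns the velocity at the central time and the (constant) acceleration."""
    tc = 0.5 * (t[0] + t[-1])
    coef = np.polyfit(t, pos, deg=2)       # shape (3, n_coordinates)
    vel = 2.0 * coef[0] * tc + coef[1]     # dx/dt evaluated at tc
    acc = 2.0 * coef[0]                    # d2x/dt2, constant for a quadratic
    return vel, acc

# four positions sampled from a uniformly accelerated trajectory
t = np.arange(4) * 0.1                     # frame times (illustrative units)
x = 1.0 + 2.0 * t + 0.5 * 3.0 * t**2       # x0 + u0 t + (a/2) t^2
y = -0.5 + 0.8 * t                         # constant velocity in y
vel, acc = fit_trajectory(t, np.column_stack([x, y]))
print(vel, acc)
```

Evaluating the derivative at the central time of the fitted interval keeps the velocity estimate second-order accurate, and the quadratic term provides the acceleration needed, for instance, for pressure estimation.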
References
Adrian, R.J.: Statistical properties of particle image velocimetry measurements in turbulent flow. In: 4th International Symposium on Laser Techniques to Fluid Mechanics, (Lisbon, Portugal, 11–14 July) (1988)
Agüí, J.C., Jiménez, J.: On the performance of particle tracking. J. Fluid Mech. 185, 447–468 (1987). DOI 10.1017/S0022112087003252. URL http://journals.cambridge.org/article_S0022112087003252
Arnold, W., Hinsch, K.D., Mach, D.: Turbulence level measurement by speckle velocimetry. Appl. Opt. 25(3), 330–331 (1986). DOI 10.1364/AO.25.000330. URL http://ao.osa.org/abstract.cfm?URI=ao-25-3-330
Astarita, T.: Analysis of velocity interpolation schemes for image deformation methods in PIV. Exp. Fluids 45(2), 257–266 (2008). DOI 10.1007/s00348-008-0475-7. URL http://dx.doi.org/10.1007/s00348-008-0475-7
Astarita, T., Cardone, G.: Analysis of interpolation schemes for image deformation methods in PIV. Exp. Fluids 38(2), 233–243 (2005). DOI 10.1007/s00348-004-0902-3. URL http://dx.doi.org/10.1007/s00348-004-0902-3
Barron, J.L., Fleet, D.J., Beauchemin, S.S.: Performance of optical flow techniques. Int. J. Comput. Vis. 12(1), 43–77 (1994). DOI 10.1007/BF01420984. URL http://dx.doi.org/10.1007/BF01420984
Bastiaans, R.J.M., van der Plas, G.A.J., Kieft, R.N.: The performance of a new PTV algorithm applied in super-resolution PIV. Exp. Fluids 32(3), 346–356 (2002). DOI 10.1007/s003480100363. URL http://dx.doi.org/10.1007/s003480100363
Bendat, J.S., Piersol, A.G.: Random Data: Analysis and Measurement Procedures, 4th edn. Wiley, New York (2012). DOI 10.1002/9781118032428. URL http://dx.doi.org/10.1002/9781118032428
Billy, F., David, L., Pineau, G.: Single pixel resolution correlation applied to unsteady flow measurements. Meas. Sci. Technol. 15(6), 1039 (2004). DOI 10.1088/0957-0233/15/6/002. URL http://stacks.iop.org/0957-0233/15/i=6/a=002
Bracewell, R.N.: The Fourier Transform and Its Applications, 3rd edn. Electrical Engineering Series. McGraw Hill, New York (1999)
Brevis, W., Niño, Y., Jirka, G.H.: Integrating cross-correlation and relaxation algorithms for particle tracking velocimetry. Exp. Fluids 50(1), 135–147 (2011). DOI 10.1007/s00348-010-0907-z. URL http://dx.doi.org/10.1007/s00348-010-0907-z
Brigham, E.O.: The Fast Fourier Transform. Prentice-Hall Signal Processing Series. Prentice-Hall, Englewood Cliffs (1974)
Cardwell, N.D., Vlachos, P.P., Thole, K.A.: A multi-parametric particle-pairing algorithm for particle tracking in single and multiphase flows. Meas. Sci. Technol. 22(10), 105406 (2011). DOI 10.1088/0957-0233/22/10/105406. URL http://stacks.iop.org/0957-0233/22/i=10/a=105406
Cenedese, A., Querzoli, G.: PIV for lagrangian scale evaluation in a convective boundary layer. In: Tanida, Y., Miyashiro, H. (eds.) Flow Visualization VI, pp. 863–867. Springer, Berlin (1992). DOI 10.1007/978-3-642-84824-7_155. URL http://dx.doi.org/10.1007/978-3-642-84824-7_155
Cierpka, C., Kähler, C.J.: Cross-correlation or tracking - comparison and discussion. In: 16th International Symposium on Applications of Laser Techniques to Fluid Mechanics Lisbon, Portugal, 09–12 July (2012). http://ltces.dem.ist.utl.pt/lxlaser/lxlaser2012/upload/299_paper_wupzup.pdf
Cierpka, C., Lütke, B., Kähler, C.J.: Higher order multi-frame particle tracking velocimetry. Exp. Fluids 54(5), 1533 (2013). DOI 10.1007/s00348-013-1533-3. URL http://dx.doi.org/10.1007/s00348-013-1533-3
Cierpka, C., Scharnowski, S., Kähler, C.J.: Parallax correction for precise near-wall flow investigations using particle imaging. Appl. Opt. 52(12), 2923–2931 (2013). DOI 10.1364/AO.52.002923. URL http://dx.doi.org/10.1364/AO.52.002923
Cowen, E.A., Monismith, S.G.: A hybrid digital particle tracking velocimetry technique. Exp. Fluids 22(3), 199–211 (1997). DOI 10.1007/s003480050038. URL http://dx.doi.org/10.1007/s003480050038
Dracos, T.: Particle tracking in three-dimensional space. In: Dracos, T. (ed.) Three-Dimensional Velocity and Vorticity Measuring and Image Analysis Techniques. ERCOFTAC Series, vol. 4, pp. 209–227. Springer, Netherlands (1996). DOI 10.1007/978-94-015-8727-3_10. URL http://dx.doi.org/10.1007/978-94-015-8727-3_10
Dracos, T.: Particle tracking velocimetry (PTV). In: Dracos, T. (ed.) Three-Dimensional Velocity and Vorticity Measuring and Image Analysis Techniques. ERCOFTAC Series, vol. 4, pp. 155–160. Springer, Netherlands (1996). DOI 10.1007/978-94-015-8727-3_7. URL http://dx.doi.org/10.1007/978-94-015-8727-3_7
Fincham, A., Delerce, G.: Advanced optimization of correlation imaging velocimetry algorithms. Exp. Fluids 29(1), S013–S022 (2000). DOI 10.1007/s003480070003. URL http://dx.doi.org/10.1007/s003480070003
Frigo, M., Johnson, S.G.: FFTW: an adaptive software architecture for the FFT. In: Proceedings 1998 IEEE International Conference Acoustics Speech and Signal Processing, vol. 3, pp. 1381–1384. IEEE (1998). DOI 10.1109/ICASSP.1998.681704. URL http://dx.doi.org/10.1109/ICASSP.1998.681704
Frigo, M., Johnson, S.G.: The design and implementation of FFTW3. Proc. IEEE 93(2), 216–231 (2005). DOI 10.1109/JPROC.2004.840301. URL http://dx.doi.org/10.1109/JPROC.2004.840301 (Special issue on “Program Generation, Optimization, and Platform Adaptation”)
Fuchs, T., Hain, R., Kähler, C.J.: Non-iterative double-frame 2D/3D particle tracking velocimetry. Exp. Fluids 58(199) (2017). DOI 10.1007/s00348-017-2404-0. URL http://dx.doi.org/10.1007/s00348-017-2404-0
Gharib, M., Willert, C.E.: Particle tracing: revisited. In: Gad-el Hak, M. (ed.) Advances in Fluid Mechanics Measurements. Lecture Notes in Engineering, vol. 45, pp. 109–126. Springer, Berlin (1989). DOI 10.1007/978-3-642-83787-6_3. URL http://dx.doi.org/10.1007/978-3-642-83787-6_3
Goodman, J.W.: Introduction to Fourier Optics, 4th edn. Macmillan Learning (2017). http://www.macmillanlearning.com/Catalog/product/introductiontofourieroptics-fourthedition-goodman
Grant, I., Liu, A.: Method for the efficient incoherent analysis of particle image velocimetry images. Appl. Opt. 28(10), 1745–1748 (1989). DOI 10.1364/AO.28.001745. URL http://ao.osa.org/abstract.cfm?URI=ao-28-10-1745
Guezennec, Y.G., Brodkey, R.S., Trigui, N., Kent, J.C.: Algorithms for fully automated three-dimensional particle tracking velocimetry. Exp. Fluids 17(4), 209–219 (1994). DOI 10.1007/BF00203039. URL http://dx.doi.org/10.1007/BF00203039
Gui, L.C., Merzkirch, W.: A method of tracking ensembles of particle images. Exp. Fluids 21(6), 465–468 (1996). DOI 10.1007/BF00189049. URL http://dx.doi.org/10.1007/BF00189049
Gui, L., Merzkirch, W.: Generating arbitrarily sized interrogation windows for correlation-based analysis of particle image velocimetry recordings. Exp. Fluids 24(1), 66–69 (1998)
Hain, R., Kähler, C.J.: Fundamentals of multiframe particle image velocimetry (PIV). Exp. Fluids 42(4), 575–587 (2007). DOI 10.1007/s00348-007-0266-6. URL http://dx.doi.org/10.1007/s00348-007-0266-6
Hart, D.P.: PIV error correction. Exp. Fluids 29(1), 13–22 (2000). DOI 10.1007/s003480050421. URL http://dx.doi.org/10.1007/s003480050421
Hart, D.P.: Super-resolution PIV by recursive local-correlation. J. Vis. 3(2), 187–194 (2000). DOI 10.1007/BF03182411. URL http://dx.doi.org/10.1007/BF03182411
Horn, B.K.P., Schunck, B.G.: Determining optical flow. Artif. Intell. 17, 185–203 (1981). DOI 10.1016/0004-3702(81)90024-2. URL http://dx.doi.org/10.1016/0004-3702(81)90024-2
Huang, H.T., Fiedler, H.E., Wang, J.J.: Limitation and improvement of PIV, part II. Particle image distortion, a novel technique. Exp. Fluids 15(4–5), 263–273 (1993). DOI 10.1007/BF00223404. URL http://dx.doi.org/10.1007/BF00223404
Jähne, B.: Digital Image Processing and Image Formation, 7th edn. Springer, Berlin (2018). http://www.springer.com/us/book/9783642049491
Jeon, Y.J., Chatellier, L., David, L.: Fluid trajectory evaluation based on an ensemble-averaged cross-correlation in time-resolved PIV. Exp. Fluids 55(7), 1766 (2014). DOI 10.1007/s00348-014-1766-9. URL http://dx.doi.org/10.1007/s00348-014-1766-9
Kähler, C.J., Scholz, U.: Transonic jet analysis using long-distance micro-PIV. In: 12th International Symposium on Flow Visualization - ISFV 12, Göttingen, Germany (2006)
Kähler, C.J., Scholz, U., Ortmanns, J.: Wall-shear-stress and near-wall turbulence measurements up to single pixel resolution by means of long-distance micro-PIV. Exp. Fluids 41(2), 327–341 (2006). DOI 10.1007/s00348-006-0167-0. URL http://dx.doi.org/10.1007/s00348-006-0167-0
Kähler, C.J., Scharnowski, S., Cierpka, C.: On the uncertainty of digital PIV and PTV near walls. Exp. Fluids 52(6), 1641–1656 (2012). DOI 10.1007/s00348-012-1307-3. URL http://dx.doi.org/10.1007/s00348-012-1307-3
Kähler, C.J., Scharnowski, S., Cierpka, C.: On the resolution limit of digital particle image velocimetry. Exp. Fluids 52(6), 1629–1639 (2012). DOI 10.1007/s00348-012-1280-x. URL http://dx.doi.org/10.1007/s00348-012-1280-x
Kähler, C.J., Astarita, T., Vlachos, P.P., Sakakibara, J., Hain, R., Discetti, S., La Foy, R., Cierpka, C.: Main results of the 4th International PIV Challenge. Exp. Fluids 57(6), 97 (2016). DOI 10.1007/s00348-016-2173-1. URL http://dx.doi.org/10.1007/s00348-016-2173-1
Keane, R.D., Adrian, R.J.: Optimization of particle image velocimeters. I. double pulsed systems. Meas. Sci. Technol. 1(11), 1202 (1990). DOI 10.1088/0957-0233/1/11/013. URL http://stacks.iop.org/0957-0233/1/i=11/a=013
Keane, R.D., Adrian, R.J.: Theory of cross-correlation analysis of PIV images. Appl. Sci. Res. 49(3), 191–215 (1992). DOI 10.1007/BF00384623. URL https://dx.doi.org/10.1007/BF00384623
Keane, R.D., Adrian, R.J., Zhang, Y.: Super-resolution particle imaging velocimetry. Meas. Sci. Technol. 6(6), 754 (1995). DOI 10.1088/0957-0233/6/6/013. URL http://stacks.iop.org/0957-0233/6/i=6/a=013
Kitzhofer, J., Westfeld, P., Pust, O., Nonn, T., Maas, H.G., Brücker, C.: Estimation of 3D deformation and rotation rate tensor from volumetric particle data via 3D least squares matching. In: 15th International Symposium on Applications of Laser Techniques to Fluid Mechanics Lisbon, Portugal, 05–08 July 2010 (2010). http://ltces.dem.ist.utl.pt/lxlaser/lxlaser2010/upload/1677_lfplgn_3.1.4.Full_1677.pdf
Lauterborn, W., Kurz, T.: Coherent Optics - Fundamentals and Applications, 2nd edn. Springer, Berlin (2003). DOI 10.1007/978-3-662-05273-0. URL https://dx.doi.org/10.1007/978-3-662-05273-0
Lei, Y.C., Tien, W.H., Duncan, J., Paul, M., Ponchaut, N., Mouton, C., Dabiri, D., Rösgen, T., Hove, J.: A vision-based hybrid particle tracking velocimetry (PTV) technique using a modified cascade correlation peak-finding method. Exp. Fluids 53(5), 1251–1268 (2012). DOI 10.1007/s00348-012-1357-6. URL http://dx.doi.org/10.1007/s00348-012-1357-6
Lourenco, L., Krothapalli, A.: On the accuracy of velocity and vorticity measurements with PIV. Exp. Fluids 18(6), 421–428 (1995). DOI 10.1007/BF00208464. URL http://dx.doi.org/10.1007/BF00208464
Lynch, K., Scarano, F.: A high-order time-accurate interrogation method for time-resolved PIV. Meas. Sci. Technol. 24(3), 16 (2013). DOI 10.1088/0957-0233/24/3/035305. URL http://stacks.iop.org/0957-0233/24/i=3/a=035305
Maas, H.G.: Digitale Photogrammetrie in der dreidimensionalen Strömungsmesstechnik. Ph.D. thesis, ETH Zürich (1992)
Malik, N.A., Dracos, T., Papantoniou, D.A.: Particle tracking velocimetry in three-dimensional flows. Exp. Fluids 15(4), 279–294 (1993). DOI 10.1007/BF00223406. URL http://dx.doi.org/10.1007/BF00223406
Meinhart, C.D., Wereley, S.T.: Optimum particle size and correlation strategy for sub-micron spatial resolution. In: Joint International PIVNET II/ERCOFTAC Workshop on Micro PIV and Applications in Microsystems, 7–8 April, Delft (the Netherlands) (2005). http://ahd.tudelft.nl/~mpiv/prog.html
Meinhart, C.D., Wereley, S.T., Santiago, J.G.: A PIV algorithm for estimating time-averaged velocity fields. J. Fluids Eng. 122(2), 285–289 (2000). DOI 10.1115/1.483256. URL http://dx.doi.org/10.1115/1.483256
Mejia-Alvarez, R., Christensen, K.T.: Robust suppression of background reflections in PIV images. Meas. Sci. Technol. 24(2), 027003 (2013). DOI 10.1088/0957-0233/24/2/027003. URL http://stacks.iop.org/0957-0233/24/i=2/a=027003
Mendez, M.A., Raiola, M., Masullo, A., Discetti, S., Ianiro, A., Theunissen, R., Buchlin, J.M.: POD-based background removal for particle image velocimetry. Exp. Thermal Fluid Sci. 80, 181–192 (2017). DOI 10.1016/j.expthermflusci.2016.08.021. URL http://www.sciencedirect.com/science/article/pii/S0894177716302266
Nobach, H., Honkanen, M.: Two-dimensional Gaussian regression for sub-pixel displacement estimation in particle image velocimetry or particle position estimation in particle tracking velocimetry. Exp. Fluids 38(4), 511–515 (2005). DOI 10.1007/s00348-005-0942-3. URL http://dx.doi.org/10.1007/s00348-005-0942-3
Nogueira, J., Lecuona, A., Rodríguez, P.A.: Identification of a new source of peak locking, analysis and its removal in conventional and super-resolution PIV techniques. Exp. Fluids 30(3), 309–316 (2001). DOI 10.1007/s003480000179. URL http://dx.doi.org/10.1007/s003480000179
Novara, M., Ianiro, A., Scarano, F.: Adaptive interrogation for 3D-PIV. Meas. Sci. Technol. 24, 024012 (2013). DOI 10.1088/0957-0233/24/2/024012. URL http://dx.doi.org/10.1088/0957-0233/24/2/024012
Ohmi, K., Li, H.Y.: Particle-tracking velocimetry with new algorithms. Meas. Sci. Technol. 11(6), 603 (2000). DOI 10.1088/0957-0233/11/6/303. URL http://stacks.iop.org/0957-0233/11/i=6/a=303
Okamoto, K., Hassan, Y.A., Schmidl, W.D.: New tracking algorithm for particle image velocimetry. Exp. Fluids 19(5), 342–347 (1995). DOI 10.1007/BF00203419. URL http://dx.doi.org/10.1007/BF00203419
Okamoto, K., Nishio, S., Saga, T., Kobayashi, T.: Standard images for particle-image velocimetry. Meas. Sci. Technol. 11(6), 685 (2000). DOI 10.1088/0957-0233/11/6/311. URL http://stacks.iop.org/0957-0233/11/i=6/a=311
Papoulis, A.: Signal Analysis. McGraw-Hill Inc., New York (1981)
Papoulis, A., Pillai, S.U.: Probability, Random Variables, and Stochastic Processes, 4th edn. McGraw-Hill Education Ltd., New York (2002). http://www.mhhe.com/engcs/electrical/papoulis/
Pereira, F., Stüer, H., Graff, E.C., Gharib, M.: Two-frame 3D particle tracking. Meas. Sci. Technol. 17(7), 1680 (2006). DOI 10.1088/0957-0233/17/7/006. URL http://stacks.iop.org/0957-0233/17/i=7/a=006
Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes: The Art of Scientific Computing, 3rd edn. Cambridge University Press, New York (2007). http://numerical.recipes/
Quénot, G.M., Pakleza, J., Kowalewski, T.A.: Particle image velocimetry with optical flow. Exp. Fluids 25(3), 177–189 (1998). DOI 10.1007/s003480050222. URL http://dx.doi.org/10.1007/s003480050222
Reynolds, G.O., DeVelis, J.B., Parrent, G.B., Thompson, B.J.: The New Physical Optics Notebook: Tutorials In Fourier Optics. Optical Engineering Press, Bellingham (1989). DOI 10.1117/3.2303. URL http://dx.doi.org/10.1117/3.2303
Roesgen, T.: Optimal subpixel interpolation in particle image velocimetry. Exp. Fluids 35(3), 252–256 (2003). DOI 10.1007/s00348-003-0627-8. URL http://dx.doi.org/10.1007/s00348-003-0627-8
Ronneberger, O., Raffel, M., Kompenhans, J.: Advanced evaluation algorithms for standard and dual plane particle image velocimetry. In: 9th International Symposium on Applications of Lasers to Fluid Mechanics, Lisbon (Portugal) (1998)
Rossi, M., Segura, R., Cierpka, C., Kähler, C.J.: On the effect of particle image intensity and image preprocessing on the depth of correlation in micro-PIV. Exp. Fluids 52(4), 1063–1075 (2012). DOI 10.1007/s00348-011-1194-z. URL http://dx.doi.org/10.1007/s00348-011-1194-z
Ruhnau, P., Schnörr, C.: Optical Stokes flow estimation: an imaging-based control approach. Exp. Fluids 42(1), 61–78 (2007). DOI 10.1007/s00348-006-0220-z. URL http://dx.doi.org/10.1007/s00348-006-0220-z
Ruhnau, P., Kohlberger, T., Schnörr, C., Nobach, H.: Variational optical flow estimation for particle image velocimetry. Exp. Fluids 38(1), 21–32 (2005). DOI 10.1007/s00348-004-0880-5. URL http://dx.doi.org/10.1007/s00348-004-0880-5
Sage, D.: Local normalization - filter to reduce the effect on a non-uniform illumination. Technical Report, Biomedical Image Group, EPFL, Switzerland (2011). http://bigwww.epfl.ch/sage/soft/localnormalization/
Santiago, J.G., Wereley, S.T., Meinhart, C.D., Beebe, D.J., Adrian, R.J.: A particle image velocimetry system for microfluidics. Exp. Fluids 25(4), 316–319 (1998). DOI 10.1007/s003480050235. URL http://dx.doi.org/10.1007/s003480050235
Scarano, F.: Iterative image deformation methods in PIV. Meas. Sci. Technol. 13(1), R1 (2002). DOI 10.1088/0957-0233/13/1/201. URL https://dx.doi.org/10.1088/0957-0233/13/1/201
Scarano, F.: Theory of non-isotropic spatial resolution in PIV. Exp. Fluids 35(3), 268–277 (2003). DOI 10.1007/s00348-003-0655-4. URL http://dx.doi.org/10.1007/s00348-003-0655-4
Scarano, F., Riethmuller, M.L.: Iterative multigrid approach in PIV image processing with discrete window offset. Experiments in Fluids 26(6), 513–523 (1999). DOI 10.1007/s003480050318. URL http://dx.doi.org/10.1007/s003480050318
Scarano, F., Riethmuller, M.L.: Advances in iterative multigrid PIV image processing. Exp. Fluids 29(1), S051–S060 (2000). DOI 10.1007/s003480070007. URL http://dx.doi.org/10.1007/s003480070007
Scharnowski, S., Hain, R., Kähler, C.J.: Reynolds stress estimation up to single-pixel resolution using PIV-measurements. Exp. Fluids 52(4), 985–1002 (2012). DOI 10.1007/s00348-011-1184-1. URL http://dx.doi.org/10.1007/s00348-011-1184-1
Scholz, U., Kähler, C.J.: Dynamics of flow structures on heaving and pitching airfoils. In: 13th International Symposium on Applications of Laser Techniques to Fluid Mechanics, Lisbon, Portugal (2006). http://ltces.dem.ist.utl.pt/lxlaser/lxlaser2006/downloads/papers/40_4.pdf
Schrijer, F.F.J., Scarano, F.: Effect of predictor-corrector filtering on the stability and spatial resolution of iterative PIV interrogation. Exp. Fluids 45(5), 927–941 (2008). DOI 10.1007/s00348-008-0511-7. URL http://dx.doi.org/10.1007/s00348-008-0511-7
Sciacchitano, A., Scarano, F.: Elimination of PIV light reflections via a temporal high pass filter. Meas. Sci. Technol. 25(8), 084009 (2014). DOI 10.1088/0957-0233/25/8/084009. URL http://stacks.iop.org/0957-0233/25/i=8/a=084009
Sciacchitano, A., Scarano, F., Wieneke, B.: Multi-frame pyramid correlation for time-resolved PIV. Exp. Fluids 53(4), 1087–1105 (2012). DOI 10.1007/s00348-012-1345-x. URL http://dx.doi.org/10.1007/s00348-012-1345-x
Shavit, U., Lowe, R.J., Steinbuck, J.V.: Intensity capping: a simple method to improve cross-correlation PIV results. Exp. Fluids 42(2), 225–240 (2007). DOI 10.1007/s00348-006-0233-7. URL http://dx.doi.org/10.1007/s00348-006-0233-7
Soria, J.: An investigation of the near wake of a circular cylinder using a video-based digital cross-correlation particle image velocimetry technique. Exp. Thermal Fluid Sci. 12(2), 221–233 (1996). DOI 10.1016/0894-1777(95)00086-0. URL http://www.sciencedirect.com/science/article/pii/0894177795000860
Soria, J., Willert, C.E.: On measuring the joint probability density function of three-dimensional velocity components in turbulent flows. Meas. Sci. Technol. 23(6), 065301 (2012). DOI 10.1088/0957-0233/23/6/065301. URL http://dx.doi.org/10.1088/0957-0233/23/6/065301
Stanislas, M., Okamoto, K., Kähler, C.J.: Main results of the first international PIV challenge. Meas. Sci. Technol. 14(10), R63 (2003). DOI 10.1088/0957-0233/14/10/201. URL http://stacks.iop.org/0957-0233/14/i=10/a=201
Stanislas, M., Okamoto, K., Kähler, C.J., Westerweel, J.: Main results of the second international PIV challenge. Exp. Fluids 39(2), 170–191 (2005). DOI 10.1007/s00348-005-0951-2. URL http://dx.doi.org/10.1007/s00348-005-0951-2
Stanislas, M., Okamoto, K., Kähler, C.J., Westerweel, J., Scarano, F.: Main results of the third international PIV challenge. Exp. Fluids 45(1), 27–71 (2008). DOI 10.1007/s00348-008-0462-z. URL http://dx.doi.org/10.1007/s00348-008-0462-z
Stitou, A., Riethmuller, M.L.: Extension of PIV to super resolution using PTV. Meas. Sci. Technol. 12(9), 1398 (2001). DOI 10.1088/0957-0233/12/9/304. URL http://stacks.iop.org/0957-0233/12/i=9/a=304
Takehara, K., Adrian, R.J., Etoh, G.T., Christensen, K.T.: A kalman tracker for super-resolution PIV. Exp. Fluids 29(1), S034–S041 (2000). DOI 10.1007/s003480070005. URL http://dx.doi.org/10.1007/s003480070005
Theunissen, R., Scarano, F., Riethmuller, M.L.: On improvement of PIV image interrogation near stationary interfaces. Exp. Fluids 45(4), 557–572 (2008). DOI 10.1007/s00348-008-0481-9. URL http://dx.doi.org/10.1007/s00348-008-0481-9
Theunissen, R., Scarano, F., Riethmuller, M.L.: Spatially adaptive PIV interrogation based on data ensemble. Exp. Fluids 48(5), 875–887 (2010). DOI 10.1007/s00348-009-0782-7. URL http://dx.doi.org/10.1007/s00348-009-0782-7
Thévenaz, P., Blu, T., Unser, M.: Interpolation revisited. IEEE Trans. Med. Imaging 19(7), 739–758 (2000). DOI 10.1109/42.875199. URL http://dx.doi.org/10.1109/42.875199
Tokumaru, P.T., Dimotakis, P.E.: Image correlation velocimetry. Exp. Fluids 19(1), 1–15 (1995). DOI 10.1007/BF00192228. URL http://dx.doi.org/10.1007/BF00192228
Unser, M.: Splines: a perfect fit for signal and image processing. IEEE Signal Process. Mag. 16(6), 22–38 (1999). DOI 10.1109/79.799930. URL http://dx.doi.org/10.1109/79.799930 (IEEE Signal Processing Society’s 2000 magazine award)
Virant, M., Dracos, T.: Establishment of a videogrammetric PTV system. In: Dracos, T. (ed.) Three-Dimensional Velocity and Vorticity Measuring and Image Analysis Techniques. ERCOFTAC Series, vol. 4, pp. 229–254. Springer, Netherlands (1996). DOI 10.1007/978-94-015-8727-3_11. URL http://dx.doi.org/10.1007/978-94-015-8727-3_11
Wereley, S.T., Meinhart, C.D.: Second-order accurate particle image velocimetry. Exp. Fluids 31(3), 258–268 (2001). DOI 10.1007/s003480100281. URL http://dx.doi.org/10.1007/s003480100281
Wereley, S.T., Gui, L., Meinhart, C.D.: Advanced algorithms for microscale particle image velocimetry. AIAA J. 40(6), 1047–1055 (2002). DOI 10.2514/2.1786. URL http://arc.aiaa.org/doi/abs/10.2514/2.1786
Wernet, M.P.: Symmetric phase only filtering: a new paradigm for DPIV data processing. Meas. Sci. Technology 16(3), 601 (2005). DOI 10.1088/0957-0233/16/3/001. URL http://stacks.iop.org/0957-0233/16/i=3/a=001
Westerweel, J.: Digital particle image velocimetry: theory and application. Ph.D. thesis, Mechanical Maritime and Materials Engineering, Delft University of Technology (1993). http://repository.tudelft.nl/islandora/object/uuid:85455914-6629-4421-8c77-27cc44e771ed/datastream/OBJ/download
Westerweel, J., Dabiri, D., Gharib, M.: The effect of a discrete window offset on the accuracy of cross-correlation analysis of digital PIV recordings. Exp. Fluids 23(1), 20–28 (1997). DOI 10.1007/s003480050082. URL http://dx.doi.org/10.1007/s003480050082
Westerweel, J., Geelhoed, P., Lindken, R.: Single-pixel resolution ensemble correlation for micro-PIV applications. Exp. Fluids 37(3), 375–384 (2004). DOI 10.1007/s00348-004-0826-y. URL http://dx.doi.org/10.1007/s00348-004-0826-y
Willert, C.E.: Stereoscopic digital particle image velocimetry for application in wind-tunnel flows. Meas. Sci. Technol. 8, 1465–1479 (1997). DOI 10.1088/0957-0233/8/12/010. URL http://stacks.iop.org/0957-0233/8/i=12/a=010
Willert, C.E.: Adaptive PIV processing based on ensemble correlation. In: 14th International Symposium on Applications of Laser Techniques to Fluid Mechanics, Lisbon (Portugal) (2008). http://ltces.dem.ist.utl.pt/lxlaser/lxlaser2008/papers/02.1_5.pdf
Willert, C.E., Gharib, M.: Digital particle image velocimetry. Exp. Fluids 10(4), 181–193 (1991). DOI 10.1007/BF00190388. URL https://dx.doi.org/10.1007/BF00190388
Willert, C.E., Jarius, M.: Planar flow field measurements in atmospheric and pressurized combustion chambers. Exp. Fluids 33(6), 931–939 (2002). DOI 10.1007/s00348-002-0515-7. URL http://dx.doi.org/10.1007/s00348-002-0515-7
Yaroslavsky, L.P.: Digital picture processing: an introduction. Information Sciences, vol. 9. Springer, Berlin (1985). DOI 10.1007/978-3-642-81929-2. URL http://dx.doi.org/10.1007/978-3-642-81929-2
Yaroslavsky, L.P.: Signal sinc-interpolation: A fast computer algorithm. Bioimaging 4(4), 225–231 (1996). DOI 10.1002/1361-6374(199612)4:4<225::AID-BIO1>3.0.CO;2-G. URL http://dx.doi.org/10.1002/1361-6374(199612)4:4<225::AID-BIO1>3.0.CO;2-G
Zhu, Y., Yuan, H., Zhang, C., Lee, C.: Image-preprocessing method for near-wall particle image velocimetry (PIV) image interrogation with very large in-plane displacement. Meas. Sci. Technol. 24(12), 125302 (2013). DOI 10.1088/0957-0233/24/12/125302. URL http://stacks.iop.org/0957-0233/24/i=12/a=125302
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
Raffel, M., Willert, C.E., Scarano, F., Kähler, C.J., Wereley, S.T., Kompenhans, J. (2018). Image Evaluation Methods for PIV. In: Particle Image Velocimetry. Springer, Cham. https://doi.org/10.1007/978-3-319-68852-7_5
Print ISBN: 978-3-319-68851-0
Online ISBN: 978-3-319-68852-7