1 Introduction

Over the past two decades, particle image velocimetry (PIV) and particle tracking velocimetry (PTV) have advanced from planar velocimetry to volumetric velocimetry that can measure three-dimensional (3D) three-component (3C) velocity vector fields in fluid flows. Such full 3D–3C flow measurement has contributed to experimental fluid mechanics as well as to fluid engineering applications, and it has enabled direct comparison with direct numerical simulation (DNS) results. To realize volumetric PIV/PTV, a number of different optical principles have been proposed to date, including multi-camera 3D PTV (Walpot et al. 2006), tomographic PIV (Scarano 2013), plenoptic PIV (Fahringer et al. 2015), defocusing PTV (Barnkob et al. 2015), holographic PIV (Lee et al. 2019), and rainbow PTV (Xiong et al. 2017). It should be noted that there have been many other publications on these individual techniques in various journals depending on the measurement target. Overall, these techniques can be classified into two groups: those using multiple cameras to capture 3D particle positions and those using a single camera with additional optical characteristics introduced to estimate the particle depth coordinate. In the former group, tomographic PIV is regarded as the best example in the present generation of tools. This method uses more than three cameras to accurately reconstruct 3D particle positions. One drawback is the difficulty in setting up the optical configuration for complex measurement targets, such as those in fluid machinery. For instance, all the elementary procedures of PIV need to be controlled precisely for all the cameras, considering the depth of field, refraction, reflection, seeding, and illumination at different angles for each camera. Another option is to use an approach from the latter group of single-camera techniques. Since these approaches deal with a single image, time and cost for both the hardware and software components are significantly reduced. Even though the accuracy and precision are lower than those obtained by tomographic PIV, the development of single-camera volumetric PIV/PTV is desirable in fluid engineering applications where multi-directional optical access is highly restricted.

In this study, we focused on two PTV techniques, color PTV and defocusing PTV, to develop a single-camera volumetric PTV technique with higher accuracy and precision than other current methods. Color PTV is a single-camera volumetric velocimetry method. In particular, it makes use of color-coded volumetric illumination of tracer particles captured by a color camera with three charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) sensors. This idea has a long history (Post et al. 1994; Brucker 1996; Gogineni et al. 1998). Because of the simplicity of the setup, many past researchers adopted several different kinds of color PIV/PTV and examined their performance in measuring 3D velocity vector fields. In the present setup, we use experimental instruments similar to those used in conventional 2D–2C planar PIV/PTV systems; the difference is the use of a color illumination device and a color camera. This setup for 3D–3C velocimetry allows a larger measurement volume than the use of multiple cameras, since the full depth of field of a single camera can be utilized. Kanda et al. (2007) investigated the 3D–3C velocity vector field of wind blowing over a tennis court using soap bubbles and a color liquid crystal display (LCD) projector as a demonstration of color PTV for a large-scale flow. However, color PIV/PTV has not yet become a widespread tool because the sensitivity and image size are considerably limited when resolving the exact color of the particles. The brightness of a color particle image must normally be kept darker than that of a monochrome particle image to avoid saturation of the RGB components. This dark recording condition conserves the hue information, i.e., linear sensitivity to the three primary colors is maintained only at dark brightness levels. Monochrome PIV/PTV does not require such a condition, since the linearity of the brightness level does not matter for particle tracking or image correlation analysis. In the early stages of color PTV trials in the 1990s, the selection of methodologies for color-to-depth conversion was severely restricted by videotape recording of analog TV signals. Because of these limitations, development stagnated, and the academic spotlight moved away from color PIV/PTV until digital cameras came into widespread use. For example, the well-known review by Adrian (2005) did not mention color PIV/PTV. However, there was still the possibility of overcoming its limitations, and the next year the review by Prenel and Bailly (2006) discussed the potential of color volumetric velocimetry. Currently, the availability of highly sensitive high-speed color digital cameras with megapixel resolutions has overcome these issues and allowed quantitative analysis with reliable reproducibility. Our group has previously reported the effective use of color-coded volumetric illumination for 3D–3C PTV (Watamura et al. 2013) and the 3D location detection of microbubbles (Park et al. 2019). Our understanding is that the development of color PTV is now in a revival stage, as made evident by the obvious increase in publications on the topic since 2010. For example, to perform color PTV, Matsushita et al. (2004) and McGregor et al. (2007) used prism-split rainbow illumination, Bendicks et al. (2011) used color-painted particles, Tien et al. (2014) used color-coded pinholes, Xiong et al. (2017) used rainbow color coupled with diffractive optical element (DOE)-lens imaging, Wang et al. (2018) used a two-camera color-coded sequence, Menser et al. (2018) used a 3C LED with time-chart control, and Schultz et al. (2019) proposed the generation of multi-cycle rainbow illumination using a Sanderson prism. There have even been reports aimed at color PTV using a single-lens reflex (SLR) camera (Funatani et al. 2013) or a smartphone (Aguirre-Pablo et al. 2017).

Another technique for single-camera 3D–3C velocimetry is defocusing PTV, the first example of which was reported by Willert and Gharib (1992). This method measures the shape distortion and size variation of defocused particle images to estimate the depth coordinate with a controlled depth of focus in the measurement volume. To judge the exact particle position in depth, tracer particles with uniform shape and size are required. However, real particles have some distribution in shape and size, which can lead to poor accuracy and precision in the depth estimation of defocusing PTV. Although the accuracy and precision have been much improved with the help of large imaging sizes (Barnkob et al. 2015; Barnkob and Rossi 2020), these limitations remain in the present generation of tools.

In the present study, color PTV and defocusing PTV are combined to improve two aspects of 3D–3C vector acquisition with a single camera: enlargement of the measurable depth and improvement of the estimation accuracy of the particles’ depth coordinates. First, we extend the measurable depth by including particle images that exist outside the depth of field. Such defocused particles are also collected in the labeling process of PTV by considering the defocusing principle of the lens optics. Next, the aperture of the camera lens is fully opened in the present approach to intentionally defocus the particles so that the color components can be stably captured with a large number of pixels. Judging the color of these particle images is relatively easy compared with in-focus particle images. In particular, we use the color and size information of the particle images simultaneously so that the uncertainty of the depth coordinate is significantly reduced. In this paper, the improvement of the estimation accuracy is discussed in detail. Among the various color-coding patterns proposed for color PTV, we apply a rainbow-type volumetric illumination with hue gradually changing along the depth coordinate. Here, hue is one of the color appearance parameters, alongside brightness, chroma, and saturation. It expresses color as an angle from 0° to 360°; for example, red, green, and blue are expressed as 0° (= 360°), 120°, and 240°, respectively. In principle, a continuous change of hue like a rainbow allows a high spatial resolution in the depth direction compared with stepwise or split color patterns. This approach is called rainbow PTV, a nickname for color PTV using rainbow-type illumination. It should be clearly distinguished from three-layer color PTV, which uses only the three primary colors; rainbow PTV deals with many intermediate colors (mixed from RGB components) to determine the particles’ depth coordinates. In an ideal situation, the spatial resolution of rainbow PTV is excellent, as hue is given continuously along the depth coordinate. For example, when the three primary colors are each resolved as 8-bit signals, the hue resolution becomes 360°/(3 × 2⁸) ≈ 0.47°, and the measurement volume is divided into 768 layers in the depth direction. Unfortunately, this resolution cannot be achieved because of false colors in actual optical configurations, caused by the following five factors: (1) light source characteristics of the rainbow illumination, (2) wavelength-dependent light scattering characteristics of the tracer particles, (3) overlapping of particles in the imaging plane, (4) color contamination in the RGB sensors, and (5) digital compression of the image/movie. Among these factors, color contamination has the greatest effect and depends on the image sensor array adopted in the digital camera (Busin et al. 2008; Pick and Lehmann 2009; Charonko et al. 2014). The concept of color contamination is briefly explained using Fig. 1. The color sensor array most commonly used in cameras is the so-called Bayer sensor (Fig. 1a). Since the sensor has a one-color receptor for each pixel, the color of each pixel is interpolated using information from the receptors around it to form color images. This interpolation generally causes no problems for human vision, but it does for color PTV, which requires quantified colors. The interpolation leads to false color, especially in regions with high-gradient RGB components, i.e., near the edge of individual particles (Fig. 1b). Since PTV analyzes particle images composed of only 5–20 pixels, most particles have a false color that deviates significantly from the true one.

Fig. 1
figure 1

Cause of false color on the Bayer sensor. a RGBG mosaic-type Bayer sensor normally used in a digital camera. b Process of false color generation on a particle caused by the Bayer sensor. The color in the reconstructed image is altered from the true color. This effect is called color contamination
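To make the mechanism of Fig. 1 concrete, the following minimal sketch (ours, for illustration only, not part of the original measurements) simulates an RGGB Bayer mosaic and a bilinear demosaicing of a synthetic green particle. All sizes and intensities are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def bayer_mosaic(rgb):
    """Sample an RGB image onto an RGGB Bayer pattern (one value per pixel)."""
    mosaic = np.zeros(rgb.shape[:2])
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B sites
    return mosaic

def demosaic_bilinear(mosaic):
    """Reconstruct RGB by normalized bilinear interpolation of each plane."""
    h, w = mosaic.shape
    sites = np.zeros((h, w, 3), bool)
    sites[0::2, 0::2, 0] = sites[1::2, 1::2, 2] = True
    sites[0::2, 1::2, 1] = sites[1::2, 0::2, 1] = True
    k = np.array([[.25, .5, .25], [.5, 1., .5], [.25, .5, .25]])
    out = np.zeros((h, w, 3))
    for c in range(3):
        plane = np.where(sites[:, :, c], mosaic, 0.0)
        weight = convolve(sites[:, :, c].astype(float), k, mode='mirror')
        out[:, :, c] = convolve(plane, k, mode='mirror') / np.maximum(weight, 1e-9)
    return out

# synthetic in-focus green particle: sharp edges -> strong RGB gradients
y, x = np.mgrid[:12, :12]
blob = np.exp(-((x - 5.5)**2 + (y - 5.5)**2) / 3.0)
truth = np.stack([0.1 * blob, 0.9 * blob, 0.1 * blob], axis=-1)

recon = demosaic_bilinear(bayer_mosaic(truth))
err = np.abs(recon - truth).max(axis=-1)      # per-pixel color error
print("center error %.3f, max error %.3f" % (err[6, 6], err.max()))
```

Running the sketch shows that the reconstruction error is small in the flat interior of the particle and largest near its rim, where the RGB gradients are steep, which is precisely why defocusing (which flattens those gradients) suppresses the false color.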

Watamura et al. (2013) attempted to solve this problem using a saturation-weighted average of the hue inside individual particle images. They also introduced two kinds of rainbow illumination switched alternately in time by a commercial liquid crystal display (LCD) projector. With this technique, a depth resolution equivalent to 256 divisions of a single measurement volume was successfully achieved. Aguirre-Pablo et al. (2019) reported the use of time–space structured illumination, realizing single-camera 3D PTV. They applied four kinds of illumination in cyclic repetition with an LCD projector. However, the switching frequency of the LCD projector was lower than 60 Hz, and the measurement was therefore limited to very slow flows. This may be overcome in the future by the latest LCD projectors, which realize projection frame rates higher than 1000 fps (Kagami and Hashimoto 2018; Ishikawa 2019). For now, however, the brightness of the projection images from these high-speed projectors is low, and it is thus difficult to actually use them for rainbow PTV.

In this paper, the defocusing technique is applied to rainbow PTV (giving what we call defocusing rainbow PTV) as a method to improve the accuracy of hue recognition by removing the false color on particle images, and to improve the spatial precision in the depth direction by multi-cycle rainbow lighting without switching. To make this principle applicable, we examine how the defocused particle images are generated on the imaging plane and propose a method to accurately detect the relevant particle information (i.e., in-plane coordinates, defocused size, and effective hue). The methodology of defocusing rainbow PTV is explained in the next section, and the technique is then demonstrated in Sect. 3.

2 Color particle imaging

2.1 Defocusing to remove false colors

False colors are generated at the edges of individual particle images due to the Bayer sensor arrangement. Defocusing can suppress this effect so that the correct colors can be extracted. Figure 2a, b shows in-focus particle images, while Fig. 2c shows a defocused particle image under green illumination. These images were taken by a high-speed color digital video camera (FASTCAM Mini AX50, Photron) having a Bayer sensor that resolves each primary color at 12 bits, i.e., 4096 levels. Each 12 × 12 pixel image is enlarged for the sake of comparison. In the in-focus image, the color information of the green particle is contaminated by orange, red, magenta, and cyan pixels around the edges of the particle. In the defocused condition, approximately pure green pixels exist within the particle image.

Fig. 2
figure 2

Image of a scattered particle with green illumination. a Focused particle image in grayscale. b Focused particle image generating false colors. c Defocused particle image in which the false colors are reduced

The most significant information used in rainbow PTV is the hue of the particle images (McGregor et al. 2007; Watamura et al. 2013; Xiong et al. 2017). To examine how much the precision of color recognition is improved by the defocusing technique, the hue of particle images illuminated by volumetric color-coded light was measured, as shown in Fig. 3a, b. The illumination light, which changes hue from 0° to 360° over time, was generated by an LCD projector (EB-W420, Epson) and collimated by a convex lens so as to propagate parallel to the x axis. Particles (HP20, Mitsubishi Chemical Co.) 300–700 μm in diameter and 1020 kg/m³ in density were neutrally suspended in a transparent viscoelastic fluid (0.2 wt% polyacrylamide aqueous solution), which enabled them to maintain their initial positions.

Fig. 3
figure 3

Improved identification of particle scattering colors achieved by defocusing. a Picture and b schematic diagram of experimental setup for hue calibration. c, d Hue calibration curves with c focused images and d defocused images, where gray error bars indicate the standard deviation

For estimation of the hue, we adopted the saturation-weighted average hue inside each particle image, defined as follows:

$$\overline{H} = \arctan \left( \frac{\sum S\sin H}{\sum S\cos H} \right),$$
(1)

where H and S are the hue and saturation of each pixel in the image, respectively. The effectiveness of this formula for rainbow PTV was confirmed by Watamura et al. (2013). The relationship between the illuminated and measured color in terms of hue is plotted in Fig. 3c for the in-focus condition and Fig. 3d for the defocused condition. The plots reveal a single meandering curve caused by the different sensitivity spectra of the RGB sensors. The flat regions around 0° (= 360°; red), 120° (green), and 240° (blue) in the illuminated hue are caused by the overlap of the spectra among the three bands. Similar results have also been reported by Park et al. (2019) for microbubbles illuminated by rainbow color. Although the curves cannot be approximated by a linear function, they remain monotonic functions of the illuminated hue, which allows deterministic regression of the illuminated hue from the measured hue. The accuracy of this regression, however, is governed by the standard deviation of the plots when applied to rainbow PTV, which requires the hue of each individual particle rather than an average. The resolvable number M of the depth coordinate for a single rainbow illumination is estimated by the following:

$$M = \frac{360}{\tilde{\sigma}},\quad \tilde{\sigma} = \left( \frac{1}{360}\int_{0}^{360} \frac{1}{\sigma (\theta )}\,{\text{d}}\theta \right)^{-1},$$
(2)

where σ(θ) is the standard deviation as a function of the illuminated hue. Here, σ̃ is the harmonic mean of σ(θ), so small values of σ(θ) contribute dominantly to the mean value. Based on the standard deviations shown in the bottom profiles of Fig. 3c, d, the resolvable number is calculated to be M = 15 for the in-focus images and M = 75 for the defocused images. An approximately fivefold improvement in accuracy is thus confirmed.
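For clarity, Eqs. (1) and (2) can be written in a few lines of code. The sketch below is our illustration, not the implementation used in this study: the first function evaluates the saturation-weighted circular mean of Eq. (1) (in atan2 form, which handles all quadrants), and the second evaluates the resolvable number M of Eq. (2) from a profile σ(θ); the σ(θ) in the example is hypothetical.

```python
import numpy as np

def weighted_mean_hue(hue_deg, sat):
    """Saturation-weighted circular mean of the hue, Eq. (1).
    Weakly saturated (false-color) pixels receive small weight."""
    h = np.radians(np.asarray(hue_deg))
    s = np.asarray(sat)
    # atan2 form of Eq. (1): handles all four quadrants correctly
    mean = np.arctan2(np.sum(s * np.sin(h)), np.sum(s * np.cos(h)))
    return np.degrees(mean) % 360.0

def resolvable_number(theta_deg, sigma_deg):
    """Resolvable number of depth layers M, Eq. (2), from the measured
    standard deviation sigma(theta) of the hue (both in degrees)."""
    inv = 1.0 / np.asarray(sigma_deg)
    # trapezoidal integration of 1/sigma over one 360-deg hue cycle
    integral = np.sum(0.5 * (inv[1:] + inv[:-1]) * np.diff(theta_deg))
    sigma_tilde = 360.0 / integral      # harmonic mean of sigma(theta)
    return 360.0 / sigma_tilde

# toy example: a greenish particle with two weakly saturated outlier pixels
hue = np.array([118.0, 122.0, 120.0, 300.0, 40.0])
sat = np.array([0.90, 0.80, 0.95, 0.05, 0.10])
print("weighted mean hue: %.1f deg" % weighted_mean_hue(hue, sat))

theta = np.linspace(0.0, 360.0, 181)
sigma = 4.0 + 2.0 * np.cos(np.radians(3.0 * theta))**2  # hypothetical sigma(theta)
print("resolvable layers M = %.0f" % resolvable_number(theta, sigma))
```

Weighting by saturation automatically suppresses the nearly gray false-color pixels, since they carry low saturation.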

As a demonstration of rainbow PTV incorporating the defocusing technique, we measured the flow under a rotating impeller in a rectangular water container, as shown in Fig. 4a. A volumetric light with hue gradually changing in the z direction was irradiated parallel to the horizontal xy plane. In this setup, we produced a single-cycle rainbow color, and all the particles were equally defocused to remove false colors (note that we introduce multi-cycle rainbows in Sect. 2.3). The number of instantaneous 3D velocity vectors averaged 150 when a two-frame nearest-neighbor search was applied for particle tracking. A sample velocity vector field is shown in Fig. 4b, to which Laplace equation rearrangement (LER; Ido et al. 2002) was applied in the spatiotemporal 4D domain to obtain the flow on a regular grid system. Here, U stands for the tip speed of the impeller. We will not elaborate on the flow structure in this paper; however, a change in the swirling flow in the z direction was reliably measured, as highlighted by the iso-surface of the vorticity at |rot (u/U)| = 0.1, for example.

Fig. 4
figure 4

Demonstration of rainbow volumetric PTV with defocusing technique. a Experimental setup. b Instantaneous 3D–3C velocity vector field, where the gray surface is the iso-surface indicating |rot (u/U)|= 0.1

2.2 Detection of particle positions from distorted particle images

Particle images under defocusing conditions are unavoidably distorted (Barnkob et al. 2015). The distortion becomes significant in the region away from the center of the imaging plane due to lens characteristics. This worsens the accuracy of particle detection as well as the identification of particles in comparison with a focused image. To predict how significant the distortion is, we simulated particle images using a ray analysis for a simple single-lens geometry, as illustrated in Fig. 5a. In the ray analysis, the defocusing effect was realized by shifting the imaging plane toward the lens by a small distance, l_d. Light sources were distributed on the object surface and radiated rays in all 3D directions; only the rays that reached the lens contributed to the formation of images. Table 1 shows the parameters used for the ray analysis.
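The essence of this ray analysis can be captured by a 2D (meridional-plane) thin-lens sketch: rays from an off-axis point source are traced through an ideal lens to an imaging plane shifted toward the lens by l_d. The sketch below is ours; the parameters are illustrative and are not those of Table 1, and aberrations are ignored here, as in Fig. 5b.

```python
import numpy as np

# illustrative thin-lens parameters (not the values of Table 1)
f, a, l_d = 50.0, 200.0, 5.0   # focal length, object distance, plane shift [mm]
R = 10.0                       # lens aperture radius [mm]
y0 = 3.0                       # off-axis height of the point light source [mm]

b = 1.0 / (1.0 / f - 1.0 / a)  # in-focus image distance (thin-lens equation)
y_img = -y0 * b / a            # ideal image height (lateral magnification -b/a)

# rays from the source are parameterized by their height on the lens;
# only heights |yL| <= R reach the finite aperture, as in the paper
yL = np.linspace(-R, R, 2001)

# after an ideal lens, every ray heads toward the point (b, y_img);
# evaluate the ray heights on the shifted imaging plane x = b - l_d
x_plane = b - l_d
y_hit = yL + (y_img - yL) * x_plane / b

print("geometric blur diameter: %.3f mm" % (y_hit.max() - y_hit.min()))
print("spot center %.3f mm vs ideal image %.3f mm" % (y_hit.mean(), y_img))
```

For an ideal lens, the blur diameter obtained this way grows linearly with l_d (here 2R·l_d/b), which is the monotonic size-depth relation exploited in Sect. 2.3.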

Fig. 5
figure 5

Particle images distorted by defocusing. a Schematic diagram of ray tracing with a convex lens and simulated particle images at the defocused plane. b Images without consideration of the aberration caused by the lens. c Images with consideration of spherical aberration

Table 1 Parameters used for the ray analysis

First, we show a simulated result without considering any optical aberration (Fig. 5b). In the figure, three characteristics can be identified: the finite size of the light source image, local brightness gradients in individual particle images, and a global brightness gradient across the imaging plane. The former two characteristics originate from defocusing, while the last is independent of the defocusing effect. When the light source was located far from the lens axis, the number of rays reaching the lens decreased, and the average brightness became lower toward the outside of the imaging plane. This was caused by the finite size of the lens, regardless of focusing control. The other two characteristics appeared only in the defocused situation. The finite size of the light source image results from the rays not accumulating at a single point, as illustrated in Fig. 5a. This causes both the local and the global brightness gradients. The imaging plane was on the front side of the focus in this simulation, and therefore the brightness in the image became darker toward the outside from the center of the image. When the imaging plane was located on the back side, a reverse brightness gradient was produced.

Next, we explain the particle image distortion, which is mainly caused by lens aberrations. To simulate the effects of aberration, we added spherical aberration to the ray analysis. Because the influence of aberrations varies depending on the lens and the imaging conditions, aberrations are in general difficult to treat in a ray analysis; we therefore selected spherical aberration as the simplest case. In particular, a spherical glass lens following Snell's law was considered, i.e., only the refraction of light at the lens surfaces was computed. A simulated result is shown in Fig. 5c. The particle images are distorted to have asymmetric brightness patterns, including bright spots with outward tails and circular rims. If other types of aberration were added to the ray analysis, the particle shape would change accordingly. In real cameras composed of multiple lenses, the particle shape in the defocused condition becomes much more complex. As a demonstration, we examined three commercially available camera lens units, shown in Fig. 6. Light was projected from the right side in each picture, and the lens aperture was fully opened. In these lens units, multiple lenses are combined in line. The particles were illuminated by volumetric rainbow light and recorded in the defocused condition. In the results, the particle shape and local gradient varied significantly depending on the unit: an inward gradient was found for unit (a), an outward gradient for unit (b), and split circles for unit (c). This suggests that particle images cannot be predicted analytically by simple ray analysis, and that we thus need to apply an adaptive algorithm in the detection of the particles.

Fig. 6
figure 6

Shape-dependence of defocused particle images on a lens with fully opened aperture. The top, middle, and bottom rows of the figure show the lenses used for the visualization, pictures taken of the particles, and enlarged samples of the particle images from the pictures, respectively

Tracer particles and their centers have to be accurately detected for PTV. The Gaussian mask algorithm is well known and widely applied for this purpose (e.g., Takehara and Etoh 1999). However, in the case of defocused images, the applicability of this mask algorithm is limited because the distortion leads to large deviations from Gaussian brightness patterns. Further, the relationship between the center of the particle image and the actual center position of the particle needs to be investigated. For these reasons, we employed a pattern-adaptive mask algorithm for particle detection.

First, a picture of particle images illuminated by rainbow color is shown in Fig. 7. The picture was taken using the experimental setup shown in Fig. 3a and one of the lenses (AI Nikkor 35 mm F/1.4S, Nikon) introduced in Fig. 6a. Although light was projected from the right side, the particle images in the picture have moon-phase patterns whose orientation depends on the location of the particle in the image. From Figs. 6 and 7, we found that the lighting direction has little effect when the particles are sufficiently small and would be observed as circular particle images in a focused picture. The particles located in the center of the picture are projected as a full moon (i.e., a circular shape), while the particles near the outer edges become crescent shapes with a loss of brightness on their outer sides. We modeled these shapes as masks to detect the particle images. The moon-shaped masks are defined by subtracting a small circular mask from a large circular mask as follows:

$$I_{\text{moon}} = aI_{\text{main}} - I_{\text{sub}},\quad I_{\text{main,sub}} = \sqrt{r^{2} - \left( x^{2} + y^{2} \right)},\quad a = 1.25,\quad I \ge 0.$$
(3)
Fig. 7
figure 7

Distorted particle images obtained by defocusing and moon-shaped masks imitating distorted images for detecting the center coordinates of each image

Here, I and a are the intensity of the mask and a coefficient for intensity control, respectively. As shown in Fig. 8, the center locations of the masks are described as follows:

$$\left( x_{\text{main}} ,\,y_{\text{main}} \right) = \left( l\cos \theta ,\; l\sin \theta \right),\quad \left( x_{\text{sub}} ,\,y_{\text{sub}} \right) = \left( (l + r_{\text{sub}} )\cos \theta ,\; (l + r_{\text{sub}} )\sin \theta \right),$$
(4)

where l and θ are the distance from the center of the picture and the angle from the horizontal axis of the picture, respectively. In the presented example, the radius of the main mask is set to a constant r_main = 9 pixels, and the radius of the subtraction mask is given by r_sub = r_main l/l_max. With these moon-shaped masks, distorted particle images were robustly captured by searching for the maximum cross-correlation between the target particle image and the mask.
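The following sketch is our reading of Eqs. (3) and (4), with illustrative sizes: relative to the main mask, the subtraction mask is displaced outward by r_sub along the radial direction θ, so the crescent opens toward the picture edge, and matching is a zero-mean normalized cross-correlation; function names such as moon_mask and match_score are ours.

```python
import numpy as np

def cap(r, cx, cy, size):
    """Hemispherical intensity sqrt(r^2 - x^2 - y^2) of Eq. (3), centered at (cx, cy)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    return np.sqrt(np.maximum(r**2 - (x - cx)**2 - (y - cy)**2, 0.0))

def moon_mask(l, theta, l_max, r_main=9.0, a=1.25, size=25):
    """Moon-shaped mask of Eqs. (3)-(4) as a local template.
    Relative to the main mask, the subtraction mask is displaced outward
    by r_sub along the radial direction theta; r_sub = r_main * l / l_max."""
    r_sub = r_main * l / l_max
    dx, dy = r_sub * np.cos(theta), r_sub * np.sin(theta)
    return np.maximum(a * cap(r_main, 0.0, 0.0, size) - cap(r_sub, dx, dy, size), 0.0)

def match_score(patch, mask):
    """Zero-mean normalized cross-correlation between an image patch and the mask."""
    p, m = patch - patch.mean(), mask - mask.mean()
    return float((p * m).sum() / np.sqrt((p**2).sum() * (m**2).sum() + 1e-12))

# a particle near the right edge of the picture: crescent opening rightward
mask = moon_mask(l=900.0, theta=0.0, l_max=1000.0)
noisy = mask + 0.05 * np.random.default_rng(1).random(mask.shape)
print("template %s, score on a noisy copy: %.2f" % (mask.shape, match_score(noisy, mask)))
```

At the picture center (l = 0), r_sub vanishes and the mask reduces to the full-moon shape, consistent with Fig. 7.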

Fig. 8
figure 8

Parameters for generation of the moon-shaped mask. a The mask in a picture. b Coordinates of each mask forming the moon-shaped mask

Figure 9 shows a defocused image of a single particle, with the white square representing the actual center location of the particle. The actual center was detected from a different picture taken under in-focus conditions obtained by minimizing the aperture. As seen in the figure, the brightest points of individual particle images are displaced from the actual centers, with a deviation that depends on the position in the picture. In this experimental case, the displacement is directed toward the center of the picture and is not affected by the direction of the illumination light. To realize accurate particle tracking, the particle center needs to be defined within the mask region. From the figure, it can be confirmed that the center of the outer circular rim does not represent the actual particle center; instead, the particle center is located close to the highest-intensity area. Figure 10 shows to what extent the particle detection ability and the accuracy of center identification were improved by the moon-shaped mask with its center position modified accordingly. Note that this figure is not derived from Fig. 7 but from a different picture taken for evaluating statistics. Both the Gaussian mask and the moon-shaped mask were applied to the sample picture for particle image detection and center identification. Symbols indicate the error, i.e., the distance between the actual center and the center identified by each mask. The number of particles detected with the Gaussian mask was approximately 60% lower than that detected with the moon-shaped mask because the Gaussian mask does not match the shape of the particle image. Moreover, the moon-shaped mask improved the accuracy of center identification by 40% compared with the Gaussian mask.

Fig. 9
figure 9

The actual center location in a defocused particle image. a A defocused particle image and a focused image indicated by white cells, where this particle image is located in the upper-right corner of the full picture. Other particle images in the right column are sampled from the b left, c center, and d right of the picture, with white squares indicating the central location of each particle

Fig. 10
figure 10

Comparison of the values calculated by the algorithms to detect particles and their center locations. a, b Particles detected using the a Gaussian and b moon-shaped masks. c The probabilities of error based on the actual particle centers

In the present paper, we constructed empirically designed masks, i.e., the moon-shaped masks, for particle image detection and center identification as a test case. The shape of the particle image depends on the particular lens used, and predicting the shape before testing is thus generally difficult. The moon-shaped mask introduced in this paper therefore does not cover all situations of defocusing rainbow PTV in which a lens different from the present one is used. For example, the particle image in Fig. 6c is not moon-shaped, and our mask does not work properly in this case. Toward the general use of defocusing rainbow PTV, an automatic mask-generation algorithm is expected to be developed, e.g., with the help of machine learning of the defocused color image patterns.

2.3 Multi-cycle rainbow illumination in the depth direction

Employing the defocusing technique allows for the application of multi-cycle rainbow illumination in the determination of particle depth coordinates. In particular, the 3D position is given by a combination of the size and color of individual particle images. Figure 11 illustrates various possible combinations to explain this principle. In a case in which the defocusing technique is not used (Fig. 11a), the depth z of the particle is simply estimated from the hue of a single-cycle rainbow illumination via the hue-to-depth regression. Figure 11b shows a case in which defocusing is applied together with single-cycle rainbow illumination. We can determine the depth independently from either the measured particle diameter or the hue. In the present paper, the depth is determined by the hue because the precision of the hue is improved by defocusing; moreover, it is difficult to estimate the size correctly since the shape of the particle image is distorted by the defocusing. If the size could be estimated correctly, averaging these two depth estimates would better approximate the true depth of the particle. A combination of defocused imaging and two-cycle rainbow illumination is shown in Fig. 11c. In this case, we cannot judge the depth from the hue alone because it presents two distinct possibilities. However, because the size gives an approximation of the depth, it is possible to define the depth using the hue and size simultaneously, as sketched in the code below. An advantage of this combination is an improvement in the accuracy of the hue-to-depth conversion owing to the large gradient of the hue, dH/dz; errors in the hue measurement, such as random and systematic hue fluctuations, are thus relaxed during depth estimation. Since multi-cycle rainbow illumination is easily produced using a commercial LCD projector, defocused imaging can be successfully combined with it. As shown in Fig. 11d, three-cycle illumination would further improve the spatial resolution. However, its combination with the defocusing technique is impractical because defocusing can reliably classify the particle image size into no more than two layers. To increase the number of cycles, it is necessary to suppress the deviation in the size distribution of the tracer particles and to use a camera with a larger number of pixels.
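The depth decision of Fig. 11c can be summarized as follows: the coarse depth implied by the image size selects the rainbow cycle, and the hue then fixes the depth within that cycle. The sketch below is our illustration with a hypothetical linear size-depth calibration (size_at and all numbers are assumptions; in practice both the size-depth and hue-depth relations come from calibration, such as that of Fig. 3).

```python
import numpy as np

def depth_from_hue_and_size(hue_deg, size_px, depth_range=(0.0, 30.0),
                            n_cycles=2, size_at=(6.0, 30.0)):
    """Resolve the particle depth under n-cycle rainbow illumination.
    hue_deg : weighted-mean hue of the particle image [deg]
    size_px : measured particle-image diameter [px]
    size_at : hypothetical image diameters at the two ends of the volume,
              standing in for a measured, monotonic size-depth calibration."""
    z0, z1 = depth_range
    cycle_len = (z1 - z0) / n_cycles
    # hue fixes the position within a cycle (fine but ambiguous among cycles);
    # a linear hue-depth map is assumed here for simplicity
    z_in_cycle = hue_deg / 360.0 * cycle_len
    # size gives a coarse but unambiguous depth estimate
    s0, s1 = size_at
    z_coarse = z0 + (size_px - s0) / (s1 - s0) * (z1 - z0)
    # choose the cycle whose hue-based depth is closest to the coarse estimate
    candidates = z0 + np.arange(n_cycles) * cycle_len + z_in_cycle
    return float(candidates[np.argmin(np.abs(candidates - z_coarse))])

# the same cyan hue (~180 deg) with different image sizes -> different cycles
print(depth_from_hue_and_size(180.0, 8.0))    # small image -> first cycle, z = 7.5
print(depth_from_hue_and_size(180.0, 24.0))   # large image -> second cycle, z = 22.5
```

With two cycles, a given hue error translates into half the depth error of single-cycle illumination, which is the accuracy gain from the steep gradient dH/dz noted above.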

Fig. 11
figure 11

Possible patterns in the combination of defocusing and rainbow PTV. a Normal rainbow PTV. b Defocusing rainbow PTV with one-cycle, c two-cycle, and d three-cycle illumination. Red, blue, and white circles indicate the measured diameter, measured hue, and measured depth, respectively. The gray region indicates the effective color cycle to which the particle depth belongs, as identified from the particle image size

Two figures are presented to help in understanding this principle. First, Fig. 12a shows an optical setup for two-cycle rainbow illumination combined with defocused imaging. Using this setup in a water flow seeded with particles, the color particle images shown in Fig. 12b were obtained. Here, particle images of the same color but different sizes can be seen: one relatively small and the other relatively large. Figure 13 illustrates the algorithm used to determine the depth coordinates of individual particles. For example, particles A and B both produce blue images at time t1, but they are projected with different sizes; the image of particle B is smaller than that of particle A because the camera's focal plane is close to particle B. Small movements of these particles cause the color to change from blue to cyan at t2. Further motion causes emergence, disappearance, and changes in size at t3. On the one hand, this procedure is unaffected by deviations in the real particle size, since the particle image size is mostly determined by the degree of defocusing; furthermore, the color changes sharply owing to the multi-cycle rainbow illumination. This combination makes the proposed technique feasible for a wide range of flow conditions. On the other hand, the overlapping of particle images becomes frequent in defocused imaging, restricting the upper limit of detectable particle image densities. Roughly, the upper limit is estimated to be around 200 particles/(500 × 500 pixels) ~ 0.001 particles per pixel (ppp). Similar issues have been reported in the defocused imaging of bubbles and droplets (Murai et al. 2001; Kawaguchi et al. 2002). Reducing the defocusing level or applying image processing that separates overlapping particle images is a possible way to raise the ppp value.

Fig. 12
figure 12

Two-cycle rainbow color PTV with defocusing technique. a Schematic diagram of the facility setup, where the divergence of the light was eliminated by inserting a convex lens. b Part of a picture obtained from the camera

Fig. 13
figure 13

Principle for recognizing particle location. a Situation in which three particles pass in the measurement area. b Particle images obtained at each time

3 An example application to 3D flow measurement

As an experimental demonstration, we selected the investigation of the 3D flow in the downstream region of a twisted Savonius turbine. Several researchers have reported that twisted turbines have better performance than normal straight-type Savonius turbines (Saha and Rajkumar 2006; Damak et al. 2013). One of the reasons for this is the reduction of large periodic vortex shedding, which releases large amounts of kinetic energy downstream. Before applying the defocusing rainbow PTV, the flow was measured by a hot-wire anemometer, as shown in Fig. 14a. A turbine 150 mm in height and 75 mm in diameter (D), with blades twisted by 180°, was examined. The main flow velocity in the wind tunnel was U = 3.5 m/s, and the tip speed ratio of the turbine was fixed at 0.4 by a stepping motor. Under these experimental conditions, the Reynolds number defined by D and U was approximately 1.8 × 10⁴. The hot-wire anemometer was set 2D downstream from the turbine. The time-averaged velocity and turbulence intensity are shown in Fig. 14b, c, respectively. To assess the effects of the twisted blades, measurement data for a straight-type Savonius turbine are also plotted in the figures. The average velocity behind the straight-type turbine gradually increases in the vertical z direction due to the ground effect, while that behind the twisted turbine has a uniform vertical distribution at approximately 50% of the main flow velocity. We expect that this is explained by the vertical flow induced by the twisted blades. The turbulence intensity of the twisted turbine was relatively low, although its average velocity was relatively high at z/H < 0.9. The kind of 3D flow structure that modified these wake characteristics was then sought using the present multi-cycle defocusing rainbow PTV.

Fig. 14
figure 14

Effect of the twisted blade of a Savonius turbine on flow in the downstream region, where the tip speed ratio of the turbine is 0.4. a Experimental setup, where the x- and z-axes are set along the streamwise direction of the main flow and the rotation axis of the turbine, respectively. b Time-averaged streamwise velocity. c Turbulence intensity

Figure 15 shows the experimental facilities used to measure the downstream flow structure of the twisted turbine. A towing tank containing tap water was used, in which the turbine was towed horizontally at a constant speed together with a camera and an LCD projector. The turbine was installed upside down in the towing tank, with its end plate located at the water surface to avoid the ground effect. The towing speed was set to U = 0.3 m/s, and the corresponding Reynolds number was approximately Re = UD/ν = 1.8 × 10⁴, where ν is the kinematic viscosity of water. The frame rate of the camera was set to 750 fps, and the spatial resolutions were 0.2 mm/pixel in the xz plane and 0.15 mm per 1° of hue in the y direction. Given the accuracy of particle center detection and the precision of hue recognition, the bias error of the particle location was estimated to be within 1 mm in all directions over the 3D measurement volume.

Fig. 15
figure 15

Twisted Savonius turbine experiments performed in a towing tank. a Picture of facility setup. b Top and c side views of measurement area, where D and H are the diameter and height of the turbine, respectively

Samples of the visualization results are shown in Fig. 16. In the camera picture shown in Fig. 16a, tracer particles are projected with a variety of colors and sizes. As the first step of the PTV, particle locations in the xz plane of the measurement volume were determined by image masking correlation based on the moon-shaped masks. Then, the location of each particle in the y direction was computed from the size and hue of its color particle image. All the 3D particle coordinates were tracked over four consecutive frames to obtain instantaneous velocity vectors with three components, u = (u, v, w), as presented in Fig. 16b. The number of velocity vectors captured was 120 among the ~ 500 particles identified in the original image. The reduction in number was caused by the partial overlapping of particles and by unsuccessful tracking due to the finite hue resolution. Considering the sub-pixel processing used to define particle locations, the accuracy of the present velocity vectors is about 0.013U (Udrea et al. 1996). The instantaneous velocity vector distribution in the figure is, by itself, of limited use in identifying the flow structure; however, the particle position z and the velocity component in the z direction are secured, which allows the data to be interpolated to examine the 3D wake structure in more detail. In preparation for evaluating various contours inside the wake, we converted the PTV data to a regular-grid vector field, as shown in Fig. 16c. Here, we employed Lagrangian-to-Eulerian formatting of the scattered vector field in the spatiotemporal four-dimensional domain (x, y, z, t) using the biquadratic ellipsoidal rearrangement (BER) algorithm proposed by Ido and Murai (2006). This interpolation allows fine individual vortices to be estimated from a limited number of velocity vectors per vortex: according to their paper, 12 vectors around a single vortex can reconstruct the original vortical structure with a vector cross-correlation coefficient of 0.95 (Ido et al. 2002).
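For reference, the four-frame tracking step can be sketched as a nearest-neighbor search with constant-velocity extrapolation over the two subsequent frames. The paper's exact cost function is not specified here, so the following is a generic variant (ours) with illustrative search radii and synthetic data.

```python
import numpy as np

def track_four_frames(frames, r_search=2.0, r_match=1.0):
    """Minimal four-frame tracking: nearest neighbor in the second frame,
    then constant-velocity extrapolation validated in frames 3 and 4.
    frames : list of four (N_i, 3) arrays of 3D particle positions [mm]."""
    p0s, p1s, p2s, p3s = frames
    tracks = []
    for p0 in p0s:
        d1 = np.linalg.norm(p1s - p0, axis=1)
        if d1.min() > r_search:          # no plausible partner in frame 2
            continue
        p1 = p1s[d1.argmin()]
        v = p1 - p0                      # provisional velocity (unit frame time)
        p_prev, ok = p1, True
        for pts in (p2s, p3s):           # the track must continue twice more
            pred = p_prev + v            # constant-velocity prediction
            d = np.linalg.norm(pts - pred, axis=1)
            if d.min() > r_match:
                ok = False
                break
            p_prev = pts[d.argmin()]
        if ok:
            tracks.append((p0, v))
    return tracks

rng = np.random.default_rng(0)
p = rng.uniform(0.0, 50.0, (100, 3))          # synthetic particles, 50 mm cube
v_true = np.array([0.8, 0.0, 0.3])            # uniform test motion per frame
frames = [p + i * v_true + rng.normal(0.0, 0.05, p.shape) for i in range(4)]
tracks = track_four_frames(frames)
print(len(tracks), "tracks; mean v =", np.mean([v for _, v in tracks], axis=0))
```

Requiring the track to persist over four frames rejects spurious pairings, which is why no additional smoothing is needed for the vectors reported below.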

Fig. 16
figure 16

Processing to obtain the 3D–3C instantaneous velocity field. a Snapshot of particles illuminated by two-cycle rainbow illumination in the depth direction. b Instantaneous velocity vectors u = (u, v, w) obtained by the PTV. c Interpolated velocity vector field obtained by converting the PTV data to a regular grid format using the algorithm proposed by Ido and Murai (2006)

Figure 17 shows iso-surfaces of scalar distributions computable from the measured velocity vector distribution. Figure 17a shows a vertical velocity contour at y = 0 and an iso-surface of u = 0.5U in red. The white iso-surface in Fig. 17b represents the normalized helicity density at u·ω/|u||ω| = 0.9. Helicity is one of the conserved quantities that can be used to visualize 3D vortical structures (Kelvin 1987; Kasagi et al. 1995; Janke et al. 2017). From the results, two specific flow structures were identified that explain the vertically more uniform streamwise velocity profiles recovered behind the twisted turbine. One was a vertical flow reaching half of the turbine's height from the top and bottom regions, and the other was a streamwise vortical structure released downstream. Neither occurs in the case of a normal straight turbine, for which the original 2D flow is maintained (Murai et al. 2007). The vertical flows supply kinetic energy toward the center area, while the streamwise vortex equalizes the energy by momentum transfer. As a result, the velocity in the downstream region of the twisted turbine recovered quickly compared with that of a normal straight Savonius turbine. This also indicates that the drag of the twisted turbine is smaller than that of the straight one, while the torque increases with the twisting of the blades.
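The normalized helicity density used for Fig. 17b is straightforward to evaluate on the BER-interpolated regular grid. The sketch below (ours, not the code used in this study) computes it with central finite differences via np.gradient and checks itself on an ABC (Beltrami) flow, for which velocity and vorticity are parallel and the normalized helicity equals one.

```python
import numpy as np

def normalized_helicity(u, v, w, dx, dy, dz):
    """Normalized helicity density u.omega/(|u||omega|) on a regular grid.
    u, v, w : velocity components indexed as [ix, iy, iz]."""
    du = np.gradient(u, dx, dy, dz)   # du[0]=du/dx, du[1]=du/dy, du[2]=du/dz
    dv = np.gradient(v, dx, dy, dz)
    dw = np.gradient(w, dx, dy, dz)
    om_x = dw[1] - dv[2]              # vorticity = curl of the velocity
    om_y = du[2] - dw[0]
    om_z = dv[0] - du[1]
    dot = u * om_x + v * om_y + w * om_z
    mag = np.sqrt(u**2 + v**2 + w**2) * np.sqrt(om_x**2 + om_y**2 + om_z**2)
    return dot / np.maximum(mag, 1e-12)   # bounded by [-1, 1]

# self-check on an ABC (Beltrami) flow, where u is parallel to its vorticity
n = 32
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
U = np.sin(Z) + np.cos(Y)
V = np.sin(X) + np.cos(Z)
W = np.sin(Y) + np.cos(X)
h = normalized_helicity(U, V, W, x[1] - x[0], x[1] - x[0], x[1] - x[0])
print("mean normalized helicity: %.3f (exact value 1 for ABC flow)" % h.mean())
```

Thresholding the result at 0.9, as in Fig. 17b, then extracts the strongly helical cores of the streamwise vortices.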

Fig. 17
figure 17

Sample results. a Vertical velocity w interpolated by BER. b Visualized streamwise vortex, where ω is vorticity

In more detail, unlike lift-driven turbines, the twisted Savonius turbine relies on flow separation behind the rotating buckets for power generation, and kinetic energy loss in the wake does not immediately explain the correlation with the power. To understand why twisting yields better performance, 3D–3C velocity vector fields need to be investigated, from which intrinsic coherent structures, as well as the pressure field and torque fluctuations, can be extracted in a next step. Although the present defocusing rainbow PTV technique does not yet provide velocity fields of sufficiently high accuracy and resolution to perform such an analysis, this demonstration offers flow structure information obtained directly from experiment. Of course, CFD simulations can supply 3D–3C velocity vector fields of a quality suitable for such analysis. Simulations, however, are subject to several assumptions, such as the turbulence model and the resolution of the 3D boundary layers along the rotating bucket surfaces. Thus, the validity of simulations must be confirmed by experimental data, and we expect that our findings will contribute to their validation.

4 Conclusion

In this paper, we proposed a method that combines rainbow PTV and defocusing PTV to improve the spatial resolution of 3D particle coordinates. We demonstrated that the method is able to prevent false color generation in individual particle images, which leads to higher precision in hue definition compared with in-focus particle imaging. Further, the method allows for multi-cycle rainbow illumination, as the particle image size becomes a function of the depth coordinate. The multi-cycle technique led to a steep change in the hue of individual particle images and improved the accuracy of the hue-to-depth regression. The combination of these two kinds of information (color and size) reduced the uncertainty of the depth coordinate so that 3D Lagrangian particle tracking could be successfully realized. At the same time, the distortion of the image caused by defocused imaging depended strongly on the lens mounted on the camera. This was overcome by introducing an adaptive mask correlation technique designed for the lens, with which the centers of the moon-shaped particle images were reconstructed.

As a demonstration of the defocusing rainbow PTV, we investigated the 3D structure of the wake behind a twisted Savonius turbine. A total of 120 velocity vectors was obtained in every frame using a four-frame tracking algorithm without any smoothing. Helicity density and other quantities revealed that the twisted turbine induced vertical flow while shedding streamwise vortices in the wake, which explains why the loss of kinetic energy was suppressed in comparison with a straight turbine. Based on this demonstration, the feasibility of the proposed defocusing rainbow PTV as a tool for experimental fluid engineering research was confirmed.