1 Introduction

Measurement of the velocity field of microflows, which have a characteristic length of several hundred micrometers or less, is becoming increasingly important in the development of microfluidic devices and in the understanding of biological mechanisms [1]. Particle image velocimetry (PIV) and particle tracking velocimetry (PTV) are widely used as velocity field measurement techniques for microflows and are called micro-PIV/PTV (\(\upmu \hbox {PIV}/\upmu \hbox {PTV}\)) in this context [2, 3]. In \(\upmu \hbox {PIV}/\upmu \hbox {PTV}\), the diffracted light from tracer particles placed in the flow is magnified by the optical system, and the velocity distribution is then measured by analyzing the motion of the particles. \(\upmu \hbox {PIV}/\upmu \hbox {PTV}\) techniques can measure flows with high spatial resolution compared with hot-wire anemometers, hot-film anemometers, and laser Doppler velocimeters. However, conventional \(\upmu \hbox {PIV}/\upmu \hbox {PTV}\) methods have a deep depth of field (DOF) [4, 5], which makes it difficult to apply them to high-precision three-dimensional three-component (3D3C) measurements.

Holographic PIV/PTV (HPIV/HPTV) [6,7,8,9], in which digital holography (DH) is applied to PIV/PTV, is used to perform 3D3C microflow measurements. In HPIV/HPTV, the interference fringes between the light scattered from tracer particles and the reference light are recorded by an image sensor, and the three-dimensional positions of the particles are determined through a numerical reconstruction process. Although this method can perform 3D3C measurements with a single light source and a single image sensor, its spatial resolution along the optical axis is low. This is because HPIV/HPTV methods reconstruct the scattered light field rather than the particle distribution itself, so the reconstructed particle image spreads greatly in the depth direction.

In this study, an HPTV technique in which a compressed sensing (CS) methodology has been adopted is applied to the measurement of microflows and the effectiveness of the approach is verified. CS is a technique that is used to estimate sparse solutions for underdetermined linear systems [10,11,12]. CS techniques can effectively remove unnecessary components from the digitally recorded hologram [13,14,15,16] and have been applied to 3D3C HPTV [17]. The purpose of this study was to establish CS-based 3D3C HPTV for microflow measurements. We focus on the spatial sparseness of the tracer particles in the flow field and estimate the three-dimensional distributions of particles based on CS. The proposed method can measure a 3D3C microflow with high accuracy using a simple optical system.

2 Principle

In the \(\upmu \hbox {PIV}/\upmu \hbox {PTV}\) techniques used to perform microflow measurements, diffracted light from tracer particles that have been injected into the flow field is enlarged by an optical system and the movement of this light is analyzed. The DOF of a particle in \(\upmu \hbox {PIV}/\upmu \hbox {PTV}\) is given by [3]:

$$\begin{aligned} \mathrm {DOF}=\frac{n\lambda _{0}}{\mathrm {NA}^{2}}+\frac{n\varDelta }{\beta \,\mathrm {NA}}\approx \frac{3n\lambda _{0}}{\mathrm {NA}^{2}}+\frac{2.16\,d_{p}}{\tan \theta }+d_{p}, \end{aligned}$$
(1)

where n is the refractive index of the medium, \(\lambda _{0}\) is the wavelength of the light source in free space, \(\varDelta\) is the sampling pitch of the image sensor, NA is the numerical aperture of the lens, \(\beta\) is the lateral magnification of the optical system, \(d_{p}\) is the particle diameter, and \(\theta\) is the light collection angle. In the optical system used in this study, \(n = 1.33\) (water), \(\lambda _{0} = 532\,\hbox {nm}\), \(\mathrm {NA} = 0.14\), \(\varDelta = 4.8\,\upmu \hbox {m}\), and \(\beta = 5.0\), which gives \(\mathrm {DOF} = 45.22\,\upmu \hbox {m}\). When \(\mathrm {DOF}\gg d_{p}\), a large number of particles exists within the DOF, which reduces the spatial resolution. A confocal \(\upmu \hbox {PIV}\) approach has been proposed to reduce the DOF [18], but its time resolution is low because it relies on laser beam scanning. Additionally, it is difficult to apply the \(\upmu \hbox {PIV}/\upmu \hbox {PTV}\) techniques described above to 3D3C measurements. An example of a \(\upmu \hbox {PIV}/\upmu \hbox {PTV}\) technique capable of 3D3C measurements is HPIV/HPTV, which uses digital holography. In HPIV/HPTV, the scattered light field is reconstructed numerically and the three-dimensional particle positions are then specified. However, HPIV/HPTV methods have low resolution in the optical axis direction because only the field scattered by the particles, rather than the particle distribution itself, is reconstructed. In this study, we focus on the spatial sparseness of the particle distribution and establish a method for 3D3C microflow measurements with high spatial resolution by applying CS to HPTV.
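For reference, the following short sketch (Python) evaluates the first expression in Eq. (1) with the parameter values quoted above and reproduces the stated DOF:

```python
# Quick numerical check of the first expression in Eq. (1) with the
# parameters quoted in the text; this reproduces the stated 45.22 um.
n, lam0, NA = 1.33, 0.532e-6, 0.14      # refractive index, wavelength [m], numerical aperture
Delta, beta = 4.8e-6, 5.0               # sensor pixel pitch [m], lateral magnification

dof = n * lam0 / NA**2 + n * Delta / (beta * NA)
print(f"DOF = {dof * 1e6:.2f} um")      # -> DOF = 45.22 um
```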

Fig. 1
figure 1

Optical system model of in-line holography

Because CS assumes that the measurement process is linear, it is necessary to model the interference between light scattered by a particle and the reference light as a linear transformation. Figure 1 shows an optical system model of in-line holography. The figure shows how laser light is scattered by the particles and then reaches the camera plane. Let \(\mathbf{x} = [\mathbf{x}_{1}, \mathbf{x}_{2}, \dots , \mathbf{x}_{N_{z}}]^{\intercal }\) be the three-dimensional scattering density distribution of the particle field, and let \(\mathbf{u}\) be the scattered wave on the camera plane; then

$$\begin{aligned} \mathbf{u}=\mathbf{F}^{*}\mathbf{H}\mathbf{F}_{\mathrm {B}}\mathbf{x}, \end{aligned}$$
(2)

where \(\mathbf{x}_{1}, \mathbf{x}_{2}, \dots , \mathbf{x}_{N_{z}}\) are 2D scattering density distributions along the optical axis, \(\mathbf{F}^{*}\) is a 2D inverse Fourier transform, \(\mathbf{F}_{\mathrm {B}} = {\text {diag}}(\mathbf{F}, \mathbf{F}, \dots , \mathbf{F})\) is a block-diagonal matrix of 2D Fourier transforms, and \(\mathbf{H} = [\mathbf{H}_{1}, \mathbf{H}_{2}, \dots , \mathbf{H}_{N_{z}}]\) is a matrix that consists of transfer functions from each layer to the camera plane. Equation (2) represents wavefront propagation by the angular spectrum method, and \(\mathbf{u}\) is the superposition of the diffracted light from each layer \(\mathbf{x}_{1}, \mathbf{x}_{2}, \dots , \mathbf{x}_{N_{z}}\). The transfer function \(\mathbf{H}_{i}\ (i = 1, \dots , N_{z})\) is then expressed as [19]:

$$\begin{aligned} \begin{aligned} {\mathbf{H}}_{i}&={\text {diag}}\left[ H_{i}(u,v;z_{i})\right] \\ H_{i}(u,v;z_{i})&=\exp \left( -2\pi \mathrm {i}z_{i}\sqrt{\frac{1}{\lambda ^{2}}-u^{2}-v^{2}}\right) , \end{aligned} \end{aligned}$$
(3)

where u and v are the spatial frequencies along the x and y directions, respectively, and \(z_{i}\) is the distance from \(\mathbf{x}_{i}\) to the sensor. The in-line hologram \(\mathbf{g}\) acquired on the camera plane can be expressed as

$$\begin{aligned} \mathbf{g}=\left| \mathbf{u}+\mathbf{r}\right| ^{2}=2\,\mathrm{Re}\left\{ \mathbf{u}\odot \mathbf{r}\right\} +\left| \mathbf{u}\right| ^{2}+\left| \mathbf{r}\right| ^{2}, \end{aligned}$$
(4)

where \(\mathbf{r}\) is a reference wave and \(\odot\) expresses the Hadamard product (element-wise product). If the reference beam is a plane wave propagating along the optical axis, \(\mathbf{r} = \mathbf{1}\), and we obtain the following by removing the mean value from the hologram:

$$\begin{aligned} \mathbf{y}=\mathbf{g}-\mathbf{1}=2\,\mathrm{Re}\left\{ \mathbf{u}\right\} +\left| \mathbf{u}\right| ^{2}. \end{aligned}$$
(5)

Here, by setting \(|\mathbf{u}|^{2}\) as the model error \(\mathbf{e}\) [13], the hologram observation process can be expressed in the form of a linear model:

$$\begin{aligned} \mathbf{y}=2\,\mathrm{Re}\left\{ \mathbf{u}\right\} =2\,\mathrm{Re}\left\{ \mathbf{F}^{*}\mathbf{H}\mathbf{F}_{\mathrm {B}}\mathbf{x}\right\} \equiv \mathbf{A}\mathbf{x}. \end{aligned}$$
(6)
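As a concrete illustration, the operator \(\mathbf{A}\) of Eq. (6) can be implemented with FFT-based angular-spectrum propagation, as in the following sketch (Python/NumPy; the grid sizes, pitches, and propagation distances are generic function arguments, not the values used by the authors):

```python
import numpy as np

# A minimal sketch (not the authors' code) of the forward model of Eq. (6),
# y = A x = 2 Re{F* H F_B x}, built from the transfer function of Eq. (3).

def transfer_function(nx, ny, dx, dy, z, wavelength):
    """Angular-spectrum transfer function H(u, v; z) of Eq. (3)."""
    u = np.fft.fftfreq(nx, d=dx)                      # spatial frequencies along x
    v = np.fft.fftfreq(ny, d=dy)                      # spatial frequencies along y
    U, V = np.meshgrid(u, v, indexing="xy")           # shape (ny, nx)
    w = np.sqrt(np.maximum(1.0 / wavelength**2 - U**2 - V**2, 0.0))  # evanescent part clipped
    return np.exp(-2j * np.pi * z * w)

def forward(x_layers, z_list, dx, dy, wavelength):
    """Apply A: propagate each layer x_k to the camera plane and sum (Eq. (2))."""
    ny, nx = x_layers[0].shape
    u_cam = np.zeros((ny, nx), dtype=complex)
    for x_k, z_k in zip(x_layers, z_list):
        H = transfer_function(nx, ny, dx, dy, z_k, wavelength)
        u_cam += np.fft.ifft2(H * np.fft.fft2(x_k))   # F* H_k F x_k
    return 2.0 * np.real(u_cam)                       # y = 2 Re{u}, Eq. (6)
```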

The distribution \(\mathbf{x}\) is reconstructed by solving the following minimization problem with a regularization term, using the proximal gradient method [20, 21]:

$$\begin{aligned} \min _{\mathbf{x}}\left\{ \frac{1}{2}\left\| \mathbf{A}\mathbf{x}-\mathbf{y}\right\| ^{2}_{2}+\mathfrak {R}(\mathbf{x})\right\} \equiv \min _{\mathbf{x}}\left\{ f(\mathbf{x})+g(\mathbf{x})\right\} , \end{aligned}$$
(7)

where \(\left\| \cdot \right\| _{2}\) denotes the \(L_{2}\) norm and \(\mathfrak {R}\) is a regularization term. The proposed method uses a combination of total variation (TV) regularization and \(L_{1}\)-norm regularization:

$$\begin{aligned} \mathfrak {R}(\mathbf{x})=\lambda _{{\text {TV}}}\sum _{i=1}^{N_{z}}{\text {TV}}(\mathbf{x}_{i})+\lambda _{L_{1}}\left\| \mathbf{x}\right\| _{1}, \end{aligned}$$
(8)

where \(\lambda _{{\text {TV}}}\) and \(\lambda _{L_{1}}\) are the regularization parameters. The TV regularization term is defined as

$$\begin{aligned} {\text {TV}}(\mathbf{x})=\sum _{j=1}^{N_{y}}\sum _{i=1}^{N_{x}}\sqrt{\left| X_{i,j}-X_{i+1,j}\right| ^{2} + \left| X_{i,j}-X_{i,j+1}\right| ^{2}}, \end{aligned}$$
(9)

where \(X_{i,j}\) is the (i, j) element of the matrix representation of \(\mathbf{x}\), and \(N_{x}\) and \(N_{y}\) are the numbers of elements in the x and y directions, respectively. TV regularization removes noise while preserving edge components [22, 23], and \(L_{1}\)-norm regularization makes \(\mathbf{x}\) sparse. To solve the minimization problem in Eq. (7), we use the fast iterative shrinkage-thresholding algorithm (FISTA). FISTA performs a gradient descent step to minimize the differentiable term \(f(\mathbf{x})\) and minimizes the nondifferentiable term \(g(\mathbf{x})\) using a proximal operator. The update procedure of FISTA can be expressed as follows [20]:

$$\begin{aligned} \begin{aligned} {\mathbf{x}}^{(n+1)}&={\text {prox}}_{\gamma g}\left[ \mathbf{z}^{(n)}-\gamma \nabla f\left( \mathbf{z}^{(n)}\right) \right] \\ s^{(n+1)}&=\frac{1+\sqrt{1+4\left( s^{(n)}\right) ^{2}}}{2}\\ \mathbf{z}^{(n+1)}&=\mathbf{x}^{(n+1)}+\left( \frac{s^{(n)}-1}{s^{(n+1)}}\right) \left( \mathbf{x}^{(n+1)}-\mathbf{x}^{(n)}\right) , \end{aligned} \end{aligned}$$
(10)

where \({\text {prox}}_{\gamma g}\) is the scaled proximal operator, which is defined as

$$\begin{aligned} {\text {prox}}_{\gamma g}(\mathbf{z})\equiv \underset{\mathbf{x}}{\arg \min }\left[ g(\mathbf{x})+\frac{1}{2\gamma }\left\| \mathbf{x}-\mathbf{z}\right\| _{2}^{2}\right] , \end{aligned}$$
(11)

where \(\gamma > 0\) is the step size. The gradient of \(f(\mathbf{x})\) can be expressed as follows:

$$\begin{aligned} \begin{aligned} \nabla f(\mathbf{x})&=\mathbf{A}^{*}(\mathbf{A}\mathbf{x}-\mathbf{y}),\\&=2\mathbf{F}^{*}_{\mathrm {B}}\mathbf{H}^{*}\mathbf{F}\left[ 2\,\mathrm{Re}\left\{ \mathbf{F}^{*}\mathbf{H}\mathbf{F}_{\mathrm {B}}\mathbf{x}\right\} -\mathbf{y}\right] . \end{aligned} \end{aligned}$$
(12)
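For illustration, Eq. (12) can be evaluated by back-propagating the residual to each layer, reusing transfer_function() and forward() from the sketch above; keeping only the real part is an assumption of this sketch, made because \(\mathbf{x}\) is treated here as a real-valued scattering density:

```python
import numpy as np

def gradient(x_layers, y, z_list, dx, dy, wavelength):
    """Gradient of f(x) = (1/2)||A x - y||^2, i.e. A*(A x - y), following Eq. (12)."""
    ny, nx = y.shape
    residual = forward(x_layers, z_list, dx, dy, wavelength) - y   # A x - y
    grad = []
    for z_k in z_list:
        H = transfer_function(nx, ny, dx, dy, z_k, wavelength)
        # Back-propagate the residual to layer k: 2 F* H_k^* F (A x - y)
        g_k = 2.0 * np.fft.ifft2(np.conj(H) * np.fft.fft2(residual))
        grad.append(np.real(g_k))
    return grad
```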

The proximal operator of the \(L_{1}\)-norm \({\text {prox}}_{L_{1}}\) can be expressed as a soft-thresholding function, which is given by

$$\begin{aligned} {\text {prox}}_{\gamma L_{1}}\left( x_{i}\right) = {\left\{ \begin{array}{ll} x_{i}-\gamma \lambda _{L_{1}}&{}\left( x_{i}\ge \gamma \lambda _{L_{1}}\right) \\ 0&{}\left( -\gamma \lambda _{L_{1}}<x_{i}<\gamma \lambda _{L_{1}}\right) \\ x_{i}+\gamma \lambda _{L_{1}}&{}\left( x_{i}\le -\gamma \lambda _{L_{1}}\right) , \end{array}\right. } \end{aligned}$$
(13)

where \(x_{i}\) represents each element of \(\mathbf{x}\). The proximal operator for TV regularization cannot be expressed in an analytic form; we therefore use the fast gradient projection (FGP) method for the TV term. The algorithm of the proposed method is shown in Algorithm 1.

figure a
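As an illustration of the update loop in Algorithm 1, the following simplified sketch combines the FISTA update of Eq. (10), the gradient of Eq. (12), and the soft-threshold of Eq. (13), reusing the functions defined in the sketches above. The TV proximal step (FGP) is omitted here, so this is only a partial sketch of the proposed method rather than a full implementation:

```python
import numpy as np

def soft_threshold(v, t):
    """Soft-thresholding operator, Eq. (13)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(y, z_list, dx, dy, wavelength, gamma=0.01, lam_l1=150.0, n_iter=100):
    """FISTA loop of Eq. (10) with the L1 prox only (TV/FGP step omitted)."""
    ny, nx = y.shape
    x = [np.zeros((ny, nx)) for _ in z_list]           # x^(0)
    z = [x_i.copy() for x_i in x]                      # z^(0) = x^(0)
    s = 1.0
    for _ in range(n_iter):
        grad = gradient(z, y, z_list, dx, dy, wavelength)
        x_new = [soft_threshold(z_i - gamma * g_i, gamma * lam_l1)
                 for z_i, g_i in zip(z, grad)]
        s_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * s**2))
        z = [xn + ((s - 1.0) / s_new) * (xn - xo) for xn, xo in zip(x_new, x)]
        x, s = x_new, s_new
    return x
```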

Mallery and Hong proposed a similar minimization approach for 3D particle tracking [17]. They evaluated the accuracy of their method using a turbulent flow simulation and applied it to the analysis of swimming microorganisms and rotating microfibers in a flow field. Their results show that CS-based HPTV is very effective for 3D particle tracking. Building on these previous studies, we apply CS-based HPTV to the measurement of microflows.

3 Results

This section describes the measurement of the flow field in a microchannel using the proposed method. Figure 2 shows the experimental setup for in-line digital holographic PTV. A diode-pumped solid-state (DPSS) laser with a center wavelength of \(532\,\hbox {nm}\) (Cobolt 06-DPL) and an output power of \(1.0\,\hbox {mW}\) was used as the light source, and a spatial filter (SF) was used to remove noise and distortion from the wavefront. The light transmitted through the microchannel was magnified five times by a magnifying optical system and recorded with a high-speed camera (Katokoken k8-USB, pixel size \(4.8\,\upmu \hbox {m}\)); the recording conditions were an exposure time of \(10\,\upmu \hbox {s}\), a frame rate of 800 fps, and a resolution of \(800\times 600\) pixels. A Synvivo linear channel with a width of \(500\,\upmu \hbox {m}\) and a depth of \(100\,\upmu \hbox {m}\) was used as the microchannel, and polymer particles (Kanomax 0456) with a diameter of \(4.1\,\upmu \hbox {m}\) were injected using a microsyringe.

Figure 3 shows an example of a recorded hologram and the corresponding background-eliminated hologram; the background was generated by averaging 500 holographic images. Interference fringes are formed concentrically between the light scattered by the tracer particles and the reference light.

Figure 4 shows particle images reconstructed from the hologram images using the proposed method and backpropagation calculations with the angular spectrum method (ASM). The numerical parameters used for the reconstruction were \(\lambda _{{\text {TV}}} = 200\), \(\lambda _{L_{1}} = 150\), and \(\gamma = 0.01\); the number of FGP iterations was \(N_{\mathrm {FGP}} = 10\), and the total number of iterations was \(N = 100\). The reconstruction volume was sliced into 250 planes, giving a total of \(800\times 600\times 250\) (\(1.2\times 10^{8}\)) voxels. Denoting the sampling pitches in the numerical reconstruction by \((\delta _{x}, \delta _{y}, \delta _{z})\) and those in the actual optical system by \((\varDelta _{x}, \varDelta _{y}, \varDelta _{z})\), the two are related by

$$\begin{aligned} \begin{bmatrix} \delta _{x} \\ \delta _{y} \\ \delta _{z} \end{bmatrix} = \begin{bmatrix} \beta &{} 0 &{} 0\\ 0&{}\beta &{}0\\ 0&{}0&{}\alpha \\ \end{bmatrix} \begin{bmatrix} \varDelta _{x} \\ \varDelta _{y} \\ \varDelta _{z} \end{bmatrix} \end{aligned},$$
(14)

where \(\alpha = n\beta ^{2}\) is the longitudinal magnification. In this study, \((\varDelta _{x}, \varDelta _{y}, \varDelta _{z})=(0.96\,\upmu \hbox {m}, 0.96\,\upmu \hbox {m}, 0.60\,\upmu \hbox {m})\), \(\beta =5.0\), and \(\alpha =33.25\). In the reconstruction results obtained with the ASM, speckle noise and defocused particle images are observed, as shown in Fig. 4b, and the light scattered by the particles spreads in a conical shape, as shown in Fig. 4d. In contrast, the proposed method removes the speckle noise and defocused images through the TV and \(L_{1}\)-norm regularization, as shown in Fig. 4a and c. This result indicates that the proposed method reconstructs the particle distribution directly and localizes the particle positions. Each particle image region in the reconstructed field was numbered by applying a labeling algorithm, and the position coordinates of each particle were identified as the brightness-weighted center of its region. The position coordinates of the ith particle, \(\mathbf{R}_{i}\), are obtained by

$$\begin{aligned} \mathbf{R}_{i}=\frac{1}{N_{i}}\sum _{j\in D_{i}}m_{i,j}\mathbf{r}_{i,j}, \end{aligned}$$
(15)

where \(N_{i}\) is the number of voxels forming the particle image, \(D_{i}\) is the domain occupied by the particle, j is the voxel index, \(m_{i,j}\) is the brightness, and \(\mathbf{r}_{i,j}\) is the voxel position. The particle positions were tracked using the algorithm of Crocker and Grier [24], and the velocity vectors were then obtained. Figures 5 and 6 show the spatial velocity vector distribution and its projections onto the y- and z-axes, respectively. These results were obtained from 350 hologram images. The results in Fig. 5 show that the trajectory of each particle can be tracked using the proposed method, and the laminar flow characteristic of microflows can be confirmed from the reconstructed velocity field. Figure 6 shows that the flow velocity decreases toward the channel walls. The spatial velocity distributions can be approximated by quadratic curves, which indicates that the flow is close to a Hagen–Poiseuille flow. Figure 7 shows the histogram of the velocity distribution. The distribution is approximately Gaussian, with a mean of \(1.096\,\hbox {cm}\,\hbox {s}^{-1}\) and a standard deviation of \(0.362\,\hbox {cm}\,\hbox {s}^{-1}\); the average flow velocity is \(1.089\,\hbox {cm}\,\hbox {s}^{-1}\). These results indicate that the proposed method is effective for 3D3C measurements of microflows.
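As an illustration of the localization step, the following sketch segments the reconstructed volume with a labeling algorithm and computes brightness-weighted centroids. The threshold is an assumed parameter, and SciPy's center_of_mass normalizes by the total brightness of each region rather than by the voxel count \(N_{i}\) that appears in Eq. (15):

```python
import numpy as np
from scipy import ndimage

def locate_particles(volume, threshold):
    """Return an (N, 3) array of particle centroids in voxel coordinates."""
    mask = volume > threshold
    labels, n_particles = ndimage.label(mask)        # number the particle regions
    centroids = ndimage.center_of_mass(volume, labels,
                                       index=range(1, n_particles + 1))
    return np.asarray(centroids)
```

The centroids obtained from successive frames can then be linked with the Crocker–Grier algorithm [24]; an existing implementation such as the trackpy library could be used for this step, although any equivalent linking routine would serve.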

Fig. 2
figure 2

Experimental setup for the microflow measurements. SF: spatial filter; OL: objective lens, with \(f = 40\,\hbox {mm}\) and \(\mathrm {NA} = 0.14\); IL: imaging lens, with \(f = 200\,\hbox {mm}\). The lateral magnification is \(\beta = 5.0\)

Fig. 3
figure 3

Experimentally obtained images. a Recorded hologram. b Background-eliminated hologram

Fig. 4
figure 4

Reconstructed fields. a, b Particle image in the xy plane reconstructed by the proposed method and ASM, respectively. c, d Particle image in the zx plane along the red dashed line in a and b reconstructed by the proposed method and ASM, respectively

Fig. 5
figure 5

Velocity vector distribution of particles in linear microflow

Fig. 6
figure 6

Spatial velocity distributions of particles projected onto the y-axis (a) and z-axis (b). Solid curves are quadratic approximations

Fig. 7
figure 7

Histogram of the velocity distribution

4 Conclusions

In this study, we proposed an HPTV technique based on CS for measurement of the velocity fields of microflows and verified its effectiveness experimentally. In the proposed method, the particle distribution is reconstructed directly by CS, exploiting the spatial sparseness of the particles, so the particle position coordinates can be estimated with high accuracy. Because 3D3C measurements can be performed with only a single light source and a single camera, the optical system can be configured simply. In the evaluation experiments, the velocity field in a linear microchannel was measured, and a three-dimensional velocity vector distribution and a velocity distribution were obtained. The experimental results suggest that the proposed method is effective for 3D3C measurement of microflows. The proposed method is expected to provide an effective tool for the development of microfluidic devices and to aid in the understanding of biological mechanisms.