1 Introduction (Tatsuki Tahara)

This roadmap article focuses on spatially incoherent digital holography, termed incoherent digital holography (IDH) [1,2,3,4]. Representative IDH methods and various applications are introduced. In the 1960s, various holography methods were invented. Mertz and Young proposed a holography method using a Fresnel zone aperture (FZA), exploiting the similarity between a Gabor hologram and the shadow pattern cast by such a plate [5]. After the invention of the laser, off-axis holography was proposed. Methods for recording holograms with spatially incoherent light were independently proposed by Stroke and Restrick [6] and Lohmann [7]. Experimental demonstrations of incoherent holography were performed by several research groups using a rotational shearing interferometer [6, 8], a Sagnac interferometer [9, 10], a lateral shearing interferometer [11], a modified Linnik interferometer [12], and a radial shearing interferometer using a birefringent plate, termed conoscopic holography [13]. Holography with temporally incoherent light has also been discussed and presented [14,15,16], and the recording of an interference fringe image [14] and holographic imaging [16] with spatially and temporally incoherent light have been performed. Techniques for the electronic recording [17] and digital reconstruction [18] of holograms opened up the research area now known as digital holography (DH). IDH systems that employ the concept of self-interference along with a birefringent dual-focus lens [2], a birefringent plate [19,20,21], an FZA [4], or wave-splitting methods [1, 3, 22,23,24,25] have been proposed. It should be noted that IDH for hyperspectral two-dimensional (2D) imaging with a lamp was successfully demonstrated by Itoh et al. using a modified rotational shearing interferometer combined with Fourier spectroscopy [22]. Optical scanning holography (OSH) [1], Fresnel incoherent correlation holography (FINCH) [2], coded aperture correlation holography (COACH) [3], IDH with an FZA [4], and IDH with an interferometer coupled with state-of-the-art optical devices such as micro-optical element arrays and spatial light modulators are recognized as well-known IDH methods.

Poon et al. proposed a single-pixel DH system with structured illumination and temporal heterodyning, termed optical scanning holography (OSH) [1, 26, 27]. OSH has achieved three-dimensional (3D) imaging with a single photodetector and speckle-less holographic imaging with a laser. OSH has also achieved holographic imaging of fluorescent specimens, and holographic fluorescence microscopy was invented with OSH in the 1990s [28]. Single-pixel DH is now an actively researched theme, and OSH is regarded as pioneering work on both single-pixel DH with structured illumination and IDH. In the 1990s, hyperspectral 2D imaging utilizing IDH with a rotational shearing interferometer [22] and phase-shifting IDH based on conoscopic holography [19,20,21] were presented as IDH techniques with an image sensor and experimentally demonstrated. After that, Rosen and Brooker proposed FINCH as another IDH technique [2]. FINCH is an IDH method that adopts an image sensor, a phase-only spatial light modulator (SLM), and a phase-shifting interferometry (PSI) system [29, 30]. FINCH exploits the phase-only SLM as both a two-wave generator and a phase shifter. Several phase-shifted incoherent digital holograms are sequentially recorded with an image sensor, and a 3D image of the incoherent light is obtained from the recorded holograms. Holographic fluorescence microscopy has also been implemented with FINCH, and the measurement speed has been accelerated [31]. Subsequently, the improvement of the point spread function (PSF) in incoherent imaging by FINCH was theoretically clarified and experimentally demonstrated [32]. Afterward, an IDH method that adopts random phase masks rather than Fresnel phase lenses was proposed as COACH [3]. Using random phase masks, the depth resolution is much improved at the cost of the in-plane PSF improvement [33]. COACH was then modified into an incoherent 3D imaging technique with PSF engineering based not on interference but on scattering, called interferenceless COACH (I-COACH) [34]. In IDH, the use of an amplitude mask such as an FZA, instead of diffractive or scattering phase masks, is also effective. Single-shot incoherent 3D imaging without undesired-order diffraction waves has been performed by IDH with an FZA together with compressive sensing and machine learning [4, 35]. IDH has also been implemented by combining a laser DH technique with a classic interferometer. Simultaneous multidimensional imaging has been demonstrated with an IDH system that exploits state-of-the-art techniques in laser DH [36].

Research on the applications of IDH has been actively conducted. IDH has the potential to dramatically extend the applicability of DH: laser light is no longer mandatory, and light from any source, including self-luminous and natural light, can be recorded as a hologram. In particular, the application to microscopy, termed incoherent digital holographic microscopy (IDHM), is highly anticipated and actively researched [28, 31]. Holographic fluorescence microscopy methods have frequently been proposed and applied to biology, biomedical imaging, and fluid mechanics. Furthermore, applications of IDHM are not limited to fluorescence microscopy. IDHM with spatially incoherent white light and multiple light-emitting diodes (LEDs) has also been proposed to conduct full-color holographic 3D imaging in microscopy [37,38,39]. Using a self-reference DH setup, holographic 3D imaging of cells with a single-path interferometer and an LED has been performed [40, 41]. As another application, holographic 3D imagers with a single sensor are actively researched [42,43,44,45,46]. Speckle-less holographic 3D imaging is a strong merit of IDH, and high-quality holographic imaging has been demonstrated by OSH [42], FINCH [43], COACH [3], and IDH with an FZA [4]. Full-color holographic 3D imaging of natural light has also been performed using a Michelson-type self-interference optical system [44]. 3D imaging with changing phase masks displayed on an SLM for deep-focus imaging [47] and PSF improvement in 3D space [48] have also been proposed. Such proposals improve the specifications of incoherent holographic imagers. Incoherent holographic 3D imaging with various kinds of light-wave information has also been researched to extend the applicability of IDH to the structural investigation of objects. IDH for invisible light, such as infrared and ultraviolet light, is also highly anticipated to extend the possibilities of holographic imaging. As a remarkable application of IDH, the thermal information of a 3D object has been measured using IDH with near-infrared light [49].

IDH has emerged as a digital holographic imaging method for spatially incoherent light, including natural light, as described above. This merit enables quantitative holographic 3D imaging of any light wave with a single sensor. Representative IDH methods are reviewed and applications of IDH are introduced in this roadmap article. In Sect. 2, OSH, FINCH, COACH, IDH with an FZA, and IDH methods combined with techniques invented in laser DH are described as modern IDH methods. In Sect. 3, applications to microscopy, 3D imagers, and holographic measurement with incoherent light are introduced.

2 Incoherent digital holography (IDH)

2.1 Optical scanning holography (OSH) (Yaping Zhang and Ting-Chung Poon)

2.1.1 Introduction

OSH was invented as a single-pixel digital holographic technique for coherent as well as incoherent holographic imaging [1, 26, 27]. The principles of OSH involve laser scanning and heterodyning. Laser scanning provides the flexibility of choosing the spatial coherence of the optical system, and heterodyning provides phase-sensitive detection, reminiscent of the phase-preserving capability of holography. Laser scanning already underpins one of the most important imaging modalities, laser scanning confocal microscopy. Heterodyning, on the other hand, has been used in radio for over a hundred years. Its preservation of optical phase lends itself to holographic applications. Another virtue of heterodyning is its ability to reduce noise down to the shot-noise limit. Indeed, the term “scanning holography” implies a phase-preserving procedure [50]. Conventional coherent digital holographic systems employ two-dimensional sensing arrays such as CCD or CMOS sensors. As a result, one of their detrimental aspects is the inability to capture 3D objects of large size. Coherent holographic imaging also has a serious drawback in that it is extremely susceptible to coherent noise corruption. OSH operating in an incoherent mode provides a radical departure from conventional digital holography in that optical detection is performed with a single-pixel detector, such as a photodiode or photomultiplier, while at the same time taking advantage of incoherent imaging.

2.1.2 Basic principle of OSH and milestones

Figure 1 shows a typical OSH setup, which is basically a two-pupil system [51]. \(p_{1} \left( {x,y} \right)\) and \(p_{2} \left( {x,y} \right)\) are the two pupils located at the front focal plane of Lens L1 of focal length f, and they are illuminated by plane waves at frequencies \(\omega_{0}\) and \(\omega_{0} + {\Omega }\), respectively, where \(\omega_{0}\) is the frequency of a laser. The frequency shift of one of the plane waves from \(\omega_{0}\) to \(\omega_{0} + {\Omega }\) can conveniently be accomplished with a frequency shifter, such as an acousto-optic or electro-optic modulator. The beamsplitter combines the two beams exiting from the two pupils, and the combined beam is projected onto the 3D object through the x–y scanner for a raster scan. The 3D object is located a distance \(z_{0}\) away from the focal plane of Lens L1. For high-resolution applications such as microscopy, beam scanning can be replaced with object scanning, where the object or specimen is placed on a high-precision x–y mechanical stage. Scattered light from the 3D object passing through Fourier transform lens L2 of focal length f is modified by the mask in front of the photodetector (if the 3D object is diffusely reflecting, the reflected light is picked up by a photomultiplier). Because the two optical scanning beams from the two pupils have different temporal frequencies, optical heterodyning occurs at the photodetector, giving a heterodyne current at frequency \({\Omega }\) as the output of the bandpass filter, which is tuned to the heterodyne frequency and rejects the baseband current from the photodetector. The heterodyne current carries the scanned and processed signal of the 3D object. The processing elements of the system are the two pupil functions, and the mask in front of the photodetector controls the coherence of the optical system. By choosing one of the pupils to be a delta function and the other a uniform function, we obtain holographic recording of the 3D object. For a point detector, the mask is a pinhole and the optical system becomes coherent, i.e., the optical system processes the complex distribution of the object. When the mask is uniform, we have an integrating photodetector and the optical system is incoherent, i.e., the system manipulates the intensity distribution of the object. The first experimental verification of OSH, its first milestone, was demonstrated in 1990 [52]. As a second milestone, holograms of fluorescent specimens were recorded for the first time in 1997, and the term “holographic fluorescence microscopy” was coined [28]. Figure 2 shows the reconstructions of fluorescent beads at different depths.
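
The incoherent recording channel can be sketched numerically. The following is a minimal sketch, assuming a synthetic planar intensity object at a single depth \(z_{0}\) and replacing the mechanical x–y raster scan with an equivalent FFT-based correlation; the grid size, pixel pitch, wavelength, and depth are illustrative values, not taken from the text.

```python
import numpy as np

# Minimal sketch of incoherent OSH recording: the in-phase and quadrature
# outputs are modeled as 2D correlations of the object intensity with the
# cosine and sine Fresnel zone patterns (FZPs) set by the two pupils.
N, dx, lam, z0 = 256, 10e-6, 633e-9, 0.05   # grid, pitch [m], wavelength [m], depth [m]
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
k = np.pi / (lam * z0)

I_obj = np.zeros((N, N))                    # synthetic object: three point emitters
I_obj[100, 100] = I_obj[128, 140] = I_obj[160, 110] = 1.0

def scan_correlate(obj, kernel):
    """FFT-based 2D correlation standing in for the x-y raster scan."""
    K = np.fft.fft2(np.fft.ifftshift(kernel))   # kernel centered at the origin
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.conj(K)))

H_cos = scan_correlate(I_obj, np.cos(k * (X**2 + Y**2)))  # in-phase channel
H_sin = scan_correlate(I_obj, np.sin(k * (X**2 + Y**2)))  # quadrature channel
```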

Fig. 1
figure 1

Adapted from T.-C. Poon, Optical Scanning Holography with MATLAB, Springer (2007) [27]

Typical setup of optical scanning holography.

Fig. 2
figure 2

Reprinted with permission from [28] © The Optical Society

Reconstruction of holograms of fluorescent beads of 15 μm. a and b are the holographic reconstructions at different depths, and the arrows show individual fluorescent beads that are in focus.

Next, incoherent digital holography demonstrating speckle-less imaging was performed by Kim et al. [42]. This milestone is a particularly distinctive contribution, as coherent digital holography had been progressing all along since 2005 [53]. Figure 3a and b shows the in-phase and quadrature outputs [see Fig. 1], called the cosine hologram, \(H_{cos} \left( {x,y} \right)\), and the sine hologram, \(H_{sin} \left( {x,y} \right)\), of the original object [27]. From the two real holograms, a complex hologram of the form \(H^{c} \left( {x,y} \right) = H_{cos} \left( {x,y} \right) + jH_{sin} \left( {x,y} \right);j = \sqrt { - 1}\) is computed. Figure 3c shows the reconstruction of the complex hologram. In Fig. 3d, for comparison, we show CCD imaging of a laser-illuminated dice, which exhibits speckle noise due to coherent illumination. While we have shown results of incoherent imaging using OSH, coherent imaging has also been demonstrated in the context of quantitative phase imaging (QPI) [54, 55].
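
As a minimal numerical sketch of this synthesis step (assuming the analytically generated cosine and sine holograms of a single on-axis point emitter at depth \(z_{0}\), with illustrative parameters), the complex hologram is formed and refocused by Fresnel backpropagation:

```python
import numpy as np

# Minimal sketch: complex-hologram synthesis from the cosine and sine
# holograms of a single point at depth z0 (analytic FZPs), followed by
# Fresnel backpropagation. All parameters are illustrative.
N, dx, lam, z0 = 256, 10e-6, 633e-9, 0.05
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
k = np.pi / (lam * z0)

H_cos, H_sin = np.cos(k * (X**2 + Y**2)), np.sin(k * (X**2 + Y**2))
Hc = H_cos + 1j * H_sin                      # complex hologram: twin image removed

# backpropagation = convolution with the conjugate quadratic phase function
h = np.exp(-1j * k * (X**2 + Y**2))
rec = np.fft.ifft2(np.fft.fft2(Hc) * np.fft.fft2(np.fft.ifftshift(h)))
print(np.unravel_index(np.abs(rec).argmax(), rec.shape))  # peak at (N//2, N//2)
```

Because the complex hologram carries both quadratures, only the focused point is reconstructed; neither the twin image nor the bias term appears.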

Fig. 3
figure 3

Reprinted with permission from [42] © The Optical Society

a Cosine hologram, b sine hologram, c reconstruction of the complex hologram, and d CCD imaging of a laser-illuminated dice

2.1.3 Some recent works and future challenges


Partial coherence in digital holography [56]: By varying the size of the open mask [see Fig. 1], coherent, partially coherent, and incoherent digital holographic recordings have been demonstrated experimentally. There is much work to be done in the investigation of partially coherent recording in digital holography, as it may offer new insights into novel phase imaging techniques.


Pre-processing and PSF-engineering in digital holography: Digital holographic recording of edge information as pre-processing was first proposed in 1995 [57]. The idea was to manipulate the pupil functions, thereby modifying the point spread function (PSF). This is known as PSF-engineering. In general, this leads to the concept of feature extraction in digital holography [58, 59]. Holograms containing feature-extraction information would play an important role in 3D object recognition by holography [60, 61], which remains a formidable challenge to date.

Computational OSH: Optical sectioning is one of the most important topics for 3D imaging in microscopy. Since the first reported use of the \(l_{2}\) norm in holography to perform sectioning [62], other optimization methods, such as the \(l_{1}\) norm or complex convolution, have been proposed [63, 64]. These methods, however, remain computationally intensive.

Compressive OSH: Since OSH requires an optical scanning mechanism, acquiring widefield 3D scenes with high resolution is rather time-consuming. Compressive OSH incorporates a nearly computation-free and non-iterative method to select the hologram pixels to be included in the optical scanning process, which would allow the capture system to operate at video rate with simple hardware or software [65]. Actual video-rate OSH systems remain to be demonstrated.

Off-axis OSH: While heterodyning has numerous advantages, it requires complicated and expensive electronic processing. An off-axis approach to OSH has been investigated recently. In this technique, an off-axis plane wave and a diffracted pupil field along the optical axis are used to scan the object, eliminating the need for heterodyning in phase retrieval. Optical scanning theory for holographic imaging has been developed, and coherent and incoherent holographic recording have been demonstrated in simulations. Experimental work is currently underway [66].

2.2 Fresnel incoherent correlation holography (FINCH) (Joseph Rosen and Vijayakumar Anand)

2.2.1 Architectures

The first version of FINCH was published when there were two other main methods of generating incoherent digital holograms. The better-known technique was optical scanning holography [28], while the other was termed multiple-view-projection digital incoherent holography [67]. Both techniques are based on different ways of time-consuming scanning to produce the result of a two-dimensional correlation between an object and a quadratic phase function. However, it is well known that correlation or convolution in incoherent optics can be done without any scanning (see, for example, [10]). Indeed, the first design of FINCH was proposed as a way to perform the above-mentioned correlation and to record an incoherent digital hologram without any kind of scanning [2]. FINCH was realized using an SLM, but its principles of operation have since been implemented in many different configurations with [68] and without [69] an SLM. The advantage of using an SLM for digital holography is that almost all operations needed for generating an incoherent hologram can be done by the same phase-only SLM. In self-interference incoherent digital holography, the light coming from any object point must be split into two beams, the two beams must be spatially modulated differently, and the phase of one beam must be shifted at least three times to eliminate the twin image and the bias terms [2]. In FINCH, all these operations are performed by the same SLM, making the entire optical setup compact and simple. Besides being scan-free and compact owing to the SLM, FINCH has other advantages due to its realization as a single-path, on-axis, self-reference incoherent interferometer. Using a single-path interferometer makes the system immune to mutual vibration between the two channels of the interferometer. Consequently, calibrating the two-beam interference and controlling the optical path difference is easier than in an interferometer with two separated channels. Nevertheless, the cost of SLMs encourages some research groups to use the principles of FINCH for recording incoherent digital holograms with more classical interferometers and without SLMs [69].

There are many different versions of FINCH and other Fresnel self-interference incoherent hologram recorders [70], so it is naturally impossible to mention all of them in this short section. However, the principles of operation of most of them are similar. The light emitted from any point on the object is split into two mutually coherent spherical beams, where each beam is focused at a different axial location (infinity is also a possible axial location). One possible FINCH configuration is shown in Fig. 4 [71]. The splitting of each beam emitted from any object point into two beams is done by Polarizer 1, oriented at 45° to the active orientation of the SLM. Hence, only the light polarized in the y-direction (red) is focused to the focal point F by the diffractive lens displayed on the SLM, whereas the light polarized in the x-direction (blue) propagates unaffected by the SLM as a plane wave. The digital camera, located anywhere except at an image plane of the object point, records the interference pattern of the two spherical beams. Following the phase-shifting procedure, the obtained complex-valued hologram of the source point is a quadratic phase function of the form \(\exp \{ -i\pi [(x - x_{r})^{2} + (y - y_{r})^{2}]/\lambda z_{r} \}\). A procedure of Fresnel backpropagation applied to this hologram reconstructs a single point at \((x_{r}, y_{r}, z_{r})\). Since the incoherently illuminated object is an ensemble of statistically independent source points, there is no wave interference between light beams originating from different object points. Consequently, the complex-valued hologram of the object is a sum of quadratic phase functions with different vectors \((x_{r}, y_{r}, z_{r})\), each of which yields a different reconstructed image point in 3D space.
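
This superposition can be illustrated numerically. Below is a minimal sketch, assuming three point emitters with made-up hologram-plane coordinates \((x_{r}, y_{r})\) and depths \(z_{r}\) and equal amplitudes; refocusing at one \(z_{r}\) brings only the corresponding point into focus.

```python
import numpy as np

# Minimal sketch of a FINCH complex hologram as an incoherent sum of quadratic
# phase functions, one per object point. Coordinates and depths are made up.
N, dx, lam = 512, 8e-6, 532e-9
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)

points = [(-0.3e-3, 0.2e-3, 0.08), (0.4e-3, -0.1e-3, 0.10), (0.0, 0.0, 0.12)]
Hc = sum(np.exp(-1j * np.pi * ((X - xr)**2 + (Y - yr)**2) / (lam * zr))
         for xr, yr, zr in points)           # no cross terms between object points

def refocus(Hc, zr):
    """Fresnel backpropagation: convolve with the conjugate quadratic phase."""
    h = np.exp(1j * np.pi * (X**2 + Y**2) / (lam * zr))
    return np.fft.ifft2(np.fft.fft2(Hc) * np.fft.fft2(np.fft.ifftshift(h)))

rec = refocus(Hc, 0.10)                      # only the zr = 0.10 point is sharp
```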

Fig. 4
figure 4

Optical scheme of FINCH for recording incoherent digital holograms

Another unique property of FINCH is the violation of the Lagrange invariant, a rule that is rarely violated by imaging systems. The practical meaning of this property is that the magnification of each image point can differ from the magnification of the distance between the points [72]. Consequently, under specific conditions, the image resolution of FINCH can be better by about 50% than that of a regular incoherent imaging system. However, the penalty for this advantage appears in the form of a reduced axial resolution of FINCH [3]. The axial resolution of FINCH is compared with that of regular imaging in Fig. 5. The 7.1 lp/mm element of the National Bureau of Standards (NBS) target and elements 2 (4.49 lp/mm) and 3 (5.04 lp/mm) of group 2 in a United States Air Force (USAF) target were mounted in two planes, and the distance between them was varied from 0 to 3 cm in steps of 1 cm. The reconstruction results for FINCH and the regular imaging results in Fig. 5 show a lower axial resolution for FINCH compared to regular imaging. There are several other aspects of the technology that can improve the performance of FINCH, but because of space limitations, they are only mentioned here briefly. For example, the reflective SLM has been replaced by several other transmissive components to improve the power efficiency of FINCH [73]. Other improved versions of FINCH include different configurations of a single-shot recorder instead of the traditional three-shot FINCH [74, 150]. Another aspect investigated in depth lately is FINCH that can record color digital holograms [75]. Recently, the traditional Fresnel backpropagation has been replaced by a nonlinear reconstruction algorithm to improve the signal-to-noise ratio of reconstructed images [76].

Fig. 5
figure 5

Adapted from [3]

Imaging results of FINCH and regular imaging for a two-plane object made up of the NBS and USAF resolution targets, when the location of the USAF chart relative to the NBS chart was varied from Δ = 0 to 3 cm in steps of 1 cm. The object and axial location that is in focus or reconstructed is indicated by a red box.

2.2.2 Applications

In parallel with the structural evolution of FINCH, the diversity of applications using these systems is growing rapidly. The natural application of any digital holography is the 3D imaging of incoherently illuminated objects, shown in the first demonstration of FINCH [2]. Soon after, the 3D imaging was extended to fluorescence microscopy [31], later demonstrated in FINCH systems with a transmissive liquid crystal GRIN lens [73]. The images shown in Fig. 6 are the results of the FINCH microscope described in [73]. In the same world of optical microscopy, there are two other applications that appear with structural modifications of the original FINCH. One of them is microscopy with axial sectioning, demonstrated with two successive SLMs: one is used to modulate the two beams differently, and the other is positioned at the front image plane for sectioning [77]. A closely related technique called confocal incoherent correlation holography was demonstrated using a spinning disk [78]. Other microscopy-related techniques are the super-resolution methods. FINCH with a synthetic aperture [79] and structured illumination in FINCH [80] are examples of methods adopted from conventional imaging to improve the resolution of FINCH-based microscopes. A super-resolution technique invented for FINCH is the method with a scattering mask between the observed objects and the hologram recorder [81].

Fig. 6
figure 6

Adapted from [73]

a Widefield and bd reconstructed FINCH images of pollen grains captured using a 20 × (0.75 NA) objective, showing the ability of FINCH to refocus at depths that were out-of-focus under widefield conditions.

2.2.3 Discussion and future perspectives

FINCH has played an important role by inspiring other SLM-based incoherent digital holography systems, such as Fourier incoherent single-channel holography [82] and COACH [3]. The interested reader can find more about FINCH, its applications, and other incoherent digital holography methods in the review article [70]. Given these and other efforts, it will be interesting to see whether the FINCH-based fluorescence microscope becomes a significant player in the world of biological microscopy in the coming years. Because FINCH is more complicated and less power-efficient than most conventional imaging systems, its future success probably depends on finding the applications where its performance is definitively superior to other options.

2.3 Coded aperture correlation holography (COACH) (Vijayakumar Anand and Joseph Rosen)

2.3.1 Technology

COACH was developed in 2016 to record the 3D information of events and objects illuminated by spatially incoherent and temporally coherent light [3]. In COACH, the light from an object is split into two beams; one object beam is modulated by a quasi-random phase mask and coherently interfered with the unmodulated object beam, as shown in Fig. 7. COACH, being an in-line system, requires phase shifting and complex hologram synthesis [83, 84]. Unlike other well-established incoherent holography systems such as FINCH [2] and self-interference incoherent digital holography (SIDH) [85], the COACH system does not have an image plane from which a wave can be numerically propagated from the hologram to the reconstructed image. Therefore, a modified recording and reconstruction procedure was introduced in COACH. COACH consists of a two-step recording procedure: training and imaging. In the training phase, a pinhole is axially scanned, and a point spread hologram (PSH) is recorded at every axial location and stored in a library. After the training procedure, an object hologram (OH) is recorded under conditions identical to the PSH recording. The 3D image of the object at different planes is reconstructed by cross-correlating the OH with the corresponding elements of the PSH library.

Fig. 7
figure 7

Optical configuration for recording PSH and object holograms. Images of the point spread holograms (\(I_{\rm PSH}\)) and object holograms (\(I_{O}\)) recorded with phase shifts of θ = 0°, 120°, and 240° are shown in the upper part. The magnitude and phase of the complex point spread holograms \(|H_{\rm PSH}|\), \(\arg\{H_{\rm PSH}\}\) and object holograms \(|H_{O}|\), \(\arg\{H_{O}\}\) are also shown. The reconstruction result \(I_{R}\) in the lower part is obtained by cross-correlating \(H_{\rm PSH}\) with \(H_{O}\) using a matched filter, phase-only filter, and nonlinear filter. SLM spatial light modulator, CPM coded phase mask. The red arrows indicate polarization orientation

The recording and reconstruction in COACH can be explained as follows. The light emitted from a point object is split into two waves using a polarization multiplexing scheme [3]. The polarizer P1 is oriented at 45° with respect to the active axis of the SLM. Consequently, the SLM modulates only part of the incoming light, while the remaining part stays unmodulated. A random phase mask displayed on the SLM generates a complex amplitude G(u,v) at the sensor plane. G(u,v) interferes with the unmodulated plane wave A. The recorded intensity distributions are given as \(I_{PSF\left( k \right)} \left( {u,v} \right) = \left| {A + G\left( {u,v} \right)e^{{i\theta_{k} }} } \right|^{2}\), where \(\theta_{k = 1,2,3}\) = 0°, 120°, and 240°. The complex hologram is obtained by projecting the recorded intensity distributions into complex space as \(H_{{{\text{PSH}}}} \left( {u,v} \right) = I_{PSF\left( 1 \right)} \left( {u,v} \right)\left( {e^{{ - i\theta_{3} }} - e^{{ - i\theta_{2} }} } \right) + I_{PSF\left( 2 \right)} \left( {u,v} \right)\left( {e^{{ - i\theta_{1} }} - e^{{ - i\theta_{3} }} } \right) + I_{PSF\left( 3 \right)} \left( {u,v} \right)\left( {e^{{ - i\theta_{2} }} - e^{{ - i\theta_{1} }} } \right)\) = G(u,v), up to a constant factor. Therefore, the complex hologram of an object O can be represented as \(H_{O} = O*G\), where “\(*\)” is the 2D convolution operator. The image of the object can be reconstructed by cross-correlating \(H_{O}\) with G(u,v). Hence, the image of the object is \(I_{R} = O*G \otimes G = O*{\Lambda }\), where Λ is a delta-like function whose width in the object plane is 1.22λf0/D, D is the diameter of Lens-2, and “⊗” is the 2D correlation operator. Since the reconstruction is carried out by cross-correlation, the lateral and axial resolving powers of COACH cannot be better than those of direct imaging. The cross-correlation between the complex object hologram and the point spread hologram can be done using a matched filter [3], a phase-only filter [86], a nonlinear filter [87], or an integrated filter consisting of a nonlinear filter, a median filter, a low-pass filter, and a correlation filter [88]. It must be noted that any random phase mask is sufficient for implementing COACH. However, to reduce the correlation noise, the random phase mask is replaced by a quasi-random coded phase mask (CPM), often designed using the Gerchberg–Saxton algorithm to obtain a phase-only function in the spectral domain [3].
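
A minimal numerical sketch of this demodulation and correlation pipeline follows, assuming a unit-amplitude reference A = 1 and a synthetic field G whose spectrum is phase-only (mimicking a Gerchberg–Saxton-designed CPM response); the object, grid size, and filter choice are illustrative.

```python
import numpy as np

# Minimal sketch of COACH: three-step phase-shifting demodulation of the PSH,
# synthesis of the object hologram H_O = O * G, and reconstruction by
# cross-correlation with a phase-only filter. All parameters are illustrative.
rng = np.random.default_rng(0)
N = 256
G = np.fft.ifft2(np.exp(2j * np.pi * rng.random((N, N))))  # phase-only spectrum

thetas = np.deg2rad([0.0, 120.0, 240.0])
I = [np.abs(1.0 + G * np.exp(1j * t))**2 for t in thetas]  # recorded intensities, A = 1

# three-step combination from the text; recovers G up to a constant factor
H_psh = (I[0] * (np.exp(-1j * thetas[2]) - np.exp(-1j * thetas[1])) +
         I[1] * (np.exp(-1j * thetas[0]) - np.exp(-1j * thetas[2])) +
         I[2] * (np.exp(-1j * thetas[1]) - np.exp(-1j * thetas[0])))

O = np.zeros((N, N)); O[120:136, 100:156] = 1.0            # synthetic object
H_obj = np.fft.ifft2(np.fft.fft2(O) * np.fft.fft2(H_psh))  # H_O = O * G (circular convolution)

POF = np.exp(-1j * np.angle(np.fft.fft2(H_psh)))           # phase-only filter
I_R = np.abs(np.fft.ifft2(np.fft.fft2(H_obj) * POF))       # correlation-based reconstruction
```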

An experiment demonstrating COACH was carried out using an LED (Thorlabs LED631E, 4 mW, λ = 635 nm, Δλ = 10 nm), an SLM (Holoeye PLUTO, 1920 × 1080 pixels, 8 μm pixel pitch, phase-only modulation), and an image sensor (Hamamatsu ORCA-Flash4.0 V2 digital CMOS, 2048 × 2048 pixels, 6.5 μm pixel pitch, monochrome). The training phase was conducted using a pinhole with a diameter of 25 μm. Then, a USAF resolution chart was mounted in place of the pinhole, and elements 4 (11.31 lp/mm), 5 (12.70 lp/mm), and 6 (14.25 lp/mm) of group 3 were illuminated. Images of the point spread holograms (\(I_{\rm PSH}\)) and object holograms (\(I_{O}\)) recorded with phase shifts of θ = 0°, 120°, and 240° are shown in Fig. 7. The magnitude and phase of the complex point spread holograms \(|H_{\rm PSH}|\), \(\arg\{H_{\rm PSH}\}\) and object holograms \(|H_{O}|\), \(\arg\{H_{O}\}\) are also shown in Fig. 7. The reconstruction results \(I_{R}\), obtained by cross-correlating \(H_{\rm PSH}\) with \(H_{O}\) using a matched filter, phase-only filter, nonlinear filter, and an integrated filter, are shown in the same figure. Since all the above methods used complex holograms involving three camera shots, the differences in the results are not as large as in the case of correlating positive functions.

Comparing FINCH [2] and COACH [3], COACH has a higher axial resolution than FINCH, whereas FINCH has a higher lateral resolution than COACH and direct imaging. Since the optical configurations of COACH and FINCH are identical and consist of an active device (SLM), it is possible to switch easily between FINCH and COACH simply by switching between the quadratic phase mask and the CPM while maintaining the distance conditions of FINCH. A method was developed to access the intermediate states of the system between COACH and FINCH, providing mixed characteristics of both and tunability between them [33]. This was achieved by engineering the phase mask displayed on the SLM. The hybrid CPM is given as \(t = e^{{i\left( { - \pi br^{2} /\lambda + \alpha \phi_{r} + \theta_{k} } \right)}}\), where \(\phi_{r}\) is the random phase function. When α = 1, the system is COACH; when α = 0, the system is FINCH; and for any other value, the system exhibits mixed characteristics. The experiment was carried out for two planes consisting of the USAF and NBS (10 lp/mm) resolution charts in two channels combined into one using a beamsplitter. The distance between the two objects was varied from 0 to 3 cm to generate 3D objects with different thicknesses, and different systems (direct imaging, FINCH, COACH, and hybrid FINCH–COACH) were studied. The reconstruction results are shown in Fig. 8. As seen in Fig. 8, when α increases, the axial resolution and the noise increase. In principle, the lateral resolution should also decrease when α is increased, but owing to the use of a pinhole larger than the resolution limit in the object plane, the changes in lateral resolution are minor.
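
As a minimal sketch, the hybrid mask can be generated as follows; the quadratic coefficient b (playing the role of an inverse focal length), the wavelength, and the SLM pitch are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the hybrid COACH-FINCH mask
# t = exp{i(-pi*b*r^2/lambda + alpha*phi_r + theta_k)}.
# b, lam, and the pitch are illustrative; alpha interpolates between the modes.
N, pitch, lam, b = 1080, 8e-6, 635e-9, 1.0 / 0.3   # b ~ inverse focal length [1/m]
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
phi_r = 2 * np.pi * np.random.rand(N, N)           # random phase function phi_r

def hybrid_cpm(alpha, theta_k=0.0):
    """alpha = 0 -> FINCH (pure diffractive lens); alpha = 1 -> COACH (pure CPM)."""
    return np.exp(1j * (-np.pi * b * (X**2 + Y**2) / lam + alpha * phi_r + theta_k))
```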

Fig. 8
figure 8

Imaging characteristics of the direct imager and the hybrid system for α = 0, 0.2, 0.4, 0.6, 0.8, and 1 when the spacing between the NBS and USAF resolution target was increased from 0 to 3 cm in steps of 1 cm

2.3.2 Discussion and future perspectives

In one study, the axial and spectral resolution of COACH was enhanced by replacing Lens-1 of Fig. 7 with a diffractive lens [89]. This system, called 4D COACH, showed the capability to convert 2D monochrome imaging into color 3D imaging. In another study in 2017, the FINCH and COACH systems were integrated to develop a super-resolution imaging system [81]. The light from an object is scattered by a quasi-random phase mask before it enters the FINCH system. The quasi-random mask diffuses the higher spatial frequencies, which are usually lost, back into the system, resulting in a resolution improvement. This type of COACH–FINCH integration resulted in a resolution enhancement of about five times over a direct imaging system.

In 2017, COACH evolved into interferenceless COACH (I-COACH), following the discovery that the 3D information of the object is present not only in the phase but also in the amplitude of the scattered wave [34]. Hence, two-beam interference is no longer necessary to record 3D information. I-COACH has replaced COACH in many applications, owing to its simplified configuration without two-beam interference, improved power efficiency, and higher signal-to-noise ratio. However, there are still certain areas of study and applications where COACH remains relevant, for instance, incoherent holographic systems in which two-beam interference is essential for the required application [76, 77, 90]. Another application that emphasizes the superiority of COACH over I-COACH and other techniques is the recently proposed system termed the one-channel-at-a-time incoherent synthetic aperture imager (OCTISAI) [91].

Looking to the future, COACH and I-COACH will probably continue to evolve in two main directions: improving the imaging techniques on the one hand, and searching for new applications on the other. Examples of recently proposed techniques are I-COACH with a modified nonlinear reconstruction [92] and with an annular sparse PSH [93]. Regarding new applications, the above-mentioned OCTISAI [91] is a typical example of a new use for COACH, and imaging with an engineered depth of field [94] is an example of a new application of I-COACH. These and other examples indicate that COACH is an active field of research with a promising future.

2.4 FZA lensless imaging with information science (Liangcai Cao and Jiachen Wu)

2.4.1 Introduction

Traditional optical imaging architectures follow a point-to-point imaging model using a set of lenses. This way of collecting and converging light limits the freedom of the imaging parameters. By introducing computing capability into the imaging system, the burden of imaging can be transferred from bulky and expensive hardware to computation, which enables new architectures for low-cost cameras. Lensless imaging, as a typical branch of computational imaging, adopts modulated illumination or utilizes flat optical elements instead of lenses to build simple, lightweight imaging systems, and it shows huge development potential in sensing and photography.

In the past few years, various lensless imaging techniques have been proposed for coherent systems, such as on-chip microscopy [95, 96], coherent diffractive imaging [97], and a series of learning-based methods [98,99,100]. These methods have strict requirements on the coherence of the illumination, which limits their applications. Mask-based lensless imaging utilizes a flat optical element to modulate the incident light so that it can work under incoherent illumination. The classical masks, which include the uniform redundant array (URA) [101] and the modified URA (MURA) [102], have been used in X-ray and γ-ray imaging. However, they are susceptible to diffraction at visible wavelengths. To address this problem, various masks, such as phase masks [103, 104], separable masks [105, 106], and diffusers [107], have been designed to obtain robust reconstructions. Moreover, benefiting from the development of machine learning, several learning-based methods achieve both fast reconstruction and high image quality [108,109,110]. In this chapter, the principles and reconstruction methods related to the Fresnel zone aperture (FZA) are illustrated.

2.4.2 Principles

2.4.2.1 Zone plate coded imaging

Soon after Gabor’s invention of holography [111], Rogers noted that the pattern of the Fresnel zone plate coincides with the hologram of a point source [112]. Inspired by this, Mertz and Young proposed zone plate coded imaging [5], extending the concept and application of holography to the field of incoherent light. Zone plate coded imaging has traditionally been used in X-ray and γ-ray imaging [113, 114], because the energy of hard X-rays and γ-rays is too high for them to be reflected or refracted; they simply pass through lenses and mirrors. Recent works have demonstrated the imaging capability of the Fresnel zone plate, also called the FZA, at visible wavelengths [4, 35, 115, 116].

The FZA pattern is modeled in intensity transmissivity as

$$T\left( r \right) = \frac{1}{2} + \frac{1}{2}\cos \left( {\frac{\pi }{{r_{1}^{2} }}r^{2} } \right),$$
(1)

where \(r_{1}\) is the radius of the first zone, called the FZA constant, and r is the radial coordinate in the FZA plane. Each point on the object surface diffuses light and can be considered a point light source. The sketch of FZA imaging is shown in Fig. 9. Assume the object is placed at a distance \(z_{1}\) from the FZA and the FZA is placed in front of an image sensor at a distance \(z_{2}\). The shadow of the FZA illuminated by a point source is expanded on the sensor by the magnification factor \(\left( {z_{1} + z_{2} } \right)/z_{1}\). When \(z_{1} \gg z_{2}\), the FZA shadow has almost the same size as the FZA pattern. In this case, the imaging process can be formulated as

$$I\left( {\mathbf{r}} \right) = \frac{1}{2}\sum\limits_{k}^{N} {I_{k} \left[ {1 + \cos \left( {\frac{\pi }{{r_{1}^{2} }}\left| {{\mathbf{r}} - {\mathbf{r}}_{k} } \right|^{2} } \right)} \right]} + e\left( {x,y} \right),$$
(2)

where \(I\left( {\mathbf{r}} \right)\) is the intensity distribution on the sensor plane; \(I_{k}\) is the intensity of the kth point source; \({\mathbf{r}}\) is an arbitrary position vector, and \({\mathbf{r}}_{k}\) is the translation vector of the kth shadow in the sensor plane. Each FZA shadow can be considered a point-source hologram that encodes the intensity and location of the point source. All these elementary holograms synthesize the final measurement. The reconstruction can then be performed by coherent propagation, which can be formulated as

$$O_{R} \left( {r_{o} } \right) = \frac{{\exp \left( {i2\pi d/\lambda } \right)}}{{i\lambda d}}\iint {I\left( r \right)\exp \left[ {\frac{i\pi }{{\lambda d}}\left| {r - r_{o} } \right|^{2} } \right]dS},$$
(3)

where λ and d represent the wavelength of the reconstruction wavefront and the reconstruction distance, respectively. To ensure correct reconstruction, the two parameters should satisfy \(r_{1}^{2} = \lambda d\). Substituting Eq. (2) into Eq. (3) and ignoring the constant coefficient, Eq. (3) becomes

$$\begin{aligned} O_{R} \left( {r_{o} } \right) &= \frac{1}{2}\iint {\exp \left( {\frac{i\pi }{{r_{1}^{2} }}\left| {r - r_{o} } \right|^{2} } \right)dS} \cdot \sum\limits_{k}^{N} {I_{k} } \\ &\quad + \frac{1}{4}\sum\limits_{k}^{N} {I_{k} } \iint {\exp \left[ {\frac{i\pi }{{r_{1}^{2} }}\left( {\left| {r - r_{o} } \right|^{2} - \left| {r - r_{k} } \right|^{2} } \right)} \right]dS} \\ &\quad + \frac{1}{4}\sum\limits_{k}^{N} {I_{k} } \iint {\exp \left[ {\frac{i\pi }{{r_{1}^{2} }}\left( {\left| {r - r_{o} } \right|^{2} + \left| {r - r_{k} } \right|^{2} } \right)} \right]dS} \\ & = \frac{{ir_{1}^{2} }}{2}\sum\limits_{k}^{N} {I_{k} } + \frac{{r_{1}^{4} }}{4}\sum\limits_{k}^{N} {I_{k} \delta \left( {r_{o} - r_{k} } \right)} \\ &\quad + \frac{{ir_{1}^{2} }}{8}\sum\limits_{k}^{N} {I_{k} \exp \left( {\frac{i\pi }{{2r_{1}^{2} }}\left| {r_{o} - r_{k} } \right|^{2} } \right)} . \\ \end{aligned}$$
(4)
Fig. 9
figure 9

The sketch of FZA imaging. a The object is illuminated by incoherent light; the FZA then encodes the incident light as an in-line holographic pattern. b Each point source in the scene casts an FZA shadow on the sensor plane. These shadows are superimposed and form a pattern with the same form as an in-line hologram. c Applying coherent propagation to the recorded pattern reconstructs the point sources

The first term is a constant that is proportional to the total intensity of the object. The second term is a set of points appearing at the same locations as the geometrical image points, which make up the image of the original object. The third term is a superposition of spherical waves propagating from a distance 2d. It can be considered an out-of-focus image, the so-called twin image. Eliminating the twin image is the key to improving imaging quality.

2.4.2.2 Compressive reconstruction

Compressive sensing (CS) is a powerful signal reconstruction framework, especially for image reconstruction. CS algorithms have been applied in holography and have successfully solved the crosstalk and twin-image issues [117, 118]. Let us denote the number of image samples by \(N_{x} \times N_{y} = N_{xy}\). The constant term in Eq. (4) can be removed by filtering out the DC component (zeroth order). The observed image \(I \in {\mathbb{R}}^{{N_{xy} }}\) is then expressed as a function of \(O \in {\mathbb{R}}^{{N_{xy} }}\) through the forward model based on the angular spectrum method:

$$\begin{aligned} I &= \frac{1}{2}{\text{Re}} \left\{ {\left( {{\mathcal{F}}^{ - 1} H{\mathcal{F}}} \right)O} \right\} + e \\ &= \frac{1}{2}{\text{Re}} \left\{ {GO} \right\} + e \\ &= KO + e. \\ \end{aligned}$$
(5)

where \({\mathcal{F}} \in {\mathbb{C}}^{{N_{xy} \times N_{xy} }}\) is the 2D Fourier transform matrix and \({\mathcal{F}}^{ - 1} \in {\mathbb{C}}^{{N_{xy} \times N_{xy} }}\) is the corresponding inverse Fourier transform matrix; \(H \in {\mathbb{C}}^{{N_{xy} \times N_{xy} }}\) is a diagonal matrix consisting of the transfer function of Fresnel propagation; and K is the combined operator of the forward transform G and taking the real part of the complex field. Solving for O with known I and forward transform K is a typical inverse problem. To eliminate the twin image, a regularizer based on prior knowledge should be introduced into the image reconstruction model. Since the twin image is in essence a defocused image, and the gradient of a focused image is much sparser than that of a defocused image, total variation (TV) regularization is introduced to enforce a sparsity constraint in the gradient domain. The reconstruction can be represented as the following TV minimization problem:

$$\hat{O} = \arg \mathop {\min }\limits_{O} \left \{ \frac{1}{2}\left\| {I - KO} \right\|_{2}^{2} + \tau \left\| O \right\|_{{{\text{TV}}}} \right \}, $$
(6)

where τ is the regularization parameter, which controls the relative weight of the two terms. By solving Eq. (6) with an optimization algorithm, such as the two-step iterative shrinkage/thresholding (TwIST) algorithm [119], the twin-image noise can be substantially eliminated.
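
A minimal sketch of this reconstruction is given below, assuming the plane-wave geometry (\(z_{1} \gg z_{2}\)) and replacing TwIST with plain gradient descent on a smoothed-TV objective; the FZA constant, noise level, and step sizes are illustrative.

```python
import numpy as np

# Minimal sketch of the TV-regularized FZA reconstruction of Eq. (6).
# TwIST is replaced by plain gradient descent on a smoothed-TV objective.
N, dx, lam = 256, 10e-6, 633e-9
r1 = 0.2e-3                                   # FZA constant [m]
d = r1**2 / lam                               # matched distance: r1^2 = lam * d

fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * lam * d * (FX**2 + FY**2))  # paraxial Fresnel transfer fn (const. phase dropped)

K  = lambda O: 0.5 * np.real(np.fft.ifft2(H * np.fft.fft2(O)))            # forward model, Eq. (5)
KT = lambda I: 0.5 * np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(I)))   # its adjoint

def tv_grad(O, eps=1e-6):
    """Gradient of the smoothed isotropic TV term."""
    gx, gy = np.roll(O, -1, 1) - O, np.roll(O, -1, 0) - O
    mag = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / mag, gy / mag
    return -((px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0)))

O_true = np.zeros((N, N)); O_true[96:160, 96:160] = 1.0
I = K(O_true) + 1e-3 * np.random.randn(N, N)  # noisy, DC-removed sensor image

O, step, tau = np.zeros((N, N)), 1.0, 2e-3
for _ in range(200):                          # gradient descent on Eq. (6)
    O -= step * (KT(K(O) - I) + tau * tv_grad(O))
    O = np.clip(O, 0, None)                   # intensities are nonnegative
```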

2.4.2.3 Learning-based reconstruction

Machine learning approaches are data-driven methods that learn underlying laws from large data sets to fit complex models. For the reconstruction of FZA imaging, a deep network can be trained to decode the sensor image into the scene image. Assuming the mapping is learned with a deep neural network \({\mathcal{N}}\), parameterized by θ, the network is trained to minimize the objective function:

$$\hat{\theta } = \arg \mathop {\min }\limits_{\theta } \sum\limits_{{X,Y \in {\mathcal{D}}}} {{\mathcal{L}}\left( {{\mathcal{N}}\left( {X;\theta } \right),Y} \right)} ,$$
(7)

where \({\mathcal{D}}\) represents a dataset of input sensor images X and output scene images Y. The loss function \({\mathcal{L}}\) describes how close the network output is to the ground truth. The dataset can be acquired either experimentally or computationally. Experimental acquisition requires a stable setup and the elimination of ambient light interference, which is sometimes impractical. Numerical generation is flexible. To make the generated images as close as possible to the real distribution, the diffraction effect should be taken into account in the forward model.

When the object is far away from the camera, namely \(z_{1} \gg z_{2}\), the point spread function (PSF) of FZA imaging is a diffraction pattern of the FZA under the plane wave illumination:

$$U\left( {x,y;\lambda } \right) = \left| {{\mathcal{F}}^{ - 1} \left\{ {{\mathcal{F}}\left\{ {M\left( {x,y} \right)} \right\} \cdot H\left( {\xi ,\eta ;\lambda ,d} \right)} \right\}} \right|^{2} .$$
(8)

Here, M is the amplitude mask pattern, and H is the angular spectrum transfer function. For a broadband light source, the PSF is calculated by integrating the diffracted intensities of multiple wavelengths. Since the image sensor has a different sensitivity to different wavelengths of light, the integration should be weighted by the spectral responsivity \(Q_{c} \left( \lambda \right)\)

$$PSF\left( {x,y} \right) = \int {Q_{c} \left( \lambda \right)U\left( {x,y;\lambda } \right)d\lambda } .$$
(9)

Finally, the sensor image X is calculated by the convolution of scene image Y and PSF

$$X = Y * PSF.$$
(10)
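
A minimal sketch of this forward model for training-data synthesis is given below; the three sample wavelengths, the responsivity weights \(Q_{c}\), the binary FZA mask, the random stand-in scene, and the 3 mm mask–sensor distance are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of the forward model of Eqs. (8)-(10): broadband diffracted
# PSF of a binary FZA, then sensor-image synthesis by convolution.
N, dx, d = 512, 3.45e-6, 3e-3                 # grid, pitch [m], mask-sensor distance [m]
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
r1 = 0.25e-3
M = 0.5 * (1 + np.sign(np.cos(np.pi * (X**2 + Y**2) / r1**2)))  # binary FZA, Eq. (1) thresholded

fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)

def diffracted_psf(lam):
    """|U|^2 of the mask propagated a distance d (angular spectrum), Eq. (8)."""
    arg = 1.0 - (lam * FX)**2 - (lam * FY)**2
    H = np.where(arg > 0, np.exp(2j * np.pi * d / lam * np.sqrt(np.maximum(arg, 0))), 0)
    return np.abs(np.fft.ifft2(np.fft.fft2(M) * H))**2

# Eq. (9): spectrally weighted PSF (hypothetical responsivities Qc)
waves, Qc = [450e-9, 550e-9, 650e-9], [0.2, 0.5, 0.3]
PSF = sum(q * diffracted_psf(l) for q, l in zip(Qc, waves))
PSF /= PSF.sum()

Y_scene = np.random.rand(N, N)                # stand-in scene image
X_sensor = np.real(np.fft.ifft2(np.fft.fft2(Y_scene) *
                                np.fft.fft2(np.fft.ifftshift(PSF))))  # Eq. (10)
```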

Owing to the rapid inference of the DNN, the trained model can be integrated into the hardware to build a real-time DNN-FZA camera. The image acquisition and reconstruction pipeline of the DNN-FZA camera is shown in Fig. 10.

Fig. 10
figure 10

Image acquisition pipeline and reconstruction for the DNN-FZA camera

2.4.3 Implementation

The FZA imaging system consists only of a sensor and a tightly attached mask plate. In our experiments, a custom-made photomask is placed in front of the sensor so that it touches the protective glass of the sensor, as shown in Fig. 11. For mass production, the FZA pattern could be deposited directly on the cover glass of the sensor. The outward side of the photomask is covered with the FZA pattern defined by a thin chrome film. The transparent regions of the mask transmit light, while the chrome-film regions block light. The thickness of the photomask is 2 mm, and the thickness of the protective glass is 1 mm. Thus, the distance between the mask and the sensor is 3 mm. A liquid crystal display monitor is placed ~ 30 cm from the photomask. The test images are displayed on the screen, and the sensor records the rays from the screen that are modulated by the mask. For each test image, we adjusted the exposure time according to the scene intensity to avoid over- or underexposure.

Fig. 11
figure 11

Experimental apparatus

A comparison of the CS and DNN reconstruction results is shown in Fig. 12. By using the diffracted PSF instead of the geometrical PSF, the reconstructed image quality is significantly improved. The DNN method provides reconstruction quality comparable to the CS method, while the computing speed is improved by two orders of magnitude.

Fig. 12
figure 12

Comparison of different reconstruction methods

2.4.4 Outlook

The prospect of FZA imaging with such a relatively thin setup opens up all kinds of applications. One is a portable camera to image various objects, from a person in a room to the landscape outdoors. This approach can fulfill a range of recognition tasks, such as object detection, character recognition, and face recognition. Furthermore, the proposed camera can be monolithically fabricated by depositing the FZA pattern on the cover glass of the sensor, so that it can be readily integrated with portable devices or any flat surface. This ultrathin, low-cost camera has great potential with the development of computational imaging. Its easy-to-build and lightweight architecture has promising applications in miniaturized devices for surveillance and biomedicine. The imaging resolution of this mask-based camera also has much room for further improvement by modifying the mask so that the diffracted pattern coincides with the expected pattern.

2.5 Phase-shifting interferometry techniques applied to incoherent digital holography (Tatsuki Tahara, Takako Koujin, Atsushi Matsuda, Ayumi Ishii, Yuichi Kozawa, Ryo Okamoto, and Ryutaro Oi)

2.5.1 Single-shot phase-shifting holography applied to Fresnel incoherent correlation holography (FINCH)

FINCH [70] adopts PSI to conduct high-quality 3D imaging of spatially incoherent light and can be combined with single-shot phase-shifting (SSPS) holography [120,121,122] to achieve single-shot 3D imaging without undesired-order diffraction waves. Figure 13 illustrates implementations of SSPS holography. Three types of representative implementation have been proposed to date: implementations with diffractive optical element(s) and transformation optics [120], with a quarter-wave plate and a micro-polarizer array [121], and with an array composed of micro glass cells and micro waveplates together with a micro-polarizer array [122]. Multiple phase-shifted holograms are simultaneously recorded on a single image sensor in a single exposure. Three types of IDH with SSPS [123,124,125] have also been proposed, as shown in Fig. 14a–c: (a) an implementation with a polarizer, a spatial phase retarder, a quarter-wave plate, and a polarization-imaging sensor [123]; (b) a two-arm interferometer with checkerboard-pattern gratings and an image sensor [124]; and (c) a polarizer, a geometric phase lens, a refractive lens, and a polarization-imaging sensor [125, 126]. The implementation shown in Fig. 14a corresponds to the combination of FINCH and SSPS. Although the implementation shown in Fig. 14b is composed of a two-arm interferometer, the tolerance against external vibrations can be much improved by attaching the gratings to the beam combiner. The IDH system using the setup shown in Fig. 14c is also introduced in Sect. 3.2. Using IDH with SSPS, single-shot holographic 3D imaging of spatially incoherent light has been demonstrated. This technique is applicable to single-shot full-color 3D imaging even with white light [74].
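
As a minimal sketch of the demodulation step (assuming a 2 × 2 micro-polarizer mosaic that maps to phase shifts of 0, π/2, π, and 3π/2, which is one common arrangement, and a synthetic raw frame), four phase-shifted sub-holograms are extracted by subsampling and combined by four-step PSI:

```python
import numpy as np

# Minimal sketch of SSPS demodulation from a polarization-imaging sensor.
# Assumption: a 2x2 pixel mosaic whose cells correspond to phase shifts of
# 0, pi/2, pi, and 3*pi/2 (the actual mapping depends on the sensor/setup).
def ssps_complex_hologram(raw):
    I0   = raw[0::2, 0::2]          # phase shift 0
    I90  = raw[0::2, 1::2]          # phase shift pi/2
    I180 = raw[1::2, 1::2]          # phase shift pi
    I270 = raw[1::2, 0::2]          # phase shift 3*pi/2
    # four-step phase-shifting combination (sign convention is setup-dependent)
    return (I0 - I180) + 1j * (I90 - I270)

raw = np.random.rand(512, 512)      # stand-in for a single recorded mosaic frame
Hc = ssps_complex_hologram(raw)     # quarter-sampled complex hologram
```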

Fig. 13
figure 13

Three types of representative implementation in SSPS holography. Implementations a based on diffraction grating holography [120], b with a quarter wave plate (QWP) and an image sensor with a micro-polarizer array [121], and c with an image sensor with arrays of phase retarders and polarizers [122]. The phase-retarder array consists of glass cells and QWPs

Fig. 14
figure 14

Three types of implementation in SSPS incoherent holography. Implementations a based on FINCH [123], b with checkerboard gratings [124], and c with geometric phase and refractive lenses [125, 126]

2.5.2 Computational coherent superposition applied to incoherent holography

The computational coherent superposition (CCS) scheme [127,128,129] is used in multidimension-multiplexed PSI. Before the proposal of CCS, single-path polarization-based spectroscopy was proposed by Ueda and Takuma [130], and the CCS scheme is an advance on the work in ref. [39], with the 3D imaging ability of CCS-IDH newly incorporated. Using multidimension-multiplexed PSI, object waves at multiple wavelengths are simultaneously recorded on a monochrome image sensor and selectively retrieved from the recorded wavelength-multiplexed phase-shifted holograms. The key is the phase encoding of the respective object waves so that they can be demodulated by PSI. By introducing wavelength-dependent phase shifts when recording the holograms, the object waves at multiple wavelengths are separately extracted after the PSI calculation. When using the CCS scheme, no absorption filter for separating the wavelengths is required to conduct full-color 3D imaging with a monochrome image sensor. In the CCS scheme, two types of representative implementation have thus far been considered, as shown in Fig. 15a, b: implementations with (a) two polarizers, a birefringent lens, a polarization-sensitive phase modulator, and a monochrome image sensor [39], and (b) a polarizer, a birefringent lens, and a monochrome image sensor with a wavelength-dependent polarization-sensitive phase-modulation (WPP) array and a polarizer [36]. In the former, full-color 3D imaging of spatially incoherent light is realized without loss of the space-bandwidth product at the cost of some temporal resolution. In the latter, single-shot color 3D imaging of spatially incoherent light is demonstrated experimentally at the cost of a reduced space-bandwidth product. Figure 15c shows experimental results obtained using the implementation in Fig. 15a and the optical system shown in refs. [131, 132]. Fluorescence-stained HeLa cells are imaged as holographic color 3D images.
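
The demodulation can be sketched as a per-pixel linear inversion. The following is a minimal sketch assuming two wavelengths, five recorded holograms, and illustrative wavelength-dependent phase-shift sets; the published CCS implementations use their own specific shift schedules.

```python
import numpy as np

# Minimal sketch of CCS-style demultiplexing: each recorded hologram is modeled
# per pixel as I_m = B + Re{H1 e^{i theta1_m}} + Re{H2 e^{i theta2_m}}, with
# wavelength-dependent shift sets theta1, theta2 (illustrative values).
theta1 = np.array([0, 1, 2, 3, 4]) * (2 * np.pi / 5)   # shifts seen at wavelength 1
theta2 = np.array([0, 1, 2, 3, 4]) * (4 * np.pi / 5)   # shifts seen at wavelength 2

# unknowns per pixel: B, Re/Im of H1, Re/Im of H2 -> a 5x5 linear system
A = np.stack([np.ones(5),
              np.cos(theta1), -np.sin(theta1),
              np.cos(theta2), -np.sin(theta2)], axis=1)

def demux(I_stack):
    """I_stack: (5, Ny, Nx) wavelength-multiplexed phase-shifted holograms."""
    m, ny, nx = I_stack.shape
    sol = np.linalg.solve(A.T @ A, A.T @ I_stack.reshape(m, -1))
    H1 = (sol[1] + 1j * sol[2]).reshape(ny, nx)   # complex hologram, wavelength 1
    H2 = (sol[3] + 1j * sol[4]).reshape(ny, nx)   # complex hologram, wavelength 2
    return H1, H2
```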

Fig. 15
figure 15

CCS applied to incoherent holography. CCS-IDH systems with a a single-path polarization-based spectrometer and b SSPS. c Photograph of a color-multiplexed fluorescence digital holographic microscopy system. d Example of experimental results obtained by CCS-IDH with multiple exposures (a) when applied to color-multiplexed fluorescence digital holographic microscopy. Details are described in ref. [131]. The brightness of (c) is enhanced

2.5.3 Discussion and conclusions

PSI techniques have been applied to incoherent holography. In OSH [1], PSI has been applied to various optical systems [133,134,135]. CCS is also applicable to other types of incoherent holography, such as COACH [3]. FINCH with SSPS is now useful for single-shot full-color 3D imaging of sunlight and of white light generated by a halogen lamp, owing to the development of commercially available color polarization image sensors with more than 5 megapixels and high pixel density. High-speed multidimensional imaging of nanoscopic specimens by IDH with PSI has been reported [136, 137]. PSI techniques contribute to both laser holography and IDH.

3 Applications of incoherent digital holography

3.1 Bimodal incoherent digital holography—3D and infinite depth-of-field imaging (Teruyoshi Nobukawa)

3.1.1 Principle

IDH enables passive 3D imaging under spatially incoherent light [44, 69]. IDH's core functionality is based on self-interference. Introducing shear between the light waves during self-interference creates a Fresnel hologram that contains the 3D information about the object to be captured. Self-interference can also be applied to infinite depth-of-field (DOF) imaging by modifying the shear condition [25, 138]. In this case, a Fourier hologram is created, and the object's axial information is lost. Therefore, self-interference with appropriate shear conditions allows us to implement either 3D imaging or infinite DOF imaging. Both offer a novel means of expression in photography and broadcasting by providing images that are more informative than those obtained using a conventional 2D camera.

A bimodal IDH system [47] was proposed to implement and switch between 3D imaging and infinite DOF imaging in a common geometry. Figure 16 shows a schematic of the bimodal IDH system. To implement self-interference, the spatially incoherent light from an object is split into two waves. One of them is modulated with a varifocal lens, and the other with a rotating prism. The varifocal lens, with its changeable focal length, is used to switch between the 3D and infinite DOF imaging functions. This lens can be realized with a liquid crystal lens or an SLM. The rotating prism is necessary to implement infinite DOF imaging. The modulated waves are recombined after the modulation, resulting in a self-interference hologram. When the phase pattern on the varifocal lens is a spherical phase, the self-interference hologram exhibits the properties of a Fresnel hologram. When the phase on the varifocal lens is unity, the self-interference hologram exhibits the properties of a Fourier hologram. It is possible to reconstruct 3D images and infinite DOF images by applying numerical propagation based on the angular spectrum method [139] and the Fourier transform to the Fresnel and Fourier holograms, respectively. A proof-of-principle experiment was performed to verify the feasibility of the system using the optical setup shown in Fig. 17a. The object was two metal plates shaped like the paper and scissors signs of a hand game. A phase-only SLM and a pair of dove prisms [140] were used to realize the varifocal lens and the rotating prism. The imaging functions were switched by changing the phase pattern on the phase-only SLM. Figure 17b and c shows the reconstructed images using 3D imaging: the images of the scissors and paper are in focus individually. Figure 17d shows the reconstructed image using infinite DOF imaging; both objects are in focus, unlike in the reconstructed images of Fig. 17b and c. The results reveal that the bimodal IDH system can implement and switch between 3D imaging and infinite DOF imaging in a common geometry by electronically changing the phase pattern. In conventional studies of IDH or FINCH, self-reference holograms are recorded without rotating the angle of the light; in the bimodal IDH system, however, the holograms are recorded with the light rotated. This additional degree of freedom is useful for changing imaging properties, such as the spatial resolution, DOF, and image magnification [141].
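
A minimal sketch of the two reconstruction paths is given below, assuming a recorded complex hologram `Hc` and illustrative sampling parameters; the Fresnel mode uses angular spectrum propagation and the Fourier mode a single Fourier transform.

```python
import numpy as np

# Minimal sketch of the two reconstruction paths of the bimodal IDH system.
# Assumptions: a recorded complex hologram Hc; illustrative pitch and wavelength.
def angular_spectrum(Hc, dx, lam, z):
    """Numerical propagation of Hc by a distance z (angular spectrum method)."""
    N = Hc.shape[0]
    fx = np.fft.fftfreq(N, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (lam * FX)**2 - (lam * FY)**2
    H = np.where(arg > 0, np.exp(2j * np.pi * z / lam * np.sqrt(np.maximum(arg, 0))), 0)
    return np.fft.ifft2(np.fft.fft2(Hc) * H)

def reconstruct(Hc, mode, dx=3.45e-6, lam=532e-9, z=0.1):
    if mode == "3d":                             # Fresnel hologram: refocus at depth z
        return angular_spectrum(Hc, dx, lam, z)
    return np.fft.fftshift(np.fft.fft2(Hc))     # Fourier hologram: infinite DOF image
```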

Fig. 16

Schematic of switchable computational imaging based on self-interference for 3D and infinite DOF imaging

Fig. 17

Proof-of-principle experiment—a experimental setup; b, c reconstructed images by 3D imaging; and d reconstructed image by infinite DOF imaging

As demonstrated in the above experiments, IDH exhibits attractive imaging capabilities; however, a major technical difficulty of IDH is the low quality of the reconstructed images. This is mainly due to the low contrast of holograms in IDH, which makes them sensitive to detector noise. The contrast is relatively high for sparse objects, for which the effect of noise is limited; however, when capturing holograms of dense objects, the presence of noise must be considered. A simple yet effective solution is to improve the light usage efficiency. IDH requires a narrow bandpass filter to enhance temporal coherence and thereby create holograms with high spatial resolution. Although there is a tradeoff between the light usage efficiency, which the narrow bandpass filter reduces, and the spatial resolution [46, 142, 143], theoretical investigation suggests that there are potentially better choices of optical setup that improve the light usage efficiency while suppressing the degradation in spatial resolution [46]. Optimizing the optical setup to improve light usage efficiency without loss of spatial resolution will become increasingly important as the operation of IDH is explored in further detail. Digital-based solutions are also significant in addition to such analog-based solutions: even if the light usage efficiency is reduced by the bandpass filter, an IDH system can capture high-quality 3D images provided the noise is adequately reduced by digital postprocessing. Because the noise is randomly generated, temporal averaging of holograms is effective [144]; alternatively, noise can be reduced by spatial averaging with careful selection of conditions [145]. Moreover, recent computational techniques such as compressive sensing [146] and deep learning should further improve the performance of IDH if the forward model of the IDH system is properly designed [147].
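As a minimal sketch of the temporal-averaging approach, assuming a static scene and a stack of sequentially extracted complex holograms (names are illustrative):

```python
import numpy as np

def temporally_averaged_hologram(holograms):
    """Average sequentially captured complex holograms of a static scene.

    Zero-mean random detector noise shrinks roughly as 1/sqrt(N) in the
    averaged field, while the object term, identical in every frame,
    is preserved (cf. [144]).
    """
    stack = np.stack(list(holograms))
    return stack.mean(axis=0)
```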

3.1.2 Conclusions

IDH offers unique imaging properties, such as passive 3D imaging, infinite DOF imaging, and super-resolution due to the violation of the Lagrange invariant [32, 148,149,150]. However, significant challenges remain to be overcome. In particular, noise must be removed, to a degree that depends on the sparseness of the object to be captured. With further investigation, IDH should introduce and stimulate significant and attractive applications in various fields such as photography, biology, and measurement.

3.2 Holographic camera (Kihong Choi)

3.2.1 Status

Numerous works have addressed the development of digital holographic displays toward the ultimate goal of an immersive display [151,152,153]. Meanwhile, studies on holographic content creation for such displays have mainly focused on methods to generate holograms via computation [154], while real-world holographic acquisition has relied on laser-based holographic interferometers or on indirect three-dimensional (3D) optical information capturing methods. However, a conventional holographic interferometer is difficult to apply to various shooting situations because of the limitations of the light source and the limited mobility of the system. Holographic content generation from other 3D acquisition methods, such as RGB-D or light-field cameras, requires a cumbersome computational process to convert the original data format into complex-valued holographic data. Therefore, it is necessary to develop a holographic camera that can directly and instantly capture a 3D scene under general light sources as a complex hologram, to transmit information to futuristic holographic display systems.

To develop a practical holographic camera, the following requirements, which modern cameras already satisfy, must be met. It should be possible to acquire a hologram under general light sources with short coherence lengths. The path length required by the optical system should be short enough for the camera to be embedded in future mobile devices. Beamsplitters and mirrors should be avoided to reduce the form factor and weight. Lastly, real-time acquisition should be possible, both to capture dynamic scenes and to support a futuristic holographic broadcasting service. In this regard, our research on the development of a holographic camera started from the well-known incoherent digital holographic recording systems, which are independent of the type of light source [2, 34, 44]. Several systems have been introduced as incoherent digital holography technology so far, but only a few reports contribute to the development of a holographic camera. As a representative achievement, hologram acquisition of an outdoor building under daylight, and of the surface of the moon in the night sky, has been demonstrated using the Michelson interferometric configuration, also referred to as SIDH [44, 155].

A SIDH system using a geometric phase (GP) lens as the wavefront-modulating element has been reported; it is called GP-SIDH for short [125, 156]. Although it is widely classified as a SIDH system, it resembles the FINCH system in its polarization-based wavefront-division method. The essential components of the GP-SIDH system are two polarizers and a GP lens, which provide a simple and compact form factor. Current GP-SIDH systems provide full-color incoherent holographic video recording. In the remainder of this section, the principle of operation is described, followed by a demonstration of hologram acquisition.

3.2.2 Principle and demonstration

In GP-SIDH, the GP lens serves as a dual-focal-length lens for circularly polarized input. When right-handed circularly polarized light passes through the lens, the light experiences it as a convex lens and the polarization state is converted into the orthogonal one. Left-handed circularly polarized input behaves conversely, with the same polarization conversion. Therefore, a Fresnel hologram is obtained at the sensor plane by the self-interference of two spherical waves with different curvatures.
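The following Python snippet is a minimal sketch of this self-interference for a single on-axis point source, assuming paraxial spherical waves with curvature radii z_a and z_b at the sensor; all names and values are illustrative.

```python
import numpy as np

def point_source_hologram(z_a, z_b, wavelength, pitch, n):
    """Paraxial self-interference pattern of one point source:
    two spherical waves with curvature radii z_a and z_b interfere
    at the sensor, forming a Fresnel-zone-like fringe pattern."""
    k = 2.0 * np.pi / wavelength
    x = (np.arange(n) - n / 2) * pitch          # sensor coordinates [m]
    X, Y = np.meshgrid(x, x)
    r2 = X**2 + Y**2
    u = (np.exp(1j * k * r2 / (2.0 * z_a))
         + np.exp(1j * k * r2 / (2.0 * z_b)))
    return np.abs(u)**2

# e.g., pattern = point_source_hologram(0.25, 0.35, 550e-9, 3.45e-6, 1024)
```

An extended incoherent object records the intensity sum of such patterns, one per object point.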

In addition to the wavefront-modulating functionality of the GP lens, the polarizer–GP lens–polarizer configuration gives rise to a phase-shifting effect as well. The GP lens is a kind of flat metasurface lens in which tiny waveplates are formed from chiral liquid crystals (LC) [157]. The retardance over the whole surface of the LC layer is set to half of the input wavelength λ, while the director angle ϕ of the LC molecules is determined by \(2\pi \left( \sqrt{x^{2} + y^{2} + f^{2}} - f \right)/\lambda\), where x and y are the local coordinates of the lens and f is the intended focal length of the lens at λ. When the transmission axis of the first linear polarizer is parallel to the ground plane and the transmission axis of the last linear polarizer is rotated by δ, the Jones matrix chain for normally incident light at an arbitrary location (x, y) can be represented as

$$R\left( { - \delta } \right)\left( {\begin{array}{*{20}c} 1 & 0 \\ 0 & 0 \\ \end{array} } \right)R\left( \delta \right)R\left( { - \phi } \right)\left( {\begin{array}{*{20}c} { - j} & 0 \\ 0 & j \\ \end{array} } \right)R\left( \phi \right)\left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right).$$
(11)

In Eq. (11), \(R\left( \psi \right)\) is the rotation matrix with rotation angle \(\psi\), that is, \(\left[ {\cos \psi ,\sin \psi ; - \sin \psi ,\cos \psi } \right]\). The intensity calculated from Eq. (11) is proportional to \(1 + \cos \left( {4\phi - 2\delta } \right)\), which shows that when the linearly polarized light passes through the GP lens and the polarizer, the intensity is determined by the spatially varying variable ϕ that characterizes the GP lens. The intensity also depends on twice the rotation angle δ of the second polarizer, which gives rise to the phase-shifting effect on the interferograms. Therefore, to utilize the conventional four-step 90° stepwise phase-shifting method, the required values of δ are 0, π/4, π/2, and 3π/4 rad. This simple phase-shifting method is achromatic because there is no wavelength dependency in Eq. (11), as long as the half-wave retardance of the GP lens and the performance of the polarizers are maintained over the required wavelength range.
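A short numerical check of this result is sketched below: the Jones chain of Eq. (11) is evaluated with NumPy and compared against \((1 + \cos(4\phi - 2\delta))/2\), the normalized form of the stated intensity; all function names are illustrative.

```python
import numpy as np

def R(psi):
    """Rotation matrix [cos, sin; -sin, cos] as defined after Eq. (11)."""
    return np.array([[np.cos(psi), np.sin(psi)],
                     [-np.sin(psi), np.cos(psi)]])

def intensity(phi, delta):
    """Evaluate the Jones chain of Eq. (11) for horizontally polarized input."""
    waveplate = R(-phi) @ np.diag([-1j, 1j]) @ R(phi)    # local half-wave cell
    polarizer = R(-delta) @ np.diag([1.0, 0.0]) @ R(delta)
    e_out = polarizer @ waveplate @ np.array([1.0, 0.0])
    return float(np.abs(e_out[0])**2 + np.abs(e_out[1])**2)

# Verify I = (1 + cos(4*phi - 2*delta)) / 2 for the four polarizer angles.
phis = np.linspace(0.0, np.pi, 13)
for delta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    numeric = np.array([intensity(p, delta) for p in phis])
    analytic = (1.0 + np.cos(4.0 * phis - 2.0 * delta)) / 2.0
    assert np.allclose(numeric, analytic)
```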

By implementing the aforementioned principle, a straight-line prototype system with a total path length within 300 mm was proposed [156]. A motionless, compact holographic video camera prototype was also proposed by replacing the rotating polarizer and the conventional image sensor with a polarized image sensor (PIS) [125, 126]. As illustrated in Fig. 18b, the unit pixel structure of the PIS consists of a 2 × 2 array of micro-polarizers with angles of 0°, 45°, 90°, and 135°, respectively. Therefore, the four phase-shifted interferograms can be captured simultaneously, or in other words, in parallel [123]. The schematic of the optical configuration of the GP-SIDH and the procedure for extracting the complex-valued hologram are shown in Fig. 18.
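The extraction can be sketched in Python as follows: the raw polarization mosaic is split into four sub-images and combined by four-step phase shifting (polarizer angles δ = 0°, 45°, 90°, 135° correspond to hologram phase shifts 2δ = 0°, 90°, 180°, 270°). The assumed 2 × 2 pixel layout and sign convention vary between sensors, so this is illustrative only.

```python
import numpy as np

def split_polarization_mosaic(raw):
    """Split a PIS raw frame into four phase-shifted interferograms.

    Assumes the micro-polarizer layout [[0, 45], [135, 90]] degrees within
    each 2x2 super-pixel; check the data sheet of the actual sensor.
    """
    I0   = raw[0::2, 0::2].astype(float)   # hologram phase shift 0
    I45  = raw[0::2, 1::2].astype(float)   # phase shift pi/2
    I135 = raw[1::2, 0::2].astype(float)   # phase shift 3*pi/2
    I90  = raw[1::2, 1::2].astype(float)   # phase shift pi
    return I0, I45, I90, I135

def four_step_hologram(I0, I45, I90, I135):
    """Four-step phase-shifting formula (up to a constant factor and a
    sensor-dependent sign convention)."""
    return (I0 - I90) + 1j * (I45 - I135)
```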

Fig. 18

a Schematic illustration of the optical configuration of the GP-SIDH. b Pixel structure of the color polarized image sensor (PIS). c Procedure for extracting full-color complex-valued hologram from color PIS. L: lens, P: polarizer, GP: geometric phase lens

Introducing an off-the-shelf color PIS enables capture of full-color holographic video. However, to realize full-color recording with the proposed system, the performance of the GP lens must be maintained regardless of the input wavelength. Two wavelength-related characteristics of the GP lens must be considered. First, the half-wave retardance should be maintained over a broad wavelength range; otherwise, outside the optimal wavelength range, the wavefront-modulation efficiency of the GP lens decreases and the amount of zeroth-order (unmodulated) light increases. This zeroth-order light can produce undesired self-interference patterns and make the reconstructed image noisy. To overcome this, the GP lens can be fabricated with a double LC layer of reversed chirality to compensate for the retardance variation over a broad input wavelength range [158]. The second consideration is the wavelength dependence of the focal length of the GP lens. When the intended focal length of the GP lens is fo at the wavelength λo, the focal length at another wavelength λ' becomes foλo/λ'; for example, the lens used below, with fo = 264 mm at 550 nm, has a focal length of about 223 mm at 650 nm. It would be best to employ an achromatic GP lens to avoid this concern. Nevertheless, full-color reconstruction of a hologram obtained with a non-achromatic GP lens is possible via numerical methods [159].

A sample hologram acquired with the GP-SIDH is presented in Fig. 19. The GP lens used has a focal length of 264 mm at an input wavelength of 550 nm. A toy airplane, whose wingspan is about 95 mm, was placed approximately 200 mm in front of the system. A single frame was captured under general-purpose LED light, with the streaming speed set to 26 frames per second. The raw image has 2048 × 2048 pixels with a pixel size of 3.45 μm; hence the extracted full-color hologram contains 1024 × 1024 × 3 complex values with an effective pixel size of 6.9 μm. The Bayer mosaic pattern of each phase-shifted interferogram is processed into a full-color image using the demosaicing function provided in MATLAB. The reconstruction is performed with the widely known angular spectrum method.

Fig. 19

The GP-SIDH prototype and the obtained hologram sample. a The optical configuration of the prototype. b Photograph of the prototype. c Magnitude of the hologram. d Phase-angle of the hologram. e Reconstructed image. f Focused on the left-side wing. g Focused on the right-side wing

3.2.3 Conclusion

GP-SIDH has several important advantages as a candidate for futuristic 3D optical information capturing systems, such as portability and full-color video recording capability. Enhancement of the optical elements, such as the GP lens, and of the postprocessing methods is required to increase the visibility of the reconstructed images and to mitigate noise. The proposed optical configuration is expected to be utilized not only for holographic cameras but also for real-time incoherent holographic microscopy, non-destructive macroscopic testing systems, and compact RGB-D cameras.

3.3 Incoherent digital holography for three-dimensional radiometric temperature measurement (Masatoshi Imbe)

3.3.1 Introduction

Any object at a finite temperature emits electromagnetic waves termed thermal radiation. Planck's law states that the spectral radiance—one of the physical quantities in radiometry—of thermal radiation emitted by a blackbody is determined by its temperature and the wavelength [160]. Radiation thermometry is based on this law: a radiation thermometer measures the spectral radiance emitted by an object under test to determine its temperature. Thermography enables two-dimensional radiation thermometry by capturing a temperature image. It has been utilized in applications such as non-destructive testing of buildings [161, 162], monitoring of laser welding [163, 164], and fever screening to prevent the spread of viral infection [165, 166]. The fundamental optical setup of a thermography system is that of a camera. In-focus image capture is necessary to obtain the object temperature distribution; otherwise, defocus deteriorates both the temperature and spatial resolutions. This can be a problem when measuring multiple objects at different depths or objects moving along the depth direction, as in the above-mentioned applications.

Incoherent digital holography can overcome this problem. An object emitting thermal radiation can be regarded as a spatially incoherent light source. Once its incoherent hologram is captured, numerical propagation provides reconstructed images at each reconstruction distance [70, 167, 168]—numerical focusing over these distances adjusts the focus after the holograms are captured. Spectral radiance is derived from the in-focus reconstructed image, so the temperature distribution is determined on the basis of radiation thermometry—three-dimensional (3D) radiometric temperature measurement is thus realized [49].

In this section, the definition of spectral radiance is described first, and the relation between the reconstructed image and the spectral radiance is then derived. The detailed derivation of the reconstructed image is presented in [49, 169]. Experimental results confirm the capability of 3D radiometric temperature measurement by IDH.

3.3.2 Definition of spectral radiance

Spectral radiance is defined as radiant flux per unit area per unit solid angle per unit wavelength; its unit is \({\text{W m}}^{ - 3} {\text{ sr}}^{ - 1}\). Radiant flux, another physical quantity in radiometry, is defined as the flow of energy per unit time; it represents optical power with the unit \({\text{W}}\). Figure 20 shows a schematic diagram for the definition of spectral radiance: a sensor measures the radiant flux of thermal radiation emitted by an object along the optical axis. The radiant flux \({\text{d}}\phi\) is given by

$$\begin{array}{*{20}c} {\text{d}\phi = L\left( {\lambda , T} \right) \cos\theta \, \text{d}S \, \text{d}\mathit{\Omega} \, \text{d}\lambda ,} \\ \end{array}$$
(12)

where \(L\left( {\lambda ,T} \right)\) denotes the spectral radiance based on Planck's law, \(\lambda\) the wavelength, \(T\) the object temperature, \(\theta\) the angle between the normal to the object and the optical axis, \({\text{d}}S\) the area of the object, \({\text{d}}\mathit{\Omega}\) the solid angle subtended by the sensor as seen from the object, and \({\text{d}}\lambda\) the wavelength bandwidth [170]. The letter "d" denotes an infinitesimal in this definition. It indicates that, when spectral radiance is determined by measuring the radiant flux and substituting the other parameters into Eq. (12), those parameters should be small or narrow enough to be regarded as constant. Moreover, Fig. 20 indicates that the solid angle is determined if the area of the sensor and the object–sensor distance are known.
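For reference, a minimal Python sketch of Planck's law and its inversion, the two computations at the heart of radiation thermometry, is given below (function names are illustrative):

```python
import numpy as np

H = 6.62607015e-34    # Planck constant [J s]
C = 2.99792458e8      # speed of light in vacuum [m/s]
KB = 1.380649e-23     # Boltzmann constant [J/K]

def planck_spectral_radiance(wavelength, temperature):
    """Blackbody spectral radiance L(lambda, T) [W m^-3 sr^-1]."""
    return (2.0 * H * C**2 / wavelength**5
            / (np.exp(H * C / (wavelength * KB * temperature)) - 1.0))

def radiance_to_temperature(wavelength, radiance):
    """Invert Planck's law for the radiometric temperature [K]."""
    return H * C / (wavelength * KB
                    * np.log1p(2.0 * H * C**2 / (wavelength**5 * radiance)))

# Round-trip check at the 1550 nm band used in the experiment below:
L_600C = planck_spectral_radiance(1550e-9, 600.0 + 273.15)
assert abs(radiance_to_temperature(1550e-9, L_600C) - 873.15) < 1e-6
```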

Fig. 20

Geometry for defining spectral radiance. Reprinted with permission from [49] © The Optical Society

3.3.3 Derivation of spectral radiance from reconstructed image

To obtain the spectral radiance from a reconstructed image, an incoherent hologram is recorded by employing a FINCH-based optical configuration with fixed transverse magnification [169]. In FINCH [70, 168], two optical images of a spatially incoherent object are formed as secondary radiation sources. An incoherent hologram is generated as the sum of the interference intensity patterns between the two spherical waves from the two images. These waves are emitted from spatial regions originating from the same point on the object—self-interference is thus realized. However, if the optical images are formed with different magnifications on the two paths, the dimensions of those spatial regions also differ, and when the object has spatial variations, the different scales of the optical images can lead to self-interference between spatial regions whose spectral radiances differ. The optical configuration employed here forms optical images of the same scale at different distances from the image sensor, because its transverse magnification is identical on the two paths and independent of the locations of the object and the image sensor [169].

The invariance property of radiance [170] states that the radiance of an object equals that of its image in an imaging system. Because the spectral radiance of an object equals those of the optical secondary images, the spectral radiance of the object can be derived from the relation between the intensity of the reconstructed image and the spectral radiances of the optical secondary images. Regarding terminology, "intensity" here refers to the flow of energy per unit area per unit time, with the unit \({\text{W m}}^{ - 2}\), as commonly used in digital holography; it does not refer to the "radiant intensity" of radiometry or the "luminous intensity" of photometry.

Figure 21 shows a schematic diagram of the geometry of the optical secondary images. The planes of these images and of the image sensor are parallel, so \(\cos \theta = 1\) in Eq. (12). The secondary images are formed at distances \(z_{{\text{A}}}\) and \(z_{{\text{B}}}\) from the image sensor—the two spherical waves have radii of curvature \(z_{{\text{A}}}\) and \(z_{{\text{B}}}\), respectively. These radii are known from the parameters of the optical setup. For the determination of the solid angles \(\Delta \mathit{\Omega}_{{\text{A}}}\) and \(\Delta \mathit{\Omega}_{{\text{B}}}\) subtended by the image sensor as seen from the optical secondary images, it is assumed that the spherical waves are distributed over, and interfere with each other on, the whole of the image sensor. The solid angles are given by

$$\begin{array}{*{20}c} {\Delta \mathit{\Omega}_{m} = \frac{{D_{{\text{h}}} D_{{\text{v}}} }}{{z_{m}^{2} }} , } \\ \end{array}$$
(13)

where \(D_{{\text{h}}}\) and \(D_{{\text{v}}}\) are the horizontal and vertical dimensions of the image sensor, and \(m\) represents A or B. Moreover, it is assumed that the solid angles are independent of the spatial coordinates over the planes of the optical secondary images.

Fig. 21

Geometry of optical secondary images. Reprinted with permission from [49] © The Optical Society

A phase-shifting technique [171] extracts the complex field from an incoherent hologram generated as the sum of the interference patterns. Numerical propagation over the distance \(z_{{{\text{AB}}}}\) then yields the in-focus reconstructed image. This reconstruction distance corresponds to the synthesized radius:

$$\begin{array}{*{20}c} {\frac{1}{{z_{{{\text{AB}}}} }} = \frac{1}{{z_{{\text{A}}} }} - \frac{1}{{z_{{\text{B}}} }} .} \\ \end{array}$$
(14)

Let \(I_{{\text{r}}} \left( {x_{{\text{r}}} , y_{{\text{r}}} } \right)\) denote the intensity of the in-focus reconstructed image, where \(x_{{\text{r}}}\) and \(y_{{\text{r}}}\) denote the spatial coordinates on the reconstruction plane. When the angular spectrum method or the convolution method is used for the numerical propagation, the sampling interval of the reconstructed image equals the pixel size of the image sensor, and the reconstructed image can be interpreted as a set of radiation sources each with the area of one pixel. Considering the relation between spectral radiance and the other parameters in Eq. (12), the dimension of intensity with the unit \({\text{W m}}^{-2}\), and the geometry of the optical secondary images shown in Fig. 21, the relations are given by

$$\begin{array}{*{20}c} {I_{r} \left( {x_{{\text{r}}} ,y_{{\text{r}}} } \right) \Delta S = L\left( {x_{{\text{r}}} ,y_{{\text{r}}} ,\lambda ,T} \right) \Delta S \Delta \mathit{\Omega}_{{{\text{AB}}}} \Delta \lambda ,} \\ \end{array}$$
(15)
$$\begin{array}{*{20}c} {\Delta \mathit{\Omega}_{{{\text{AB}}}} = \sqrt {\Delta \mathit{\Omega}_{{\text{A}}} \Delta \mathit{\Omega}_{{\text{B}}} } ,} \\ \end{array}$$
(16)
$$\begin{array}{*{20}c} {I_{r} \left( {x_{{\text{r}}} ,y_{{\text{r}}} } \right) = a_{{{\text{r}},{\text{ A}}}} \left( {x_{{\text{r}}} ,y_{{\text{r}}} } \right) a_{{{\text{r}},{\text{ B}}}} \left( {x_{{\text{r}}} ,y_{{\text{r}}} } \right) ,} \\ \end{array}$$
(17)
$$\begin{array}{*{20}c} {a_{{{\text{r}},{ }m}} \left( {x_{{\text{r}}} ,y_{{\text{r}}} } \right) \sqrt {\Delta S} = \sqrt {L\left( {x_{{\text{r}}} ,y_{{\text{r}}} ,\lambda ,T} \right){ }\Delta S \Delta \mathit{\Omega}_{m} \Delta \lambda } ,} \\ \end{array}$$
(18)

where \(\Delta S\) denotes the area of a pixel, \(\Delta \lambda\) the wavelength width set by the wavelength filter used, and \(a_{{{\text{r}},{ }m}} \left( {x_{{\text{r}}} ,y_{{\text{r}}} } \right)\) the amplitudes of the optical secondary images. The spectral radiance is written as \(L\left( {x_{{\text{r}}} ,y_{{\text{r}}} ,\lambda ,T} \right)\) to emphasize its dependence on the spatial coordinates. Equation (15) relies on the equivalence of the optical secondary images—their spectral radiances and their emission areas—owing to the fixed transverse magnification of the optical configuration. The spectral radiance is thus derived by

$$\begin{array}{*{20}c} {L\left( {x_{{\text{r}}} ,y_{{\text{r}}} ,\lambda ,T} \right) = \frac{1}{{\Delta \mathit{\Omega}_{{{\text{AB}}}} \, \Delta \lambda }} I_{r} \left( {x_{{\text{r}}} ,y_{{\text{r}}} } \right) .} \\ \end{array}$$
(19)

Consequently, the intensity of the in-focus reconstructed image is obtained with the reconstruction distance of Eq. (14), the solid angles are determined from the radii of curvature of the spherical waves via Eq. (13), and the spectral radiance is obtained from the relations in Eqs. (15)–(19).
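A compact Python sketch of this chain, from the geometric parameters to the spectral radiance, might look as follows (names are illustrative; the in-focus intensity I_r is assumed to come from the phase-shifting extraction and numerical propagation described above):

```python
import numpy as np

def reconstruction_distance(z_a, z_b):
    """Synthesized radius of curvature, Eq. (14)."""
    return 1.0 / (1.0 / z_a - 1.0 / z_b)

def solid_angle(z, d_h, d_v):
    """Eq. (13): solid angle of the sensor seen from a secondary image."""
    return d_h * d_v / z**2

def spectral_radiance(I_r, z_a, z_b, d_h, d_v, d_lambda):
    """Eqs. (15)-(19): spectral radiance from the in-focus intensity I_r."""
    omega_ab = np.sqrt(solid_angle(z_a, d_h, d_v)
                       * solid_angle(z_b, d_h, d_v))    # Eq. (16)
    return I_r / (omega_ab * d_lambda)                   # Eq. (19)
```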

Equation (19) is not explicit about the responsivity of the image sensor or the transmittance of the optical equipment used. These can be determined by calibration because they are static and independent of temperature. A radiation temperature standard, whose temperature is known from another thermometer, is used as the object. The signal values corresponding to the spectral radiances are acquired at multiple known temperatures to yield the calibration parameters representing the responsivity and transmittance [172]. Once the calibration parameters are determined, the radiometric temperature measurement can be performed even if the location of the object differs from that of the standard during calibration—calibration need not be performed at each distance.

3.3.4 Experimental results

Figure 22 shows a schematic diagram of the optical setup used in the experiment, which is based on a Michelson interferometer. A 4-f optical system consisted of two lenses (L1 and L2) with focal lengths \(f_{1}\) and \(f_{2}\), and the Fourier plane was located on each arm of the interferometer. A concave mirror (MA) and a planar mirror (MB), with focal lengths \(f_{{\text{A}}}\) and \(f_{{\text{B}}}\), respectively, placed on the Fourier planes led to different locations of the optical secondary images through the 4-f optical system. The focal lengths \(f_{1}\), \(f_{2}\), \(f_{{\text{A}}}\), and \(f_{{\text{B}}}\) were \(250~{\text{mm}}\), \(150~{\text{mm}}\), \(500~{\text{mm}}\), and \(\infty\), respectively. A bandpass filter (F) with a central wavelength of \(1550~{\text{nm}}\) and a wavelength width of \(10~{\text{nm}}\) was used to provide the temporal coherence required for interference; this wavelength was selected on the basis of Planck's law for the temperature range of the experiment. An InGaAs image sensor with a pixel size of \(25~{\upmu \text{m}} \times 25~{\upmu \text{m}}\), a pixel count of \(612 \times 512~{\text{pixels}}\), and \(12~{\text{bit}}\) quantization was used to record the holograms; it was located at a distance of \(z_{{\text{s}}} = 100~{\text{mm}}\) from the rear focal plane of L2.

Fig. 22
figure 22

Schematic diagram of optical setup used in experiment: L1, L2, lenses; MA, MB, mirrors; BS, a beamsplitter; F, a bandpass filter; A, an aperture. Reprinted with permission from [49] © The Optical Society

Before measuring the temperature of an object, calibration was performed. The combination of a blackbody furnace with an emissivity of \(0.99\) and an aperture (A) with a diameter of \(7.5~{\text{mm}}\) was used as the radiation temperature standard, whose temperature was monitored by a calibrated radiation thermometer. This radiation source was located at the front focal plane of L1. Four-step phase-shifted incoherent holograms were captured to extract the complex field [171]; the phase shifting was realized by a piezoelectric actuator attached to MB. The in-focus reconstructed image was obtained by numerical propagation, and the signal for the spectral radiance was calculated. This procedure was performed at temperatures of 550 °C, 625 °C, and 700 °C to obtain the calibration parameters.
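A minimal sketch of such a calibration, assuming for illustration a linear signal model S = a·L(λ, T) + b fitted at the three standard temperatures (the actual model of [172] may differ, and the signal values below are placeholders):

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23
LAM = 1550e-9   # center wavelength of the bandpass filter [m]

def planck(T):
    """Blackbody spectral radiance at LAM [W m^-3 sr^-1]."""
    return 2.0 * H * C**2 / LAM**5 / (np.exp(H * C / (LAM * KB * T)) - 1.0)

temps_K = np.array([550.0, 625.0, 700.0]) + 273.15   # standard temperatures
signals = np.array([0.82e3, 2.05e3, 4.41e3])         # placeholder readouts
a, b = np.polyfit(planck(temps_K), signals, 1)       # hypothetical linear fit

def signal_to_temperature_C(S):
    """Convert a measured signal to a radiometric temperature [deg C]."""
    L = (S - b) / a
    return H * C / (LAM * KB * np.log1p(2.0 * H * C**2 / (LAM**5 * L))) - 273.15
```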

As the object under test, the same blackbody furnace with the aperture, operated at 600 °C, was used—this radiation source thus served as both the radiation temperature standard and the object under test in the experiment. The above-mentioned procedure for acquiring the signal for the spectral radiance was performed, and the temperature image of the object was calculated from the signal and the calibration parameters. The object was then moved to a location \(100~{\text{mm}}\) before the front focal plane of L1, and the temperature image was calculated in the same way, except that the reconstruction distance was changed to keep the image in focus.

Figure 23 shows the temperature images: (a) with the object located at the same position as during calibration, and (b) with the object moved \(100~{\text{mm}}\) backward. The average temperatures over the apertures of these images are (a) 603 °C and (b) 627 °C. Although there is a slight difference between the measurement result and the value monitored by the radiation thermometer, Fig. 23a indicates that radiometric temperature measurement is achieved at the distance at which calibration was performed. Meanwhile, Fig. 23b shows a larger discrepancy, indicating that the spectral radiance obtained from the reconstructed image depends on the distance. This might be caused by misalignment of the optical elements, because determining the spectral radiance requires the solid angle; moreover, phase-shifting errors might produce a residual undesired image that deteriorates the desired reconstructed image. Nevertheless, this proof-of-principle study suggests the applicability of IDH to 3D radiometric temperature measurement, which had hitherto been considered difficult.

Fig. 23

Radiometric temperature measurement results. The object is a blackbody furnace with an aperture operated at 600 °C, located at a the same position as during calibration and b a position \(100~{\text{mm}}\) backward. Reprinted with permission from [49] © The Optical Society

3.3.5 Conclusion

A 3D radiometric temperature measurement scheme utilizing IDH has been proposed. Spectral radiance is derived from the in-focus reconstructed image, and the radiometric temperature measurement is performed using the calibration parameters. The experimental results confirm the capability of the temperature measurement, with a blackbody furnace serving as both the calibration standard and the object under test.

3.4 Incoherent digital holography system for high-speed 3D motion-picture sensing of weak light (Tatsuki Tahara, Yuichi Kozawa, Ryo Okamoto, Ayumi Ishii, and Ryutaro Oi)

3.4.1 Single-shot phase-shifting incoherent digital holography (SSPS-IDH) system

Polarization-imaging cameras are commercially available, and researchers can easily obtain them at low cost. As a result, various SSPS-IDH systems [74, 123,124,125,126] have been proposed over the last five years, most of which exploit such a camera to implement single-shot phase shifting [74, 123, 125, 126]. An image sensor with high quantum efficiency, wide dynamic range, and low noise is ideal for the high-speed 3D sensing of weak light such as fluorescence and night scenes. Several SSPS-IDH systems [124, 136, 137] that can adopt scientific complementary metal-oxide semiconductor (sCMOS) and electron-multiplying charge-coupled device (EM-CCD) cameras, rather than a polarization-imaging camera, have been proposed to obtain bright holograms from weak light. High-speed holographic motion-picture sensing of weak light can be achieved with such SSPS-IDH systems.

Figure 24 shows a schematic of an SSPS-IDH system adopting an sCMOS camera and a phase-retarder array [136]. The optical setup combines an IDH [70] system with an SSPS holography [120,121,122] system. In the SSPS holography system, one of the two object waves acquires the spatial phase-shift distribution generated by the phase-retarder array. This phase-shift distribution is transferred by relay optics and imaged onto the image sensor plane. One pixel of the phase-retarder array corresponds to one pixel of the sCMOS camera, giving a wide space-bandwidth product, a design determined on the basis of research on SSPS holography [121, 122]. The alignment of the relay optics and the array is important for obtaining the desired phase-shift distribution with high interference-fringe visibility; the measurement error of the complex amplitude distribution due to misalignment is numerically analyzed in [173].

Fig. 24

SSPS-IDH system with a phase-retarder array to capture motions of nanoparticles in 3D space

We conducted an experiment to obtain motion-picture holograms of moving fluorescence nanoparticles with a diameter of 0.2 μm. We used an inverted optical microscope (Olympus, IX-73), an excitation light source (Olympus, U-HGLGPS), birefringent lenses and a plate (SIGMAKOKI), a phase-retarder array made from a photonic-crystal array (Photonic Lattice), and an sCMOS image sensor (Andor, Neo5.5) to construct the SSPS-IDH system. The frame rate of the image sensor was 102 fps and the exposure time per recording was 9.7 ms. The field of view was 147 μm × 147 μm and the magnification of the system was 45. The particles showed Brownian motion in water, and the constructed SSPS-IDH system obtained motion-picture holograms of the particles simultaneously. Figure 25 shows the experimental results, which indicate that the 3D information of the particles, free of undesired-order diffraction waves, was simultaneously reconstructed from a single incoherent digital hologram. 3D motion-picture recording was conducted at the frame rate of the image sensor because the 3D information was retrieved from a single recorded image. The 3D information of the particles was reconstructed from each recorded hologram, and focused images of the particles were obtained at each frame. Thus, high-speed 3D motion-picture recording of the weak fluorescence light generated by nanoparticles was experimentally demonstrated.

Fig. 25

Experimental results for 3D motion-picture recording of nanoparticles at a frame rate of 102 fps. a Recorded digital hologram containing multiple phase-shifted holograms. b Image reconstructed from (a). Images reconstructed by calculating diffraction integrals with numerical propagation distances of c–e 2 mm and f–h −1 mm. The area inside the red rectangle in (b) corresponds to the areas of (c)–(h). Arrows indicate focused particles

3.4.2 Single-shot computational coherent superposition (SS-CCS) incoherent digital holography system with a wavelength-dependent polarization-sensitive phase-modulation (WPP) array

CCS-IDH with SSPS is realized by fabricating a WPP array; here we call this technique single-shot CCS (SS-CCS) IDH. A WPP array was developed to combine the two IDH techniques: instead of a micro-polarizer array, a WPP array is inserted to apply both CCS and SSPS. The multiple holograms required for CCS are obtained with a single-shot exposure, the WPP array, and space-division multiplexing of wavelength-multiplexed holograms. Figure 26 shows the schematic of the WPP array and a photograph of the image sensor with the WPP array. We developed this image sensor ourselves because no such sensor is commercially available. Each WPP cell is composed of a photonic crystal, and the photonic-crystal array is fabricated by the self-cloning technique [174]. The fabricated array is attached to the center of the image sensor. Each cell is 14 μm × 14 μm in size, and the fabricated array has 384 × 288 cells. Light in the 400–700 nm wavelength band passes through each cell of the fabricated WPP array with high transmittance. A polarizer was inserted between the WPP array and the CCD image sensor before the fabricated WPP array was attached, which enables the recording of the interference light with periodic and nonlinear phase shifts. One of nine wavelength-dependent phase shifts is recorded pixel by pixel using the developed image sensor shown in Fig. 26b, so the information of nine wavelength-multiplexed phase-shifted monochrome holograms is recorded in a single exposure by space-division multiplexing. Using the developed image sensor, we constructed a single-shot wavelength-multiplexed holographic fluorescence microscope based on SS-CCS IDH. The constructed IDHM system is composed of a fluorescence microscope, a CCS-IDH system, and the developed image sensor; detailed experimental conditions are described in [136]. The experimental results shown in Fig. 27 indicate that fluorescence object waves in different wavelength bands are selectively extracted and that the 3D information in the respective wavelength bands is reconstructed successfully. Different types of fluorescence particles are identified through the wavelength separation of CCS. The results indicate that wavelength information is obtained without a color filter array and that the developed WPP array is usable for wavelength sensing with CCS. Furthermore, the results show that SS-CCS IDHM performs color 3D imaging of fluorescence light from a single wavelength-multiplexed hologram. Improvements in image quality and frame rate have since been made, and color 3D motion-picture recording of incoherent holograms at more than 70 fps with 4 megapixels has been performed to date [137].
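The wavelength separation of CCS can be sketched as a per-pixel least-squares problem. Assuming two wavelength components and nine recorded phase shifts whose wavelength-dependent values θ1 and θ2 are known design data of the WPP array (the values themselves are not reproduced here), a minimal Python sketch is:

```python
import numpy as np

def ccs_extract(I_stack, theta1, theta2):
    """Least-squares separation of two wavelength components.

    I_stack:        (K, Ny, Nx) phase-shifted wavelength-multiplexed holograms
    theta1, theta2: length-K phase shifts at the two wavelengths
    Assumed per-pixel model: I_k = B + Re[U1 e^{i t1k}] + Re[U2 e^{i t2k}],
    with unknowns B, Re/Im U1, Re/Im U2 (5 real values, K >= 5 equations).
    """
    K, ny, nx = I_stack.shape
    A = np.column_stack([np.ones(K),
                         np.cos(theta1), -np.sin(theta1),
                         np.cos(theta2), -np.sin(theta2)])
    x = np.linalg.pinv(A) @ I_stack.reshape(K, -1)   # solve all pixels at once
    U1 = (x[1] + 1j * x[2]).reshape(ny, nx)
    U2 = (x[3] + 1j * x[4]).reshape(ny, nx)
    return U1, U2
```

Each extracted complex hologram can then be propagated numerically at its own wavelength to obtain the color 3D reconstruction.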

Fig. 26

WPP array and developed image sensor. a Schematic of the designed WPP array and b photograph of the developed CCD image sensor with the WPP array. Adapted from [36]

Fig. 27

Experimental results. a Recorded hologram and b wavelength-multiplexed phase-shifted hologram de-mosaicked from (a). c Intensity and d phase images of the object wave on the image sensor plane at a wavelength of 618 nm. e Intensity and f phase images on the image sensor plane at 545 nm. Color-synthesized images focused at depths of g 20.7 μm, h 23.7 μm, i 26.6 μm, and j 29.6 μm in the object plane. k 618 nm and l 545 nm components of (g). m 618 nm and n 545 nm components of (j). Light-blue circles highlight focused complex molecules. Adapted from [36]

3.4.3 Discussion and conclusions

High-speed 3D motion-picture recording with single-shot IDH has been achieved by using a PSI technique. The use of SSPS-IDH with ultraweak light is expected to open up important applications, such as motion-picture imaging of phototoxic molecules and of ultrafast phenomena. For this purpose, it is essential to clarify the physical limit of the measurement accuracy per photon in PSI. Recently, we clarified the physical limit of PSI by theory [175, 176] and numerical simulation [177]; by extending this theoretical approach to IDH, we expect to reveal the physical limit of IDH with PSI. In the microscopy applications of IDH, super-resolution with FINCH is another important property [32, 70, 72, 178], and the combination of FINCHSCOPE with the high-speed IDH system will make holographic 3D imaging of incoherent light with both high spatial and high temporal resolution possible. Furthermore, acceleration of holographic image reconstruction is being researched for IDH systems [179, 180]. Accelerating both recording and reconstruction in IDH improves the throughput of holographic measurement, and such research achievements will lead to real-time holographic motion-picture imaging.

3.5 Other remarks for applications of IDH (Tatsuki Tahara)

IDH can exploit interferometric techniques to enhance incoherent 3D imaging, and various applications of IDH have been developed by making use of them. As an example, multidimensional imaging with interferometric techniques has been demonstrated. The applicability of IDH to hyperspectral holographic 3D imaging is frequently discussed and actively researched. IDH using a Michelson-type two-arm interferometer with rotational shearing and Fourier spectroscopy was initially suggested for hyperspectral holographic imaging of natural light [181], and the experimental demonstration of [181] was successfully performed [182]. After that, various types of hyperspectral IDH systems were presented: a Mach–Zehnder-type two-arm interferometer with radial shearing [183] and a Michelson-type interferometer with ring-shaped slits [38]. The single-path polarization-based spectroscopic IDH system shown in Fig. 15a could be exploited as a novel hyperspectral holographic 3D imager with high tolerance against external vibration. As another aspect of multidimensional imaging, polarization imaging of a 3D object with IDH has also been researched [184]. Polarization imaging of incoherent light has generally been conducted with a polarization-imaging camera; however, distinguishing the rotation direction of circularly polarized light is then difficult. This problem was efficiently solved with IDH, an interferometric technique, without a polarization-imaging camera [185].

In the application to incoherent 3D microscopy, many reports appear every year. Besides OSH and FINCH, many research groups have reported holographic fluorescence microscopy systems with different types of interferometers: a Michelson interferometer [45, 186, 187], a single-path off-axis self-interference system [188, 189], and a single-path self-interference system using a grating for quadriwave lateral shearing interferometry [190]. As other research achievements in IDHM, 3D fluorescence imaging through static or moving scattering media was performed with an OSH system [191], exploiting the nature of single-pixel DH. IDHM has also been combined with lattice light-sheet microscopy for depth-resolved holographic 3D imaging [192] and with Raman scattering microscopy for 3D imaging of surface-enhanced Raman scattering light [193].

From the viewpoint of developing a 3D imager, application of IDH to a compact hologram recorder is also highly desired. Notably, an IDH system was constructed on a wagon table to record full-color incoherent digital holograms of an outdoor 3D scene illuminated by sunlight. As described in Sect. 2, concepts for compact IDH systems have been proposed, and their optical implementations are now important for portable holographic sensing apparatus. Compact IDH systems are described not only in Sects. 2 and 3 but also in a review article [194]. Conoscopic holography (CH) [13] is another effective route to a compact hologram recorder: CH has been implemented with a unique PSI system employing a liquid crystal phase modulator and a rotational amplitude mask [19,20,21]. It is notable that phase-shifting DH was proposed and implemented for CH and spatially incoherent light before the well-known proposal of phase-shifting DH using a laser [30]. A compact single-path holographic interferometer based on the combination of CH, a single-path polarimetric spectrometer [130], and CCS has been proposed as another route to a multidimensional IDH system [195, 196].

Various IDH techniques have been introduced in this roadmap article, and comparative evaluations of IDH techniques are discussed in several review articles [196, 197]. The authors hope this roadmap article serves as key information for studying incoherent holography and IDH techniques more deeply.