1 Introduction

The solution of many optical problems in scalar diffraction theory can be obtained using either the Rayleigh diffraction integral or the Fresnel–Kirchhoff diffraction integral. In practice, the Fresnel diffraction integral is used as an approximate diffraction formula. The Fresnel transform of an optical image has been formulated mathematically and presented as a convolution that describes Fresnel diffraction [1,2,3,4]. Moreover, Fresnel transforms and inverse Fresnel transforms have been formulated systematically and defined as bounded, additive, unitary operators in Hilbert space. Their algebraic and topological properties have been revealed by means of functional-analytic methods. Furthermore, the set of Fresnel transform operators has the group property [5,6,7]. Recently, the Fresnel transform has been used in image processing, optical information processing, optical waveguides, computer-generated holograms, phase retrieval techniques, speckle pattern interferometry and so on [8,9,10,11,12,13,14,15].

The extension of optical fields through an optical instrument is practically limited to some finite area. This leads to the spatially band-limited problem. The effect of band limitation has been studied for the optical Fourier transform. Sampling theorems have been derived from the band-limited effect in the Fourier transform plane and applied to several application areas. Sampling function systems are orthogonal systems in Hilbert space and can be regarded as coordinate systems in a functional space. In sampling theory, it is important to develop orthogonal or orthonormal functional systems. The literature contains many examples of band-limited functions in the Fourier transform, their applications and references therein [16,17,18]. Band-limited effects in the Fresnel diffraction plane and the Rayleigh–Sommerfeld equation have been investigated using the Fourier transform, based on the angular-spectrum method [19,20,21,22]. However, the band-limited effect in the Fresnel transform plane has not been revealed sufficiently.

In this paper, the band-limited effect in the Fresnel transform plane is investigated. To that end, we seek the function whose total power in the finite Fresnel transform plane is maximized, on the condition that the input signal is zero outside a bounded region. This is a variational problem with an accessory condition, and it leads to the eigenvalue problem of a Fredholm integral equation of the first kind. The kernel of the integral equation is Hermitian and positive definite; therefore, the eigenvalues are real nonnegative numbers. Moreover, we prove that eigenfunctions corresponding to distinct eigenvalues have the dual orthogonality property. By discretizing the kernel and the integration range to seek approximate solutions, the eigenvalue problem of the integral equation reduces to that of a Hermitian matrix in a finite-dimensional complex vector space \(\left( {{\mathbb{C}}^{k} , k < \infty } \right)\). We use the Jacobi method to compute all eigenvalues and eigenvectors of the matrix. We consider the application of the eigenvectors to the problem of approximating arbitrary functions, and we show the validity and limitations of the eigenvectors in computer simulations.

2 Fresnel transformation

Assume that we place a diffracting screen in the \(z = 0\) plane. The parameter \(z\) represents the normal distance from the input plane. Let \(\left( {\xi ,\eta } \right)\) be the coordinates of any point in the input plane. Parallel to the screen at \(z > 0\) is a plane of observation. Let \(\left( {x,y} \right)\) be the coordinates of any point in this latter plane. If \(f\left( {\xi ,\eta } \right)\) represents the amplitude transmittance in Hilbert space, then the Fresnel transform is defined by

$$g\left( {x,y} \right) = \frac{k}{i2\pi z}\mathop \smallint \limits_{ - \infty }^{\infty } \mathop \smallint \limits_{ - \infty }^{\infty } f\left( {\xi ,\eta } \right)\exp \left[ {\frac{ik}{2z}\left\{ {\left( {x - \xi } \right)^{2} + \left( {y - \eta } \right)^{2} } \right\}} \right]d\xi d\eta ,$$
(1)

where \(k\) is the wave number and \(i = \sqrt { - 1}\). The inverse Fresnel transform is defined by

$$f\left( {\xi ,\eta } \right) = \frac{ik}{2\pi z}\mathop \smallint \limits_{ - \infty }^{\infty } \mathop \smallint \limits_{ - \infty }^{\infty } g\left( {x,y} \right)\exp \left[ { - \frac{ik}{2z}\left\{ {\left( {x - \xi } \right)^{2} + \left( {y - \eta } \right)^{2} } \right\}} \right]dxdy.$$
(2)

Figure 1 shows a general optical system and its coordinate system.

Fig. 1

A general optical system and its coordinate system. S and R are bounded rectangular regions

3 Spatial band-limited signal

We assume that the input signal is zero outside the bounded region R and that the total power of \(g\left( {x,y} \right)\) is measured in the bounded region S. As shown in Fig. 1, R is a rectangular region in the input plane and S is a rectangular region in the diffraction plane. In this section we seek the function \(f\left( {\xi ,\eta } \right)\) that maximizes the total power of \(g\left( {x,y} \right)\) provided that the total power in R is fixed.

3.1 Fredholm integral equation of the first kind

To simplify the discussion, we consider only the one-dimensional Fresnel transform, which is defined by

$$F\left( {x,z} \right) = \frac{1}{{\sqrt {i2\pi z} }}\mathop \smallint \limits_{ - \infty }^{\infty } f\left( \xi \right)\exp \left\{ {\frac{i}{2z}\left( {x - \xi } \right)^{2} } \right\}d\xi ,$$
(3)

where we set the wave number to unity. The inverse Fresnel transform is defined by

$$f\left( \xi \right) = \sqrt {\frac{i}{2\pi z}} \mathop \smallint \limits_{ - \infty }^{\infty } F\left( {x,z} \right){\text{exp}}\left\{ { - \frac{i}{2z}\left( {x - \xi } \right)^{2} } \right\}dx.$$
(4)
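As a numerical sketch (not part of the original formulation), the transform pair of Eqs. (3) and (4) can be approximated by direct Riemann-sum quadrature. The grids, the distance \(z = 10\) and the Gaussian test signal below are arbitrary illustrative choices.

```python
import numpy as np

def fresnel(f, xi, x, z):
    # Riemann-sum quadrature of Eq. (3) with the wave number set to unity
    ker = np.exp(1j / (2 * z) * (x[:, None] - xi[None, :]) ** 2)
    return (ker @ f) * (xi[1] - xi[0]) / np.sqrt(2j * np.pi * z)

def inv_fresnel(F, x, xi, z):
    # Eq. (4): conjugate chirp kernel and conjugate normalizing constant
    ker = np.exp(-1j / (2 * z) * (x[None, :] - xi[:, None]) ** 2)
    return np.sqrt(1j / (2 * np.pi * z)) * (ker @ F) * (x[1] - x[0])

xi = np.linspace(-25.0, 25.0, 1001)   # input grid, wide enough for the test signal
x = np.linspace(-50.0, 50.0, 2001)    # observation grid: the diffracted field spreads
f = np.exp(-xi ** 2)                  # Gaussian test signal
g = fresnel(f, xi, x, z=10.0)
f_back = inv_fresnel(g, x, xi, z=10.0)
err = np.max(np.abs(f_back - f))      # small round-trip error
```

The observation grid is taken wider than the input grid because the Fresnel-diffracted field spreads with increasing \(z\).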

Assume that \(f\left( \xi \right)\) is limited within the finite region \(R\) on the \({\upxi }\)-plane and that its total power \(P_{R}\), namely the inner product of the function with itself, is constant:

$$\begin{gathered} P_{R} = \mathop \smallint \limits_{ - \infty }^{\infty } \left| {f\left( \xi \right)} \right|^{2} d\xi = \mathop \smallint \limits_{R}^{{}} \left| {f\left( \xi \right)} \right|^{2} d\xi \hfill \\ = \int\limits_{R} {f\left( \xi \right)f^{*} \left( \xi \right)d\xi = const.} \hfill \\ \end{gathered}$$
(5)

where \(f^{*} \left( \xi \right)\) denotes the complex conjugate of \(f\left( \xi \right)\). Assume that \(g\left( x \right)\) is the Fresnel transform of the function \(f\left( \xi \right)\), which is bounded by the finite region \(R\); that is,

$$\begin{gathered} g\left( x \right) = \frac{1}{{\sqrt {i2\pi z} }}\mathop \smallint \limits_{ - \infty }^{\infty } f\left( \xi \right)\exp \left\{ {\frac{i}{2z}\left( {x - \xi } \right)^{2} } \right\}d\xi \hfill \\ = \frac{1}{{\sqrt {i2\pi z} }}\int\limits_{R} {f\left( \xi \right){\text{exp}}\left\{ {\frac{i}{2z}\left( {x - \xi } \right)^{2} } \right\}d\xi .} \hfill \\ \end{gathered}$$
(6)

Then, the total power \(P_{S}\) of \(g\left( x \right)\) in the bounded region \(S\) is

$$\begin{gathered} P_{S} = \mathop \smallint \limits_{S}^{{}} \left| {g\left( x \right)} \right|^{2} dx = \mathop \smallint \limits_{S}^{{}} g^{*} \left( x \right)g\left( x \right)dx \hfill \\ = \mathop \smallint \limits_{S}^{{}} \frac{1}{{\sqrt { - i2\pi z} }}\mathop \smallint \limits_{R}^{{}} f^{*} \left( \xi \right){\text{exp}}\left\{ { - \frac{i}{2z}\left( {x - \xi } \right)^{2} } \right\}d\xi \hfill \\ \times \frac{1}{{\sqrt {i2\pi z} }}\mathop \smallint \limits_{R}^{{}} f\left( {\xi^{\prime}} \right)\exp \left\{ {\frac{i}{2z}\left( {x - \xi^{\prime}} \right)^{2} } \right\}d\xi^{\prime}dx \hfill \\ = \mathop \smallint \limits_{R}^{{}} \mathop \smallint \limits_{R}^{{}} K_{S} \left( {\xi ,\xi^{\prime}} \right)f^{*} \left( \xi \right)f\left( {\xi^{\prime}} \right)d\xi^{\prime}d\xi , \hfill \\ \end{gathered}$$
(7)

where the kernel function \(K_{S} \left( { \cdot , \cdot } \right)\) is defined by

$$\begin{gathered} K_{S} \left( {\xi ,\xi^{\prime}} \right) = \frac{1}{2\pi z}\exp \left\{ { - \frac{i}{2z}\left( {\xi^{2} - \xi^{^{\prime}2} } \right)} \right\} \hfill \\ \times \mathop \smallint \limits_{S}^{{}} {\text{exp}}\left\{ {\frac{i}{z}\left( {\xi - \xi^{\prime}} \right)x} \right\}dx. \hfill \\ \end{gathered}$$
(8)

We seek the function \(f\left( \xi \right)\) that maximizes \(P_{S}\) provided that the total power \(P_{R}\) is fixed. This is a variational problem with an accessory condition, which we solve by the method of Lagrange multipliers. Let us define two functions \(G\left( \xi \right)\) and \(H\left( \xi \right)\) as follows:

$$G\left( \xi \right) \equiv \mathop \smallint \limits_{R}^{{}} f\left( \xi \right)f^{*} \left( \xi \right)d\xi - const.$$
(9)
$$\begin{gathered} H\left( \xi \right) \equiv \mathop \smallint \limits_{S}^{{}} \left| {g\left( x \right)} \right|^{2} dx \hfill \\ = \mathop \smallint \limits_{R}^{{}} \mathop \smallint \limits_{R}^{{}} K_{S} \left( {\xi ,\xi^{\prime}} \right)f^{*} \left( \xi \right)f\left( {\xi^{\prime}} \right)d\xi^{\prime}d\xi . \hfill \\ \end{gathered}$$
(10)

We want to maximize the function \(H\left( \xi \right)\) subject to the constraint \(G\left( \xi \right) = 0\). With \({\uplambda }\) the Lagrange multiplier, the Lagrangian is defined by

$$L\left( \xi \right) \equiv H\left( \xi \right) - {\uplambda }G\left( \xi \right).$$
(11)

We set the gradient of Lagrangian to zero, that is,

$$\nabla L\left( \xi \right) = \nabla H\left( \xi \right) - {\uplambda }\nabla G\left( \xi \right) = 0,$$
(12)

where \(\nabla\) indicates the gradient. By Eqs. (7), (9) and (10), we obtain

$$\mathop \smallint \limits_{R}^{{}} K_{S} \left( {\xi ,\xi^{\prime}} \right)f^{*} \left( \xi \right)f\left( {\xi^{\prime}} \right)d\xi^{\prime} - {\uplambda }f\left( \xi \right)f^{*} \left( \xi \right) = 0.$$
(13)

We conclude that

$$\mathop \smallint \limits_{R}^{{}} K_{S} \left( {\xi ,\xi^{\prime}} \right)f\left( {\xi^{\prime}} \right)d\xi^{\prime} = {\uplambda }f\left( \xi \right).$$
(14)

This is a Fredholm integral equation of the first kind. The light in the diffraction plane spreads, but for the spatial band-limited function \(f\left( \xi \right)\), the functional system in the object plane that maximizes the light energy of the Fresnel transform \(g\left( x \right)\) in the bounded region satisfies this integral equation analytically. In order to raise the resolving power or resolution of general optical instruments, it is necessary to maximize the light energy. In signal processing, it is possible to represent the input signal in the functional system that maximizes the total power in the Fresnel transform plane. Solving this equation reduces to the eigenvalue problem of the integral equation, which we deal with from now on. This equation corresponds to a modification of the integral equation for the prolate spheroidal wave functions. The integral equation and differential equation for the prolate spheroidal wave functions have been generalized and their properties revealed. Moreover, discrete prolate spheroidal wave functions have been derived and their mathematical properties investigated [23,24,25,26,27,28,29]. The prolate spheroidal wave function has been applied to several problems [30, 31].

According to Eq. (7), we can write

$$\begin{gathered} \mathop \smallint \limits_{R}^{{}} \mathop \smallint \limits_{R}^{{}} K_{S} \left( {\xi ,\xi^{\prime}} \right)f^{*} \left( \xi \right)f\left( {\xi^{\prime}} \right)d\xi^{\prime}d\xi \hfill \\ = \mathop \smallint \limits_{S}^{{}} \left| {\frac{1}{{\sqrt {i2\pi z} }}\mathop \smallint \limits_{R}^{{}} f\left( \xi \right){\text{exp}}\left\{ {\frac{i}{2z}\left( {x - \xi } \right)^{2} } \right\}d\xi } \right|^{2} dx \ge 0. \hfill \\ \end{gathered}$$
(15)

Therefore, the kernel \(K_{S} \left( {\xi ,\xi^{\prime}} \right)\) of the integral equation is positive definite.

To prove that the eigenvalues of the above integral equation are nonnegative, it is sufficient to show that \(\lambda \left\| \varphi \right\|^{2} \ge 0\), where \({\uplambda }\) is an eigenvalue, \(\varphi\) the corresponding eigenfunction and \(\parallel \cdot \parallel\) the norm in Hilbert space [32]. By replacing \(f\left( \xi \right)\) with \(\varphi \left( \xi \right)\) and taking Eq. (14) into consideration, we can write

$$\begin{gathered} \lambda \mathop \smallint \limits_{R}^{{}} \left| {\varphi \left( \xi \right)} \right|^{2} d\xi \hfill \\ = \mathop \smallint \limits_{R}^{{}} \mathop \smallint \limits_{R}^{{}} K_{S} \left( {\xi ,\xi^{\prime}} \right)\varphi^{*} \left( \xi \right)\varphi \left( {\xi^{\prime}} \right)d\xi d\xi^{\prime} \hfill \\ = \mathop \smallint \limits_{S}^{{}} \left| {\frac{1}{{\sqrt {i2\pi z} }}\mathop \smallint \limits_{R}^{{}} \varphi \left( \xi \right)\exp \left\{ {\frac{i}{2z}\left( {x - \xi } \right)^{2} } \right\}d\xi } \right|^{2} dx \hfill \\ \ge 0. \hfill \\ \end{gathered}$$
(16)

Therefore, the eigenvalues of the integral equation are real and nonnegative.

Denoting the largest eigenvalue of Eq. (14) by \(\lambda_{max}\), the maximum \(P_{S - max}\) of \(P_{S}\) in Eq. (7) becomes

$$P_{S} \le P_{S - max} = \lambda_{max} P_{R} .$$
(17)

Theorem 1

For the spatial band-limited function \(\varphi \left( \xi \right)\), the optimum function that maximizes the power of the Fresnel transform \(g\left( {x,z} \right)\) in the bounded region S equals the eigenfunction corresponding to the largest eigenvalue of Eq. (14).

Let us consider the kernel of the integral equation. If the object plane and the Fresnel transform plane are bounded by finite regions, the kernel of the integral equation can be calculated analytically. We set the finite region \(S\) in the Fresnel transform plane to \(- {\text{a}} \le x \le {\text{a}}\):

$$\begin{gathered} K_{{\left[ { - a,a} \right]}} \left( {\xi ,\xi^{\prime}} \right) = \frac{1}{2\pi z}\exp \left\{ { - \frac{i}{2z}\left( {\xi^{2} - \xi^{^{\prime}2} } \right)} \right\} \hfill \\ \times \mathop \smallint \limits_{ - a}^{a} {\text{exp}}\left\{ {\frac{i}{z}\left( {\xi - \xi^{\prime}} \right)x} \right\}dx \hfill \\ = \frac{1}{{\pi \left( {\xi - \xi^{\prime}} \right)}}\exp \left\{ { - \frac{i}{2z}\left( {\xi^{2} - \xi^{^{\prime}2} } \right)} \right\} \hfill \\ \times {\text{sin}}\left( {\frac{{\xi - \xi^{\prime}}}{z}a} \right). \hfill \\ \end{gathered}$$
(18)
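A numerical check of this closed form (an illustration, not part of the paper's derivation): `np.sinc` absorbs the removable singularity at \(\xi = \xi^{\prime}\), whose limit value \(a/\pi z\) is derived in Eq. (38) below, and the sampled matrix can be tested for the Hermitian symmetry of Eq. (19) and the positive (semi)definiteness implied by Eq. (15).

```python
import numpy as np

def kernel(xi, xip, a=15.0, z=27.0):
    # Eq. (18); note (1/(pi*d)) * sin(a*d/z) = (a/(pi*z)) * sinc(a*d/(pi*z))
    # with np.sinc(t) = sin(pi t)/(pi t), so the diagonal value is a/(pi z).
    d = xi - xip
    phase = np.exp(-1j / (2 * z) * (xi ** 2 - xip ** 2))
    return (a / (np.pi * z)) * phase * np.sinc(a * d / (np.pi * z))

grid = np.linspace(-15.0, 15.0, 100)            # discretized R = [-15, 15]
K = kernel(grid[:, None], grid[None, :])
hermitian = np.allclose(K, K.conj().T)          # Eq. (19)
eigvals = np.linalg.eigvalsh(K)                 # real, since K is Hermitian
nonnegative = eigvals.min() > -1e-10            # numerically positive semidefinite
```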

Let us consider the complex conjugate of the kernel of the integral equation:

$$\begin{gathered} \overline{{K_{{\left[ { - a,a} \right]}} \left( {\xi ,\xi^{\prime}} \right)}} = \frac{1}{{\pi \left( {\xi - \xi^{\prime}} \right)}}\exp \left\{ {\frac{i}{2z}\left( {\xi^{2} - \xi^{^{\prime}2} } \right)} \right\}\sin \left( {\frac{{\xi - \xi^{\prime}}}{z}a} \right) \hfill \\ = \frac{ - 1}{{\pi \left( {\xi^{\prime} - \xi } \right)}}\exp \left\{ { - \frac{i}{2z}\left( {\xi^{^{\prime}2} - \xi^{2} } \right)} \right\}\left( { - 1} \right)\sin \left( {\frac{{\xi^{\prime} - \xi }}{z}a} \right) \hfill \\ = \frac{1}{{\pi \left( {\xi^{\prime} - \xi } \right)}}\exp \left\{ { - \frac{i}{2z}\left( {\xi^{^{\prime}2} - \xi^{2} } \right)} \right\}\sin \left( {\frac{{\xi^{\prime} - \xi }}{z}a} \right) \hfill \\ = K_{{\left[ { - a,a} \right]}} \left( {\xi^{\prime},\xi } \right). \hfill \\ \end{gathered}$$
(19)

Therefore, the kernel is of Hermitian symmetry.

3.2 Dual orthogonal property

If \(\lambda_{m}\) and \(\lambda_{n}\) are distinct eigenvalues of the above integral equation, i.e., \({\text{m}} \ne {\text{n}}\), and \(\varphi_{m}\), \(\varphi_{n}\) are corresponding eigenfunctions, we can express them as the following integral formulas:

$$\mathop \smallint \limits_{R}^{{}} K_{S} \left( {\xi ,\xi^{\prime}} \right)\varphi_{m} \left( {\xi^{\prime}} \right)d\xi^{\prime} = \lambda_{m} \varphi_{m} \left( \xi \right).$$
(20)
$$\mathop \smallint \limits_{R}^{{}} K_{S} \left( {\xi ,\xi^{\prime}} \right)\varphi_{n} \left( {\xi^{\prime}} \right)d\xi^{\prime} = \lambda_{n} \varphi_{n} \left( \xi \right).$$
(21)

Let us consider the complex conjugate of the kernel of the integral equation:

$$\begin{gathered} K_{S}^{*} \left( {\xi ,\xi^{\prime}} \right) = \frac{1}{2\pi z}\exp \left\{ {\frac{i}{2z}\left( {\xi^{2} - \xi^{^{\prime}2} } \right)} \right\} \hfill \\ \times \mathop \smallint \limits_{S}^{{}} {\text{exp}}\left\{ { - \frac{i}{z}\left( {\xi - \xi^{\prime}} \right)x} \right\}dx. \hfill \\ \end{gathered}$$
(22)

From Eq. (8), we have

$$\begin{gathered} K_{S} \left( {\xi^{\prime},\xi } \right) = \frac{1}{2\pi z}\exp \left\{ { - \frac{i}{2z}\left( {\xi^{^{\prime}2} - \xi^{2} } \right)} \right\} \hfill \\ \times \mathop \smallint \limits_{S}^{{}} \exp \left\{ {\frac{i}{z}\left( {\xi^{\prime} - \xi } \right)x} \right\}dx \hfill \\ = \frac{1}{2\pi z}\exp \left\{ {\frac{i}{2z}\left( {\xi^{2} - \xi^{^{\prime}2} } \right)} \right\} \hfill \\ \times \mathop \smallint \limits_{S}^{{}} {\text{exp}}\left\{ { - \frac{i}{z}\left( {\xi - \xi^{\prime}} \right)x} \right\}dx. \hfill \\ \end{gathered}$$
(23)

Therefore, we obtain

$$K_{S}^{*} \left( {\xi ,\xi^{\prime}} \right) = K_{S} \left( {\xi^{\prime},\xi } \right),$$
(24)

and the integral kernel \(K_{S} \left( {\xi ,\xi^{\prime}} \right)\) is Hermitian symmetric. If we multiply both sides of Eq. (20) by \(\varphi_{n}^{*} \left( \xi \right)\) and integrate with respect to \(\xi\) over \(R\), we obtain

$$\begin{gathered} \mathop \smallint \limits_{R}^{{}} \mathop \smallint \limits_{R}^{{}} K_{S} \left( {\xi ,\xi^{\prime}} \right)\varphi_{m} \left( {\xi^{\prime}} \right)\varphi_{n}^{*} \left( \xi \right)d\xi^{\prime}d\xi \hfill \\ = \lambda_{m} \mathop \smallint \limits_{R}^{{}} \varphi_{m} \left( \xi \right)\varphi_{n}^{*} \left( \xi \right)d\xi . \hfill \\ \end{gathered}$$
(25)

After taking the complex conjugate of Eq. (21), we multiply both sides by \(\varphi_{m} \left( \xi \right)\) and integrate with respect to \(\xi\) over \(R\), obtaining

$$\begin{gathered} \mathop \smallint \limits_{R}^{{}} \mathop \smallint \limits_{R}^{{}} K_{S}^{*} \left( {\xi ,\xi^{\prime}} \right)\varphi_{m} \left( \xi \right)\varphi_{n}^{*} \left( {\xi^{\prime}} \right)d\xi^{\prime}d\xi \hfill \\ = \lambda_{n}^{*} \mathop \smallint \limits_{R}^{{}} \varphi_{m} \left( \xi \right)\varphi_{n}^{*} \left( \xi \right)d\xi . \hfill \\ \end{gathered}$$
(26)

From Eqs. (26) and (24), we obtain

$$\begin{gathered} \mathop \smallint \limits_{R}^{{}} \mathop \smallint \limits_{R}^{{}} K_{S} \left( {\xi^{\prime},\xi } \right)\varphi_{m} \left( \xi \right)\varphi_{n}^{*} \left( {\xi^{\prime}} \right)d\xi^{\prime}d\xi \hfill \\ = \lambda_{n}^{*} \mathop \smallint \limits_{R}^{{}} \varphi_{m} \left( \xi \right)\varphi_{n}^{*} \left( \xi \right)d\xi . \hfill \\ \end{gathered}$$
(27)

Since the left side of Eq. (25) and the left side of Eq. (27) are equal and \(\lambda_{n}\) is a real number, we have

$$\left( {\lambda_{m} - \lambda_{n} } \right)\mathop \smallint \limits_{R}^{{}} \varphi_{m} \left( \xi \right)\varphi_{n}^{*} \left( \xi \right)d\xi = 0.$$
(28)

For \(\lambda_{m} \ne \lambda_{n}\), we conclude

$$\mathop \smallint \limits_{R}^{{}} \varphi_{m} \left( \xi \right)\varphi_{n}^{*} \left( \xi \right)d\xi = 0.$$
(29)

That is to say, \(\varphi_{m} \left( \xi \right)\) and \(\varphi_{n} \left( \xi \right)\) are orthogonal on \(R\). If \(m = n\), the orthonormality condition can be shown by Eq. (14), Mercer’s theorem [33] and the complete orthonormal eigenfunctional systems derived from the eigenfunctions.

Let us consider the extension of the domain of \({\upxi }\) into one-dimensional Euclidean space \(E\). Now, we can redefine the following integral equation:

$$\mathop \smallint \limits_{R}^{{}} K_{S} \left( {\xi ,\xi^{\prime}} \right)\varphi \left( {\xi^{\prime}} \right)d\xi^{\prime} = \lambda \varphi \left( \xi \right),\quad \xi \in E.$$
(30)

Then, for the eigenfunctions \(\varphi_{m}\) and \(\varphi_{n}\), \(m \ne n\), we have

$$\begin{gathered} \mathop \smallint \limits_{ - \infty }^{\infty } \varphi_{m} \left( \xi \right)\varphi_{n}^{*} \left( \xi \right)d\xi \hfill \\ = \frac{1}{{\lambda_{m} \lambda_{n} }}\mathop \smallint \limits_{R}^{{}} \mathop \smallint \limits_{R}^{{}} \varphi_{m} \left( {\xi^{\prime}} \right)\varphi_{n}^{*} \left( {\xi^{\prime\prime}} \right) \hfill \\ \times \mathop \smallint \limits_{ - \infty }^{\infty } K_{S} \left( {\xi ,\xi^{\prime}} \right)K_{S}^{*} \left( {\xi ,\xi^{\prime\prime}} \right)d\xi d\xi^{\prime}d\xi^{\prime\prime} \hfill \\ \end{gathered}$$
(31)

We need to consider the integral involving the product of the kernels:

$$\begin{gathered} \mathop \smallint \limits_{ - \infty }^{\infty } K_{S} \left( {\xi ,\xi^{\prime}} \right)K_{S}^{*} \left( {\xi ,\xi^{\prime\prime}} \right)d\xi \hfill \\ = \frac{1}{{\left( {2\pi z} \right)^{2} }}\exp \left\{ {\frac{i}{2z}\left( {\xi^{^{\prime}2} - \xi^{^{\prime\prime}2} } \right)} \right\} \hfill \\ \times \mathop \smallint \limits_{ - \infty }^{\infty } \mathop \smallint \limits_{S}^{{}} \mathop \smallint \limits_{S}^{{}} {\text{exp}}\left\{ {\frac{i}{z}\left( {\xi - \xi^{\prime}} \right)x - \frac{i}{z}\left( {\xi - \xi^{\prime\prime}} \right)x^{\prime}} \right\}dxdx^{\prime}d\xi \hfill \\ = \frac{1}{{\left( {2\pi z} \right)^{2} }}{\text{exp}}\left\{ {\frac{i}{2z}\left( {\xi^{^{\prime}2} - \xi^{^{\prime\prime}2} } \right)} \right\} \hfill \\ \times \mathop \smallint \limits_{S}^{{}} \mathop \smallint \limits_{S}^{{}} {\text{exp}}\left\{ { - \frac{i}{z}\left( {\xi^{\prime}x - \xi^{\prime\prime}x^{\prime}} \right)} \right\}\mathop \smallint \limits_{ - \infty }^{\infty } {\text{exp}}\left\{ {\frac{i}{z}\left( {x - x^{\prime}} \right)\xi } \right\}d\xi dxdx^{\prime}. \hfill \\ \end{gathered}$$
(32)

Using the delta function \(\delta \left( \cdot \right)\), as shown in the Appendix, such that

$${\updelta }\left( {x - x^{\prime}} \right) = \frac{1}{2\pi z}\mathop \smallint \limits_{ - \infty }^{\infty } {\text{exp}}\left\{ {\frac{i}{z}\left( {x - x^{\prime}} \right)\xi } \right\}d\xi ,$$
(33)

the above equation can be expressed by the following form:

$$\begin{gathered} \frac{1}{2\pi z}{\text{exp}}\left\{ {\frac{i}{2z}\left( {\xi^{^{\prime}2} - \xi^{^{\prime\prime}2} } \right)} \right\} \hfill \\ \times \mathop \smallint \limits_{S}^{{}} \mathop \smallint \limits_{S}^{{}} {\text{exp}}\left\{ { - \frac{i}{z}\left( {\xi^{\prime}x - \xi^{\prime\prime}x^{\prime}} \right)} \right\}{\updelta }\left( {x - x^{\prime}} \right)dxdx^{\prime} \hfill \\ = \frac{1}{2\pi z}{\text{exp}}\left\{ {\frac{i}{2z}\left( {\xi^{^{\prime}2} - \xi^{^{\prime\prime}2} } \right)} \right\} \hfill \\ \times \mathop \smallint \limits_{S}^{{}} {\text{exp}}\left\{ { - \frac{i}{z}\left( {\xi^{\prime} - \xi^{\prime\prime}} \right)x} \right\}dx \hfill \\ = K_{S}^{*} \left( {\xi^{\prime},\xi^{\prime\prime}} \right). \hfill \\ \end{gathered}$$
(34)

Substituting Eq. (34) into Eq. (31), we have

$$\begin{gathered} \mathop \smallint \limits_{ - \infty }^{\infty } \varphi_{m} \left( \xi \right)\varphi_{n}^{*} \left( \xi \right)d\xi \hfill \\ = \frac{1}{{\lambda_{m} \lambda_{n} }}\mathop \smallint \limits_{R}^{{}} \mathop \smallint \limits_{R}^{{}} \varphi_{m} \left( {\xi^{\prime}} \right)\varphi_{n}^{*} \left( {\xi^{\prime\prime}} \right)K_{S}^{*} \left( {\xi^{\prime},\xi^{\prime\prime}} \right)d\xi^{\prime}d\xi^{\prime\prime} \hfill \\ = \frac{1}{{\lambda_{m} }}\mathop \smallint \limits_{R}^{{}} \varphi_{m} \left( {\xi^{\prime}} \right)\varphi_{n}^{*} \left( {\xi^{\prime}} \right)d\xi^{\prime}. \hfill \\ \end{gathered}$$
(35)

If the functional system \(\left\{ {\varphi_{m} \left( \xi \right)} \right\}\) is orthogonal on \(E\), it is also orthogonal on \(R\). Therefore, the orthogonal functional system has the dual orthogonality property.

4 Numerical computations

It is difficult in general to find the exact solution of the eigenvalue problem of the integral equation. Therefore, we seek approximate solutions of practically sufficient accuracy [34]. By discretizing the integral equation, we deal with the eigenvalue problem of a matrix. We apply the eigenvectors of the matrix to the problem of approximating arbitrary functions on a computer.

4.1 Discretization of the kernel function

By discretizing the kernel function and the integration range at equal spacing and using the values at the discrete sampling points, the integral equation can be written as follows:

$$\mathop \sum \limits_{j = 1}^{N} K_{ij} x_{j} = \lambda x_{i} ,$$
(36)

where \(i,j\) are natural numbers, \(1 \le i \le M\), \(1 \le j \le N\). The matrix \(K_{ij}\) is Hermitian if the kernel is discretized at equal spacing and \(M = N\). Therefore, the eigenvalue problem of the integral equation reduces to that of a Hermitian matrix in a finite-dimensional vector space. In finite-dimensional vector spaces, the eigenvalues of a Hermitian matrix are real numbers, and eigenvectors from different eigenspaces are orthogonal [35].

However, the diagonal elements of our matrix are of indeterminate form. Therefore, we seek the limit value as \(\xi \to \xi^{\prime}\):

$$\begin{gathered} \mathop {\lim }\limits_{{\xi \to \xi^{\prime}}} K_{{\left[ { - a,a} \right]}} \left( {\xi ,\xi^{\prime}} \right) \hfill \\ = \mathop {\lim }\limits_{{\xi \to \xi^{\prime}}} \frac{1}{\pi }{\text{exp}}\left\{ { - \frac{i}{2z}\left( {\xi^{2} - \xi^{^{\prime}2} } \right)} \right\}\frac{{{\text{sin}}\left\{ {\frac{a}{z}\left( {\xi - \xi^{\prime}} \right)} \right\}}}{{\xi - \xi^{\prime}}}. \hfill \\ \end{gathered}$$
(37)

We can replace \(\xi - \xi^{\prime}\) with \(X\); as \(X \to 0\), the exponential factor tends to \({\text{exp}}\left( 0 \right) = 1\):

$$\begin{gathered} \mathop {\lim }\limits_{X \to 0} K_{{\left[ { - a,a} \right]}} \left( {\xi ,\xi^{\prime}} \right) = \mathop {\lim }\limits_{X \to 0} \frac{1}{\pi }{\text{exp}}\left( 0 \right)\frac{{{\text{sin}}\left( {\frac{a}{z}X} \right)}}{X} \hfill \\ = \frac{1}{\pi }\frac{a}{z}\mathop {\lim }\limits_{X \to 0} \frac{{{\text{sin}}\left( {\frac{a}{z}X} \right)}}{{\frac{a}{z}X}} = \frac{a}{\pi z}. \hfill \\ \end{gathered}$$
(38)

We use the Jacobi method [36] to compute all eigenvalues and eigenvectors of the matrix. The Jacobi method is a procedure for the diagonalization of complex symmetric matrices using a sequence of plane rotations through complex angles. All eigenvectors computed by the Jacobi method are automatically orthonormal. Now, we set \(M = N = 100\). Figure 2 shows the eigenvalues in descending order for \(z = 27\), \(S = \left[ { - 15.0, 15.0} \right]\), \(R = \left[ { - 15.0, 15.0} \right]\). They are real and nonnegative. In Fig. 3, (a) shows the real part and imaginary part of the eigenvector for the largest eigenvalue, (b) the eigenvector for the second-largest eigenvalue, (c) the eigenvector for the third-largest eigenvalue, and (d–f) the eigenvectors for the 4th–6th largest eigenvalues. Since the space is the 100-dimensional complex vector space \(\left( {{\mathbb{C}}^{100} } \right)\), there are 94 further eigenvectors besides these. Together, these eigenvectors form an orthonormal basis of \({\mathbb{C}}^{100}\).
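The computation above can be sketched as follows, with `numpy.linalg.eigh` standing in for the Jacobi routine (both return orthonormal eigenvectors of a Hermitian matrix); the parameters follow Fig. 2, and the random test vector for the bound of Eq. (17) is an arbitrary choice.

```python
import numpy as np

# Parameters of Fig. 2: z = 27, S = R = [-15, 15], M = N = 100.
a, z, n = 15.0, 27.0, 100
xi = np.linspace(-15.0, 15.0, n)
diff = xi[:, None] - xi[None, :]
phase = np.exp(-1j / (2 * z) * (xi[:, None] ** 2 - xi[None, :] ** 2))
K = (a / (np.pi * z)) * phase * np.sinc(a * diff / (np.pi * z))   # Eq. (18)

w, V = np.linalg.eigh(K)          # ascending eigenvalues; columns = eigenvectors
w, V = w[::-1], V[:, ::-1]        # reorder to descending, as in Fig. 2

real_nonneg = w.min() > -1e-10                         # Sect. 3.1 result
orthonormal = np.allclose(V.conj().T @ V, np.eye(n))   # orthonormal basis of C^100

# Discrete analogue of Eq. (17): P_S <= lambda_max * P_R for any input vector.
rng = np.random.default_rng(0)
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
P_R = np.vdot(f, f).real
P_S = np.vdot(f, K @ f).real
bound_holds = P_S <= w[0] * P_R + 1e-12
```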

Fig. 2

Plot of the eigenvalues in descending order. \(S = \left[ { - 15.0, 15.0} \right]\), \(R = \left[ { - 15.0, 15.0} \right]\), z = 27.0

Fig. 3

Plots of the real part and imaginary part of the eigenvectors: a for the largest eigenvalue, b for the second-largest eigenvalue, c for the third-largest eigenvalue, d–f for the 4th–6th largest eigenvalues. \(S = \left[ { - 15.0, 15.0} \right]\), \(R = \left[ { - 15.0, 15.0} \right]\), z = 27.0

4.2 Approximation of any functions

We consider the application of the eigenvectors to the problem of approximating arbitrary functions. Theoretically, we deal with the problem of expressing an arbitrary element of a finite N-dimensional Hilbert space \(H_{N}\) in an orthonormal basis. For any element \(u\) in \(H_{N}\), using the orthonormal basis \(\left\{ {\psi_{n} } \right\}_{n = 1}^{N}\), we can write

$$u = \mathop \sum \limits_{n = 1}^{N} \left\langle {u,\psi_{n} } \right\rangle \psi_{n} ,$$
(39)

where \(\left\langle { \cdot \,,\, \cdot } \right\rangle\) is an inner product [32]. Now, we set \(N = 100\). Let us consider the set in \({\mathbb{C}}^{100}\) of all 100-tuples:

$$u = \left( {u_{1} , u_{2} , \cdots , u_{100} } \right),$$
(40)

where \(u_{1} , u_{2} , \cdots , u_{100}\) are complex numbers.

First, let us consider the following real-valued test functions:

$$f_{1} \left( x \right) = \left\{ {\begin{array}{*{20}c} {\cos \left( x \right)} \\ {\sin \left( x \right)} \\ \end{array} } \right.,\quad x \in \left[ {0, 2\pi } \right).$$
(41)

By discretizing the test functions at 100 evenly spaced points, we created test vectors in the 100-dimensional real vector space \(\left( {{\mathbb{R}}^{100} } \right)\). It was confirmed that these test vectors can be expressed by the eigenvector expansions. Figure 4 illustrates the normalized mean square error versus the number of eigenvectors. The normalized mean square error is defined by

$${\text{NMSE}}\left( k \right) = \frac{{\left\| {u^{k} - u} \right\|_{2} }}{{\left\| u \right\|_{2} }},$$
(42)

where \(u^{k}\) is the sum in Eq. (39) up to \(k\), \(u\) is the original vector and \(\parallel \cdot \parallel_{2}\) is the \(\ell^{2}\)-norm. From Fig. 4, we can see that the error decreases with increasing number of eigenvectors used in the expansion.
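A sketch of this expansion, with `numpy.linalg.eigh` again standing in for the Jacobi method (any orthonormal basis of \({\mathbb{C}}^{100}\) admits the expansion of Eq. (39), so the ordering of the eigenvectors is immaterial for the check below); the choice of truncation points is arbitrary.

```python
import numpy as np

# Basis: eigenvectors of the discretized kernel of Eq. (18).
a, z, n = 15.0, 27.0, 100
xi = np.linspace(-15.0, 15.0, n)
phase = np.exp(-1j / (2 * z) * (xi[:, None] ** 2 - xi[None, :] ** 2))
K = (a / (np.pi * z)) * phase * np.sinc(a * (xi[:, None] - xi[None, :]) / (np.pi * z))
_, V = np.linalg.eigh(K)                # columns form an orthonormal basis

u = np.cos(np.linspace(0.0, 2.0 * np.pi, n, endpoint=False))   # f1 of Eq. (41)
c = V.conj().T @ u                      # expansion coefficients <u, psi_n>

def nmse(k):
    # Eq. (42): ||u^k - u||_2 / ||u||_2 for the k-term partial sum of Eq. (39)
    uk = V[:, :k] @ c[:k]
    return np.linalg.norm(uk - u) / np.linalg.norm(u)

errors = [nmse(k) for k in (10, 50, 100)]   # nonincreasing; ~0 at k = n
```

Because the basis is orthonormal, the squared error of the \(k\)-term partial sum equals the tail sum \(\sum_{n > k} \left| {\left\langle {u,\psi_{n} } \right\rangle } \right|^{2}\), so the error cannot increase as \(k\) grows.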

Fig. 4

Plots of the normalized mean square error versus the number of eigenvectors. The real-valued functions cos(x) and sin(x) were reconstructed by the eigenvector expansions

Second, let us consider the following real-valued test function, a hyperbolic cosine:

$$f_{2} \left( {x,c} \right) = {\text{cosh}}\left( {cx} \right), c \in \left\{ {1,2,3} \right\}.$$
(43)

By discretizing the test function in the same way, we created a test vector in \({\mathbb{R}}^{100}\). Figure 5 illustrates the mean square error versus the number of eigenvectors. From Fig. 5, we can see that the error decreases with increasing number of eigenvectors used in the expansion.

Fig. 5

Plots of the normalized mean square error versus the number of eigenvectors. The real-valued hyperbolic cosine \(\cosh \left( {cx} \right)\) was reconstructed by the eigenvector expansions. \(c \in \left\{ {1,2,3} \right\}\)

Third, let us consider the following complex-valued test function:

$$f_{3} \left( {x,c} \right) = {\text{exp}}\left( {icx} \right), c \in \left\{ {1,2,3} \right\}.$$
(44)

By discretizing the test function in the same way, we created a test vector in \({\mathbb{C}}^{100}\). Figure 6 illustrates the mean square error versus the number of eigenvectors. From Fig. 6, we can see that the error decreases with increasing number of eigenvectors used in the expansion.

Fig. 6

Plots of the normalized mean square error versus the number of eigenvectors. The complex-valued function \({\text{exp}}\left( {icx} \right)\) was reconstructed by the eigenvector expansions. \(c \in \left\{ {1,2,3} \right\}\)

4.3 Discrete Fresnel transforms of the eigenvectors

By discretizing the function and using the sampling theorem in Eqs. (3) and (4), we can derive the one-dimensional discrete Fresnel transform [5, 37], such that

$$F\left( l \right) = \frac{1}{{\sqrt {iN} }}\mathop \sum \limits_{k = 0}^{N - 1} f\left( k \right){\text{exp}}\left\{ {\frac{i\pi }{N}\left( {k - l} \right)^{2} } \right\},\left( {l = 0,1,2, \cdots ,N - 1} \right).$$
(45)

There exists an inverse discrete Fresnel transform, which is

$$f\left( k \right) = \sqrt{\frac{i}{N}} \mathop \sum \limits_{l = 0}^{N - 1} F\left( l \right){\text{exp}}\left\{ { - \frac{i\pi }{N}\left( {k - l} \right)^{2} } \right\},\left( {k = 0,1,2, \cdots , N - 1} \right).$$
(46)
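A direct sketch of this transform pair; the pair of Eqs. (45) and (46) is exactly invertible and unitary for any vector length \(N\), which the random test vector below (an arbitrary choice) verifies numerically.

```python
import numpy as np

def dfrt(f):
    # Eq. (45): discrete Fresnel transform with chirp kernel exp(i*pi*(k-l)^2/N)
    n = len(f)
    k = np.arange(n)
    W = np.exp(1j * np.pi / n * (k[None, :] - k[:, None]) ** 2)
    return (W @ f) / np.sqrt(1j * n)

def idfrt(F):
    # Eq. (46): inverse transform with the conjugate chirp kernel
    n = len(F)
    k = np.arange(n)
    W = np.exp(-1j * np.pi / n * (k[:, None] - k[None, :]) ** 2)
    return np.sqrt(1j / n) * (W @ F)

rng = np.random.default_rng(1)
f = rng.standard_normal(64) + 1j * rng.standard_normal(64)
F = dfrt(f)
roundtrip_ok = np.allclose(idfrt(F), f)                          # exact inversion
energy_ok = np.isclose(np.vdot(F, F).real, np.vdot(f, f).real)   # unitarity
```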

The eigenvectors computed in Sect. 4.1 form an orthonormal system in the input plane and can be considered as a coordinate system. To confirm the appearance of the eigenvectors in the Fresnel transform plane, we computed the one-dimensional discrete Fresnel transform. In Fig. 7, (a) shows the real part and imaginary part of the eigenvector for the largest eigenvalue for \(z = 20.0\), \(S = \left[ { - 15.0, 15.0} \right]\), \(R = \left[ { - 15.0, 15.0} \right]\); (b) and (c) are the eigenvectors for the second- and third-largest eigenvalues, respectively; (d) is the discrete Fresnel transform of (a); and (e) and (f) are the discrete Fresnel transforms of (b) and (c), respectively. From Fig. 7, we can see that the real part and the imaginary part take a similar form in the discrete Fresnel transform plane.

Fig. 7

Plots of the real part and imaginary part of the eigenvectors: a for the largest eigenvalue, b for the second-largest eigenvalue, c for the third-largest eigenvalue. \(S = \left[ { - 15.0, 15.0} \right]\), \(R = \left[ { - 15.0, 15.0} \right]\), z = 20.0. d–f Discrete Fresnel transforms of a–c, respectively

4.4 Extension to two-dimensional case

We consider the extension of the eigenvector expansions to two-dimensional images. Assume that a matrix \(V\) is written in the form

$$V = \left[ {v_{1} v_{2} v_{3} \cdots v_{N} } \right],$$
(47)

where the elements \(\left\{ {v_{i} } \right\}_{i = 1}^{N}\) are the eigenvectors computed in Sect. 4.1, arranged as column vectors. Then \(V\) is unitary, and its column vectors form an orthonormal set in \({\mathbb{C}}^{N}\) with respect to the complex Euclidean inner product. Assume that the matrix \(A\) is an image that has been sampled and quantized, and is a square matrix of size \(N \times N\). A general linear transformation of the image matrix \(A\) can be written in the form

$$B = V^{t} AV.$$
(48)

The transpose of \(V\) is denoted by \(V^{t}\). The inverse transform of Eq. (48) can be written in the form

$$A = VBV^{t} .$$
(49)
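The transform pair of Eqs. (48) and (49) can be sketched as follows (NumPy assumed; a random real orthogonal matrix stands in for the eigenvector matrix \(V\) of Sect. 4.1, since only orthonormality of the columns is used here):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
# Stand-in for the eigenvector matrix: any matrix with orthonormal
# columns (here a random real orthogonal V from a QR factorization).
V, _ = np.linalg.qr(rng.standard_normal((N, N)))
A = rng.standard_normal((N, N))   # a sampled, quantized image would go here

B = V.T @ A @ V                   # forward transform, Eq. (48)
A_rec = V @ B @ V.T               # inverse transform, Eq. (49)
```

Because \(V^{t} V = I\), the inverse transform recovers \(A\) exactly.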

We define the standard unit vectors in \({\mathbb{R}}^{N}\):

$$e_{1} = \left( {1,0, \cdots ,0} \right), e_{2} = \left( {0,1,0, \cdots ,0} \right), \cdots ,e_{N} = \left( {0, \cdots ,1} \right).$$
(50)

The matrix \(B\), whose elements are \(b_{st}\), can be expressed as

$$B = \left( {b_{st} } \right) = \mathop \sum \limits_{s = 1}^{N} \mathop \sum \limits_{t = 1}^{N} b_{st} e_{s}^{t} e_{t} .$$
(51)

The matrix \(A\) can then be reconstructed by the following equation:

$$A = \mathop \sum \limits_{s = 1}^{N} \mathop \sum \limits_{t = 1}^{N} b_{st} v_{s} v_{t}^{t} .$$
(52)

Equation (52) is the eigenvector expansion of an image.
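Truncating the sum in Eq. (52) to the first \(k\) eigenvectors gives the partial reconstructions used below. A minimal sketch (NumPy assumed; the function name is ours, and a random orthogonal matrix again stands in for the eigenvector matrix):

```python
import numpy as np

def reconstruct(B, V, k):
    """Partial sum of Eq. (52): keep only coefficients b_st with s, t <= k,
    i.e. reconstruct the image from the first k eigenvectors of V."""
    Bk = np.zeros_like(B)
    Bk[:k, :k] = B[:k, :k]
    return V @ Bk @ V.T
```

With all \(N\) eigenvectors the reconstruction is exact; with fewer, the residual error is what Fig. 8 and Fig. 9 illustrate.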

To confirm the effectiveness of our eigenvectors, an image was reconstructed from its expansion coefficients using Eq. (52). The original test image 1, a rectangular uniform distribution in the center region, is shown in Fig. 8a. This image is discretized to \(100 \times 100\) pixels at \(8{\text{ bit/pixel}}\). We computed the expansion coefficients of the original image using Eq. (48) and all eigenvectors. The eigenvectors were computed under the condition \(z = 27\), \(S = \left[ { - 15.0, 15.0} \right]\), \(R = \left[ { - 15.0, 15.0} \right]\). The reconstructed images in Fig. 8b–f were computed using these coefficients and Eq. (52), with the number of eigenvectors used for reconstruction varied from 10 to 100. From Fig. 8b–f, we can see that the reconstructed image approximates the original image increasingly well as the number of eigenvectors increases. The original test image 2, which is based on letters, is shown in Fig. 9a. This image is the same size as test image 1. The reconstructed images in Fig. 9b–f were computed under the same conditions as in Fig. 8. From Fig. 9b–f, we can see that the letters in the reconstructed image gradually approach the original image as the number of eigenvectors increases.

Fig. 8

a The original test image 1 (100 × 100 pixels, 8 bpp) is a rectangular uniform distribution. b–f Images reconstructed using 10, 25, 50, 75 and 100 eigenvectors, respectively

Fig. 9

a The original test image 2 (100 × 100 pixels, 8 bpp) is based on letters. b–f Images reconstructed using 10, 25, 50, 75 and 100 eigenvectors, respectively

The image matrix norm is defined by

$$\left\| A \right\| = \left( {\mathop \sum \limits_{i = 1}^{N} \mathop \sum \limits_{j = 1}^{N} a_{ij}^{2} } \right)^{1/2} .$$
(53)

The normalized mean square error is defined by

$${\text{NMSE}}\left( k \right) = \frac{{\left\| {A^{k} - A_{o} } \right\|^{2} }}{{\left\| {A_{o} } \right\|^{2} }},$$
(54)

where \(A^{k}\) is the image reconstructed using \(k\) eigenvectors and \(A_{o}\) is the original test image. Figure 10 plots the normalized mean square error versus the number of eigenvectors for the reconstructed images of original test images 1 and 2. From Fig. 10, we can see that the NMSE decreases as the number of eigenvectors increases.
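The error measure can be sketched as follows (NumPy assumed; we take the norm of Eq. (53) to be the Frobenius norm and, since the quantity is a mean *square* error, treat Eq. (54) as a ratio of squared norms):

```python
import numpy as np

def nmse(A_k, A_o):
    """Normalized mean square error in the spirit of Eqs. (53)-(54):
    squared Frobenius norm of the reconstruction error, normalized by
    the squared norm of the original image."""
    return np.linalg.norm(A_k - A_o) ** 2 / np.linalg.norm(A_o) ** 2
```

By construction, `nmse` is 0 for a perfect reconstruction and 1 for an all-zero reconstruction.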

Fig. 10

Plots of the normalized mean square error versus the number of eigenvectors

5 Conclusions

We have investigated the band-limited effect in the Fresnel transform plane. To this end, we sought the function whose total power is maximized in the finite Fresnel transform plane, under the condition that the input signal is zero outside a bounded region. We showed that this leads to an eigenvalue problem for a Fredholm integral equation of the first kind. We proved that the kernel of the integral equation is Hermitian and positive definite. We also showed that eigenfunctions corresponding to distinct eigenvalues have the dual orthogonal property. By discretizing the kernel and the integration range, the problem reduces to the eigenvalue problem of a Hermitian matrix in a finite-dimensional vector space. Using the Jacobi method, we computed the eigenvalues and eigenvectors of the matrix. Furthermore, we applied them to the problem of approximating a function and evaluated the error. We confirmed the validity of the eigenvectors for the finite Fresnel transform by computer simulations.

In the two-dimensional case, a modified Fredholm integral equation of the first kind can be derived using the same method and conditions, and the orthogonal functional systems likewise have the dual orthogonal property. However, it is difficult to compute all eigenvalues and eigenvectors by discretizing the equation, since the resulting matrix can be very large; other methods are then necessary.

In this paper, there are many parameters, in particular the input bounded region S, the band-limited region R, the wave number k and the normal distance z. When applying the finite Fresnel transform to an optical system, it is necessary to construct orthogonal functional systems with the optimal values of these parameters. Moreover, in general, the matrix obtained by discretizing the kernel of the integral equation is not Hermitian. In that case, it is difficult to compute all eigenvalues and eigenvectors accurately, and other computational methods must also be considered.