
1 Introduction

In this paper, we consider the problem of estimating the frequencies of multiple two-dimensional (2-D) real-valued sinusoids in the presence of additive white Gaussian noise. This problem is a more specific case of estimating the parameters of a 2-D regular and homogeneous random field from a single observed realization, as in [1]. The real-valued 2-D sinusoidal signal models are also known as X-texture modes. These modes arise naturally in experimental, analytical, modal, and vibrational analysis of circular-shaped objects. X-texture modes are often used for modelling the displacements in the cross-sectional planes of isotropic, homogeneous, thick-walled cylinders [2–4], laminated composite cylindrical shells [5], and circular plates [6]. These X-texture modes have also been used to describe the radial displacements of spruce logs subjected to continuous sinusoidal excitation [7] and of standing spruce trunks subjected to impact excitation [8–10]. The proposed signal model poses considerable challenges for 2-D joint frequency estimation algorithms. Many algorithms for estimating complex-valued frequencies are well documented in the literature [11, 12], and for 1-D real-valued frequencies in [13–15]. A detailed discussion of the problem of analyzing 2-D homogeneous random fields with discontinuous spectral distribution functions can be found in [16]. Parameter estimation techniques for sinusoidal signals in additive white noise include the periodogram-based approximation (applicable to widely spaced sinusoids) to the maximum-likelihood (ML) solution [17–19], the Pisarenko harmonic decomposition [20], and the singular value decomposition [21]. A matrix enhancement and matrix pencil method for estimating the parameters of 2-D superimposed, complex-valued exponential signals was suggested in [11].
In [22], the concept of partial forward–backward averaging was proposed as a means of enhancing the frequency and damping estimation of multiple 2-D real-valued sinusoids (X-texture modes), where each mode is considered a mechanism for forcing the two plane waves towards mirrored directions-of-arrival. In [23], 2-D parameter estimation of a single damped/undamped real/complex tone was proposed, referred to as principal-singular-vector utilization for modal analysis (PUMA).

We present a new approach to the 2-D real-valued sinusoidal frequency estimation problem, based on a cross-correlation technique that resolves identical frequencies. The proposed idea builds upon the computationally efficient subspace method without eigendecomposition (SUMWE) [24, 25]. The paper is organized as follows. The signal model, together with a definition of the addressed problem, is presented in Sect. 2. The basic definitions and the proposed technique are detailed in Sect. 3, followed by simulation results and conclusions in Sects. 4 and 5, respectively. Throughout this paper, uppercase bold letters denote matrices, whereas lowercase bold letters denote vectors. The superscript T denotes matrix transposition.

2 Data Model and Problem Definition

Consider the following set of noisy data:

$$ r\left( {m,n} \right) = x\left( {m,n} \right) + e\left( {m,n} \right) $$
(1)

where 0 ≤ m ≤ N1−1 and 0 ≤ n ≤ N2−1. The model of the noiseless data x(m,n) is described by,

$$ x(m,n) = \sum\limits_{k = 1}^{D} {a_{k} \cos (\omega_{k} m + \varphi_{1k} )\cos (v_{k} n + \varphi_{2k} )} $$
(2)

The signal x(m,n) consists of D two-dimensional real-valued sinusoids described by the normalized 2-D frequencies {ωk, vk} (k = 1,2,…,D), the real amplitudes {ak} (k = 1,2,…,D), and the phases φ1k and φ2k, which are independent random variables uniformly distributed over [0,2π]. e(m,n) is zero-mean additive white Gaussian noise with variance σ2. It is further assumed that the phases φ1k and φ2k are independent of e(m,n).
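As an illustration, the data model of (1)–(2) can be synthesized in a few lines. This sketch uses the parameter values of the first experiment in Sect. 4.1 (N1 = N2 = 50, D = 2, unit amplitudes, frequencies (0.1π, 0.13π) and (0.13π, 0.16π)); the noise variance is an illustrative choice of ours, not a value from the paper.

```python
import numpy as np

# Synthesize r(m,n) = x(m,n) + e(m,n) per Eqs. (1)-(2).
rng = np.random.default_rng(0)

N1, N2, D = 50, 50, 2
a = np.ones(D)                          # real amplitudes a_k
w = np.array([0.10, 0.13]) * np.pi      # first-dimension frequencies omega_k
v = np.array([0.13, 0.16]) * np.pi      # second-dimension frequencies v_k
phi1 = rng.uniform(0, 2 * np.pi, D)     # phases phi_{1k}, uniform on [0, 2pi)
phi2 = rng.uniform(0, 2 * np.pi, D)     # phases phi_{2k}
sigma2 = 0.1                            # noise variance (illustrative)

m = np.arange(N1)[:, None]              # first (m) axis as a column
n = np.arange(N2)[None, :]              # second (n) axis as a row
x = sum(a[k] * np.cos(w[k] * m + phi1[k]) * np.cos(v[k] * n + phi2[k])
        for k in range(D))              # noiseless field, Eq. (2)
r = x + np.sqrt(sigma2) * rng.standard_normal((N1, N2))   # Eq. (1)
```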

Let us define two M × 1 snapshot vectors, under the assumption M > D, as follows

$$ {\mathbf{y}}_{\omega } \left( {{\text{m}},{\text{n}}} \right)\,\triangleq\,\frac{1}{2}\left[ {{\mathbf{y}}_{1} \left( {{\text{m}},{\text{n}}} \right) + {\mathbf{y}}_{2} \left( {{\text{m}},{\text{n}}} \right)} \right] $$
(3a)
$$ {\mathbf{y}}_{\text{v}} \left( {{\text{m}},{\text{n}}} \right)\,\triangleq\,\frac{1}{2}\left[ {{\mathbf{y}}_{3} \left( {{\text{m}},{\text{n}}} \right) + {\mathbf{y}}_{4} \left( {{\text{m}},{\text{n}}} \right)} \right] $$
(3b)

where,

$$ {\mathbf{y}}_{ 1} \left( {{\text{m}},{\text{n}}} \right)\,\triangleq\,\left[ {{\text{r}}\left( {{\text{m}},{\text{n}}} \right){\text{ r}}\left( {{\text{m}} + 1,{\text{n}}} \right) \ldots {\text{r}}\left( {{\text{m}} + {\text{M}} - 1,{\text{n}}} \right)} \right]^{\text{T}} $$
(4a)
$$ {\mathbf{y}}_{2} \left( {{\text{m}},{\text{n}}} \right)\,\triangleq\,\left[ {{\text{r}}\left( {{\text{m}},{\text{n}}} \right){\text{ r}}\left( {{\text{m}} - 1,{\text{n}}} \right) \ldots {\text{r}}\left( {{\text{m}} - {\text{M}} + 1,{\text{n}}} \right)} \right]^{\text{T}} $$
(4b)
$$ {\mathbf{y}}_{3} \left( {{\text{m}},{\text{n}}} \right)\,\triangleq\,\left[ {{\text{r}}\left( {{\text{m}},{\text{n}}} \right){\text{ r}}\left( {{\text{m,n}} + 1} \right) \ldots {\text{r}}\left( {{\text{m,n}} + {\text{M}} - 1} \right)} \right]^{\text{T}} $$
(4c)
$$ {\mathbf{y}}_{4} \left( {{\text{m}},{\text{n}}} \right)\,\triangleq\,\left[ {{\text{r}}\left( {{\text{m}},{\text{n}}} \right){\text{ r}}\left( {{\text{m,n}} - 1} \right) \ldots {\text{r}}\left( {{\text{m,n}} - {\text{M}} + 1} \right)} \right]^{\text{T}} $$
(4d)

By substituting (4a, 4b) into (3a) and (4c, 4d) into (3b), we obtain the following pair of expressions for the two M × 1 snapshot vectors:

$$ y_{\omega } \left( {m,n} \right) = A(\omega )s\left( {m,n} \right) + g(m,n) $$
(5)
$$ y_{v} \left( {m,n} \right) = A(v)s\left( {m,n} \right) + h(m,n) $$
(6)

where A(ω) = [γ(ω1)…γ(ωD)] and A(v) = [ρ(v1)…ρ(vD)] are M × D matrices, and s(m,n) = [a1cos(ω1m + φ11)cos(v1n + φ21)…aDcos(ωDm + φ1D)cos(vDn + φ2D)]T is the D × 1 signal vector. Here γ(ωi) and ρ(vi) are M × 1 vectors defined respectively as γ(ωi) = [1 cos(ωi)…cos((M−1)ωi)]T and ρ(vi) = [1 cos(vi)…cos((M−1)vi)]T. The modified M × 1 error vectors g(m,n) and h(m,n) are defined respectively as g(m,n) ≜ [g1(m,n) g2(m,n)…gM(m,n)]T and h(m,n) ≜ [h1(m,n) h2(m,n)…hM(m,n)]T, where gj(m,n) = 1/2[e(m+j−1,n) + e(m−j+1,n)] and hj(m,n) = 1/2[e(m,n+j−1) + e(m,n−j+1)]. The matrices A(ω) and A(v) are of full rank because all of their columns are linearly independent.
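A minimal sketch of the snapshot construction (3a)–(4d): yω averages forward and backward length-M windows along the first (m) dimension, and yv does the same along the second (n) dimension. The function names and the index-range comments are ours; valid indices require M−1 ≤ m ≤ N1−M, and likewise for n.

```python
import numpy as np

def y_w(r, m, n, M):
    """Averaged snapshot vector y_w(m,n) of Eq. (3a)."""
    y1 = r[m:m + M, n]                   # (4a): r(m,n) ... r(m+M-1,n)
    y2 = r[m - M + 1:m + 1, n][::-1]     # (4b): r(m,n) ... r(m-M+1,n)
    return 0.5 * (y1 + y2)

def y_v(r, m, n, M):
    """Averaged snapshot vector y_v(m,n) of Eq. (3b)."""
    y3 = r[m, n:n + M]                   # (4c): r(m,n) ... r(m,n+M-1)
    y4 = r[m, n - M + 1:n + 1][::-1]     # (4d): r(m,n) ... r(m,n-M+1)
    return 0.5 * (y3 + y4)
```

A quick property check: for a linear ramp r(m,n) = m, each element of y_w is the average of r(m+j,n) and r(m−j,n), i.e., the constant m.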

2.1 Data Model Modification

We first obtain two new data models,

$$ {\mathbf{z}}_{\omega } \left( {{\text{m}},{\text{n}}} \right) = {\mathbf{A}}\left( \omega \right)\Upomega_{\omega } {\mathbf{s}}\left( {{\text{m}},{\text{n}}} \right) + {\mathbf{q}}_{\text{r}} \left( {{\text{m}},{\text{n}}} \right) $$
(7)
$$ {\mathbf{z}}_{\text{v}} \left( {{\text{m}},{\text{n}}} \right) = {\mathbf{A}}\left( {\text{v}} \right)\Upomega_{\text{v}} {\mathbf{s}}\left( {{\text{m}},{\text{n}}} \right) + {\mathbf{q}}_{\text{e}} \left( {{\text{m}},{\text{n}}} \right) $$
(8)

by applying the following mathematical operations:

$$ z_{\omega } (m,n) = \frac{1}{4}\mathop \sum \limits_{j = 1}^{4} z_{j} (m,n) $$
(9a)
$$ {\mathbf{z}}_{\text{v}} \left( {{\text{m}},{\text{n}}} \right) = \frac{1}{4}\mathop \sum \limits_{{{\text{j}} = 5}}^{8} {\mathbf{z}}_{{\mathbf{j}}} ({\text{m}},{\text{n}}) $$
(9b)

where

$$ {\mathbf{z}}_{ 1} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}} + 1,{\text{n}}} \right){\text{ r}}\left( {{\text{m}},{\text{n}}} \right){\text{ r}}\left( {{\text{m}} - 1,{\text{n}}} \right) \ldots {\text{r}}\left( {{\text{m}} - {\text{M}} + 2,{\text{n}}} \right)} \right]^{\text{T}} $$
(10a)
$$ {\mathbf{z}}_{ 2} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}} - 1,{\text{n}}} \right){\text{ r}}\left( {{\text{m}},{\text{n}}} \right){\text{ r}}\left( {{\text{m}} + 1,{\text{n}}} \right) \ldots {\text{r}}\left( {{\text{m}} + {\text{M}} - 2,{\text{n}}} \right)} \right]^{\text{T}} $$
(10b)
$$ {\mathbf{z}}_{ 3} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}} - 1,{\text{n}}} \right){\text{ r}}\left( {{\text{m}} - 2,{\text{n}}} \right){\text{ r}}\left( {{\text{m}} - 3,{\text{n}}} \right) \ldots {\text{r}}\left( {{\text{m}} - {\text{M}},{\text{ n}}} \right)} \right]^{\text{T}} $$
(10c)
$$ {\mathbf{z}}_{ 4} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}} + 1,{\text{n}}} \right){\text{ r}}\left( {{\text{m}} + 2,{\text{n}}} \right){\text{ r}}\left( {{\text{m}} + 3,{\text{n}}} \right) \ldots {\text{r}}\left( {{\text{m}} + {\text{M}},{\text{n}}} \right)} \right]^{\text{T}} $$
(10d)
$$ {\mathbf{z}}_{ 5} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}},{\text{n}} + 1} \right){\text{ r}}\left( {{\text{m}},{\text{n}}} \right){\text{ r}}\left( {{\text{m}},{\text{n}} - 1} \right) \ldots {\text{r}}\left( {{\text{m}},{\text{n}} - {\text{M}} + 2} \right)} \right]^{\text{T}} $$
(11a)
$$ {\mathbf{z}}_{ 6} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}},{\text{n}} - 1} \right){\text{ r}}\left( {{\text{m}},{\text{n}}} \right){\text{ r}}\left( {{\text{m}},{\text{n}} + 1} \right) \ldots {\text{r}}\left( {{\text{m}},{\text{n}} + {\text{M}} - 2} \right)} \right]^{\text{T}} $$
(11b)
$$ {\mathbf{z}}_{ 7} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}},{\text{n}} - 1} \right) {\text{ r}}\left( {{\text{m}},{\text{n}} - 2} \right){\text{ r}}\left( {{\text{m}},{\text{n}} - 3} \right) \ldots {\text{r}}\left( {{\text{m}},{\text{n}} - {\text{M}}} \right)} \right]^{\text{T}} $$
(11c)
$$ {\mathbf{z}}_{ 8} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}},{\text{n}} + 1} \right){\text{ r}}\left( {{\text{m}},{\text{n}} + 2} \right){\text{ r}}\left( {{\text{m}},{\text{n}} + 3} \right) \ldots {\text{r}}\left( {{\text{m}},{\text{n}} + {\text{M}}} \right)} \right]^{\text{T}} $$
(11d)

where zi(m,n) for i = 1,2,…,8 are M × 1 observation vectors. Ωω and Ωv are two D × D diagonal matrices defined respectively as Ωω = diag{cos ω1,…,cos ωD} and Ωv = diag{cos v1,…,cos vD}. The two M × 1 modified noise vectors qr(m,n) and qe(m,n) are defined respectively as qr(m,n) = [qr1(m,n)…qrM(m,n)]T and qe(m,n) = [qe1(m,n) qe2(m,n)…qeM(m,n)]T, where qri(m,n) = 1/4[e(m−i+2,n) + e(m+i−2,n) + e(m+i,n) + e(m−i,n)] and qei(m,n) = 1/4[e(m,n−i+2) + e(m,n+i−2) + e(m,n+i) + e(m,n−i)] for i = 1,2,…,M.
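A sketch of the averaged observation vector zω(m,n) of Eqs. (9a), (10a)–(10d); zv(m,n) of (9b), (11a)–(11d) is the same construction along the n axis. The function name is ours. Element i (i = 1,…,M) averages r(m−i+2,n), r(m+i−2,n), r(m+i,n), and r(m−i,n), which requires M ≤ m ≤ N1−M−1.

```python
import numpy as np

def z_w(r, m, n, M):
    """Averaged observation vector z_w(m,n) of Eq. (9a)."""
    i = np.arange(M)                     # zero-based counterpart of i-1
    return 0.25 * (r[m + 1 - i, n]       # z_1, Eq. (10a)
                   + r[m - 1 + i, n]     # z_2, Eq. (10b)
                   + r[m - 1 - i, n]     # z_3, Eq. (10c)
                   + r[m + 1 + i, n])    # z_4, Eq. (10d)
```

As with yω, a linear ramp r(m,n) = m maps to the constant vector m, since the four shifted samples average out symmetrically.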

2.2 Further Modification of Data Model

As in Sect. 2.1, we derive another set of modified data models, described by

$$ {\mathbf{p}}_{\omega } \left( {{\text{m}},{\text{n}}} \right) = {\mathbf{JA}}\left( \omega \right)\Upomega_{\omega } {\mathbf{s}}\left( {{\text{m}},{\text{n}}} \right) + {\mathbf{q}}_{\text{w}} \left( {{\text{m}},{\text{n}}} \right) $$
(12)
$$ {\mathbf{p}}_{\text{v}} \left( {{\text{m}},{\text{n}}} \right) = {\mathbf{JA}}\left( {\text{v}} \right)\Upomega_{\text{v}} {\mathbf{s}}\left( {{\text{m}},{\text{n}}} \right) + {\mathbf{q}}_{\text{u}} \left( {{\text{m}},{\text{n}}} \right) $$
(13)

The above two data models are obtained by applying operations similar to those of (9a, 9b), that is,

$$ {\mathbf{p}}_{\omega } (m,n) = \frac{1}{4}\mathop \sum \limits_{j = 1}^{4} {\mathbf{p}}_{\text{j}} (m,n) $$
(14)
$$ {\mathbf{p}}_{v} (m,n) = \frac{1}{4}\mathop \sum \limits_{{{\text{j}} = 5}}^{8} {\mathbf{p}}_{\text{j}} ({\text{m}},{\text{n}}) $$
(15)

where

$$ {\mathbf{p}}_{ 1} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}} - {\text{M}} + 2,{\text{ n}}} \right) \ldots {\text{r}}\left( {{\text{m}} - 1,{\text{ n}}} \right){\text{ r}}\left( {{\text{m}},{\text{ n}}} \right){\text{ r}}\left( {{\text{m}} + 1,{\text{ n}}} \right)} \right]^{\text{T}} $$
(16a)
$$ {\mathbf{p}}_{ 2} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}} + {\text{M}} - 2,{\text{ n}}} \right) \ldots {\text{r}}\left( {{\text{m}} + 1,{\text{ n}}} \right){\text{ r}}\left( {{\text{m}},{\text{ n}}} \right){\text{ r}}\left( {{\text{m}} - 1,{\text{ n}}} \right)} \right]^{\text{T}} $$
(16b)
$$ {\mathbf{p}}_{ 3} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}} - {\text{M}},{\text{ n}}} \right) \ldots {\text{r}}\left( {{\text{m}} - 3,{\text{ n}}} \right){\text{ r}}\left( {{\text{m}} - 2,{\text{ n}}} \right) {\text{ r}}\left( {{\text{m}} - 1,{\text{ n}}} \right)} \right]^{\text{T}} $$
(16c)
$$ {\mathbf{p}}_{ 4} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}} + {\text{M}},{\text{ n}}} \right) \ldots {\text{r}}\left( {{\text{m}} + 3,{\text{n}}} \right){\text{ r}}\left( {{\text{m}} + 2,{\text{n}}} \right){\text{ r}}\left( {{\text{m}} + 1,{\text{n}}} \right)} \right]^{\text{T}} $$
(16d)
$$ {\mathbf{p}}_{ 5} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}},{\text{ n}} - {\text{M}} + 2} \right) \ldots {\text{r}}\left( {{\text{m}},{\text{ n}} - 1} \right){\text{ r}}\left( {{\text{m}},{\text{ n}}} \right){\text{ r}}\left( {{\text{m}},{\text{n}} + 1} \right)} \right]^{\text{T}} $$
(17a)
$$ {\mathbf{p}}_{ 6} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}},{\text{ n}} + {\text{M}} - 2} \right) \ldots {\text{r}}\left( {{\text{m}},{\text{ n}} + 1} \right){\text{ r}}\left( {{\text{m}},{\text{ n}}} \right){\text{ r}}\left( {{\text{m }},{\text{n}} - 1} \right)} \right]^{\text{T}} $$
(17b)
$$ {\mathbf{p}}_{ 7} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}},{\text{ n}} - {\text{M}}} \right) \ldots {\text{r}}\left( {{\text{m}},{\text{ n}} - 3} \right){\text{ r}}\left( {{\text{m}},{\text{ n}} - 2} \right){\text{ r}}\left( {{\text{m}},{\text{ n}} - 1} \right)} \right]^{\text{T}} $$
(17c)
$$ {\mathbf{p}}_{ 8} \left( {{\text{m}},{\text{n}}} \right) \,\triangleq\, \left[ {{\text{r}}\left( {{\text{m}},{\text{ n}} + {\text{M}}} \right) \ldots {\text{r}}\left( {{\text{m}},{\text{ n}} + 3} \right){\text{ r}}\left( {{\text{m}},{\text{ n}} + 2} \right){\text{ r}}\left( {{\text{m}},{\text{ n}} + 1} \right)} \right]^{\text{T}} $$
(17d)

where pi(m,n) for i = 1,2,…,8 are M × 1 observation vectors and J is the M × M counter-identity (exchange) matrix, with ones on the principal anti-diagonal. The two M × 1 modified noise vectors qw(m,n) and qu(m,n) are defined respectively as qw(m,n) = Jqr(m,n) and qu(m,n) = Jqe(m,n).
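Comparing (16a)–(16d) with (10a)–(10d) shows that each pj is the corresponding zj with its entries in reversed order, so that pω(m,n) = Jzω(m,n). This can be checked numerically (function names are ours):

```python
import numpy as np

def z_w(r, m, n, M):
    """Averaged observation vector z_w(m,n) of Sect. 2.1, element-wise form."""
    i = np.arange(M)
    return 0.25 * (r[m + 1 - i, n] + r[m - 1 + i, n]
                   + r[m - 1 - i, n] + r[m + 1 + i, n])

def p_w(r, m, n, M):
    """Backward counterpart p_w(m,n) of Sect. 2.2: reverse z_w with J."""
    J = np.eye(M)[::-1]              # counter-identity (exchange) matrix
    return J @ z_w(r, m, n, M)       # p_w(m,n) = J z_w(m,n)
```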

3 Proposed Algorithm

In this section, we present the algorithm for 2-D frequency estimation for multiple real-valued sinusoidal signals.

3.1 Estimation of First Dimension Frequencies

Under the data-model assumptions, from (5) and (7) we obtain the cross-correlation matrix Ryz1 between the received data yω(m,n) and zω(m,n) as

$$ {\mathbf{R}}_{\text{yz1}} = {\text{ E}}\left\{ {{\mathbf{y}}_{\omega } \left( {{\text{m}},{\text{n}}} \right){\mathbf{z}}_{\omega }^{\text{T}} \left( {{\text{m}},{\text{n}}} \right)} \right\} = \, {\mathbf{A}}\left( \omega \right){\mathbf{R}}_{\text{ss}} \Upomega_{\omega } {\mathbf{A}}^{\text{T}} \left( \omega \right) $$
(18)

where Rss is the source-signal correlation matrix defined by \( {\mathbf{R}}_{{{\mathbf{ss}}}} \,\triangleq\, {\text{E}}\left\{ {{\mathbf{s}}\left( {{\text{m}},{\text{n}}} \right){\mathbf{s}}^{\text{T}} \left( {{\text{m}},{\text{n}}} \right)} \right\} \). From (12), we have another data model, pω(m,n), in a backward form such that pω(m,n) = Jzω(m,n); similarly, from (5) and (12), we obtain another cross-correlation matrix between the two received data vectors:

$$ {\mathbf{R}}_{\text{yp1}} = {\text{ E}}\left\{ {{\mathbf{y}}_{\omega } \left( {{\text{m}},{\text{n}}} \right){\mathbf{p}}_{\omega }^{\text{T}} \left( {{\text{m}},{\text{n}}} \right)} \right\} = {\mathbf{A}}\left( \omega \right){\mathbf{R}}_{\text{ss}} {\mathbf{J}}\Upomega_{\omega } {\mathbf{A}}^{\text{T}} \left( \omega \right) $$
(19)

In the noise-free case Ryp1 = JRyz1, but in practice, i.e., when the signal is corrupted by noise, the relation holds only approximately, \( {\mathbf{R}}_{\text{yp1}} \cong {\mathbf{JR}}_{\text{yz1}} \). Under the above assumptions, we formulate an extended cross-correlation matrix of size M × 2M as

$$ {\mathbf{R}}_{\omega } = [{\mathbf{R}}_{\text{yz1}} \;\;{\mathbf{R}}_{\text{yp1}} ] = [{\mathbf{R}}_{\text{yz1}} \;\;{\mathbf{JR}}_{\text{yz1}} ] = {\mathbf{A}}\left( \omega \right)\left[ {{\mathbf{R}}_{\text{ss}} \Upomega_{\omega } {\mathbf{A}}^{\text{T}} \left( \omega \right)\;\;{\mathbf{R}}_{\text{ss}} {\mathbf{J}}\Upomega_{\omega } {\mathbf{A}}^{\text{T}} \left( \omega \right)} \right] $$
(20)

Since A(ω) is a full-rank matrix, we can divide it into two submatrices as A(ω) = [(A1(ω))T (A2(ω))T]T, where A1(ω) and A2(ω) are the D × D and (M−D) × D submatrices consisting of the first D rows and the last (M−D) rows of A(ω), respectively. There exists a D × (M−D) linear operator P1 between A1(ω) and A2(ω) [26] such that A2(ω) = P1T A1(ω). Using these relations, we can partition (20) into the following two matrices.

$$ \begin{aligned} {\mathbf{R}}_{\omega } & = \left[ {\left( {{\mathbf{A}}_{1} \left( \omega \right)} \right)^{\text{T}} \;\left( {{\mathbf{A}}_{2} \left( \omega \right)} \right)^{\text{T}} } \right]^{\text{T}} \left[ {{\mathbf{R}}_{\text{ss}} \Upomega_{\omega } {\mathbf{A}}^{\text{T}} \left( \omega \right)\;\;{\mathbf{R}}_{\text{ss}} {\mathbf{J}}\Upomega_{\omega } {\mathbf{A}}^{\text{T}} \left( \omega \right)} \right] \\ & = \left[ {\left( {{\mathbf{A}}_{1} \left( \omega \right)} \right)^{\text{T}} \;\left( {{\mathbf{P}}_{1}^{\text{T}} {\mathbf{A}}_{1} \left( \omega \right)} \right)^{\text{T}} } \right]^{\text{T}} \left[ {{\mathbf{R}}_{\text{ss}} \Upomega_{\omega } {\mathbf{A}}^{\text{T}} \left( \omega \right)\;\;{\mathbf{R}}_{\text{ss}} {\mathbf{J}}\Upomega_{\omega } {\mathbf{A}}^{\text{T}} \left( \omega \right)} \right] \\ & \triangleq \left[ {{\mathbf{R}}_{\omega 1}^{\text{T}} \;{\mathbf{R}}_{\omega 2}^{\text{T}} } \right]^{\text{T}} \\ \end{aligned} $$
(21)

where Rω1 and Rω2 consist of the first D rows and the last (M−D) rows of Rω, respectively, and \( {\mathbf{R}}_{\omega 2} = \mathop {\text{P}}\nolimits_{1}^{T} {\mathbf{R}}_{\omega 1} \). Hence, the linear operator P1 can be found from Rω1 and Rω2 as in [26]. A least-squares solution [27] for the entries of the propagator matrix P1 satisfying the relation \( {\mathbf{R}}_{\omega 2} = \mathop {\text{P}}\nolimits_{1}^{T} {\mathbf{R}}_{\omega 1} \) is obtained by minimizing the cost function

$$ \xi \left( {{\mathbf{P}}_{1} } \right) = \left\| {{\mathbf{R}}_{\omega 2} - {\mathbf{P}}_{1}^{\text{T}} {\mathbf{R}}_{\omega 1} } \right\|_{\text{F}}^{2} $$
(22)

where \( ||\cdot||_{\text{F}}^{2} \) denotes the squared Frobenius norm. The cost function ξ(P1) is a quadratic (convex) function of P1, whose minimization gives the unique least-squares solution for P1:

$$ {\mathbf{P}}_{ 1} = \left( {{\mathbf{R}}_{\omega 1} {\mathbf{R}}^{{\mathbf{T}}}_{\omega 1} } \right)^{ - 1} {\mathbf{R}}_{\omega 1} {\mathbf{R}}^{{\mathbf{T}}}_{\omega 2} $$
(23)

Further, by defining the matrix \( Q_{\omega } = [P_{1}^{T} \; - I_{M - D} ]^{T} \), which satisfies Q Tω A(ω) = 0(M−D)×D, the real-valued harmonic frequencies of the first dimension {ωk}, k = 1,2,…,D, can be estimated as in [25]. Thus, when the number of snapshots is finite, the first-dimension frequencies can be estimated by minimizing the cost function \( \hat{f}\left( \omega \right) = a^{T} \left( \omega \right) \hat{E} a\left( \omega \right) \), where \( a(\omega ) = [1\;\cos \omega \ldots \cos \left( {M - 1} \right)\omega ]^{T} \) and \( \hat{E} \,\triangleq\, \hat{Q}_{\omega } (\hat{Q}_{\omega }^{T} \hat{Q} _{\omega } )^{ - 1} \hat{Q}_{\omega }^{T} \). The orthonormality of \( \hat{Q}_{\omega } \) is exploited to improve the estimation performance, while E is computed implicitly via the matrix inversion lemma as in [24]; \( \hat{E} \) and \( \hat{Q}_{\omega } \) denote the estimates of E and Qω.
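The propagator relations (21)–(23) and the null-space property QωT A(ω) = 0 can be verified numerically. In this sketch (dimensions and frequencies are assumed values of ours), a random full-rank matrix G stands in for the bracketed D × 2M factor of (20) in the noise-free case:

```python
import numpy as np

rng = np.random.default_rng(3)
M, D = 20, 2
w = np.array([0.10, 0.13]) * np.pi
A = np.cos(np.outer(np.arange(M), w))       # columns gamma(w_k), M x D
G = rng.standard_normal((D, 2 * M))         # stand-in for the D x 2M factor
Rw = A @ G                                  # rank-D extended matrix, as in (20)

Rw1, Rw2 = Rw[:D], Rw[D:]                   # first D and last M-D rows
# closed-form least-squares propagator, Eq. (23)
P1 = np.linalg.inv(Rw1 @ Rw1.T) @ Rw1 @ Rw2.T
Qw = np.vstack([P1, -np.eye(M - D)])        # Q_w = [P1^T  -I_{M-D}]^T
```

Since Rw1 = A1G and Rw2 = A2G with A1 invertible, (23) yields P1T = A2A1−1, so QωTA = P1TA1 − A2 vanishes, as the check below confirms.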

Steps for estimating ωk:

  • Calculate the estimate \( \hat{R}_{\omega } \) of the cross-correlation matrix Rω using (20).

  • Partition \( \hat{R}_{\omega } \) and determine \( \hat{R}_{\omega 1} \) and \( \hat{R}_{\omega 2} \).

  • Determine the estimate of the propagator matrix P1 using (23).

  • Define \( \hat{Q}_{\omega } = [\hat{P}_{1}^{T} - I_{M - D} ]^{T} \) and from \( \hat{Q}_{\omega } \) compute \( \hat{E} \,\triangleq\, \hat{Q}_{\omega } (\hat{Q}_{\omega }^{T} \hat{Q}_{\omega } )^{ - 1} \hat{Q}_{\omega }^{T} \) using the matrix inversion lemma.

  • The first-dimension frequencies {ωk}, k = 1,2,…,D, are estimated by minimizing the cost function \( \hat{f}\left( \omega \right) = a^{T} \left( \omega \right) \hat{E}a\left( \omega \right) \).

3.2 Estimation of Second Dimension Frequencies

The method adopted for estimating the first-dimension frequencies ωi, i = 1,2,…,D, can also be used for estimating the second-dimension frequencies vi, i = 1,2,…,D; that is, the procedure of Sect. 3.1 is applied to the data models developed in (6), (8), and (13).

The proposed method has notable advantages over the conventional MUSIC algorithm [15], such as computational simplicity and a less restrictive noise model. Although it requires a peak search, no eigendecomposition (EVD or SVD) is involved in the proposed algorithm, unlike MUSIC, where the EVD of the autocorrelation matrix is needed. It also provides quite efficient estimates of the frequencies, and the estimated frequencies in both dimensions are automatically paired.

4 Simulation Results

Computer simulations have been carried out to evaluate the frequency estimation performance of the proposed algorithm for multiple 2-D real-valued sinusoids in the presence of additive white Gaussian noise. The average root-mean-square error (RMSE) is employed as the performance measure; in addition, further simulations were conducted to assess the detection capability and the bias of estimation. Besides the CRLB, the performance of the proposed algorithm is compared with those of the 2-D MUSIC and 2-D ESPRIT [28] algorithms for real-valued sinusoids. Four types of analysis have been performed.

4.1 Analysis of Frequency Spectra

The signal parameters are N1 = N2 = 50, and the dimension of the snapshot vector is M = 20. The number of undamped 2-D real-valued sinusoids is D = 2, with amplitudes {ak = 1} for k = 1,2,…,D. The first-dimension and second-dimension frequencies are (ω1,ω2) = (0.1π, 0.13π) and (v1,v2) = (0.13π, 0.16π), respectively. Note that the frequency separation is 0.03, which is smaller than the Fourier resolution limit 1/M (= 0.05). This means that a classic FFT-based method cannot resolve these two frequencies; moreover, the proposed method can resolve identical frequencies present in different dimensions (ω2 = v1 = 0.13π). Figure 1 displays the spectra of the proposed algorithm at SNR = 10 dB. We can see from Fig. 1 that the frequency parameters in both dimensions are accurately resolved. The estimated frequencies are shown in Table 1.

Fig. 1
figure 1

Spectrum of frequencies in both dimensions (M = 20)

Table 1 Estimated frequencies considering M = 20

In the second analysis, shown in Fig. 2, we considered the signal parameters N1 = N2 = 100 and snapshot vector dimension M = 50, keeping all other parameters the same as in the previous experiment. The detection of frequencies in both dimensions is found to be more accurate. The estimated frequencies of this analysis are shown in Table 2.

Fig. 2
figure 2

Spectrum of frequencies in both dimensions (M = 50)

Table 2 Estimated frequencies considering M = 50

4.2 Performance Analysis Considering RMSE

The same signal parameters as in the first analysis of Sect. 4.1 are considered. We compared the root-mean-square error (RMSE) of the estimates for the proposed algorithm, MUSIC, and the 2-D ESPRIT algorithm as a function of SNR. Here, a Monte Carlo simulation of 500 runs was performed. Figure 3a shows the RMSEs and the corresponding CRB for the first 2-D frequencies {ωk}, while Fig. 3b shows those for the second 2-D frequencies {vk} (k = 1,2,…,D). It is clearly seen that the proposed algorithm outperforms the ESPRIT algorithm, and at lower SNR its performance is similar to that of the MUSIC algorithm. As the SNR increases, the proposed algorithm performs the same as the MUSIC algorithm.

Fig. 3
figure 3

a RMSE (dB) for first dimension frequencies vs SNRs (dB). b RMSE (dB) for second dimension frequencies vs SNRs (dB)

4.3 Performance Analysis Considering Probability of Correct Estimation and Bias of Estimation

In this analysis, we considered the probability of correct estimation of frequencies as the performance measure. Taking the same signal parameters as in the last two sections, we determined the probability of correct estimation of the 2-D real-valued sinusoidal signal frequencies for both dimensions by varying the SNR. The obtained results are shown in Fig. 4a, b. From this analysis, it is evident that the proposed method performs far better than 2-D ESPRIT and behaves similarly to the conventional MUSIC algorithm, but without any eigendecomposition (EVD/SVD). Similarly, we analyzed the bias of estimation for each dimension, and the results are plotted in Fig. 5a, b, respectively. From the bias analysis, it is clear that the proposed method performs much better than ESPRIT and is almost identical to the conventional MUSIC algorithm over the considered SNR range.

Fig. 4
figure 4

a Probability of correct estimation for first dimension frequencies vs SNRs (dB). b Probability of correct estimation for second dimension frequencies vs SNRs (dB)

Fig. 5
figure 5

a Bias of the estimator for first dimension frequencies vs SNR’s (dB). b Bias of the estimator for second dimension frequencies vs SNR’s (dB)

4.4 Performance Analysis Considering Computational Time

In this section, we compared the performance of the proposed method and the conventional MUSIC algorithm in terms of computational time. Using the same signal parameters at a fixed SNR of 10 dB, we varied the snapshot vector dimension M; the results are plotted in Fig. 6. From Fig. 6, it is clear that the proposed method is less time-consuming than the conventional MUSIC algorithm.

Fig. 6
figure 6

Average computational time vs M at SNR = 10 dB

5 Conclusion

We have proposed a new approach, based on a subspace method without eigendecomposition using cross-correlation matrices, for estimating the frequencies of multiple real-valued 2-D sinusoidal signals embedded in additive white Gaussian noise. We have analytically quantified the performance of the proposed algorithm. It is shown that our algorithm remains operational when identical frequencies exist in both dimensions. Simulation results show that the proposed algorithm offers performance comparable to the MUSIC algorithm, but at a lower computational complexity, and exhibits far superior performance compared to the ESPRIT algorithm. The frequency estimates thus obtained are automatically paired without an extra pairing step.