Abstract
In this paper, we propose a computationally faster yet conceptually simple methodology to estimate the parameters of a two-dimensional (2-D) sinusoidal model in the presence of additive white noise. We establish large-sample properties, such as consistency and asymptotic normality, of these low-complexity estimators and show that they are theoretically as efficient as the ordinary least squares estimators. To assess the numerical performance, we conduct extensive simulation studies. The results indicate that the proposed estimators can successfully replace the least squares estimators for sample sizes as small as 20 \(\times \) 20 and for signal-to-noise ratio (SNR) as low as 12 dB.
1 Introduction
The problem of estimation of two-dimensional (2-D) sinusoidal signals is of importance in a wide range of applications such as analysis of geophysical data [5], image restoration [2], array processing [9], radio astronomy [23], synthetic aperture radar imaging [17], nuclear magnetic resonance spectroscopy [16], medical imaging [18], wireless communications [24], health assessment of living trees [3], source localisation [1], to name a few.
Due to its extensive applicability, numerous techniques have been proposed for the parameter estimation of this signal model. Some of the prominent works are by Bansal et al. [4], Chen et al. [6], Clark and Scharf [7], Cohen and Francos [8], Francos et al. [10], Hua [12], Kundu and Gupta [13], Kundu and Nandi [14], Prasad et al. [21], Rao et al. [22], Zhang and Mandrekar [26] and the list does not end here. A more thorough overview of references in this area can be found in Peng et al. [19].
This paper addresses the problem of parameter estimation of a 2-D sinusoidal model, mathematically expressed as follows:

$$\begin{aligned} y(m, n) = \sum _{k=1}^{p} \left[ A_k^0 \cos (m \mu _k^0 + n \lambda _k^0) + B_k^0 \sin (m \mu _k^0 + n \lambda _k^0)\right] + X(m, n); \quad m = 1, \ldots , M; \ n = 1, \ldots , N. \end{aligned}$$
(1)
Here, y(m, n) is the observed signal characterised by amplitude parameters \(A_k^0\)s and \(B_k^0\)s and frequency parameters \(\mu _k^0\)s and \(\lambda _k^0\)s. The random error component X(m, n) is a 2-D sequence of independently and identically distributed (i.i.d.) random variables with mean 0 and variance \(\sigma ^2\). The fundamental problem here is to estimate the unknown parameters, in particular the nonlinear frequency parameters \(\mu _k^0\)s and \(\lambda _k^0\)s, from a finite set of observations of size \(M\times N\) corrupted with additive noise.
The objective is to introduce an algorithm that is computationally feasible to implement in practice as well as one that provides statistically optimal estimators. In this paper, we show that the accuracy of the proposed estimators is as good as that of the usual least squares estimators (LSEs). This effectiveness of the algorithm is demonstrated through extensive numerical simulations along with an in-depth theoretical analysis.
It is important to note that the usual least squares estimator is one of the most conventional methods of parameter estimation for a postulated model embedded in noise. For a 2-D sinusoidal model with multiple components as described in Eq. (1) of the paper, the LSEs are obtained by minimising the error sum of squares:

$$\sum _{m=1}^{M} \sum _{n=1}^{N} \left( y(m,n) - \sum _{k=1}^{p} \left[ A_k \cos (m \mu _k + n \lambda _k) + B_k \sin (m \mu _k + n \lambda _k)\right] \right) ^2$$
with respect to the unknown parameters. It has been proved that the LSEs of the parameters of this model are strongly consistent and asymptotically normally distributed. Moreover, if the errors are assumed to be normally distributed, the LSEs are asymptotically optimal. For reference, one may look at the paper by Kundu and Gupta [13].
For the purpose of numerical comparison, in Sect. 4, along with the proposed efficient estimators and LSEs, we also compute approximate least squares estimators (ALSEs). The ALSEs are a popular alternative to the LSEs and are obtained by maximising the 2-D periodogram function, defined as follows:

$$I(\mu , \lambda ) = \frac{1}{MN} \left| \sum _{m=1}^{M} \sum _{n=1}^{N} y(m,n) e^{-i(m \mu + n \lambda )} \right| ^2$$
with respect to the nonlinear parameters \(\mu \) and \(\lambda \). Once the nonlinear parameters are obtained, one can use simple linear regression to obtain the linear parameter estimates. Kundu and Nandi [14] show that the performance of the ALSEs is almost identical to that of the LSEs. Moreover, they provide theoretical proofs of their strong consistency and asymptotic equivalence to the LSEs.
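To make the ALSE recipe concrete, the periodogram and its maximisation over the Fourier grid can be sketched as follows (a minimal NumPy rendering of the standard definition above; all function and variable names are ours):

```python
import numpy as np

def periodogram_2d(Y, mu, lam):
    """I(mu, lambda) = |sum_{m,n} y(m,n) exp(-i(m*mu + n*lam))|^2 / (M*N)."""
    M, N = Y.shape
    m = np.arange(1, M + 1)[:, None]   # index m runs down the rows
    n = np.arange(1, N + 1)[None, :]   # index n runs across the columns
    return np.abs(np.sum(Y * np.exp(-1j * (m * mu + n * lam)))) ** 2 / (M * N)

def alse_grid(Y):
    """Coarse ALSE-style estimate: maximise the periodogram over the
    2-D Fourier grid (pi*j/M, pi*k/N)."""
    M, N = Y.shape
    vals = [(periodogram_2d(Y, mu, lam), mu, lam)
            for mu in np.pi * np.arange(1, M) / M
            for lam in np.pi * np.arange(1, N) / N]
    _, mu_hat, lam_hat = max(vals)
    return mu_hat, lam_hat
```

Once \(({\hat{\mu }}, {\hat{\lambda }})\) is found, the linear parameters follow from ordinary regression of y(m, n) on \(\cos (m{\hat{\mu }} + n{\hat{\lambda }})\) and \(\sin (m{\hat{\mu }} + n{\hat{\lambda }})\), as described in the text.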
The rest of the paper is organised as follows. In the next section, we first explain the proposed methodology for a one-component 2-D sinusoidal model. A sequential algorithm for a multiple sinusoidal model is provided subsequently. In Sect. 3, we provide the model assumptions and derive the large sample properties of the proposed estimators. The numerical results are presented in Sect. 4. The paper is concluded in Sect. 5, and thereafter the proofs of the asymptotic results are provided in the appendices.
2 Proposed Methodology
Here, we describe the proposed algorithm for the estimation of the frequencies of the following 2-D sinusoidal model:

$$\begin{aligned} y(m, n) = A^0 \cos (m \mu ^0 + n \lambda ^0) + B^0 \sin (m \mu ^0 + n \lambda ^0) + X(m, n); \quad m = 1, \ldots , M; \ n = 1, \ldots , N. \end{aligned}$$
(2)
Let us first fix \(n = 1\); then the model Eq. (2) reduces to the following:

$$y(m, 1) = A^0 \cos (m \mu ^0 + \lambda ^0) + B^0 \sin (m \mu ^0 + \lambda ^0) + X(m, 1); \quad m = 1, \ldots , M.$$
Using elementary trigonometric formulae, the above equation can be rewritten as:

$$y(m, 1) = A^0(1) \cos (m \mu ^0) + B^0(1) \sin (m \mu ^0) + X(m, 1); \quad m = 1, \ldots , M,$$

where \(A^0(1) = A^0 \cos (\lambda ^0) + B^0 \sin (\lambda ^0)\) and \(B^0(1) = B^0 \cos (\lambda ^0) - A^0 \sin (\lambda ^0)\) are functions of \(A^0\), \(B^0\) and \(\lambda ^0\). It is worth noting that the reduced model equation is that of a one-dimensional sinusoidal model with amplitudes \(A^0(1)\) and \(B^0(1)\) and frequency parameter \(\mu ^0\). In general, we fix \(n = n_0\), \(n_0 \in \{1, \ldots , N\}\), and we have the corresponding 1-D model equation:

$$y(m, n_0) = A^0(n_0) \cos (m \mu ^0) + B^0(n_0) \sin (m \mu ^0) + X(m, n_0); \quad m = 1, \ldots , M.$$
Thus, for each \(n_0\), we have a 1-D sinusoidal model with different amplitudes but the same frequency parameter. We can now estimate this frequency parameter by fitting N 1-D models to the corresponding columns of the data matrix. In the absence of a distributional assumption, the most natural choice of estimation method seems to be the least squares method. Thus, the estimator of \(\mu ^0\) can be obtained by minimising the following function:

$$\sum _{m=1}^{M} \big ( y(m, n_0) - A(n_0) \cos (m \mu ) - B(n_0) \sin (m \mu ) \big )^2$$
for each \(n_0 \in \{1, \ldots , N\}\). Note that this is a 1-D minimisation problem, as the linear parameters \(A(n_0)\) and \(B(n_0)\) can first be separated out using linear regression. This brings down the problem to minimising the reduced functions:

$$R_{n_0}(\mu ) = \min _{A(n_0),\, B(n_0)} \sum _{m=1}^{M} \big ( y(m, n_0) - A(n_0) \cos (m \mu ) - B(n_0) \sin (m \mu ) \big )^2$$
with respect to \(\mu \). However, this process involves solving N such problems, and since the underlying errors are assumed to be independently and identically distributed, instead of minimising N different objective functions, we propose to minimise the sum of these objective functions, expressed as follows:

$$R_{MN}^{(1)}(\mu ) = \sum _{n_0=1}^{N} \min _{A(n_0),\, B(n_0)} \sum _{m=1}^{M} \big ( y(m, n_0) - A(n_0) \cos (m \mu ) - B(n_0) \sin (m \mu ) \big )^2$$
with respect to \(\mu \) and get an estimate \({\hat{\mu }}\) of \(\mu ^0\). A similar estimate of \(\lambda ^0\), say \({\hat{\lambda }}\), can be obtained by minimising the following function with respect to \(\lambda \):

$$\sum _{m_0=1}^{M} \min _{A(m_0),\, B(m_0)} \sum _{n=1}^{N} \big ( y(m_0, n) - A(m_0) \cos (n \lambda ) - B(m_0) \sin (n \lambda ) \big )^2.$$
It is important to note that the optimisation problem is nonlinear, and an iterative algorithm has to be employed to solve it. We use the Nelder–Mead algorithm to optimise this function in our simulations. The objective function has several local minima; therefore, for the algorithm to converge to the global minimum rather than a local one, we need precise initial values. To find these initial guesses in practice, when the true values are unknown, we evaluate the objective function at the Fourier frequencies, that is, at the points \(\frac{\pi j}{M},\ j = 1, \ldots , M\) (respectively \(\frac{\pi k}{N},\ k = 1, \ldots , N\)), and use the minimiser over this grid as the starting value.
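The scheme described above can be sketched compactly (our own NumPy/SciPy rendering; the paper uses Nelder–Mead for the local refinement, for which a bounded 1-D Brent search stands in here, and all names are ours):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def summed_profile_rss(mu, Y):
    """Sum over columns n0 of the reduced (profiled) residual sum of squares
    after regressing column n0 on cos(m*mu) and sin(m*mu)."""
    M, _ = Y.shape
    m = np.arange(1, M + 1)
    Z = np.column_stack([np.cos(m * mu), np.sin(m * mu)])  # common design matrix
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)           # all N columns at once
    return float(np.sum((Y - Z @ coef) ** 2))

def estimate_mu(Y):
    """Coarse search over the Fourier frequencies pi*j/M, then local refinement."""
    M, _ = Y.shape
    grid = np.pi * np.arange(1, M) / M
    mu0 = grid[int(np.argmin([summed_profile_rss(g, Y) for g in grid]))]
    res = minimize_scalar(summed_profile_rss, args=(Y,),
                          bounds=(mu0 - np.pi / M, mu0 + np.pi / M),
                          method="bounded")
    return res.x
```

The estimate of \(\lambda ^0\) follows symmetrically from the transposed data matrix, i.e. `estimate_mu(Y.T)`, since fixing a row yields a 1-D sinusoid in n with frequency \(\lambda ^0\).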
The proposed method is not only computationally more efficient than the usual least squares method but also conceptually simple. It can be easily extended to the more general model with multiple sinusoids described in (1). The idea is based on modifying the sequential method proposed by Prasad et al. [21] and uses the fact that any two sinusoidal components present in the model are orthogonal to each other. We describe the sequential algorithm below.
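In outline, the sequential scheme fits one component, removes its fitted contribution from the data, and repeats on the residual matrix. A minimal sketch under that reading follows (all names are ours; the coarse periodogram search and the averaged-regression amplitude formulas below are simple stand-ins for the paper's actual steps):

```python
import numpy as np

def one_component(Y):
    """Estimate (A, B, mu, lambda) of a single sinusoid: coarse 2-D
    periodogram maximisation over the Fourier grid, then averaged-regression
    formulas for the linear parameters."""
    M, N = Y.shape
    m = np.arange(1, M + 1)[:, None]
    n = np.arange(1, N + 1)[None, :]
    best, mu, lam = -np.inf, None, None
    for mu_j in np.pi * np.arange(1, M) / M:
        for lam_k in np.pi * np.arange(1, N) / N:
            val = np.abs(np.sum(Y * np.exp(-1j * (m * mu_j + n * lam_k)))) ** 2
            if val > best:
                best, mu, lam = val, mu_j, lam_k
    A = 2.0 / (M * N) * np.sum(Y * np.cos(m * mu + n * lam))
    B = 2.0 / (M * N) * np.sum(Y * np.sin(m * mu + n * lam))
    return A, B, mu, lam

def sequential_estimate(Y, p):
    """Fit p components one at a time, deflating the data after each fit."""
    M, N = Y.shape
    m = np.arange(1, M + 1)[:, None]
    n = np.arange(1, N + 1)[None, :]
    components, R = [], Y.astype(float).copy()
    for _ in range(p):
        A, B, mu, lam = one_component(R)
        components.append((A, B, mu, lam))
        R -= A * np.cos(m * mu + n * lam) + B * np.sin(m * mu + n * lam)
    return components
```

The (asymptotic) orthogonality of distinct sinusoidal components is what makes this deflation work: subtracting the fitted first component leaves the remaining components essentially untouched.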
3 Asymptotic Results
In this section, we investigate large-sample properties of the proposed estimators under the following assumption on the structure of the error component, X(m, n):
Assumption 1
X(m,n) is a double array sequence of i.i.d. random variables with mean 0 and finite variance \(\sigma ^2 > 0.\)
Also, the true parameter vector \((A^0, B^0, \mu ^0, \lambda ^0)\) satisfies the following assumption:
Assumption 2
The true parameter vector \((A^0, B^0, \mu ^0, \lambda ^0)\) is an interior point of the parameter space \(\Theta = [-K,K] \times [-K,K] \times [0, \pi ] \times [0, \pi ]\) and \({A^0}^2 + {B^0}^2 > 0\). Here \(K >0\) is any real number.
The results are stated in the following theorems.
Theorem 1
The proposed frequency estimators are strongly consistent if Assumptions 1 and 2 hold, that is,

(a) \({\hat{\mu }} \xrightarrow {a.s.} \mu ^0\) as \(M \rightarrow \infty \),

(b) \({\hat{\lambda }} \xrightarrow {a.s.} \lambda ^0\) as \(N \rightarrow \infty \).
Proof
See Appendix A. \(\square \)
Theorem 2
The proposed frequency estimators are asymptotically normally distributed with mean 0 and variance \(\frac{6 \sigma ^2}{{A^0}^2 + {B^0}^2}\) if Assumptions 1 and 2 hold, that is,

(a) \(M^{3/2}N^{1/2}({\hat{\mu }} - \mu ^0) \xrightarrow {d} {\mathcal {N}}\big (0,\frac{6 \sigma ^2}{{A^0}^2 + {B^0}^2}\big ),\)

(b) \(M^{1/2}N^{3/2}({\hat{\lambda }} - \lambda ^0) \xrightarrow {d} {\mathcal {N}}\big (0,\frac{6 \sigma ^2}{{A^0}^2 + {B^0}^2}\big )\),

as \(\min \{M,N\} \rightarrow \infty \).
Proof
See Appendix B. \(\square \)
It is evident that the proposed method yields frequency estimators with the fast convergence rates \(O_p(M^{-3/2}N^{-1/2})\) and \(O_p(M^{-1/2}N^{-3/2})\). Also, the asymptotic variance of these estimators is inversely proportional to the sum of squares of the amplitudes, \({A^0}^2 + {B^0}^2\), which is not surprising. As a matter of fact, the asymptotic distribution of the proposed estimators coincides with that of the usual LSEs. Moreover, if the errors are assumed to be normally distributed, the derived asymptotic variances are the same as the Cramér–Rao lower bounds (CRLBs).
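For a quick numerical reading of Theorem 2, the implied standard deviations can be computed directly (a sketch; the symbols are those of the theorem, the function name is ours):

```python
def asymptotic_sd(A, B, sigma2, M, N):
    """Asymptotic standard deviations of (mu-hat, lambda-hat) from Theorem 2:
    Var(mu-hat)     ~ 6*sigma^2 / ((A^2 + B^2) * M^3 * N),
    Var(lambda-hat) ~ 6*sigma^2 / ((A^2 + B^2) * M * N^3)."""
    c = 6.0 * sigma2 / (A ** 2 + B ** 2)
    return (c / (M ** 3 * N)) ** 0.5, (c / (M * N ** 3)) ** 0.5
```

For \(M = N\), both standard deviations scale as \(M^{-2}\): doubling the side of the data matrix shrinks the frequency error by a factor of four.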
We now prove the strong consistency and derive the asymptotic distribution of the proposed sequential estimators of the frequencies of multiple component sinusoidal model. These asymptotic properties are derived under Assumption 1 on the error component and the following assumptions on the parameters of the model:
Assumption 3
The true parameter vector for each component, that is, \((A_k^0, B_k^0, \mu _k^0, \lambda _k^0)\) is an interior point of the parameter space \(\Theta \) \(\forall \ k = 1, \ldots , p\). Also the frequencies are such that \(\mu _i^0 \ne \mu _j^0\) and \(\lambda _i^0 \ne \lambda _j^0\) \(\forall \ i \ne j, i, j = 1, \ldots , p\).
Assumption 4
The linear parameters \(A_k^0\)s and \(B_k^0\)s satisfy the following relationship:

$${A_1^0}^2 + {B_1^0}^2> {A_2^0}^2 + {B_2^0}^2> \cdots> {A_p^0}^2 + {B_p^0}^2 > 0.$$
The results are stated in the subsequent theorems.
Theorem 3
Under the assumptions 1, 3 and 4, the frequency estimates \({\hat{\mu }}_k\) and \({\hat{\lambda }}_k\) are strongly consistent estimators of \(\mu _k^0\) and \(\lambda _k^0\), respectively, that is \(\forall \ k = 1, \ldots , p,\)
(a) \({\hat{\mu }}_k \xrightarrow {a.s.} \mu _k^0\) as \(M \rightarrow \infty \),

(b) \({\hat{\lambda }}_k \xrightarrow {a.s.} \lambda _k^0\) as \(N \rightarrow \infty \).
Proof
See Appendix C. \(\square \)
Theorem 4
Under the assumptions 1, 3 and 4, the following results hold true:
\(\forall \ k = 1, \ldots , p\):

(a) \({\hat{A}}_k \xrightarrow {a.s.} A_k^0 \) as \(\min \{M, N\} \rightarrow \infty \),

(b) \({\hat{B}}_k \xrightarrow {a.s.} B_k^0 \) as \(\min \{M, N\} \rightarrow \infty \);

\(\forall \ k > p\):

(a) \({\hat{A}}_k \xrightarrow {a.s.} 0 \) as \(\min \{M, N\} \rightarrow \infty \),

(b) \({\hat{B}}_k \xrightarrow {a.s.} 0 \) as \(\min \{M, N\} \rightarrow \infty \).
Proof
See Appendix C. \(\square \)
Theorem 5
The proposed sequential estimators of the frequencies are asymptotically normally distributed if Assumptions 1, 3 and 4 hold, that is, \(\forall \ k = 1, \ldots , p\),

(a) \(M^{3/2}N^{1/2}({\hat{\mu }}_k - \mu _k^0) \xrightarrow {d} {\mathcal {N}}\big (0,\frac{6 \sigma ^2}{{A_k^0}^2 + {B_k^0}^2}\big ),\)

(b) \(M^{1/2}N^{3/2}({\hat{\lambda }}_k - \lambda _k^0) \xrightarrow {d} {\mathcal {N}}\big (0,\frac{6 \sigma ^2}{{A_k^0}^2 + {B_k^0}^2}\big )\),

as \(\min \{M,N\} \rightarrow \infty \).
Proof
See Appendix D. \(\square \)
It is evident that, like the estimators for the one-component model, the sequential estimators are strongly consistent and asymptotically normally distributed as well. The algorithm produces estimators of \(\mu _k^0\) and \(\lambda _k^0\) with convergence rates \(O_p(M^{-3/2}N^{-1/2})\) and \(O_p(M^{-1/2}N^{-3/2})\), respectively, the same as those of the usual LSEs. Another interesting property of the proposed algorithm, depicted in Theorem 4, is that when the algorithm is continued beyond the true number of components p, the corresponding amplitude estimates converge to zero. This feature can help one estimate the number of components in practice. It is important to note that these asymptotic results can be further exploited to construct confidence intervals as well as to devise tests of hypotheses of practical significance.
4 Numerical Experiments
In this section, we present the simulation results. These simulation experiments were designed to assess the performance of the proposed methodology in comparison with the conventional least squares estimation method. The data matrix is generated using the following one-component 2-D sinusoidal model:
The error random variables are generated from a Gaussian distribution with mean 0 and variance \(\sigma ^2\). For the first set of experiments, we fix the error variance at \(\sigma ^2 = 0.01\) and vary the sample size, \(M = N\), from 20 to 200. We compute the estimates of \(\lambda ^0\) and \(\mu ^0\) over 1000 independent replications, using different error sequences in each case. Figure 1 illustrates the MSEs of the resulting parameter estimates along with the derived asymptotic variances (Note 1). In the next set of experiments, we fix the sample size: for a 100 \(\times \) 100 data matrix, we compute the MSEs of the nonlinear parameter estimates for varying signal-to-noise ratio. The results are shown in Fig. 2. In these experiments, the initial values are taken as the true values. From the figures, it is apparent that the MSEs of the proposed estimates are well-matched with those of the LSEs and lower than those of the ALSEs. They almost coincide with the asymptotic variances for all sample sizes and for SNR above 10 dB. These results validate the theoretical claim that the accuracy of the proposed efficient estimators is as good as that of the LSEs. The results in all the figures are reported on the log scale.
Next, we evaluate the performance of the proposed sequential estimators for a multiple component model. We generate the data from a 2-D sinusoidal model with two components using the following model equation:
The same error structure and simulation set-up as described above for the one-component model were used. The accuracy of the proposed estimators is compared with that of the sequential LSEs and sequential ALSEs. Figure 3 shows the MSEs of the proposed estimators, the sequential LSEs and the sequential ALSEs of the first-component parameters with respect to varying sample sizes; we also plot the CRLBs to benchmark the performance of these estimators. The MSEs of the second-component parameter estimates are shown in Fig. 4. In Figs. 5 and 6, these results are investigated for varying SNRs. It can be seen that the performance of the proposed sequential estimators is on par with that of the sequential LSEs and sequential ALSEs. Moreover, for the second-component parameters, the MSEs of all three estimators coincide with the corresponding CRLBs as M (and N) increases. For varying SNR, the results of all the estimators concur with the CRLBs for both components' parameters.
In order to exemplify the advantage of the proposed estimators, we compare the computational complexity of both methods, the proposed method and the least squares method. The complexity is measured in terms of the number of objective-function evaluations needed to find the initial values of the parameter estimates of a one-component model (Note 2). Figure 7 demonstrates the complexity of both estimators as the sample size \(M = N\) varies. It is evident that there is a significant difference between the computational involvement of the two types of estimates under consideration, which implies that calculating the proposed estimators is much faster than calculating the LSEs. Figure 8 corroborates the faster implementation of the proposed algorithm as compared to the traditional least squares estimation method. Note that the number of function evaluations needed to compute the ALSEs is the same as that required for the LSEs, and hence the ALSEs are omitted from this comparison.
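The gap in Fig. 7 is consistent with a simple count. Assuming the initial-value search of Sect. 2 — two 1-D sweeps over the Fourier frequencies for the proposed method versus a full 2-D Fourier grid for the least squares criterion — a sketch (the function name and the exact counts are our own reading of the set-up):

```python
def initial_search_evals(M, N):
    """Objective-function evaluations to locate initial values.
    Proposed: one sweep of pi*j/M (j = 1..M) plus one of pi*k/N (k = 1..N).
    LSE: the full 2-D Fourier grid of M*N points."""
    return M + N, M * N

# Ratio grows linearly in the sample size M = N
for s in (20, 50, 100, 200):
    prop, lse = initial_search_evals(s, s)
    print(s, prop, lse)
```

Under this reading, the proposed method's grid cost grows linearly in M + N while the least squares grid grows as MN, which matches the widening gap reported in the figure.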
5 Conclusion
In this paper, we have proposed a novel method for estimating the frequencies of a 2-D sinusoidal model. Although the LSEs have optimal statistical properties, the considerable computational burden of finding them makes the approach practically infeasible in many settings. The proposed method reduces this burden to a great extent and provides efficient estimators whose accuracy is on an equal footing with that of the LSEs. Statistical analysis shows that the proposed estimators are strongly consistent and asymptotically equivalent to the corresponding LSEs. Simulation experiments demonstrate the ability of the approach to estimate the frequencies accurately; the results, presented in comparison with the LSEs as well as the CRLBs, show performance on par with both.
Data Availability
Data sharing not applicable to this article as no real datasets were analysed during the current study.
Notes

1. These asymptotic variances are actually the CRLBs, since the error distribution under consideration is Gaussian.

2. Here, complexity is measured in the number of evaluations of the objective function.
References
S.O. Al-Jazzar, D.C. McLernon, M.A. Smadi, SVD-based joint azimuth/elevation estimation with automatic pairing. Signal Process. 90(5), 1669–1675 (2010)
H.C. Andrews, B.R. Hunt, Digital Image Restoration (Prentice-Hall, Englewood Cliffs, 1977)
J. Axmon, M. Hansson, L. Sornmo, Partial forward-backward averaging for enhanced frequency estimation of real X-texture modes. IEEE Trans. Signal Process. 53(7), 2550–2562 (2005)
N.K. Bansal, G.G. Hamedani, H. Zhang, Non-linear regression with multidimensional indices. Stat. Probab. Lett. 45(2), 175–186 (1999)
J. Capon, R.J. Greenfield, R.J. Kolker, Multidimensional maximum-likelihood processing of a large aperture seismic array. Proc. IEEE 55(2), 192–211 (1967)
F.J. Chen, C.C. Fung, C.W. Kok, S. Kwong, Estimation of two-dimensional frequencies using modified matrix pencil method. IEEE Trans. Signal Process. 55(2), 718–724 (2007)
M.P. Clark, L.L. Scharf, Two-dimensional modal analysis based on maximum likelihood. IEEE Trans. Signal Process. 42(6), 1443–1452 (1994)
G. Cohen, J.A. Francos, Least squares estimation of 2-D sinusoids in colored noise: Asymptotic analysis. IEEE Trans. Inf. Theory 48(8), 2243–2252 (2002)
D.E. Dudgeon, Fundamentals of digital array processing. Proc. IEEE 65(6), 898–904 (1977)
J.M. Francos, A.Z. Meiri, B. Porat, A unified texture model based on a 2-D Wold-like decomposition. IEEE Trans. Signal Process. 41(8), 2665–2678 (1993)
W.A. Fuller, Introduction to Statistical Time Series, vol. 428 (John Wiley and Sons, New Jersey, 2009)
Y. Hua, Estimating two-dimensional frequencies by matrix enhancement and matrix pencil. IEEE Trans. Signal Process. 40(9), 2267–2280 (1992)
D. Kundu, R.D. Gupta, Asymptotic properties of the least squares estimators of a two dimensional model. Metrika 48(2), 83–97 (1998)
D. Kundu, S. Nandi, Determination of discrete spectrum in a random field. Stat. Neerl. 57(2), 258–284 (2003)
D. Kundu, S. Nandi, Statistical Signal Processing: Frequency Estimation (Springer Science and Business Media, Berlin, 2012)
Y. Li, J. Razavilar, K.R. Liu, A high-resolution technique for multidimensional NMR spectroscopy. IEEE Trans. Biomed. Eng. 45(1), 78–86 (1998)
J.W. Odendaal, E. Barnard, C.W.I. Pistorius, Two-dimensional superresolution radar imaging using the MUSIC algorithm. IEEE Trans. Antennas Propag. 42(10), 1386–1391 (1994)
N.F. Osman, J.L. Prince, Direct calculation of 2D components of myocardial strain using sinusoidal MR tagging, in Medical Imaging 1998: Physiology and Function from Multidimensional Images, vol. 3337 (International Society for Optics and Photonics, 1998), pp. 142–152
H. Peng, J. Bian, D. Yang, Z. Liu, H. Li, Statistical analysis of parameter estimation for 2-D harmonics in multiplicative and additive noise. Commun. Stat. Theory Methods 43(22), 4829–4844 (2014)
A. Prasad, D. Kundu, A. Mitra, Sequential estimation of the sum of sinusoidal model parameters. J. Stat. Plan. Inference 138(5), 1297–1313 (2008)
A. Prasad, D. Kundu, A. Mitra, Sequential estimation of two dimensional sinusoidal models. J. Probab. Stat. Sci. 10(2), 161–178 (2012)
C.R. Rao, L. Zhao, B. Zhou, Maximum likelihood estimation of 2-D superimposed exponential signals. IEEE Trans. Signal Process. 42(7), 1795–1802 (1994)
D.J. Rossi, A.S. Willsky, Reconstruction from projections based on detection and estimation of objects, parts I and II: performance analysis and robustness analysis. IEEE Trans. Acoust. Speech Signal Process. 32, 886–906 (1984)
A.J. Van der Veen, M.C. Vanderveen, A. Paulraj, Joint angle and delay estimation using shift-invariance techniques. IEEE Trans. Signal Process. 46(2), 405–418 (1998)
C.F. Wu, Asymptotic theory of nonlinear least squares estimation. Ann. Stat. 9(3), 501–513 (1981)
H. Zhang, V. Mandrekar, Estimation of hidden frequencies for 2D stationary processes. J. Time Ser. Anal. 22(5), 613–629 (2001)
Acknowledgements
The authors would like to thank four anonymous reviewers for their constructive suggestions. Part of the work of the fourth author has been supported by a grant from the Science and Engineering Research Board, Department of Science and Technology, Government of India.
Appendices
A Proof of Consistency of Proposed Estimators of Model (2)
We need the following lemma to prove Theorem 1 (a):
Lemma 1
Consider the set \(S_c = \{\mu : |\mu - \mu ^0| \geqslant c\}\). If for any \(c > 0\),
then \({\hat{\mu }}\) is a strongly consistent estimator of \(\mu ^0\).
Proof
This proof follows along the same lines as that of Lemma 1 of Wu [25].
\(\square \)
Proof of Theorem 1
(a). Consider
The set \(M_c^{n_0} = \{(A(n_0),B(n_0),\mu ): |A(n_0) - A^0(n_0)|\geqslant c \text { or } |B(n_0) - B^0(n_0)|\geqslant c \text { or } |\mu - \mu ^0|\geqslant c\}\). Clearly, \(S_c \subset M_c^{n_0}\ \forall \ n_0 = 1, \ldots , N\).
Also,
Using the following result, which follows from the proof of Theorem 4.1 of Kundu and Nandi [15]:
we get:
Therefore, by Lemma 1, \({\hat{\mu }} \xrightarrow {a.s.} \mu ^0\) as \(M \rightarrow \infty \). Following a similar pattern, one can show that \({\hat{\lambda }} \xrightarrow {a.s.} \lambda ^0\) as \(N \rightarrow \infty \), which proves part (b) of the theorem. \(\square \)
B Proof of Asymptotic Normality of Proposed Estimators of Model (2)
Proof of Theorem 2(a). Let us denote \({R_{MN}^{(1)}}^\prime (\mu )\) as the first derivative and \({R_{MN}^{(1)}}^{\prime \prime }(\mu )\) as the second derivative of the function \(R_{MN}^{(1)}(\mu )\).
Using Taylor series, we expand \({R_{MN}^{(1)}}^\prime ({\hat{\mu }})\) around the point \(\mu ^0\) and get:

$${R_{MN}^{(1)}}^\prime ({\hat{\mu }}) = {R_{MN}^{(1)}}^\prime (\mu ^0) + ({\hat{\mu }} - \mu ^0)\, {R_{MN}^{(1)}}^{\prime \prime }({\bar{\mu }}),$$
where \({\bar{\mu }}\) is a point between \({\hat{\mu }}\) and \(\mu ^0\). Since \({R_{MN}^{(1)}}^\prime ({\hat{\mu }})=0\), the above equation can be rewritten as:

$${\hat{\mu }} - \mu ^0 = - \frac{{R_{MN}^{(1)}}^\prime (\mu ^0)}{{R_{MN}^{(1)}}^{\prime \prime }({\bar{\mu }})}.$$
Multiplying both sides of the above equation by \(M^{3/2}N^{1/2}\), we get:

$$M^{3/2}N^{1/2}({\hat{\mu }} - \mu ^0) = - \frac{M^{-3/2}N^{-1/2}\, {R_{MN}^{(1)}}^\prime (\mu ^0)}{M^{-3}N^{-1}\, {R_{MN}^{(1)}}^{\prime \prime }({\bar{\mu }})}.$$
(3)
We compute the left-hand side of the above equation below. Consider
The last equality is obtained using the following results:
where o(1) denotes a term that goes to zero almost surely as \(M \rightarrow \infty \) for each \(n_0 \in \{1, \ldots , N\}\). These results follow from the proof of Theorem 2 of Prasad et al. [20]. Now, using the central limit theorem (CLT) for stochastic processes (Fuller [11]),
as \(M \rightarrow \infty \) and \(\forall n_0 \in \{1, \ldots , N\}\). This implies that
Since \(\lim \limits _{M,N \rightarrow \infty }{R_{MN}^{(1)}}^{\prime \prime }({\bar{\mu }}) = \lim \limits _{M,N \rightarrow \infty }{R_{MN}^{(1)}}^{\prime \prime }(\mu ^0)\), we next compute the second derivative:
With some routine calculations and the following results:
and (4), we get:
Using the limits in (5) and (6) in (3), we get the desired result, that is,
Similarly, part (b) of Theorem 2 can be proved.
C Proof of Consistency of Proposed Estimators of Model (1)
To prove Theorem 3, we need the following lemmas:
Lemma 2
Consider the set \(S_c^1 = \{\mu _1: |\mu _1 - \mu _1^0| \geqslant c\}.\) If for any \(c > 0\),
then \({\hat{\mu }}_1 \xrightarrow {a.s.} \mu _1^0\) as \(M \rightarrow \infty .\)
Proof
This proof follows along the same lines as that of Lemma 1 of Wu [25].
\(\square \)
Lemma 3
If assumptions 1, 3 and 4 are satisfied, then
Proof
Let us use the following notations: \({R_{1,MN}^{(1)}}^{\prime }(\mu _1)\) as the first derivative and \({R_{1,MN}^{(1)}}^{\prime \prime }(\mu _1)\) as the second derivative of the objective function \(R_{1,MN}^{(1)}(\mu _1)\).
We expand the function \({R_{1,MN}^{(1)}}^{\prime }({\hat{\mu }}_1)\) around the point \(\mu _1^0\) using a Taylor series expansion as follows:

$${R_{1,MN}^{(1)}}^{\prime }({\hat{\mu }}_1) = {R_{1,MN}^{(1)}}^{\prime }(\mu _1^0) + ({\hat{\mu }}_1 - \mu _1^0)\, {R_{1,MN}^{(1)}}^{\prime \prime }({\bar{\mu }}_1),$$
where \({\bar{\mu }}_1\) is a point between \({\hat{\mu }}_1\) and \(\mu _1^0\).
Since \({R_{1,MN}^{(1)}}^{\prime }({\hat{\mu }}_1) = 0, \)

$${\hat{\mu }}_1 - \mu _1^0 = - \frac{{R_{1,MN}^{(1)}}^{\prime }(\mu _1^0)}{{R_{1,MN}^{(1)}}^{\prime \prime }({\bar{\mu }}_1)}.$$
Multiplying both sides by \(\frac{1}{\sqrt{MN}}\), we have:
Now we will compute the limits of both the components of the right-hand side of the above equation. We have:
where
Now let us compute the first derivative:
At \(\mu _1 = \mu _1^0\),
Since for \(\alpha \in (0, \pi )\),
and for \(\alpha \ne \beta ,\) \(\alpha , \beta \in (0, \pi )\),
and \(\forall \ n_0 = 1, \ldots , N \)
we have,
Now we compute the second derivative
Using the model equation:
and the following results: \(\forall \ \alpha , \beta \in (0, \pi ), \alpha \ne \beta \)
and \(\forall \ n_0 = 1, \ldots , N \)
we get:
Substituting the limits obtained in (9) and (10) in (8), we get the desired result, that is \(M({\hat{\mu }}_1 - \mu _1^0) \xrightarrow {a.s.} 0\) as \(M \rightarrow \infty \).
\(\square \)
Proof of Theorem 3 (a) Consider the difference:
The last inequality follows from the proof of Theorem 1 of Prasad et al. [20]. Now, using Lemma 2, we have \({\hat{\mu }}_1 \xrightarrow {a.s.} \mu _1^0\) as \(M \rightarrow \infty \).
Similarly, one can show that \({\hat{\lambda }}_1 \xrightarrow {a.s.} \lambda _1^0\) as \(N \rightarrow \infty \). Also, from Theorem 4, we have:
Thus, we have the following relationship between the first component of the model 1 and its estimate:
where a function f is o(1) if \(f \xrightarrow {a.s.} 0\) as \(\min \{M,N\}\) \(\rightarrow \infty \). Now using this relationship (11) and following the same arguments as for the proof of consistency of \({\hat{\mu }}_1\) and \({\hat{\lambda }}_1\), one can extend the result for the frequencies’ estimates of the second component and extend the result further for each \(k \leqslant p\). \(\square \)
Proof of Theorem 4 We first consider the following linear parameter estimators (see Step 3 of the sequential algorithm):
where
and
Since
where \(\mathbf{I} _{2 \times 2}\) is an identity matrix of order 2. Using this in (12), we get:
Now let us consider the estimate \({\hat{A}}_1\) and expand the function \(\cos ({\hat{\mu }}_1 + {\hat{\lambda }}_1)\) around the point \((\mu _1^0, \lambda _1^0)\); then we have:
Since for \((\alpha _i, \beta _i) \in (0,\pi )\)
it can be seen that:
Following a similar pattern, one can prove the strong consistency of \({\hat{B}}_1\).
Let us consider:
Now to prove the strong consistency of the amplitudes of the second component, we use the relationship established between the first component and its estimate in Eq. (11) and following the same procedure as that for consistency of the first component amplitudes, it can be seen that
On similar lines, the result can be extended for any integer \(k \leqslant p\).
For \(k = p+1\),
where
Since
we can see that
From here, it is immediate that the result can be extended for any integer \(k > p+1\). Hence, the result. \(\square \)
D Proof of Asymptotic Normality of Proposed Estimators of Model (1)
Proof of Theorem 5
(a) Consider the Taylor series expansion in (7) as follows:
Computing the first derivative on the left-hand side of the above equation and similar to the proof of Theorem 2, it can be shown that:
Since \({\bar{\mu }}_1\) is a point between \({\hat{\mu }}_1\) and \(\mu _1^0\) and \({\hat{\mu }}_1 \xrightarrow {a.s.} \mu _1^0\),
Also, (10) implies that:
Thus, on combining the above results, we have:
Hence, the result. \(\square \)
Cite this article
Grover, R., Sharma, A., Delcourt, T. et al. Computationally Efficient Algorithm for Frequency Estimation of a Two-Dimensional Sinusoidal Model. Circuits Syst Signal Process 41, 346–371 (2022). https://doi.org/10.1007/s00034-021-01782-x