1 Introduction

Frequency estimation in multiplicative and additive noise is an important step in many applications in signal processing and communications. For example, frequency estimation is vital in systems such as sonar and radar. In radar, for instance, the Doppler frequency, which depends on the velocity of the target, must be estimated. Similarly, in array processing, the spatial frequency, which depends on the direction of arrival (DOA) of the source, must be estimated [11, 14, 15]. In addition, in the field of speech processing, frequency estimation can be used to estimate the maximum voiced frequency (MVF), which separates the periodic and aperiodic components of sounds [8]. To observe time variations in the vocal tract system characteristics, frequency estimation can be applied to the tracking of speech formant frequencies [21]. Modal analysis is discussed in [19] as another application of frequency estimation.

On the other hand, multiplicative noise is used for modeling some important processes in signal processing applications. In wireless communications, the fading process is modeled as multiplicative noise [1, 2, 12, 10, 3, 27]; this fading is due to the local scatterers around the mobile unit [22]. In radar, the constant-amplitude model is insufficient for pulse-to-pulse fluctuating targets; therefore, the random amplitude is typically modeled by multiplicative noise. Multiplicative noise is also suitable for modeling jammers in radar systems [25, 18]. Furthermore, investigation of multiplicative noise is necessary for backscattered acoustic signals [1], in array processing [26], in synthetic aperture radar (SAR) imagery systems [28], and in recent applications such as vehicle speed estimation and pseudo-maximum likelihood estimation of ballistic missiles [5, 16].

The problem of frequency estimation in multiplicative noise has been investigated in [20, 21, 22, 23, 29, 9, 4]. The estimation algorithm used in [23] is based on maximizing the likelihood function and estimates the frequency by an iterative method. This algorithm does not give a closed-form estimate and cannot estimate the frequency directly. The algorithm in [23] estimates the frequency by the estimation of signal parameters via rotational invariance techniques (ESPRIT) method. Another method is presented in [24], which is based on a subspace algorithm and finds the frequency by a multidimensional search. The nonlinear least squares (NLS) method [4, 20] is based on peak searching and cannot estimate the frequency directly. Another method used for frequency estimation is the multiple signal classification (MUSIC) method [9].

In this paper, we propose a closed-form algorithm for frequency estimation of complex sinusoidal signals in multiplicative and additive noise. The proposed method is based on the linear prediction property of the autocorrelation function (ACF) of the signal. The predictor is of order one, and its coefficient can be obtained from an overdetermined set of linear equations. These equations can be cast as a generalized eigenvalue problem. By solving this problem using the method in [7], the predictor coefficient is estimated. Finally, the frequency is found from the angle of the predictor coefficient. Five cases are considered for the multiplicative noise. In the first case, the multiplicative noise is assumed to be white with nonzero mean. In the second case, the multiplicative noise is assumed to be white with zero mean. In the third case, the noise is considered to be colored with an exponential ACF. In the fourth case, we extend the proposed method to multi-component frequency estimation. In the last case, we apply the proposed method to a real application: radial velocity estimation for fluctuating targets.

The main contribution of this paper is frequency estimation using linear prediction instead of autoregressive (AR) modeling. It is well known that estimates obtained by AR modeling may be biased. The estimates obtained by the linear prediction approach can be unbiased, since the quasi-Yule–Walker equations proposed by the authors start from lag zero instead of lag one as in the Yule–Walker equations.

The rest of the paper is organized as follows: The signal model is given in Sect. 2. In Sect. 3, the methodology is presented. Simulation results and comparisons are provided in Sect. 4. Conclusions are presented in Sect. 5.

2 Signal Model

The complex sinusoidal signal in multiplicative and additive noise is modeled as:

$$ x\left( n \right) = z\left( n \right)e^{{j(\omega_{0} n + \theta )}} + v\left( n \right) $$
(1)

where ω0 is the normalized angular frequency to be estimated (\( \omega_{0} \in \left[ { - \pi ,\pi } \right) \)) and θ is the phase of the signal. The observation noise v(n) is additive complex white Gaussian noise with zero mean. The multiplicative noise z(n) is a random process with mean μ:

$$ z\left( n \right) = \mu + w\left( n \right) $$
(2)

where w(n) is white noise with zero mean. It is also assumed that w(n) and v(n) are mutually independent with variances \( \sigma_{w}^{2} \) and \( \sigma_{v}^{2} \), respectively. It is worth noting that the parameters \( \omega_{0} ,\mu ,\theta ,\sigma_{v}^{2} \) and \( \sigma_{w}^{2} \) are unknown. The main goal of this paper is to estimate ω0 from the observations \( x\left( n \right);n = 0,1, \ldots ,N - 1 \). All random processes are assumed to be wide-sense stationary (WSS).
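As a concrete illustration of the model (1)–(2), the following sketch generates synthetic observations; the specific parameter values are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 32            # number of observations
omega0 = 0.3      # normalized angular frequency in [-pi, pi)
theta = 0.7       # unknown phase
mu = 2.0          # mean of the multiplicative noise z(n)
sigma_w = 0.5     # std of the zero-mean part w(n)
sigma_v = 0.5     # std of the additive noise v(n)

n = np.arange(N)
w = sigma_w * rng.standard_normal(N)               # zero-mean white noise, Eq. (2)
z = mu + w                                         # multiplicative noise z(n)
v = sigma_v / np.sqrt(2) * (rng.standard_normal(N)
                            + 1j * rng.standard_normal(N))  # complex AWGN
x = z * np.exp(1j * (omega0 * n + theta)) + v      # observations, Eq. (1)
```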

3 Proposed Method

The proposed method is based on the linear prediction property of the autocorrelation sequence of observations. Five cases for multiplicative noise are considered. In each case, the autocorrelation sequence of observations is computed by an unbiased relation and is then used to derive the estimator.

If the signal is defined as:

$$ s\left( n \right) = z\left( n \right)e^{{j(\omega_{0} n + \theta )}} $$
(3)

Then the ACF of the signal is:

$$ \begin{aligned} r_{s} \left( k \right) & = E\left\{ {s\left( n \right)s^{*} \left( {n - k} \right)} \right\} \\ & = E\left\{ {z\left( n \right)e^{{j\left( {\omega_{0} n + \theta } \right)}} z^{*} \left( {n - k} \right)e^{{ - j\left( {\omega_{0} \left( {n - k} \right) + \theta } \right)}} } \right\} \\ & = e^{{j\omega_{0} k}} E\left\{ {z\left( n \right)z^{*} \left( {n - k} \right)} \right\} \\ & = e^{{j\omega_{0} k}} \left( {\mu^{2} + \sigma_{w}^{2} \delta \left( k \right)} \right) \\ \end{aligned} $$
(4)
$$ \to r_{s} \left( k \right) = \mu^{2} e^{{j\omega_{0} k}} + \sigma_{w}^{2} \delta \left( k \right) $$
(5)

The signal in the presence of both multiplicative and additive noises is as follows:

$$ x\left( n \right) = s\left( n \right) + v\left( n \right) $$
(6)

Finally, the ACF of the observed signal is:

$$ \begin{aligned} r_{x} \left( k \right) & = E\left\{ {x\left( n \right)x^{*} \left( {n - k} \right)} \right\} \\ & = E\left\{ {\left( {s\left( n \right) + v\left( n \right)} \right)\left( {s^{*} \left( {n - k} \right) + v^{*} \left( {n - k} \right)} \right)} \right\} \\ & = E\left\{ {s\left( n \right)s^{*} \left( {n - k} \right)} \right\} + E\left\{ {s\left( n \right)v^{*} \left( {n - k} \right)} \right\} + E\left\{ {v\left( n \right)s^{*} \left( {n - k} \right)} \right\} + E\left\{ {v\left( n \right)v^{*} \left( {n - k} \right)} \right\} \\ r_{x} \left( k \right) & = r_{s} \left( k \right) + r_{v} \left( k \right) \\ \end{aligned} $$
(7)

where

$$ r_{v} \left( k \right) = \sigma_{v}^{2} \delta \left( k \right) $$
(8)

It follows that:

$$ r_{x} \left( k \right) = \mu^{2} e^{{j\omega_{0} k}} + \left( {\sigma_{w}^{2} + \sigma_{v}^{2} } \right)\delta \left( k \right) $$
(9)

Using the prediction property of the noise-free ACF component \( r_{ps} \left( k \right) = \mu^{2} e^{{j\omega_{0} k}} \), we have:

$$ r_{ps} \left( k \right) - ar_{ps} \left( {k - 1} \right) = 0 $$
(10)

where

$$ a = e^{{j\omega_{0} }} $$
(11)

Therefore, the frequency can be estimated by computing the phase of the prediction coefficient a.
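The prediction property (10) can be verified numerically on the noise-free ACF component; the parameter values below are illustrative:

```python
import numpy as np

mu, omega0 = 2.0, 0.3
a = np.exp(1j * omega0)                  # predictor coefficient, Eq. (11)

def r_ps(k):
    # noise-free ACF component, r_ps(k) = mu^2 * exp(j*omega0*k)
    return mu**2 * np.exp(1j * omega0 * k)

# Eq. (10): the prediction residual vanishes at every lag
residuals = [r_ps(k) - a * r_ps(k - 1) for k in range(1, 10)]
assert np.allclose(residuals, 0)

# the frequency is recovered from the phase of the prediction coefficient
assert np.isclose(np.angle(a), omega0)
```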


Case 1 The signal with additive and multiplicative white noise and \( \mu \ne 0 \)

It follows from (9) and (10) that:

$$ r_{x} \left( k \right) - ar_{x} \left( {k - 1} \right) = \left( {\sigma_{w}^{2} + \sigma_{v}^{2} } \right)\delta \left( k \right) - a\left( {\sigma_{w}^{2} + \sigma_{v}^{2} } \right)\delta \left( {k - 1} \right) $$
(12)

Evaluating (12) for k = 0, …, q and arranging the equations in matrix form gives

$$ \left( {\varvec{R} - \lambda \varvec{G}} \right){\mathbf{v}} = 0 $$
(13)

where

$$ \varvec{R} = \left[ {\begin{array}{*{20}c} {r_{x} \left( 0 \right)} & {r_{x} \left( { - 1} \right)} \\ {r_{x} \left( 1 \right)} & {r_{x} \left( 0 \right)} \\ {r_{x} \left( 2 \right)} & {r_{x} \left( 1 \right)} \\ \vdots & \vdots \\ {r_{x} \left( q \right)} & {r_{x} \left( {q - 1} \right)} \\ \end{array} } \right],\quad \varvec{G} = \left[ {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \vdots & \vdots \\ 0 & 0 \\ \end{array} } \right],\quad {\mathbf{v}} = \left[ {\begin{array}{*{20}c} 1 \\ { - a} \\ \end{array} } \right]\quad {\text{and}}\quad \lambda = \sigma_{w}^{2} + \sigma_{v}^{2} $$
(14)

The matrices R and G are \( \left( {q + 1} \right) \times 2 \), and v is a \( \left( {2 \times 1} \right) \) vector.

Using the generalized eigenvalue problem in [6], we can find \( \hat{a} \), and then \( \hat{\omega }_{0} = {\text{phase}}\;\left( {\hat{a}} \right) \). The steps for solving the generalized eigenvalue problem are as follows [7]:

  1. Construct the following matrices

     $$ {\mathbf{A}}_{0} = {\mathbf{R}}^{\text{H}} {\mathbf{R}},\quad {\mathbf{A}}_{1} = {\mathbf{R}}^{\text{H}} {\mathbf{G}} + {\mathbf{G}}^{\text{H}} {\mathbf{R}},\quad {\mathbf{A}}_{2} = {\mathbf{G}}^{\text{H}} {\mathbf{G}} $$
     (15)

  2. Using the matrices in step 1, form:

     $$ {\mathbf{P}} = \left[ {\begin{array}{*{20}c} {{\mathbf{A}}_{0} } & {\mathbf{O}} \\ {\mathbf{O}} & {\mathbf{I}} \\ \end{array} } \right],\quad {\mathbf{Q}} = \left[ {\begin{array}{*{20}c} {{\mathbf{A}}_{1} } & { - {\mathbf{A}}_{2} } \\ {\mathbf{I}} & {\mathbf{O}} \\ \end{array} } \right] $$
     (16)

     where O and I are the zero matrix and the identity matrix, respectively.

  3. Solve the generalized eigenvalue problem \( {\mathbf{Px}} = \lambda {\mathbf{Qx}} \) to obtain the eigenvalues and eigenvectors.

  4. Choose λ and x as the minimum eigenvalue and its associated eigenvector, respectively. This choice is supported by the simulation results given in Sect. 4.

  5. Take the first two elements of x to obtain v. The second element of v then gives an estimate of − a.
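The steps above can be sketched in code as follows. This is an illustrative implementation, not the authors' reference code: the unbiased sample ACF, the choice q = 8, and selecting the eigenvalue of minimum magnitude are assumptions consistent with the text and Sect. 4:

```python
import numpy as np
from scipy.linalg import eig

def estimate_frequency(x, q=8):
    """Case 1 estimator: frequency from the minimum generalized eigenvalue."""
    N = len(x)
    # unbiased sample ACF: r[k] estimates r_x(k); r_x(-k) = conj(r_x(k))
    r = np.array([(x[k:] * np.conj(x[:N - k])).sum() / (N - k)
                  for k in range(q + 1)])
    rx = lambda k: r[k] if k >= 0 else np.conj(r[-k])

    # R, G of Eq. (14) and the matrices of Eq. (15)
    R = np.array([[rx(k), rx(k - 1)] for k in range(q + 1)])
    G = np.zeros((q + 1, 2)); G[0, 0] = G[1, 1] = 1.0
    A0 = R.conj().T @ R
    A1 = R.conj().T @ G + G.conj().T @ R
    A2 = G.conj().T @ G

    # linearization P x = lambda Q x of Eq. (16)
    Z, I = np.zeros((2, 2)), np.eye(2)
    P = np.block([[A0, Z], [Z, I]])
    Q = np.block([[A1, -A2], [I, Z]])

    lam, X = eig(P, Q)
    i = np.argmin(np.abs(lam))   # minimum-magnitude eigenvalue (step 4)
    v = X[:2, i]                 # first two elements give v (step 5)
    a_hat = -v[1] / v[0]         # v is proportional to [1, -a]^T
    return np.angle(a_hat)

# example usage on synthetic Case 1 data (illustrative parameters)
rng = np.random.default_rng(0)
N = 2000
n = np.arange(N)
z = 2.0 + 0.5 * rng.standard_normal(N)                 # mu = 2, sigma_w = 0.5
v = 0.5 / np.sqrt(2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
x = z * np.exp(1j * (0.3 * n + 0.7)) + v               # omega0 = 0.3
omega_hat = estimate_frequency(x, q=8)
```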

Case 2 The signal with additive and multiplicative white noise and \( \mu = 0 \)

In this case, the term related to the signal in autocorrelation is removed. Therefore, we cannot directly use linear prediction.

In the case where μ = 0, the observed signal is:

$$ \mu = 0 \to x\left( n \right) = w\left( n \right)e^{{j(\omega_{0} n + \theta )}} + v\left( n \right) $$
(17)

By squaring the observed signal, we have:

$$ y\left( n \right) = x^{2} \left( n \right) = w^{2} \left( n \right)e^{{j2(\omega_{0} n + \theta )}} + v^{2} \left( n \right) + 2v\left( n \right)w\left( n \right)e^{{j\left( {\omega_{0} n + \theta } \right)}} $$
(18)

The expectation of y(n) is equal to:

$$ E\left\{ {y\left( n \right)} \right\} = \sigma_{w}^{2} e^{{j2\left( {\omega_{0} n + \theta } \right)}} + \sigma_{v}^{2} $$
(19)

Dropping the expectation in (19), y(n) can be approximated as:

$$ y\left( n \right) \approx \sigma_{w}^{2} e^{{j2\left( {\omega_{0} n + \theta } \right)}} + e\left( n \right) $$
(20)

The random term e(n) accounts for the omitted expectation in (20). We roughly approximate e(n) as white noise with variance \( \sigma_{e}^{2} \). It follows that:

$$ y\left( n \right) - ay\left( {n - 1} \right) = e\left( n \right) - ae\left( {n - 1} \right) $$
(21)
$$ r_{y} \left( k \right) - ar_{y} \left( {k - 1} \right) = \sigma_{e}^{2} \delta \left( k \right) - a\sigma_{e}^{2} \delta \left( {k - 1} \right) $$
(22)

where the parameter a equals \( e^{{j2\omega_{0} }} . \)

Similar to Case 1, the resulting equations can be cast as a quadratic eigenvalue problem. By solving it [7], we can find \( \hat{a} \), and then \( \hat{\omega }_{0} = 0.5\,{\text{phase}}\left( {\hat{a}} \right) \). Since the estimated phase must be halved, the identifiable frequency range is limited to (\( - \frac{\pi }{2},\frac{\pi }{2} \)).
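The squaring step can be illustrated with a deliberately simplified estimator that reads 2ω0 off the phase of the lag-one ACF of y(n) = x²(n). This is a sketch only, with illustrative parameters; the paper's actual estimator applies the eigenvalue machinery of Case 1 to \( r_{y}\left( k \right) \):

```python
import numpy as np

rng = np.random.default_rng(1)
N, omega0, theta = 4000, 0.3, 0.5
sigma_w, sigma_v = 1.0, 0.1

n = np.arange(N)
w = sigma_w * rng.standard_normal(N)          # zero-mean multiplicative noise
v = sigma_v / np.sqrt(2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
x = w * np.exp(1j * (omega0 * n + theta)) + v # Eq. (17), mu = 0

y = x**2                                      # squared observations, Eq. (18)
# r_y(1) ~ sigma_w^4 * exp(j*2*omega0), so half its phase estimates omega0
r1 = (y[1:] * np.conj(y[:-1])).mean()
omega_hat = 0.5 * np.angle(r1)
```

Note that, as stated above, this only works for |ω0| < π/2 because of the phase halving.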


Case 3 The signal with additive white and multiplicative colored AR(1) noise

We assume that the multiplicative colored noise is generated by the AR(1) model [17]:

$$ z\left( n \right) = bz\left( {n - 1} \right) + e\left( n \right) $$
(23)

where b is supposed to be a positive real value and e(n) is a white Gaussian noise with zero mean and variance \( \sigma_{e}^{2} \). It is worth noting that in the case of b = 0, z(n) = e(n), so the colored noise is converted to zero mean white noise (see Case 2).

The ACF of the colored multiplicative noise is:

$$ r_{z} \left( k \right) = \frac{{\sigma_{e}^{2} }}{{1 - b^{2} }}b^{\left| k \right|} $$
(24)

Here, the ACF of the observed data is:

$$ r_{x} \left( k \right) = r_{z} \left( k \right)e^{{j\omega_{0} k}} + \sigma_{v}^{2} \delta \left( k \right) $$
(25)

For k ≥ 1, it follows that:

$$ r_{x} \left( k \right) - ar_{x} \left( {k - 1} \right) = \sigma_{v}^{2} \delta \left( k \right) - a\sigma_{v}^{2} \delta \left( {k - 1} \right) $$
(26)

where

$$ a = be^{{j\omega_{0} }} $$
(27)

Since b is assumed to be positive and real, we can find \( \hat{a} \), and then \( \hat{\omega }_{0} = {\text{phase}}\left( {\hat{a}} \right) \) and \( \hat{b} = \left| {\hat{a}} \right| \).
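For this case a quick sanity check is possible because, for k ≥ 2, the δ-terms in (25) vanish and the ratio \( r_{x}\left( 2 \right)/r_{x}\left( 1 \right) \) already isolates \( a = be^{j\omega_{0}} \). The following sketch uses this simplified lag-ratio estimate with illustrative parameters; the paper's estimator again uses the overdetermined eigenvalue formulation:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)
N, omega0, b = 50_000, 0.3, 0.8
sigma_e, sigma_v = 1.0, 0.1

# AR(1) colored multiplicative noise z(n) = b*z(n-1) + e(n), Eq. (23)
e = sigma_e * rng.standard_normal(N)
z = lfilter([1.0], [1.0, -b], e)

n = np.arange(N)
v = sigma_v / np.sqrt(2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
x = z * np.exp(1j * omega0 * n) + v

# unbiased sample ACF at lags 1 and 2; their ratio estimates a = b*exp(j*omega0)
r1 = (x[1:] * np.conj(x[:-1])).mean()
r2 = (x[2:] * np.conj(x[:-2])).mean()
a_hat = r2 / r1
omega_hat, b_hat = np.angle(a_hat), np.abs(a_hat)
```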


Case 4 Multi-component signal spectral estimation

If we suppose that two fluctuating targets exist, the observation is modeled as:

$$ x\left( n \right) = z\left( n \right)e^{{j\omega_{1} n}} + \beta \left( n \right)e^{{j\omega_{2} n}} + v\left( n \right) $$
(28)

where z(n) is a multiplicative white Gaussian noise with mean μ1 and variance \( \sigma_{z}^{2} \), β(n) is a multiplicative white Gaussian noise with mean μ2 and variance \( \sigma_{\beta }^{2} \) and v(n) is an additive white Gaussian noise with zero mean and variance \( \sigma_{v}^{2} \). It is assumed that z(n) and β(n) are mutually uncorrelated.

We consider the signal in the absence of the observation noise as the following:

$$ s\left( n \right) = z\left( n \right)e^{{j\omega_{1} n}} + \beta \left( n \right)e^{{j\omega_{2} n}} $$
(29)

The ACF of the signal can be computed as:

$$ r_{s} \left( k \right) = \mu_{1}^{2} e^{{j\omega_{1} k}} + \mu_{2}^{2} e^{{j\omega_{2} k}} + \left( {\sigma_{z}^{2} + \sigma_{\beta }^{2} } \right)\delta \left( k \right) $$
(30)

Including the additive observation noise, the ACF of the observations can be written as:

$$ r_{x} \left( k \right) = r_{ps1} \left( k \right) + r_{ps2} \left( k \right) + \left( {\sigma_{z}^{2} + \sigma_{\beta }^{2} + \sigma_{v}^{2} } \right)\delta \left( k \right) $$
(31)

where \( r_{ps1} \left( k \right) = \mu_{1}^{2} e^{{j\omega_{1} k}} \) and \( r_{ps2} \left( k \right) = \mu_{2}^{2} e^{{j\omega_{2} k}} \).


Using the prediction property of \( r_{ps1} \left( k \right) \), we have:

$$ r_{ps1} \left( k \right) - ar_{ps1} \left( {k - 1} \right) = 0 $$
(32)

where

$$ a = e^{{j\omega_{1} }} $$
(33)

Similar to (32), we use linear prediction for \( r_{ps2} \left( k \right) = \mu_{2}^{2} e^{{j\omega_{2} k}} \):

$$ r_{ps2} \left( k \right) - br_{ps2} \left( {k - 1} \right) = 0 $$
(34)

where

$$ b = e^{{j\omega_{2} }} $$
(35)

Taking the Z-transform of Eqs. (32) and (34), we obtain:

$$ \left( {1 - az^{ - 1} } \right)r_{ps1} \left( z \right) = 0 $$
(36)
$$ \left( {1 - bz^{ - 1} } \right)r_{ps2} \left( z \right) = 0 $$
(37)

Multiplying (36) by \( \left( {1 - bz^{ - 1} } \right) \) and (37) by \( \left( {1 - az^{ - 1} } \right) \), we get:

$$ \left( {1 - bz^{ - 1} } \right)\left( {1 - az^{ - 1} } \right)r_{ps1} \left( z \right) = 0 $$
(38)
$$ \left( {1 - az^{ - 1} } \right)\left( {1 - bz^{ - 1} } \right)r_{ps2} \left( z \right) = 0 $$
(39)

Adding Eqs. (38) and (39) yields:

$$ \left( {1 - az^{ - 1} } \right)\left( {1 - bz^{ - 1} } \right)(r_{ps1} \left( z \right) + r_{ps2} \left( z \right)) = 0 $$
(40)

This results in:

$$ \left( {1 - az^{ - 1} } \right)\left( {1 - bz^{ - 1} } \right)(r_{ps} \left( z \right)) = 0 $$
(41)

Taking the inverse Z-transform of the above equation, we have:

$$ r_{ps} \left( k \right) - ar_{ps} \left( {k - 1} \right) - br_{ps} \left( {k - 1} \right) + abr_{ps} \left( {k - 2} \right) = 0 $$
(42)

Substituting Eq. (31) into (42), we can obtain:

$$ r_{x} \left( k \right) - ar_{x} \left( {k - 1} \right) - br_{x} \left( {k - 1} \right) + abr_{x} \left( {k - 2} \right) = h\left( {\delta \left( k \right) - a\delta \left( {k - 1} \right) - b\delta \left( {k - 1} \right) + ab\delta \left( {k - 2} \right)} \right) $$
(43)

where \( h \, = \sigma_{z}^{2} + \sigma_{\beta }^{2} + \sigma_{v}^{2} \)

Similar to the previous cases, Eq. (43) can be arranged in matrix form as \( \left( {\varvec{R} - \lambda \varvec{G}} \right){\mathbf{v}} = 0 \) and solved for a, b and h by constructing a generalized eigenvalue problem, where

$$ \varvec{R} = \left[ {\begin{array}{*{20}c} {r_{x} \left( 0 \right)} & {r_{x} \left( { - 1} \right)} & {r_{x} \left( { - 2} \right)} \\ {r_{x} \left( 1 \right)} & {r_{x} \left( 0 \right)} & {r_{x} \left( { - 1} \right)} \\ \vdots & \vdots & \vdots \\ {r_{x} \left( q \right)} & {r_{x} \left( {q - 1} \right)} & {r_{x} \left( {q - 2} \right)} \\ \end{array} } \right],\quad \varvec{G} = \left[ {\begin{array}{*{20}c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \vdots & \vdots & \vdots \\ 0 & 0 & 0 \\ \end{array} } \right],\quad {\mathbf{v}} = \left[ {\begin{array}{*{20}c} 1 \\ { - \left( {a + b} \right)} \\ {ab} \\ \end{array} } \right] $$
(44)

and \( \lambda = h = \sigma_{z}^{2} + \sigma_{\beta }^{2} + \sigma_{v}^{2} \)

The matrices R and G are \( \left( {q + 1} \right) \times 3 \), and v is a (3 × 1) vector. Compared with the previous cases, the number of columns of the matrices is increased by one.

Using the estimates \( \hat{a} \) and \( \hat{b} \), the spectrum of the observations can be obtained as:

$$ \hat{\varphi }\left( \omega \right) = \frac{1}{{\left| {\left( {1 - \hat{a}e^{ - j\omega } } \right)\left( {1 - \hat{b}e^{ - j\omega } } \right)} \right|^{2} }} $$
(45)
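Once the eigenvector \( {\mathbf{v}} = \left[ {1, - \left( {a + b} \right),ab} \right]^{T} \) is available, a and b are recovered as the roots of the quadratic \( z^{2} - \left( {a + b} \right)z + ab \). A minimal sketch, with the eigenvector built from known a and b for illustration:

```python
import numpy as np

omega1, omega2 = 1.0, 1.25
a, b = np.exp(1j * omega1), np.exp(1j * omega2)

# eigenvector of Eq. (44); here formed from known a, b for illustration --
# in practice it is the eigenvector of the minimum generalized eigenvalue
v = np.array([1.0, -(a + b), a * b])

# roots of v[0]*z^2 + v[1]*z + v[2] = (z - a)(z - b)
roots = np.roots(v)
freqs = np.sort(np.angle(roots))   # recovered frequencies
```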

Case 5 Real application example: Doppler frequency estimation for fluctuating moving targets in radar systems

Linear frequency modulated (LFM) signals are common in radar systems. One of their important applications is estimating the range and radial velocity of targets. This is achieved by transmitting a known LFM signal as follows [13]:

$$ x_{T} \left( n \right) = e^{{j2\pi f_{o} n}} e^{{j\pi \alpha n^{2} }} $$
(46)

where \( f_{o} \) is the carrier frequency and α denotes the frequency rate.

In the case of a fluctuating target, the received signal in the absence of the additive noise is given by:

$$ x_{R} \left( n \right) = u\left( n \right)e^{{j2\pi f_{o} \left( {n - n_{d} } \right)}} e^{{j\pi \alpha \left( {n - n_{d} } \right)^{2} }} $$
(47)

where u(n) is modeled as a complex white Gaussian noise due to target fluctuation. The time delay nd of the target is given by:

$$ n_{d} = \frac{2R}{c} = \frac{2}{c}(R_{o} - v_{o} n ) $$
(48)

where R, Ro and vo are the range, the initial range and the velocity of the target, respectively.

The goal is to estimate the velocity of the moving target in the presence of target fluctuation. Multiplying the conjugate of the received signal by the transmitted signal, we will have:

$$ x_{b} \left( n \right) = x_{T} \left( n \right)x_{R}^{*} \left( n \right) = u^{*} \left( n \right)e^{{j2\pi f_{o} n_{d} }} e^{{j2\pi n\alpha n_{d} }} e^{{ - j\pi \alpha n_{d}^{2} }} $$
(49)

where \( \left( \cdot \right)^{*} \) stands for complex conjugation.

Since \( n_{d} \) is small, the term \( e^{{ - j\pi \alpha n_{d}^{2} }} \) can be neglected to a good approximation; substituting (48) into (49), we obtain:

$$ x_{b} \left( n \right) = u^{*} \left( n \right)e^{{\frac{{j4\pi f_{o} R_{o} }}{c}}} e^{{\frac{j4\pi }{c}\left( {\alpha R_{o} - f_{o} v_{o} } \right)n}} e^{{\frac{{ - j4\pi \alpha v_{o} n^{2} }}{c}}} $$
(50)

Now, we have:

$$ c\left( n \right) = x_{b} \left( {n - 1} \right)x_{b}^{*} \left( n \right) = z\left( n \right)e^{{\frac{{j8\pi \alpha v_{o} n}}{c}}} $$
(51)

where

$$ z\left( n \right) = u^{*} \left( {n - 1} \right)u\left( n \right)e^{{\frac{{ - j4\pi \alpha R_{o} }}{c}}} e^{{\frac{{ - j4\pi \alpha v_{o} }}{c}}} e^{{\frac{{j4\pi f_{o} v_{o} }}{c}}} $$
(52)

We can suppose that z(n) is approximately a complex white noise. Moreover, in (51), the term \( \frac{{8\pi \alpha v_{o} }}{c} \) plays the role of \( \omega_{0} \) and can be estimated by the proposed method.
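A noise-free sanity check of the preprocessing chain (46)–(51) is given below, with a constant target amplitude (u(n) ≡ 1) and illustrative values for \( f_{o} \), α, \( R_{o} \) and \( v_{o} \) (all assumptions). It confirms that the phase increment of c(n) equals \( 8\pi \alpha v_{o} /c \), from which the velocity is recovered:

```python
import numpy as np

c_light = 3e8          # propagation speed (m/s)
f_o = 1e6              # carrier frequency (illustrative units)
alpha = 5e3            # frequency rate (illustrative units)
R_o, v_o = 3e3, 500.0  # initial range (m) and radial velocity (m/s)

n = np.arange(32, dtype=float)
n_d = 2.0 / c_light * (R_o - v_o * n)                    # time delay, Eq. (48)

x_T = np.exp(1j * 2 * np.pi * f_o * n) * np.exp(1j * np.pi * alpha * n**2)
x_R = (np.exp(1j * 2 * np.pi * f_o * (n - n_d))
       * np.exp(1j * np.pi * alpha * (n - n_d)**2))      # u(n) = 1, Eq. (47)

x_b = x_T * np.conj(x_R)                                 # dechirped signal, Eq. (49)
c_n = x_b[:-1] * np.conj(x_b[1:])                        # c(n) = x_b(n-1) x_b*(n), Eq. (51)

# the tone frequency of c(n) is omega_0 = 8*pi*alpha*v_o / c
omega_hat = np.angle((c_n[1:] * np.conj(c_n[:-1])).mean())
v_hat = omega_hat * c_light / (8 * np.pi * alpha)
```

With these values the recovered velocity matches \( v_{o} \) = 500 m/s; note that \( \omega_{0} \) must fall inside (−π, π) for the phase not to wrap.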

4 Simulations and Comparisons

Several computer simulations have been conducted to illustrate the performance of the proposed method and compare it with the method in [20], the ESPRIT method, the MUSIC method and the CRB.

Under the assumptions that the multiplicative noise z(n) in (3) is a Gaussian stationary random process and v(n) is an additive Gaussian white noise, for Cases 1–2 the Cramér–Rao bound (CRB) for the parameter ω0 can be approximated by [23]:

$$ {\text{CRB}}\;(\omega_{0} ) = \frac{6}{{N\left( {N^{2} - 1} \right)\;{\text{SNR}}}} $$
(53)
$$ {\text{SNR}} = \frac{{\mu^{2} }}{{\sigma_{v}^{2} }}\left( {1 + m_{A}^{2} } \right) = \frac{{\mu^{2} + \sigma_{w}^{2} }}{{\sigma_{v}^{2} }} $$
(54)
$$ m_{A} = \frac{{\sigma_{w} }}{\mu } $$
(55)

For Case 3, the SNR is defined as:

$$ {\text{SNR}} = \frac{{\sigma_{e}^{2} }}{{\left( {1 - b^{2} } \right)\sigma_{v}^{2} }} $$
(56)

All processes are generated with Gaussian distributions. The number of simulation runs is 500. Five cases are considered, and the results are shown in Figs. 1, 2, 3, 4, 5, 6 and 7. In Figs. 1, 2, 3, 4 and 6, the number of data samples is 32; in Fig. 5 it is 100; and in Fig. 7 it varies. The mean squared errors (MSEs) of the frequency estimation for different SNRs are presented in Figs. 1, 2 and 3. In Fig. 1, the multiplicative noise is a white Gaussian noise with μ = 2. In Fig. 2, the multiplicative noise is a white Gaussian noise with μ = 0. In Fig. 3, the multiplicative noise is a colored noise generated by (23), with parameters \( \sigma_{e}^{2} \) = 1 and b = 0.8. In Figs. 1, 2 and 3, the SNR varies from 0 to 25 dB with a step size of 2 dB. In the proposed method, the parameter q is set to 8.

Fig. 1

MSE versus SNR. N = 32, μ = 2, \( \omega_{0} = 0.3 \;{\text{radian}}/{\text{sample}} \)

Fig. 2

MSE versus SNR. \( N = 32 \), μ = 0, \( \omega_{0} = 0.3\;{\text{radian}}/{\text{sample}} \)

Fig. 3

MSE versus SNR. N = 32, μ = 0, b = 0.8, \( \omega_{0} = 0.3\;{\text{radian}}/{\text{sample}} \)

Fig. 4

MSE versus frequency. N = 32, μ = 2, b = 0.8

Fig. 5

Spectrum versus frequency. N = 100, \( \omega_{1} = 1\;\left( {{\text{radian}}/{\text{sample}}} \right) \) and \( \omega_{2} = 1.25\;\left( {{\text{radian}}/{\text{sample}}} \right) \)

Fig. 6

MSE versus velocity. \( N = 32, \;R_{o} = 3 \;{\text{km}}, \; v_{o} = 500 \; {\text{m}}/{\text{s}}, \; \alpha = 5 \)

It can be seen from Figs. 1 and 2 that the performance of the proposed method is close to the CRB (except at low SNRs) and better than that of the algorithm presented in [20], the ESPRIT method and the MUSIC method.

According to Fig. 3, in the case of colored multiplicative noise, the proposed method can still estimate the frequency better than the other algorithms.

To show the uniformity of the performance of the proposed method, the frequency is changed from 0 to 1.4 with a step size of 0.1. The other simulation parameters are the same as in Fig. 1. The results are shown in Fig. 4, from which it can be seen that the performance of the proposed method is uniform over this frequency range.

To show the advantage of the proposed method regarding resolution in the multi-target case, we consider two signals at frequencies \( \omega_{1} = 1\;\left( {{\text{radian}}/{\text{sample}}} \right) \) and \( \omega_{2} = 1.25\;\left( {{\text{radian}}/{\text{sample}}} \right) \). The number of data samples is 100, SNR = 5 dB and q = 80. The multiplicative noise is generated by a complex white Gaussian process with μ = 2. In this case, we compare the proposed method with the MUSIC method. The result is shown in Fig. 5, from which it can be seen that the resolution of the proposed method is better: the proposed method precisely separates the two targets, while the MUSIC method fails to do so.

Figure 6 shows the performance of the proposed method and the other methods in velocity estimation in Case 5. It can be seen from Fig. 6 that the performance of the proposed method in velocity estimation is better than the other methods in terms of the mean squared error. Moreover, it is worth emphasizing that the proposed method can estimate the velocity in this case without knowing parameters \( R_{o} ,f_{o} \) and α.

Figure 7 shows the performance of the proposed method and the other methods in frequency estimation in Case 1 versus the number of data samples. It can be seen from Fig. 7 that the performance of the proposed method is better than that of the other methods in terms of mean squared error across the considered range of data lengths.

Fig. 7

MSE versus data samples (N). \( {\text{SNR}} = 7\;{\text{dB}},\;\mu = 2 \), \( \omega_{0} = 0.3 \;{\text{radian}}/{\text{sample}} \)

5 Conclusion

In this paper, the frequency estimation of complex sinusoidal signals in multiplicative and additive noise is addressed. The proposed method is based on the linear prediction property of the autocorrelation function. The problem of frequency estimation is converted into a noisy autoregressive parameter estimation problem. Five cases of signal and noise are considered in the autocorrelation domain. For each case, the resulting equations are solved via a generalized eigenvalue problem. The estimator obtains the frequency directly, without peak searching.

Monte Carlo simulation results confirm that the proposed method outperforms the other existing methods and that its performance is close to the CRB at high SNRs. The simulation results are also consistent with the theoretical analysis.