1 Introduction

A two-dimensional (2-D) complex-valued chirp signal model with random amplitude is expressed as

$$\begin{aligned} y(m,n) = \psi (m,n) e^{i(\alpha _1^0 m + \alpha _2^0 m^2 + \beta _1^0 n + \beta _2^0 n^2)} + X(m,n), \;\;\; m=1,\ldots ,M; n=1,\ldots ,N. \nonumber \\ \end{aligned}$$
(1)

Here \(i = \sqrt{-1}\), and \(y(m,n)=y_R(m,n) + i y_I(m,n)\) are the complex-valued signals; \(\alpha _1^0\) and \(\alpha _2^0\) are the frequency and the chirp rate, respectively, in the first dimension, and \(\beta _1^0\) and \(\beta _2^0\) are the corresponding frequency and chirp rate in the second dimension. Further, \(0< \alpha _1^0, \alpha _2^0, \beta _1^0, \beta _2^0 < \pi \), and the amplitude \(\{\psi (m,n)\}\) is a 2-D sequence of real-valued random variables; it can be viewed as a multiplicative error, since \(\psi (m,n)\) is random and enters the model as a product with the non-random signal component. The additive error \(\{X(m,n)\}\) is a 2-D sequence of complex-valued random variables with mean zero and finite fourth moment. The problem is to estimate the unknown parameters \(\alpha _1^0\), \(\alpha _2^0\), \(\beta _1^0\) and \(\beta _2^0\) based on the MN observations \(\{y(m,n); m=1, \ldots , M; n=1,\ldots , N\}\) under certain assumptions on the sequence of random amplitudes \(\{\psi (m,n)\}\) and the sequence of additive errors \(\{X(m,n)\}\). The specific assumptions on \(\{\psi (m,n)\}\) and \(\{X(m,n)\}\) are stated in Appendix A.

The 2-D chirp model, as well as the 2-D sinusoidal model, has many applications in image analysis and telecommunications. The latter is one of the basic models in the statistical signal processing literature. Other applications are found in biomedical spectral analysis and texture analysis; see Kliger and Francos (2008), Kliger and Francos (2013), Cao et al. (2006), etc. Similar models have been used in the analysis of synthetic aperture radar data by Djurović et al. (2010). Grover et al. (2020) illustrated that the 2-D chirp model can be used effectively in black and white texture analysis. Apart from these applications, such signals are commonly observed in surveillance systems, sonar and radar, mobile telecommunications, fingerprint images, etc.; see, for example, Zhang et al. (2008), Simeunović et al. (2019), Djurović and Simeunović (2018), Porwal et al. (2019) and the references cited therein.

Model (1) is a natural extension of the one-dimensional (1-D) random amplitude chirp model to two dimensions. Nandi and Kundu (2020b) considered the 1-D model, proved the consistency of the estimators proposed by Besson et al. (1999) and obtained their asymptotic distribution. With a constant amplitude instead of a random amplitude, it reduces to the usual 1-D chirp model, which in turn generalizes the complex-valued exponential or sinusoidal model. The complex-valued exponential model is the most basic model in this class. The 2-D random amplitude chirp model is a further generalization of all these models.

The 2-D random amplitude chirp model can also be viewed as a generalization of the 2-D chirp model, to which it reduces when \(\psi (m,n)\) is identically equal to a non-zero constant. Several authors have studied the 2-D chirp model; see, for example, Zhang et al. (2008), Cao et al. (2006), Lahiri et al. (2013), Lahiri and Kundu (2017) and Grover et al. (2018a, 2020). Model (1) can likewise be seen as a generalization of the 2-D random amplitude sinusoidal model, obtained when \(\alpha _2^0 = \beta _2^0=0\). The 2-D sinusoidal model has many applications in grey symmetric textures. When \(\psi (m,n)\) is constant and the chirp rates \(\alpha _2^0\) and \(\beta _2^0\) are simultaneously equal to zero, model (1) is nothing but the 2-D sinusoidal model. Therefore, model (1) is quite a general model.

The aim of this paper is to consider the problem of estimation of the unknown parameters, the frequencies \(\alpha _1^0\) and \(\beta _1^0\) and the chirp rates \(\alpha _2^0\) and \(\beta _2^0\), and to study the theoretical properties of these estimators. The estimation method considered in this paper is based on the maximization of the 2-D periodogram function of \(y^2(m,n)\), interlaced with zeros. We show that the proposed method is equivalent to the nonlinear least squares estimation method. It is an extension of the method proposed by Besson et al. (1999) for the 1-D random amplitude chirp model to two dimensions. However, this estimator has not been studied so far, and its theoretical properties have not been explored. We show that the proposed estimators of the unknown parameters are consistent and asymptotically normally distributed. Consistency of an estimator means that, as the number of observations increases, the resulting sequence of estimators converges to the true value of the parameter. Asymptotic normality means that the distribution of the proposed estimator approaches a Gaussian distribution as the sample size grows. The asymptotic distribution also provides the different rates of convergence of the estimators of the frequency and the chirp rate parameters. Based on the asymptotic distribution, the approximate variance of the estimator can be obtained, at least for large sample sizes. It is also observed that the proposed estimator attains the Cramér–Rao lower bound when the additive error is normally distributed. We perform extensive numerical experiments to examine the behavior of the estimates for different sample sizes and different variances of the additive as well as the multiplicative error. The performance is quite satisfactory and gives an idea of how the large sample results can be used in practice for finite samples.

The rest of the paper is organized as follows. In Sect. 2, we discuss the method of estimation of the unknown parameters of model (1) and its equivalence to the nonlinear least squares method. In Sect. 3, we consider the multicomponent model and discuss how the estimation procedure for the one-component model can be applied in this case. In Sect. 4, numerical experiments are presented, and we conclude the paper in Sect. 5. All the required assumptions, theoretical results and proofs are provided in the Appendices. In Appendix A, the theoretical results for the proposed estimator of the one-component model are presented, along with the assumptions. Similarly, for the multicomponent model, the results are stated in Appendix B. The consistency of the proposed estimator for the one-component model is proved in Appendix C, and its asymptotic distribution is derived in Appendix D. An outline of the proof of the consistency of the estimator for the multicomponent model is presented in Appendix E.

2 Estimation of parameters of the 2-D random amplitude chirp signal

In this section, we first discuss the problem of estimation of unknown parameters, namely, the frequency and the chirp rate parameters present in model (1). We consider a method of estimation which is equivalent to the nonlinear least squares estimation method.

In order to describe the estimation method, write \({\varvec{\alpha }} = (\alpha _1, \alpha _2)^\top \), \({\varvec{\beta }} = (\beta _1, \beta _2)^\top \); \({\varvec{\alpha }}^0\) and \({\varvec{\beta }}^0\) are the true values of \({\varvec{\alpha }}\) and \({\varvec{\beta }}\), respectively. Also denote by \({\varvec{\xi }} = (\alpha _1, \alpha _2, \beta _1, \beta _2)^\top \) the unknown parameter vector and let \({\varvec{\xi }}^0\) be the true value of \({\varvec{\xi }}\). Then consider the estimator \(\widehat{\varvec{\xi }}\) of \({\varvec{\xi }}^0\) which maximizes the following four-dimensional function.

$$\begin{aligned} Q({\varvec{\xi }}) = \frac{1}{MN} \left| \sum _{m=1}^M \sum _{n=1}^N y^2 (m,n) e^{-i 2(\alpha _1 m + \alpha _2 m^2 + \beta _1 n + \beta _2 n^2)} \right| ^2. \end{aligned}$$
(2)

Observe that \(Q({\varvec{\xi }})\) is the 2-D chirp periodogram function of \(y^2(m,n)\) with the exponent replaced by twice the usual 2-D chirp periodogram exponent. The motivation for this particular form of the criterion function comes from the usual nonlinear least squares method. In the following, we start from the usual residual sum of squares corresponding to the additive error and establish the equivalence of the nonlinear least squares method and the estimation method proposed in this paper.
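For concreteness, \(Q({\varvec{\xi }})\) can be evaluated directly from the \(M \times N\) data matrix. The following is a minimal NumPy sketch of such an evaluation; the function name and the argument layout are illustrative and not part of the paper.

```python
import numpy as np

def criterion_Q(xi, y):
    """Evaluate the criterion Q(xi) of (2) for a complex M x N data matrix y.

    xi = (alpha1, alpha2, beta1, beta2); names here are illustrative only.
    """
    a1, a2, b1, b2 = xi
    M, N = y.shape
    m = np.arange(1, M + 1).reshape(M, 1)    # first-dimension index m = 1, ..., M
    n = np.arange(1, N + 1).reshape(1, N)    # second-dimension index n = 1, ..., N
    phase = a1 * m + a2 * m**2 + b1 * n + b2 * n**2
    s = np.sum(y**2 * np.exp(-2j * phase))   # 2-D chirp transform of y^2 at doubled arguments
    return np.abs(s)**2 / (M * N)
```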

The following derivation motivates us to maximize \(Q({\varvec{\xi }})\) to obtain the nonlinear least squares estimators of the unknown parameters present in model (1). Write model (1) as

$$\begin{aligned} y(m,n) = \psi (m,n) e^{i\phi ^0(m,n)} + X(m,n), \;\;\; m=1,\ldots ,M; n=1,\ldots ,N \end{aligned}$$

where \(\phi ^0(m,n) = (\alpha _1^0 m + \alpha _2^0 m^2 + \beta _1^0 n + \beta _2^0 n^2)\). Considering \(\psi (m,n)\) as unknown parameters for \(m=1,\ldots ,M\) and \(n=1,\ldots ,N\), the usual nonlinear least squares estimators of \(\psi (m,n)\) and \(\alpha _1^0\), \(\alpha _2^0\), \(\beta _1^0\) and \(\beta _2^0\) are obtained by minimizing

$$\begin{aligned} R({\varvec{\psi }},{\varvec{\xi }}) = \frac{1}{MN} \sum _{m=1}^M \sum _{n=1}^N \left| y(m,n) - \psi (m,n) e^{i(\alpha _1 m + \alpha _2 m^2 + \beta _1 n + \beta _2 n^2)} \right| ^2, \;\;\; {\varvec{\psi }} = ((\psi (m,n))) \end{aligned}$$

with respect to \(\psi (1,1), \ldots , \psi (M,N)\) and \({\varvec{\xi }}=(\alpha _1, \alpha _2, \beta _1, \beta _2)^\top \). Denote

$$\begin{aligned} \mathbf{y}_m= & {} (y(m,1), \ldots , y(m,N))^\top , \;\;\; {\varvec{\psi }}_m = (\psi (m,1), \ldots , \psi (m,N))^\top , \\ \mathbf{A}_m= & {} \text {diag}\Bigl \lbrace e^{i\phi (m,1)}, \ldots , e^{i\phi (m,N)} \Bigr \rbrace , \;\;\; \phi (m,n)=(\alpha _1 m + \alpha _2 m^2 + \beta _1 n + \beta _2 n^2), \end{aligned}$$

for \(m=1,\ldots ,M\) and \(n=1,\ldots ,N\). Then, the criterion function \(R({\varvec{\psi }},{\varvec{\xi }}) \) can be written as

$$\begin{aligned} R({\varvec{\psi }},{\varvec{\xi }}) = \frac{1}{MN} \sum _{m=1}^M \left| \left| \mathbf{y}_m - \mathbf{A}_m {\varvec{\psi }}_m \right| \right| ^2. \end{aligned}$$
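As a quick numerical sanity check of the vectorized form above, the following sketch compares the column-wise expression of \(R({\varvec{\psi }},{\varvec{\xi }})\) with the original double-sum definition; the data and parameter values below are arbitrary illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 5
alpha1, alpha2, beta1, beta2 = 1.5, 0.15, 2.5, 0.25

m = np.arange(1, M + 1).reshape(M, 1)
n = np.arange(1, N + 1).reshape(1, N)
phi = alpha1 * m + alpha2 * m**2 + beta1 * n + beta2 * n**2

y = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
psi = rng.standard_normal((M, N))

# double-sum form of R(psi, xi)
R_sum = np.mean(np.abs(y - psi * np.exp(1j * phi))**2)

# vectorized form: y_m = (y(m,1), ..., y(m,N))^T, A_m = diag(e^{i phi(m,1)}, ..., e^{i phi(m,N)})
R_vec = 0.0
for k in range(M):
    A_k = np.diag(np.exp(1j * phi[k]))
    R_vec += np.linalg.norm(y[k] - A_k @ psi[k])**2
R_vec /= M * N

assert np.isclose(R_sum, R_vec)
```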

In order to minimize \(R({\varvec{\psi }},{\varvec{\xi }})\), we differentiate it with respect to \({\varvec{\psi }}_m\) for fixed \({\varvec{\xi }}\) and obtain

$$\begin{aligned} \frac{\partial R({\varvec{\psi }},{\varvec{\xi }})}{\partial {\varvec{\psi }}_m} = \frac{1}{MN} \Bigl [ - \mathbf{A}_m^\top \mathbf{y}_m^* - \mathbf{A}_m^* \mathbf{y}_m + 2{\varvec{\psi }}_m \Bigl ] , \;\;\; m=1,\ldots ,M, \end{aligned}$$

where \(\mathbf{y}_m^*\) and \(\mathbf{A}_m^*\) are the complex conjugates of \(\mathbf{y}_m\) and \(\mathbf{A}_m\), respectively. Therefore, for a given \({\varvec{\xi }}\), the vectors \({\varvec{\psi }}_1, \ldots , {\varvec{\psi }}_M\) which minimize \(R({\varvec{\psi }},{\varvec{\xi }})\) are given by

$$\begin{aligned} \widehat{\varvec{\psi }}_m({\varvec{\xi }}) = \frac{1}{2} \Bigl [ \mathbf{A}_m^\top \mathbf{y}_m^* + \mathbf{A}_m^* \mathbf{y}_m \Bigl ] , \;\;\; m=1,\ldots ,M. \end{aligned}$$

Replacing \({\varvec{\psi }}_m\) by \(\widehat{\varvec{\psi }}_m({\varvec{\xi }})\) in \(R({\varvec{\psi }},{\varvec{\xi }})\), we obtain

$$\begin{aligned} R(\widehat{\varvec{\psi }}({\varvec{\xi }}),{\varvec{\xi }})= & {} \frac{1}{MN} \sum _{m=1}^M \left| \left| \mathbf{y}_m - \frac{1}{2}{} \mathbf{A}_m \mathbf{A}_m^\top \mathbf{y}_m^* - \frac{1}{2}{} \mathbf{A}_m \mathbf{A}_m^* \mathbf{y}_m \right| \right| ^2 \\= & {} \frac{1}{MN} \sum _{m=1}^M \left| \left| \mathbf{y}_m - \frac{1}{2}{} \mathbf{A}_m^2 \mathbf{y}_m^* - \frac{1}{2}{} \mathbf{y}_m \right| \right| ^2 \\= & {} \frac{1}{4 MN} \sum _{m=1}^M \left| \left| \mathbf{y}_m - \mathbf{A}_m^2 \mathbf{y}_m^* \right| \right| ^2 \\= & {} \frac{1}{2 MN} \sum _{m=1}^M \mathbf{y}_m^H \mathbf{y}_m - \frac{1}{2 MN} \sum _{m=1}^M Re\Bigl [ \mathbf{y}_m^\top \mathbf{A}_m^{2*} \mathbf{y}_m \Bigl ] , \end{aligned}$$

where \(\mathbf{y}_m^H\) is the conjugate transpose of \(\mathbf{y}_m\). Now minimizing \(R(\widehat{\varvec{\psi }}({\varvec{\xi }}),{\varvec{\xi }})\) with respect to \({\varvec{\xi }}\) is equivalent to maximizing

$$\begin{aligned} \frac{1}{MN} \sum _{m=1}^M Re\Bigl [ \mathbf{y}_m^\top \mathbf{A}_m^{2*} \mathbf{y}_m \Bigl ]= & {} \frac{1}{MN} \sum _{m=1}^M Re\Bigl [ \sum _{n=1}^N y^2(m,n) e^{-2i\phi (m,n)} \Bigl ] \\= & {} \frac{1}{MN} \sum _{m=1}^M Re\Bigl [ \sum _{n=1}^N y^2(m,n) e^{-2i(\alpha _1 m + \alpha _2 m^2 + \beta _1 n + \beta _2 n^2)} \Bigl ] . \end{aligned}$$

Therefore, taking the corresponding imaginary part into account as well, we base our estimation method on the maximization of \(Q({\varvec{\xi }})\) with respect to \({\varvec{\xi }}\).

The nonlinear least squares estimation method has thus been addressed through the periodogram-like function \(Q({\varvec{\xi }})\). The unknown parameters \(\alpha _1\), \(\alpha _2\), \(\beta _1\) and \(\beta _2\) are estimated by maximizing \(Q({\varvec{\xi }})\). Denote by \(\widehat{\varvec{\xi }}^{\top }=(\widehat{\varvec{\alpha }}^{\top }, \widehat{\varvec{\beta }}^{\top })=(\widehat{\alpha }_1, \widehat{\alpha }_2, \widehat{\beta }_1, \widehat{\beta }_2)\) the maximizer of \(Q({\varvec{\xi }})\); then

$$\begin{aligned} \widehat{\varvec{\xi }} = {\arg \max }_{(\alpha _1, \alpha _2, \beta _1, \beta _2)} \; Q({\varvec{\xi }}). \end{aligned}$$
(3)

Using the notation \(z(m,n) = y^2(m,n)\), \(a_1 = 2\alpha _1\), \(a_2 = 2\alpha _2\), \(b_1 = 2\beta _1\) and \(b_2 = 2\beta _2\), we note that \(Q({\varvec{\xi }})\) is the usual 2-D chirp periodogram function of the 2-D chirp model. The real and imaginary parts of the squared responses \(z(m,n)\), say \(z_R(m,n)\) and \(z_I(m,n)\), are explicitly given in Appendix C. These will be required to establish the consistency and the asymptotic distribution of the proposed estimator \(\widehat{\varvec{\xi }}\). The maximization of \(Q({\varvec{\xi }})\) can be carried out by any four-dimensional optimization method over the parameter space \(\varvec{\Omega }\).

In model (1), if we fix \(n=s\), then \(\{y(m,s); m=1,\ldots ,M\}\) represents the sth column of the \(M \times N\) data matrix \(((y(m,n)))\). Therefore, the sth column is a sample of the following 1-D complex-valued random amplitude chirp model

$$\begin{aligned} y(m,s) = \delta (m,s, \beta _1^0, \beta _2^0) e^{i(\alpha _1^0 m + \alpha _2^0 m^2)} + e(m,s), \end{aligned}$$

where the amplitude \(\delta (m,s, \beta _1^0, \beta _2^0) = \psi (m,s) e^{i(\beta _1^0 s + \beta _2^0 s^2)}\) is complex-valued and \(e(m,s) = X(m,s)\). If we sum the columns over s, we obtain

$$\begin{aligned} \sum _{s=1}^N y(m,s)= & {} \Bigl [ \sum _{s=1}^N \delta (m,s, \beta _1^0, \beta _2^0) \Bigl ] e^{i(\alpha _1^0 m + \alpha _2^0 m^2)} + \sum _{s=1}^N e(m,s), \\ \Rightarrow z_1(m)= & {} a(m, \beta _1^0, \beta _2^0) e^{i(\alpha _1^0 m + \alpha _2^0 m^2)} + \epsilon _1(m), \;\;\;m=1,\ldots ,M. \end{aligned}$$

Similarly, each row of the data matrix \(((y(m,n)))\) and their sum, say \(z_2(n)\), represent a 1-D complex-valued random amplitude chirp model of the same form with unknown parameters \(\beta _1^0\) and \(\beta _2^0\). Efficient estimation of \(\alpha _1^0\) and \(\alpha _2^0\) as well as \(\beta _1^0\) and \(\beta _2^0\) may be developed based on this observation.
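A minimal sketch of this reduction is given below; the helper name and variable names are illustrative, and the resulting 1-D series carry only \((\alpha _1^0, \alpha _2^0)\) and \((\beta _1^0, \beta _2^0)\), respectively.

```python
import numpy as np

def collapse_to_1d(y):
    """Collapse an M x N data matrix into the two 1-D series described above.

    z1[m-1] = sum_s y(m, s): a 1-D random amplitude chirp in (alpha1, alpha2).
    z2[n-1] = sum_m y(m, n): a 1-D random amplitude chirp in (beta1, beta2).
    """
    y = np.asarray(y)
    z1 = y.sum(axis=1)   # sum over the second index s, one value per m
    z2 = y.sum(axis=0)   # sum over the first index m, one value per n
    return z1, z2
```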

Model (1) is highly nonlinear in its parameters. Therefore, all the theoretical results for the proposed estimator \(\widehat{\varvec{\xi }}\) are valid for large samples. The theoretical properties, namely the consistency and asymptotic normality of \(\widehat{\varvec{\xi }}\), as well as the assumptions required to establish them, are stated in Appendices A, C and D.

3 Multicomponent random amplitude chirp model

In this section, we extend the 2-D random amplitude chirp model to multiple components, with p pairs of a frequency and a chirp rate in each of the two dimensions. The model can be formulated as

$$\begin{aligned}&y(m,n) = \sum _{k=1}^p \psi _k(m,n) e^{i(\alpha _{1k}^0 m + \alpha _{2k}^0 m^2 + \beta _{1k}^0 n + \beta _{2k}^0 n^2)} + X(m,n); \nonumber \\&m=1,\ldots ,M; n = 1, \ldots , N. \end{aligned}$$
(4)

For \(k=1,\ldots ,p\), the frequencies \(\alpha _{1k}^0\) and \(\beta _{1k}^0\) and the chirp rates \(\alpha _{2k}^0\) and \(\beta _{2k}^0\) are unknown and need to be estimated given a sample of size MN. The additive error \(\{X(m,n)\}\) is a 2-D sequence of complex-valued random variables, as in model (1). The sequence of random variables \(\{\psi _k(m,n)\}\) corresponds to the random amplitude of the kth component, \(k=1,\ldots ,p\); it is assumed that \(\{\psi _1(m,n)\}, \ldots , \{\psi _p(m,n)\}\) are sequences of independent and identically distributed (i.i.d.) random variables. We assume that the number of components, p, is known in advance.

The method of estimation of the unknown parameters for the multicomponent model (4) is based on the same chirp periodogram-like function \(Q({\varvec{\xi }})\) defined in Sect. 2. The unknown parameters are estimated by maximizing \(Q({\varvec{\xi }})\) locally. Denote \({\varvec{\xi }}_k = (\alpha _{1k}, \alpha _{2k}, \beta _{1k}, \beta _{2k})^{\top }\) and suppose \({\varvec{\xi }}_k^0\) is the true value of \({\varvec{\xi }}_k\). The maximization is carried out in a neighborhood of \({\varvec{\xi }}_k^0\) to estimate the parameters of the kth component. Let \(N_k\) be a neighborhood of \({\varvec{\xi }}_k^0\) such that for \(j \ne k\), \({\varvec{\xi }}_j^0 \notin N_k\). That is, \(N_k\) has to be chosen in such a way that no other \({\varvec{\xi }}_j^0\) belongs to \(N_k\); in other words, \({\varvec{\xi }}_k^0\) and \({\varvec{\xi }}_j^0\), \(j\ne k\), need to be well separated. For small samples, the choice of \(N_k\) also depends on the variance of the additive error sequence \(\{X(m,n)\}\). Formally, \({\varvec{\xi }}_k\) is estimated as

$$\begin{aligned} \widehat{ \varvec{\xi }}_k = \arg \max _{(\alpha _1, \alpha _2, \beta _1, \beta _2) \in N_k} \frac{1}{MN} \left| \sum _{m=1}^M \sum _{n=1}^N y^2(m,n) e^{-i 2(\alpha _1 m + \alpha _2 m^2 + \beta _1 n + \beta _2 n^2)} \right| ^2 \end{aligned}$$

where \(y(m,n)\) is given in (4). The whole process of estimation can be carried out by solving p separate optimization problems, each involving a four-dimensional maximization over a bounded region.
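The following sketch illustrates this component-by-component estimation. It assumes, purely for illustration, that an initial guess inside each neighborhood \(N_k\) is available (for instance from a coarse grid search or prior knowledge); a Nelder–Mead search started from such a point plays the role of the local maximization, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_Q(xi, y):
    """Negative of the criterion Q(xi) in (2), to be minimized."""
    a1, a2, b1, b2 = xi
    M, N = y.shape
    m = np.arange(1, M + 1).reshape(M, 1)
    n = np.arange(1, N + 1).reshape(1, N)
    phase = a1 * m + a2 * m**2 + b1 * n + b2 * n**2
    return -np.abs(np.sum(y**2 * np.exp(-2j * phase)))**2 / (M * N)

def estimate_components(y, initial_guesses):
    """Local Nelder-Mead maximization of Q, one run per component.

    Each entry of initial_guesses is assumed to lie in the neighborhood N_k
    of the corresponding true parameter vector xi_k^0.
    """
    estimates = []
    for xi0 in initial_guesses:
        res = minimize(neg_Q, x0=np.asarray(xi0, dtype=float), args=(y,),
                       method="Nelder-Mead")
        estimates.append(res.x)
    return estimates
```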

As for the one-component model, the theoretical properties of the estimator \(\widehat{\varvec{\xi }}_k\) of \({\varvec{\xi }}_k^0\) defined above are provided in Appendix B, along with the assumptions required to establish them.

4 Numerical experiments

In this section, we perform simulation experiments to evaluate the accuracy of the proposed estimators. These simulations are carried out for various choices of M, N and \(\sigma ^2\). For each combination of \(M = N = 25, 50, 75, 100\) and \(\sigma ^2 = 0.01, 0.1, 0.5, 1\), 1000 replications are generated. In the first set of experiments, we consider a simple synthetic signal generated from the following model:

$$\begin{aligned} \begin{aligned} y(m,n) = \psi (m,n) e^{i(1.50 m + 0.15 m^2 + 2.50 n + 0.25 n^2)} + X(m,n). \end{aligned}\end{aligned}$$
(5)

Here, the multiplicative error \(\psi (m,n)\) is assumed to be an i.i.d. sequence of Gaussian random variables with mean 5 and variance 0.5. Similarly, the additive error random variables \(\{X(m,n)\}\) are assumed to be i.i.d. \(\mathcal {N}(0, \sigma ^2)\). The objective is to estimate the nonlinear parameters of the model by maximizing the function defined in (2). We use the Nelder–Mead algorithm to maximize \(Q(\varvec{\xi })\), with the true values of the parameters as initial values for the optimization. For each generated data set, we compute the proposed estimators and report their averages, mean squared errors (MSEs) and the corresponding theoretically derived asymptotic variances (avar). Figure 1 shows the results of these simulations.
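A minimal sketch of one configuration of this experiment is given below. It follows the description in the text (model (5), Gaussian multiplicative and additive errors, Nelder–Mead started at the true values), but the seed, the reduced number of replications and all names are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2024)
M = N = 50
sigma2 = 0.1
true_xi = np.array([1.50, 0.15, 2.50, 0.25])     # (alpha1, alpha2, beta1, beta2)

m = np.arange(1, M + 1).reshape(M, 1)
n = np.arange(1, N + 1).reshape(1, N)
phase0 = true_xi[0]*m + true_xi[1]*m**2 + true_xi[2]*n + true_xi[3]*n**2

def neg_Q(xi, y):
    a1, a2, b1, b2 = xi
    ph = a1*m + a2*m**2 + b1*n + b2*n**2
    return -np.abs(np.sum(y**2 * np.exp(-2j*ph)))**2 / (M * N)

estimates = []
for _ in range(200):                              # 1000 replications in the paper
    psi = rng.normal(5.0, np.sqrt(0.5), size=(M, N))    # multiplicative error
    X = rng.normal(0.0, np.sqrt(sigma2), size=(M, N))   # additive error
    y = psi * np.exp(1j * phase0) + X
    res = minimize(neg_Q, x0=true_xi, args=(y,), method="Nelder-Mead")
    estimates.append(res.x)

estimates = np.array(estimates)
print("average estimates:", estimates.mean(axis=0))
print("MSEs:", ((estimates - true_xi)**2).mean(axis=0))
```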

Fig. 1: MSEs and the asymptotic variances of the estimates when the data is from model (5)

In the second set of simulations, we consider a more challenging setting, with samples generated from the following multicomponent 2-D model:

$$\begin{aligned} \begin{aligned} y(m,n) =&\psi _1(m,n) e^{i(1.50 m + 0.15 m^2 + 2.50 n + 0.25 n^2)} + \psi _2(m,n)\\&e^{i(1.00 m + 0.10 m^2 + 2.00 n + 0.20 n^2)} + X(m,n). \end{aligned}\end{aligned}$$
(6)

The amplitude random variables are \(\psi _1(m,n) \sim \mathcal {N}(6,0.5)\) and \(\psi _2(m,n) \sim \mathcal {N}(5,0.5)\), and the additive errors are \(X(m,n) \sim \mathcal {N}(0, \sigma ^2)\). The average estimates, the MSEs and the asymptotic variances of the proposed estimators for the first and second component parameters are shown in Figs. 2 and 3, respectively.

Fig. 2: MSEs and the asymptotic variances of the estimates of the first component when the data is from model (6)

Fig. 3: MSEs and the asymptotic variances of the estimates of the second component when the data is from model (6)

Some noteworthy observations from the simulation results are stated below:

  • The biases of the estimates are close to 0; that is, the difference between the average estimates and the true values of the parameters is negligible.

  • For fixed values of M and N, the accuracy of the proposed estimators (measured in terms of MSEs) progressively decreases as the error variance increases.

  • The MSEs of the estimates decrease as the dimensions of the data matrix increase, in agreement with the consistency of the proposed estimators.

  • The MSEs are observed to be smaller than the theoretical asymptotic variances for most of the cases.

Clearly, the results of these experiments reveal that the performance of the estimators is satisfactory. Therefore, we can conclude that the proposed method yields accurate estimates in practice.

5 Concluding remarks

In this paper, we study the 2-D random amplitude chirp model and propose a method to estimate the unknown parameters, the frequencies and the chirp rates. The proposed method maximizes a 2-D periodogram-like function of the squared observations, and the resulting estimators are consistent and asymptotically normally distributed. A 2-D multicomponent random amplitude chirp model has also been studied. Its unknown parameters are estimated by maximizing the same periodogram function locally, the maximization being carried out in a neighborhood of the true value of each component's parameter vector, so the implementation proceeds component by component. Numerical experiments have been carried out to assess the small sample performance, and the results are reported graphically. In this paper, we have assumed that the additive errors are i.i.d. It will be interesting to see how the proposed estimators perform if the additive errors come from a stationary linear process. We have not discussed the estimation of the parameters of the multiplicative and additive errors; this needs to be addressed in order to use the theoretical results in practice. The number of components in the multicomponent model is assumed to be known, which will not be the case in practice, and it needs to be estimated. Further work is needed in these directions.