Abstract
Data collected over a geographical space may exhibit spatial dependence, in the sense that nearby observations are more alike than those far apart. Such behavior is modeled by incorporating a covariance structure into classical statistical models. In particular, spatial regression models that accommodate various types of spatial dependence have been increasingly applied in epidemiology, geology, disease surveillance, urban planning, the analysis and mapping of poverty indicators, and other fields. An important member of this class is the Spatial Moving Average (SMA) model, which imposes a moving average specification on the noise term, analogous to moving average errors in time series regression. In this paper we consider SMA models and propose efficient estimators of their regression coefficients using shrinkage and penalty approaches. We provide analytical and numerical results illustrating the superiority of the proposed estimators over the classical maximum likelihood estimator. Additionally, we apply the new methodology to the Baltimore housing sale price data.
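To make the specification concrete, the following is a minimal simulation sketch, not the paper's implementation or data: it generates data from a common SMA-type error model, y = Xβ + (I_n + ρW)ε, on a small regular grid, computes the full-model generalized least squares fit, the submodel fit with the candidate coefficient block β_2 restricted to zero, and a generic positive-part Stein-type combination of the two. The weight matrix, the value of ρ (treated as known), the partition sizes, and the particular shrinkage factor are illustrative assumptions only, not the estimators studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative spatial layout: a 10 x 10 regular grid with rook contiguity.
m = 10
n = m * m
W = np.zeros((n, n))
for i in range(m):
    for j in range(m):
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ii, jj = i + di, j + dj
            if 0 <= ii < m and 0 <= jj < m:
                W[i * m + j, ii * m + jj] = 1.0
W /= W.sum(axis=1, keepdims=True)            # row-standardized weights

# Simulate from an SMA-type error model: y = X beta + (I + rho W) eps.
p1, p2 = 3, 4                                # retained and candidate coefficients
beta = np.r_[2.0, -1.0, 0.5, np.zeros(p2)]   # candidate block is truly zero here
rho, sigma = 0.4, 1.0                        # rho treated as known (an assumption)
X = rng.standard_normal((n, p1 + p2))
y = X @ beta + (np.eye(n) + rho * W) @ (sigma * rng.standard_normal(n))

# Full-model GLS with error covariance proportional to (I + rho W)(I + rho W)'.
B = np.eye(n) + rho * W
V_inv = np.linalg.inv(B @ B.T)
XtVX = X.T @ V_inv @ X
beta_full = np.linalg.solve(XtVX, X.T @ V_inv @ y)

# Submodel GLS (beta_2 forced to zero) and a generic positive-part Stein blend;
# the paper's actual shrinkage and penalty estimators may differ in detail.
X1 = X[:, :p1]
beta_sub = np.linalg.solve(X1.T @ V_inv @ X1, X1.T @ V_inv @ y)
resid = y - X @ beta_full
sigma2_hat = float(resid @ V_inv @ resid) / (n - p1 - p2)
cov22 = sigma2_hat * np.linalg.inv(XtVX)[p1:, p1:]
T_stat = float(beta_full[p1:] @ np.linalg.solve(cov22, beta_full[p1:]))
weight = max(0.0, 1.0 - (p2 - 2) / T_stat)
beta_shrink = beta_sub + weight * (beta_full[:p1] - beta_sub)

print("full-model beta_1:", np.round(beta_full[:p1], 3))
print("submodel   beta_1:", np.round(beta_sub, 3))
print("shrinkage  beta_1:", np.round(beta_shrink, 3))
```

Replacing the known ρ with an estimate and the generic Stein factor with data-driven shrinkage and penalty rules would correspond more closely to the estimators analyzed in the text.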
Appendix
Proof of Theorem 2:
(i) From Theorem 1, we have:
$$\begin{aligned} \sqrt{n}\,(\hat{\varvec{\beta }}-\varvec{\beta })=\sqrt{n} \left( \left( \begin{array}{c}\hat{\varvec{\beta }}_1 \\ \hat{\varvec{\beta }}_2 \end{array}\right) - \left( \begin{array}{c}\varvec{\beta }_1 \\ \varvec{\beta }_2 \end{array} \right) \right) \overset{D}{\longrightarrow }N_p\left( \left( \begin{array}{c}\varvec{0}_{p_1} \\ \varvec{0}_{p_2}\end{array}\right) ,\sigma ^2\varvec{V}^{-1}\right) , \end{aligned}$$where \(\varvec{V}^{-1}\) is as in (18). Therefore,
$$\begin{aligned} \varvec{T}^{(1)}_n= \sqrt{n}(\hat{\varvec{\beta }}_1-\varvec{\beta }_1)\overset{D}{\longrightarrow }\varvec{T}^{(1)}\sim N_{p_1}(\varvec{0}_{p_1},\sigma ^2\varvec{D}), \end{aligned}$$where \(\varvec{D}=\varvec{V}_{11.2}^{-1}\).
(ii) Note that
$$\begin{aligned} \hat{\varvec{\beta }}^R_1 &= \hat{\varvec{\beta }}_1+(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})\hat{\varvec{\beta }}_2, \text { so}\\ \varvec{T}^{(2)}_n &= \sqrt{n} \Big \{\hat{\varvec{\beta }}_1+(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})\hat{\varvec{\beta }}_2-\varvec{\beta }_1\Big \}\\ &= \varvec{T}^{(1)}_n+(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})\sqrt{n} (\hat{\varvec{\beta }}_2-\varvec{\beta }_2)\\ &\quad + (\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})\sqrt{n} \varvec{\beta }_2. \end{aligned}$$As \(n\longrightarrow \infty \), and by Slutsky's theorem, we have \(\varvec{T}^{(2)}_n\overset{D}{\longrightarrow }\varvec{T}^{(2)}\sim N_{p_1}(\varvec{\mu }^{(2)},\varvec{\Sigma }^{(2)})\), where
$$\begin{aligned} \varvec{\mu }^{(2)} &= \varvec{V}_{11}^{-1}\varvec{V}_{12}\varvec{\xi }= \varvec{\pi },\\ \varvec{\Sigma }^{(2)} &= Var\Big \{\varvec{T}^{(1)}+\varvec{V}_{11}^{-1}\varvec{V}_{12}\varvec{T}^{(12)}\Big \}\\ &= Var(\varvec{T}^{(1)})+Var(\varvec{V}_{11}^{-1}\varvec{V}_{12}\varvec{T}^{(12)})+Cov(\varvec{T}^{(1)},\varvec{V}_{11}^{-1}\varvec{V}_{12}\varvec{T}^{(12)})\\ &\quad +Cov(\varvec{V}_{11}^{-1}\varvec{V}_{12}\varvec{T}^{(12)},\varvec{T}^{(1)})\\ &= \sigma ^2\Big \{\varvec{V}_{11.2}^{-1}-\varvec{V}_{11}^{-1}\varvec{V}_{12}\varvec{V}_{22}^{-1}\varvec{V}_{21}\varvec{V}_{11.2}^{-1} \Big \}\\ &= \sigma ^2\Big \{\varvec{D}-\varvec{D}^*\Big \}. \end{aligned}$$
(iii) Also note that:
$$\begin{aligned} \varvec{T}^{(3)}_n &= \sqrt{n}(\hat{\varvec{\beta }}_1-\hat{\varvec{\beta }}^R_1)\\ &= \sqrt{n}\left( - (\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})\hat{\varvec{\beta }}_2\right) \\ &= -\sqrt{n}\left( (\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})(\hat{\varvec{\beta }}_2-\varvec{\beta }_2+\varvec{\beta }_2) \right) \\ &= -(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})\sqrt{n}(\hat{\varvec{\beta }}_2-\varvec{\beta }_2)\\ &\quad -(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})\sqrt{n}\varvec{\beta }_2. \end{aligned}$$As \(n\longrightarrow \infty \) and using Slutsky's theorem, we have \(\varvec{T}^{(3)}_n\overset{D}{\longrightarrow }\varvec{T}^{(3)}\sim N_{p_1}\left( \varvec{\mu }^{(3)},\varvec{\Sigma }^{(3)}\right) \), where
$$\begin{aligned} \varvec{\mu }^{(3)} &= \varvec{0} -\varvec{V}_{11}^{-1}\varvec{V}_{12}\varvec{\xi }= -\varvec{\pi },\\ \varvec{\Sigma }^{(3)} &= Var(-\varvec{V}_{11}^{-1}\varvec{V}_{12}\varvec{T}^{(12)})\\ &= \sigma ^2\varvec{V}_{11}^{-1}\varvec{V}_{12}\varvec{V}_{22.1}^{-1}\varvec{V}_{21}\varvec{V}_{11}^{-1}\\ &= \sigma ^2\varvec{D}^*. \end{aligned}$$
(iv) Note that:
$$\begin{aligned} \varvec{T}^{(2)}_n &= \sqrt{n}(\hat{\varvec{\beta }}^R_1-\varvec{\beta }_1)\\ &= \sqrt{n} \left( \hat{\varvec{\beta }}_1+(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})\hat{\varvec{\beta }}_2 -\varvec{\beta }_1 \right) \\ &= \sqrt{n}\Big ((\hat{\varvec{\beta }}_1-\varvec{\beta }_1)+(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})(\hat{\varvec{\beta }}_2-\varvec{\beta }_2+\varvec{\beta }_2) \Big )\\ &= \varvec{T}^{(1)}_n+(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})\sqrt{n}(\hat{\varvec{\beta }}_2-\varvec{\beta }_2)\\ &\quad +(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})\sqrt{n}\varvec{\beta }_2\\ &= \varvec{T}^{(1)}_n+\varvec{A}_n \varvec{T}^{(12)}_n+\varvec{A}_n \varvec{\xi }, \end{aligned}$$where \(\varvec{A}_n=(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})\), and
$$\begin{aligned} \varvec{T}^{(3)}_n &= \sqrt{n}(\hat{\varvec{\beta }}_1-\hat{\varvec{\beta }}^R_1)\\ &= \sqrt{n}\left( -(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})(\hat{\varvec{\beta }}_2-\varvec{\beta }_2+\varvec{\beta }_2) \right) \\ &= -(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})\sqrt{n}(\hat{\varvec{\beta }}_2-\varvec{\beta }_2)\\ &\quad -(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{01})^{-1}(\varvec{X}_{01}^\prime {\varvec{\hat{V}}_{\varvec{n}}^{-1}} \varvec{X}_{02})\sqrt{n}\varvec{\beta }_2\\ &= -\varvec{A}_n\varvec{T}^{(12)}_n-\varvec{A}_n\varvec{\xi }. \end{aligned}$$Therefore,
$$\begin{aligned} \left( \begin{array}{c} \varvec{T}^{(2)}_n\\ \varvec{T}^{(3)}_n\end{array}\right) &= \left( \begin{array}{c}\varvec{I}_{p_1}\\ \varvec{0}_{p_1}\end{array}\right) \varvec{T}^{(1)}_n+ \left( \begin{array}{c}\varvec{A}_n\\ -\varvec{A}_n\end{array}\right) \varvec{T}^{(12)}_n+\left( \begin{array}{c}\varvec{A}_n \\ -\varvec{A}_n\end{array}\right) \varvec{\xi }\\ &= \varvec{G}_1\varvec{T}^{(1)}_n+ \varvec{G}_{2n}\varvec{T}^{(12)}_n+\varvec{G}_{2n}\varvec{\xi }, \end{aligned}$$which is a linear combination of \(\varvec{T}^{(1)}_n\) and \(\varvec{T}^{(12)}_n\), where \(\varvec{G}_1 = \left( \begin{array}{c}\varvec{I}_{p_1}\\ \varvec{0}_{p_1}\end{array}\right) \), \(\varvec{G}_{2n}=\left( \begin{array}{c}\varvec{A}_n\\ -\varvec{A}_n\end{array}\right) \), \(\varvec{I}_{p_1}\) is the \(p_1\times p_1\) identity matrix, and \(\varvec{0}_{p_1}\) is a \(p_1\times p_1\) matrix of zeros. Thus, as \(n\longrightarrow \infty \), \(\varvec{A}_n\overset{P}{\longrightarrow }\varvec{A} = \varvec{V}_{11}^{-1} \varvec{V}_{12}\) and \(\varvec{G}_{2n} \overset{P}{\longrightarrow }\varvec{G}_2 = \left( \begin{array}{c} \varvec{A} \\ -\varvec{A} \end{array}\right) \), so by Slutsky's theorem \(\left( \begin{array}{c}\varvec{T}^{(2)}_n\\ \varvec{T}^{(3)}_n\end{array}\right) \overset{D}{\longrightarrow }\left( \begin{array}{c}\varvec{T}^{(2)}\\ \varvec{T}^{(3)}\end{array}\right) \sim N_{2p_1}\left( \varvec{\mu }^{(4)},\varvec{\Sigma }^{(4)}\right) \), where
$$\begin{aligned} \varvec{\mu }^{(4)} &= \varvec{G}_1\varvec{0} +\varvec{G}_2 \varvec{0} +\varvec{G}_2 \varvec{\xi }\\ &= \left( \begin{array}{c}\varvec{V}_{11}^{-1}\varvec{V}_{12}\varvec{\xi }\\ -\varvec{V}_{11}^{-1}\varvec{V}_{12}\varvec{\xi }\end{array}\right) =\left( \begin{array}{c}\varvec{\pi }\\ -\varvec{\pi }\end{array}\right) ,\\ \varvec{\Sigma }^{(4)} &= Var(\varvec{G}_1 \varvec{T}^{(1)})+Var(\varvec{G}_2 \varvec{T}^{(12)})+Cov(\varvec{G}_1 \varvec{T}^{(1)}, \varvec{G}_2\varvec{T}^{(12)})\\ &\quad +Cov(\varvec{G}_2 \varvec{T}^{(12)}, \varvec{G}_1 \varvec{T}^{(1)})\\ &= \sigma ^2\left( \begin{array}{cc} \varvec{D}-\varvec{D}^* & \varvec{0} \\ \varvec{0} & \varvec{D}^* \end{array}\right) , \end{aligned}$$using the properties of the variance and covariance functions. It is clear that \(\varvec{T}^{(2)}_n\) and \(\varvec{T}^{(3)}_n\) are asymptotically independent; a numerical check of this block-diagonal structure is sketched after the proof.
(v) Using the same technique as in part (iv), we can write \(\left( \begin{array}{c} \varvec{T}^{(1)}_n\\ \varvec{T}^{(3)}_n\end{array}\right) \) as a linear combination of \(\varvec{T}^{(1)}_n\) and \(\varvec{T}^{(12)}_n\) as below:
$$\begin{aligned} \left( \begin{array}{c} \varvec{T}^{(1)}_n\\ \varvec{T}^{(3)}_n\end{array}\right) &= \left( \begin{array}{c} \varvec{I}_{p_1} \\ \varvec{0}_{p_1}\end{array}\right) \varvec{T}^{(1)}_n+ \left( \begin{array}{c} \varvec{0}_{p_1\times p_2} \\ -\varvec{A}_n\end{array}\right) \varvec{T}^{(12)}_n+\left( \begin{array}{c} \varvec{0}_{p_1\times p_2} \\ -\varvec{A}_n\end{array}\right) \varvec{\xi }\\ &= \varvec{G}_1\varvec{T}^{(1)}_n+\varvec{F}_n\varvec{T}^{(12)}_n+\varvec{F}_n\varvec{\xi }, \end{aligned}$$where \(\varvec{F}_n =\left( \begin{array}{c} \varvec{0}_{p_1\times p_2} \\ -\varvec{A}_n \end{array}\right) \), and \(\varvec{0}_{p_1\times p_2}\) is a \(p_1\times p_2\) matrix of zeros. Therefore, as \(n\longrightarrow \infty \), \(\varvec{F}_n\overset{P}{\longrightarrow }\varvec{F}=\left( \begin{array}{c} \varvec{0} \\ -\varvec{A}\end{array}\right) \), and hence, by Slutsky's theorem, \(\left( \begin{array}{c} \varvec{T}^{(1)}_n\\ \varvec{T}^{(3)}_n\end{array}\right) \overset{D}{\longrightarrow }\left( \begin{array}{c} \varvec{T}^{(1)}\\ \varvec{T}^{(3)}\end{array}\right) \sim N_{2p_1} \left( \varvec{\mu }^{(5)},\varvec{\Sigma }^{(5)}\right) \), where
$$\begin{aligned} \varvec{\mu }^{(5)} &= \varvec{G}_1\varvec{0} +\varvec{F}\varvec{0} +\left( \begin{array}{c}\varvec{0} \\ -\varvec{A} \end{array}\right) \varvec{\xi }= \left( \begin{array}{c}\varvec{0} \\ -\varvec{A} \varvec{\xi }\end{array}\right) =\left( \begin{array}{c}\varvec{0} \\ -\varvec{\pi }\end{array}\right) ,\\ \varvec{\Sigma }^{(5)} &= Var(\varvec{G}_1\varvec{T}^{(1)})+Var(\varvec{F} \varvec{T}^{(12)})+Cov(\varvec{G}_1\varvec{T}^{(1)},\varvec{F}\varvec{T}^{(12)})\\ &\quad +Cov(\varvec{F} \varvec{T}^{(12)},\varvec{G}_1\varvec{T}^{(1)})\\ &= \sigma ^2\left( \begin{array}{cc} \varvec{D} & \varvec{D}^*\\ \varvec{D}^* & \varvec{D}^* \end{array}\right) , \end{aligned}$$using the same procedure as in part (iv).
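Two algebraic facts used above lend themselves to a quick numerical check. Part (iii) writes \(\varvec{D}^*\) as \(\varvec{V}_{11}^{-1}\varvec{V}_{12}\varvec{V}_{22.1}^{-1}\varvec{V}_{21}\varvec{V}_{11}^{-1}\), while part (ii) uses \(\varvec{V}_{11}^{-1}\varvec{V}_{12}\varvec{V}_{22}^{-1}\varvec{V}_{21}\varvec{V}_{11.2}^{-1}\); these coincide by a standard block-inverse identity. Likewise, the block-diagonal form of \(\varvec{\Sigma }^{(4)}\) in part (iv) follows from the cancellation of the cross-covariance terms. The sketch below verifies both facts for an arbitrary positive-definite \(\varvec{V}\) with \(\sigma ^2=1\); the partition sizes are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
p1, p2 = 3, 4

# An arbitrary symmetric positive-definite V, partitioned as in the proof.
Z = rng.standard_normal((p1 + p2, p1 + p2))
V = Z @ Z.T + (p1 + p2) * np.eye(p1 + p2)
V11, V12 = V[:p1, :p1], V[:p1, p1:]
V21, V22 = V[p1:, :p1], V[p1:, p1:]
V11_2 = V11 - V12 @ np.linalg.solve(V22, V21)    # V_{11.2}
V22_1 = V22 - V21 @ np.linalg.solve(V11, V12)    # V_{22.1}
D = np.linalg.inv(V11_2)

# The two forms of D* appearing in parts (ii) and (iii) agree.
D_star_ii = np.linalg.inv(V11) @ V12 @ np.linalg.inv(V22) @ V21 @ np.linalg.inv(V11_2)
D_star_iii = np.linalg.inv(V11) @ V12 @ np.linalg.inv(V22_1) @ V21 @ np.linalg.inv(V11)
print("D* expressions coincide:", np.allclose(D_star_ii, D_star_iii))

# (T^(2), T^(3)) is the linear map M of (T^(1), T^(12)) ~ N(., sigma^2 V^{-1});
# with sigma^2 = 1, its covariance M V^{-1} M' should be block diagonal with
# blocks D - D* and D*, as stated in part (iv).
A = np.linalg.solve(V11, V12)
M = np.block([[np.eye(p1), A],
              [np.zeros((p1, p1)), -A]])
Sigma4 = M @ np.linalg.inv(V) @ M.T
print("off-diagonal block is zero:", np.allclose(Sigma4[:p1, p1:], 0.0, atol=1e-10))
print("(1,1) block equals D - D* :", np.allclose(Sigma4[:p1, :p1], D - D_star_ii))
print("(2,2) block equals D*     :", np.allclose(Sigma4[p1:, p1:], D_star_iii))
```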