
Testing for spatial dependence in a spatial autoregressive (SAR) model in the presence of endogenous regressors

Original Paper
Journal of Spatial Econometrics

Abstract

Spatial modeling has been one of the fastest-growing areas of research in economics in recent years. However, these models are seldom tested thoroughly; even when tests are performed, they are carried out in a piecemeal fashion. Another long-standing problem in economic modeling is the endogeneity of one or more variables. Endogeneity arises for a number of reasons, one of which is the simultaneous determination of economic variables. This paper considers specification testing in the context of a spatial autoregressive (SAR) model with an endogenous regressor. First, we construct standard Rao's Score (RS) tests for the null hypotheses of no spatial autocorrelation and no endogeneity. These standard RS tests are invalid in the presence of local misspecification of the models under the alternative hypotheses. Therefore, in our next step, we develop adjusted tests, using the technique of Bera and Yoon (Econom Theor 9:649–658, 1993), that are robust to local misspecification. These adjusted (or robustified) tests are simple to calculate and easy to implement. With a Monte Carlo study we investigate the finite-sample performance of all the proposed tests, and the results confirm that the robust tests outperform their non-robust counterparts in both size and power.


(Figures 1–15 omitted.)


Notes

  1. We thank a Referee for suggesting that we include a discussion of the impact of W (sparse or dense) on the test statistics.

  2. We appreciate this suggestion from a Referee.

References

  • Anselin L (2010) Thirty years of spatial econometrics. Pap Reg Sci 89:3–25
  • Anselin L (2013) Spatial econometrics: methods and models. Springer, Berlin
  • Anselin L, Bera AK (1998) Spatial dependence in linear regression models with an introduction to spatial econometrics. Handb Appl Econ Stat 155:237–290
  • Anselin L, Bera AK, Florax R, Yoon MJ (1996) Simple diagnostic tests for spatial dependence. Reg Sci Urban Econ 26:77–104
  • Anselin L, Cohen J, Cook D, Gorr W, Tita G (2000) Spatial analyses of crime. Crim Justice 4:213–262
  • Arbia G, Ghiringhelli C, Mira A (2019) Estimation of spatial econometric linear models with large datasets: how big can spatial big data be? Reg Sci Urban Econ 76:67–73
  • Bera AK, Yoon MJ (1993) Specification testing with locally misspecified alternatives. Econom Theor 9:649–658
  • Bera AK, Montes-Rojas G, Sosa-Escudero W (2009) Testing under local misspecification and artificial regressions. Econ Lett 104:66–68
  • Bera AK, Bilias Y, Yoon MJ, Taşpınar S, Doğan O (2020) Adjustments of Rao's Score Test for distributional and local parametric misspecifications. J Econom Methods 9:1–29
  • Cheng W, Lee LF (2017) Testing endogeneity of spatial and social networks. Reg Sci Urban Econ 64:81–97
  • Cliff AD, Ord JK (1973) Spatial autocorrelation. Pion, London
  • Davidson R, MacKinnon JG (1987) Implicit alternatives and the local power of test statistics. Econometrica 55:1305–1329
  • Drukker DM, Egger P, Prucha IR (2013) On two-step estimation of a spatial autoregressive model with autoregressive disturbances and endogenous regressors. Econom Rev 32:686–733
  • Fang Y, Park SY, Zhang J (2014) A simple spatial dependence test robust to local and distributional misspecifications. Econ Lett 124:203–206
  • Jenish N, Prucha IR (2009) Central limit theorems and uniform laws of large numbers for arrays of random fields. J Econom 150:86–98
  • Jenish N, Prucha IR (2012) On spatial processes and asymptotic inference under near-epoch dependence. J Econom 170:178–190
  • Kelejian HH, Prucha IR (2004) Estimation of simultaneous systems of spatially interrelated cross sectional equations. J Econom 118:27–50
  • Lee L-F (2002) Consistency and efficiency of least squares estimation for mixed regressive, spatial autoregressive models. Econom Theor 18:252–277
  • Lee L-F, Yu J (2009) Spatial nonstationarity and spurious regression: the case with a row-normalized spatial weights matrix. Spat Econ Anal 4:301–327
  • Liu X (2012) On the consistency of the LIML estimator of a spatial autoregressive model with many instruments. Econ Lett 116:472–475
  • Liu X, Lee LF (2013) Two-stage least squares estimation of spatial autoregressive models with endogenous regressors and many instruments. Econom Rev 32:734–753
  • Liu X, Saraiva P (2015) GMM estimation of SAR models with endogenous regressors. Reg Sci Urban Econ 55:68–79
  • Pace RK, Barry R (1997) Sparse spatial autoregressions. Stat Probab Lett 33:291–297
  • Qu X, Lee LF (2015) Estimating a spatial autoregressive model with an endogenous spatial weight matrix. J Econom 184:209–232
  • Saikkonen P (1989) Asymptotic relative efficiency of the classical test statistics under misspecification. J Econom 42:351–369
  • Smith TE (2009) Estimation bias in spatial models with strongly connected weight matrices. Geogr Anal 41:307–332
  • Stakhovych S, Bijmolt TH (2009) Specification of spatial models: a simulation study on weights matrices. Pap Reg Sci 88:389–408
  • Villar OA (1999) Spatial distribution of production and international trade: a note. Reg Sci Urban Econ 29:371–380
  • Wang X, Guan J (2017) Financial inclusion: measurement, spatial effects and influencing factors. Appl Econ 49:1751–1762
  • Whittle P (1954) On stationary processes in the plane. Biometrika 41:434–449
  • Yang K, Lee LF (2017) Identification and QML estimation of multivariate and simultaneous equations spatial autoregressive models. J Econom 196:196–214


Acknowledgements

We are most grateful to two anonymous referees for their pertinent comments and helpful suggestions, which greatly improved the content and exposition of the paper. An earlier version of this paper was presented at the 19th International Workshop "Spatial Econometrics and Statistics," Nantes, France, May 31 to June 1, 2021. We thank the participants of that workshop for their comments, especially the discussant of our paper, Professor Anna Gloria Billé, for her careful reading of the draft version and for suggesting many pertinent improvements that we have incorporated. Thanks are also due to Professors Osman Doğan and Süleyman Taşpınar for their helpful suggestions. Of course, we retain responsibility for any remaining errors.

Author information


Corresponding author

Correspondence to Malabika Koley.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Score functions and the Hessian matrix

Using the expression for the log-likelihood function in Eq. (4.1), we derive here the first- and second-order derivatives.

1.1 Scores

$$\begin{aligned} \left. \begin{aligned} \frac{\partial l(\theta ;y_{1},\,y_{2})}{\partial \phi }&=\frac{y_{2}^{'}\xi }{\sigma _{\xi }^{2}}, \quad \frac{\partial l(\theta ;y_{1},\,y_{2})}{\partial \beta }=\frac{X_{1}^{'}\xi }{\sigma _{\xi }^{2}}, \quad \frac{\partial l(\theta ;y_{1},\,y_{2})}{\partial \gamma }=\frac{X^{'}u_{2}}{\sigma _{2}^{2}}-\frac{\delta X^{'}\xi }{\sigma _{\xi }^2},\\ \frac{\partial l(\theta ;y_{1},\,y_{2})}{\partial \sigma _{\xi }^{2}}&=-\frac{n}{2\sigma _{\xi }^{2}}+\frac{\xi ^{'}\xi }{2\sigma _{\xi }^{4}}, \quad \frac{\partial l(\theta ;y_{1},\,y_{2})}{\partial \sigma _{2 }^{2}}=-\frac{n}{2\sigma _{2 }^{2}}+\frac{u_{2}^{'}u_{2}}{2\sigma _{2 }^{4}}, \quad \\ \frac{\partial l(\theta ;y_{1},\,y_{2})}{\partial \lambda }&=-tr(G)+\frac{y_{1}^{'}W^{'}\xi }{\sigma _{\xi }^{2}}, \quad \frac{\partial l(\theta ;y_{1},\,y_{2})}{\partial \delta }=\frac{u_{2}^{'}\xi }{\sigma _{\xi }^{2}}, \end{aligned} \right\} \end{aligned}$$

where \(G=S^{-1}W\) and \(S=(I_{n}-\lambda W)\). These score functions evaluated under the joint null \(H_{0}:\lambda _{0}=0,\,\delta _{0}=0\), give us the scores in Eq. (4.2) that were used in developing the test statistics in Sect. 4.
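These expressions are easy to sanity-check numerically. The following Python sketch simulates placeholder data (the weight matrix, covariates, and parameter values are all arbitrary, and the pieces \(\xi =Sy_{1}-\phi y_{2}-X_{1}\beta -\delta u_{2}\) and \(u_{2}=y_{2}-X\gamma\) are inferred from the score expressions above, so this is an illustration rather than the paper's code) and verifies the analytic \(\lambda\)- and \(\delta\)-scores at the joint null against central finite differences of the log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)          # row-normalized spatial weights
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X = np.column_stack([X1, rng.normal(size=n)])
y1, y2 = rng.normal(size=n), rng.normal(size=n)
phi, beta, gamma = 0.5, np.array([1.0, -0.5]), np.array([0.3, 1.0, -0.7])
s_xi2 = s22 = 1.0                          # sigma_xi^2 and sigma_2^2

def loglik(lam, delta):
    """l(theta; y1, y2) up to constants, as implied by the scores above."""
    S = np.eye(n) - lam * W
    u2 = y2 - X @ gamma
    xi = S @ y1 - phi * y2 - X1 @ beta - delta * u2
    return (np.linalg.slogdet(S)[1]
            - 0.5 * n * np.log(s_xi2) - xi @ xi / (2.0 * s_xi2)
            - 0.5 * n * np.log(s22) - u2 @ u2 / (2.0 * s22))

# analytic scores at the joint null lambda = delta = 0 (so S = I_n, G = W)
u2 = y2 - X @ gamma
xi0 = y1 - phi * y2 - X1 @ beta
d_lam = -np.trace(W) + y1 @ W.T @ xi0 / s_xi2
d_del = u2 @ xi0 / s_xi2

# central finite differences of the log-likelihood
h = 1e-6
fd_lam = (loglik(h, 0.0) - loglik(-h, 0.0)) / (2 * h)
fd_del = (loglik(0.0, h) - loglik(0.0, -h)) / (2 * h)
```

The same check applies to the remaining scores by perturbing \(\phi\), \(\beta\), \(\gamma\), or the variance parameters instead.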

Taking derivatives of the above score functions we obtain the second-order derivatives, i.e., the elements of the Hessian matrix, as follows:

1.2 Elements of the Hessian matrix

$$\begin{aligned} \left. \begin{aligned} \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \phi ^{2}}&=-\frac{y_{2}^{'}y_{2}}{\sigma _{\xi }^2}, \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \phi \partial \beta ^{'}}=-\frac{y_{2}^{'}X_{1}}{\sigma _{\xi }^2}, \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \phi \partial \gamma ^{'}} =\frac{\delta y_{2}^{'}X}{\sigma _{\xi }^2},\\ \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \phi \partial \sigma _{\xi }^{2}}&=-\frac{y_{2}^{'}\xi }{\sigma _{\xi }^4}, \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \phi \partial \lambda } =-\frac{y_{2}^{'}Wy_{1}}{\sigma _{\xi }^2}, \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \phi \partial \delta } =-\frac{y_{2}^{'}u_{2}}{\sigma _{\xi }^2},\\ \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \beta \partial \beta ^{'}}&=-\frac{X_{1}^{'}X_{1}}{\sigma _{\xi }^2}, \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \beta \partial \gamma ^{'}} =\frac{\delta X_{1}^{'}X}{\sigma _{\xi }^2}, \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \beta \partial \sigma _{\xi }^2} = -\frac{X_{1}^{'}\xi }{\sigma _{\xi }^4},\\ \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \beta \partial \lambda }&= -\frac{X_{1}^{'}W y_{1}}{\sigma _{\xi }^2}, \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \beta \partial \delta } = -\frac{X_{1}^{'}u_{2}}{\sigma _{\xi }^2}, \quad\\ \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \gamma \partial \gamma ^{'}} &= - \Big [\frac{X^{'}X}{\sigma _{2}^2} + \frac{\delta ^{2}X^{'}X}{\sigma _{\xi }^2}\Big ],\\ \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \gamma \partial \sigma _{\xi }^2}&= \frac{\delta X^{'}\xi }{\sigma _{\xi }^4}, \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \gamma \partial \sigma _{2}^{2}} = -\frac{X^{'}u_2}{\sigma _{2}^4}, \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \gamma \partial 
\lambda } = \frac{\delta X^{'}Wy_{1}}{\sigma _{\xi }^2},\\ \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \gamma \partial \delta }&= \frac{\delta X^{'}u_{2}-X^{'}\xi }{\sigma _{\xi }^2}, \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial (\sigma _{\xi }^2)^{2}} =\Big [\frac{n}{2\sigma _{\xi }^4} - \frac{\xi ^{'}\xi }{\sigma _{\xi }^6}\Big ],\\ \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \sigma _{\xi }^2\partial \lambda }&= -\frac{y_{1}^{'}W^{'}\xi }{\sigma _{\xi }^4}, \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \sigma _{\xi }^2\partial \delta }= -\frac{u_{2}^{'}\xi }{\sigma _{\xi }^4}, \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial (\sigma _{2}^2)^{2}} = \Big [\frac{n}{2\sigma _{2}^4} - \frac{u_{2}^{'}u_{2}}{\sigma _{2}^6}\Big ],\\ \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \lambda ^{2}}&=-tr(G^2) -\frac{y_{1}^{'}W^{'}Wy_{1}}{\sigma _{\xi }^2}, \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \lambda \partial \delta }= -\frac{u_{2}^{'}Wy_{1}}{\sigma _{\xi }^2},\\ \quad \frac{\partial ^{2} l(\theta ; y_{1},\,y_{2})}{\partial \delta ^{2}}&= -\frac{u_{2}^{'}u_{2}}{\sigma _{\xi }^2} \end{aligned} \right\} . \end{aligned}$$
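The Hessian entries can be spot-checked in the same spirit. The sketch below (again with arbitrary placeholder data) compares the analytic \(\frac{\partial ^{2}l}{\partial \lambda ^{2}}=-tr(G^{2})-y_{1}^{'}W^{'}Wy_{1}/\sigma _{\xi }^{2}\), evaluated at the null where \(G=W\), with a central second difference of the log-likelihood profiled in \(\lambda\) (terms not involving \(\lambda\) are omitted, since differencing removes them).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)          # row-normalized spatial weights
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
y1, y2 = rng.normal(size=n), rng.normal(size=n)
phi, beta, s_xi2 = 0.5, np.array([1.0, -0.5]), 1.0

def loglik(lam):
    """Profile of l in lambda at delta = 0; lambda-free terms omitted."""
    S = np.eye(n) - lam * W
    xi = S @ y1 - phi * y2 - X1 @ beta
    return np.linalg.slogdet(S)[1] - xi @ xi / (2.0 * s_xi2)

# analytic second derivative at lambda = 0: -tr(G^2) - y1' W' W y1 / s_xi2, G = W
hess_ll = -np.trace(W @ W) - y1 @ W.T @ W @ y1 / s_xi2

# central second difference
h = 1e-4
fd = (loglik(h) - 2.0 * loglik(0.0) + loglik(-h)) / h**2
```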

Appendix 2: Proofs of proposition and corollaries

1.1 Proof of Proposition 1

We start with the first result. To do so, we derive the asymptotic normality of \(d_{\alpha _{2}}(\tilde{\theta })\) under \(H_{A}^{\alpha _{2}}:\alpha _{20}=\alpha ^{*}_{2}+\frac{\tau _{2}}{\sqrt{n}}\) and \(H_{A}^{\alpha _{3}}:\alpha _{30}=\alpha ^{*}_{3}+\frac{\tau _{3}}{\sqrt{n}}\). Let \(\theta =(\alpha _{1}^{'},\,\alpha _{2}^{'},\,\alpha _{3}^{'})^{'}\) be an arbitrary value of \(\theta\) and \(\tilde{\theta }=(\tilde{\alpha _{1}}^{'},\,\alpha ^{*'}_{2},\,\alpha ^{*'}_{3})^{'}\) be the restricted MLE of \(\theta\) under the joint null. We expand the score function w.r.t. \(\alpha _{2}\), i.e., \(\frac{\partial l(\tilde{\theta })}{\partial \alpha _{2}}\), around \(\theta _{0}=(\alpha _{10}^{'},\,\alpha _{20}^{'},\,\alpha _{30}^{'})^{'}\) using a first-order Taylor series expansion as

$$\begin{aligned} \frac{\partial l(\tilde{\theta })}{\partial \alpha _{2}}\overset{a}{=} \frac{\partial l(\theta _{0})}{\partial \alpha _{2}} + \frac{\partial ^{2} l(\theta _{0})}{\partial \alpha _{2}\partial \alpha _{2}^{'}}(\alpha ^{*}_{2}-\alpha _{20}) + \frac{\partial ^{2} l(\theta _{0})}{\partial \alpha _{2}\partial \alpha _{3}^{'}}(\alpha ^{*}_{3}-\alpha _{30})+\frac{\partial ^{2} l(\theta _{0})}{\partial \alpha _{2}\partial \alpha _{1}^{'}}(\tilde{\alpha _{1}}-\alpha _{10}), \end{aligned}$$
(4.15)

where \(``\overset{a}{=}"\) means asymptotic equivalence.

Since \((\alpha ^{*}_{2}-\alpha _{20})=-\frac{\tau _{2}}{\sqrt{n}}\) and \((\alpha ^{*}_{3}-\alpha _{30})=-\frac{\tau _{3}}{\sqrt{n}}\) under the local alternatives \(H_{A}^{\alpha _{2}}\) and \(H_{A}^{\alpha _{3}}\), we have

$$\begin{aligned} \frac{\partial l(\tilde{\theta })}{\partial \alpha _{2}}\overset{a}{=}\frac{\partial l(\theta _{0})}{\partial \alpha _{2}}-\frac{\partial ^{2}l(\theta _{0})}{\partial \alpha _{2} \partial \alpha _{2}^{'}}\frac{\tau _{2}}{\sqrt{n}}-\frac{\partial ^{2}l(\theta _{0})}{\partial \alpha _{2} \partial \alpha _{3}^{'}}\frac{\tau _{3}}{\sqrt{n}}+\frac{\partial ^{2} l(\theta _{0})}{\partial \alpha _{2}\partial \alpha _{1}^{'}}(\tilde{\alpha _{1}}-\alpha _{10})\nonumber \\ \implies \frac{1}{\sqrt{n}}\frac{\partial l(\tilde{\theta })}{\partial \alpha _{2}}\overset{a}{=}\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{2}} + J_{\alpha _{2}\alpha _{2}}\tau _{2} + J_{\alpha _{2}\alpha _{3}}\tau _{3} -J_{\alpha _{2}\alpha _{1}}\sqrt{n}(\tilde{\alpha _{1}}-\alpha _{10}). \end{aligned}$$
(4.16)

Now expanding \(\frac{\partial l(\tilde{\theta })}{\partial \alpha _{1}}\) around \(\theta _{0}=(\alpha _{10}^{'},\,\alpha _{20}^{'},\,\alpha _{30}^{'})^{'}\), we obtain

$$\begin{aligned} \frac{\partial l(\tilde{\theta })}{\partial \alpha _{1}}&\overset{a}{=}\frac{\partial l(\theta _{0})}{\partial \alpha _{1}} + \frac{\partial ^{2}l(\theta _{0})}{\partial \alpha _{1} \partial \alpha _{2}^{'}}(\alpha ^{*}_{2}-\alpha _{20}) + \frac{\partial ^{2}l(\theta _{0})}{\partial \alpha _{1} \partial \alpha _{3}^{'}}(\alpha ^{*}_{3}-\alpha _{30}) + \frac{\partial ^{2}l(\theta _{0})}{\partial \alpha _{1} \partial \alpha _{1}^{'}}(\tilde{\alpha _{1}}-\alpha _{10}). \end{aligned}$$

Since \(\frac{\partial l(\tilde{\theta })}{\partial \alpha _{1}}=0\), where \(\tilde{\theta }=(\tilde{\alpha _{1}}^{'},\,\alpha ^{*'}_{2},\,\alpha ^{*'}_{3})^{'}\) is the restricted MLE of \(\theta\), we have from the above equation

$$\begin{aligned} 0&\overset{a}{=}\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{1}} + \frac{1}{\sqrt{n}}\frac{\partial ^{2}l(\theta _{0})}{\partial \alpha _{1} \partial \alpha _{2}^{'}}(\alpha ^{*}_{2}-\alpha _{20}) + \frac{1}{\sqrt{n}}\frac{\partial ^{2}l(\theta _{0})}{\partial \alpha _{1} \partial \alpha _{3}^{'}}(\alpha ^{*}_{3}-\alpha _{30}) + \frac{1}{\sqrt{n}}\frac{\partial ^{2}l(\theta _{0})}{\partial \alpha _{1} \partial \alpha _{1}^{'}}(\tilde{\alpha _{1}}-\alpha _{10})\nonumber \\&\implies \frac{1}{n}\frac{\partial ^{2}l(\theta _{0})}{\partial \alpha _{1} \partial \alpha _{1}^{'}}\sqrt{n}(\tilde{\alpha _{1}}-\alpha _{10})\overset{a}{=}-\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{1}} + \frac{1}{n}\frac{\partial ^{2}l(\theta _{0})}{\partial \alpha _{1} \partial \alpha _{2}^{'}}\tau _{2} + \frac{1}{n}\frac{\partial ^{2} l(\theta _{0})}{\partial \alpha _{1} \partial \alpha _{3}^{'}}\tau _{3}\nonumber \\&\implies -J_{\alpha _{1}\alpha _{1}}\sqrt{n}(\tilde{\alpha _{1}}-\alpha _{10})\overset{a}{=}-\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{1}}-J_{\alpha _{1}\alpha _{2}}\tau _{2} -J_{\alpha _{1}\alpha _{3}}\tau _{3}\nonumber \\&\implies \sqrt{n}(\tilde{\alpha _{1}}-\alpha _{10})\overset{a}{=}J^{-1}_{\alpha _{1}\alpha _{1}}\Big (\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{1}} + J_{\alpha _{1}\alpha _{2}}\tau _{2} + J_{\alpha _{1}\alpha _{3}}\tau _{3}\Big ). \end{aligned}$$
(4.17)

Substituting the expression of \(\sqrt{n}(\tilde{\alpha _{1}}-\alpha _{10})\) from (4.17) in Eq. (4.16) we obtain

$$\begin{aligned} \frac{1}{\sqrt{n}}\frac{\partial l(\tilde{\theta })}{\partial \alpha _{2}}&\overset{a}{=}\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{2}} + J_{\alpha _{2}\alpha _{2}}\tau _{2} + J_{\alpha _{2}\alpha _{3}}\tau _{3} - J_{\alpha _{2}\alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}}[\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{1}} + J_{\alpha _{1}\alpha _{2}}\tau _{2} + J_{\alpha _{1}\alpha _{3}}\tau _{3}]\\&\overset{a}{=}\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{2}} + J_{\alpha _{2}\alpha _{2}}\tau _{2} + J_{\alpha _{2}\alpha _{3}}\tau _{3} -J_{\alpha _{2}\alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}}\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{1}}-J_{\alpha _{2}\alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}}J_{\alpha _{1}\alpha _{2}}\tau _{2}\\&- J_{\alpha _{2}\alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}}J_{\alpha _{1}\alpha _{3}}\tau _{3}\\&\overset{a}{=}\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{2}} + (J_{\alpha _{2}\alpha _{2}}-J_{\alpha _{2}\alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}}J_{\alpha _{1}\alpha _{2}})\tau _{2}-J_{\alpha _{2}\alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}}\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{1}}\\&+ (J_{\alpha _{2}\alpha _{3}}-J_{\alpha _{2}\alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}}J_{\alpha _{1}\alpha _{3}})\tau _{3}. \end{aligned}$$

Denoting \(J_{\alpha _{2}\cdot \alpha _{1}}\equiv J_{\alpha _{2}\alpha _{2}} - J_{\alpha _{2}\alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}}J_{\alpha _{1}\alpha _{2}}\) and \(J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}} \equiv J_{\alpha _{2}\alpha _{3}}-J_{\alpha _{2}\alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}}J_{\alpha _{1}\alpha _{3}}\), from the above equation we have

$$\begin{aligned} \frac{1}{\sqrt{n}}\frac{\partial l(\tilde{\theta })}{\partial \alpha _{2}}&\overset{a}{=} [\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{2}}-J_{\alpha _{2}\alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}}\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{1}}] + [J_{\alpha _{2}\cdot \alpha _{1}}\tau _{2} + J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}\tau _{3}]\\&=R + C ,\, \text {say}, \end{aligned}$$

where \(R=\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{2}}-J_{\alpha _{2}\alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}}\frac{1}{\sqrt{n}}\frac{\partial l(\theta _{0})}{\partial \alpha _{1}}\) is a random variable with mean zero while \(C=J_{\alpha _{2}\cdot \alpha _{1}}\tau _{2} + J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}\tau _{3}\) is simply a constant.

Therefore,

$$\begin{aligned} \sqrt{n}d_{\alpha _{2}}(\tilde{\theta })&\overset{d}{\rightarrow }N(C,\,V), \end{aligned}$$

where \(V=Var(R)=(J_{\alpha _{2}\alpha _{2}}-J_{\alpha _{2}\alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}}J_{\alpha _{1}\alpha _{2}})=J_{\alpha _{2}\cdot \alpha _{1}}\). Thus, we can write

$$\begin{aligned} \sqrt{n}d_{\alpha _{2}}(\tilde{\theta })&\overset{d}{\rightarrow } N(J_{\alpha _{2}\cdot \alpha _{1}}\tau _{2} + J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}\tau _{3},\,\,J_{\alpha _{2}\cdot \alpha _{1}}). \end{aligned}$$
(4.18)

The asymptotic distribution of \(RS_{\alpha _{2}}\) can be easily obtained using the asymptotic distribution of \(d_{\alpha _{2}}\) in Eq. (4.18) as follows:

$$\begin{aligned} RS_{\alpha _{2}}&=nd^{'}_{\alpha _{2}}(\tilde{\theta })J^{-1}_{\alpha _{2}\cdot \alpha _{1}}(\tilde{\theta })d_{\alpha _{2}}(\tilde{\theta })\overset{d}{\rightarrow }\chi ^{2}_{q}(\nu _{3}), \end{aligned}$$
(4.19)

where \(\nu _{3}\equiv \nu _{3}(\tau _{2},\,\tau _{3})=(J_{\alpha _{2}\cdot \alpha _{1}}\tau _{2} + J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}\tau _{3})^{'}J^{-1}_{\alpha _{2}\cdot \alpha _{1}}(J_{\alpha _{2}\cdot \alpha _{1}}\tau _{2} + J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}\tau _{3})=\tau _{2}^{'}J_{\alpha _{2}\cdot \alpha _{1}}\tau _{2} + 2\tau _{2}^{'}J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}\tau _{3} + \tau _{3}^{'}J_{\alpha _{3}\alpha _{2}\cdot \alpha _{1}}J^{-1}_{\alpha _{2}\cdot \alpha _{1}}J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}\tau _{3}\).

This proves the first part of Proposition 1. The second part of the proposition states:

Under \(H_{o}^{\alpha _{2}}:\alpha _{20}=\alpha ^{*}_{2}\) and irrespective of the value of \(\alpha _{3}\) we have

$$\begin{aligned} RS^{*}_{\alpha _{2}}\overset{d}{\rightarrow }\chi ^{2}_{q}. \end{aligned}$$

To prove this, let us define \(\kappa =(\alpha _{2}^{'},\,\alpha _{3}^{'})^{'}\) and \(l_{\kappa }(\tilde{\theta })=\frac{\partial l(\tilde{\theta })}{\partial \kappa }=\big [l^{'}_{\alpha _{2}}(\tilde{\theta }),\,l^{'}_{\alpha _{3}}(\tilde{\theta })\big ]^{'}\).

Expanding \(l_{\kappa }(\tilde{\theta })\) around \(\theta _{0}=(\alpha ^{'}_{10},\,\kappa ^{'}_{0})^{'}\) using a first-order Taylor series expansion, we get

$$\begin{aligned} \frac{1}{\sqrt{n}} l_{\kappa }(\tilde{\theta })&\overset{a}{=}\frac{1}{\sqrt{n}}l_{\kappa }(\theta _{0})+\frac{1}{\sqrt{n}}\frac{\partial ^{2} l(\theta _{0})}{\partial \kappa \partial \kappa ^{'}}(\tilde{\kappa }-\kappa _{0})+ \frac{1}{\sqrt{n}}\frac{\partial ^{2}l(\theta _{0})}{\partial \kappa \partial \alpha _{1}^{'}}(\tilde{\alpha _{1}}-\alpha _{10})\nonumber \\ \implies \sqrt{n}d_{\kappa }(\tilde{\theta })&\overset{a}{=}\sqrt{n}d_{\kappa }(\theta _{0})-D_{\kappa \kappa }(\theta _{0})(\tau _{2}^{'},\,\tau _{3}^{'})^{'} + \sqrt{n}D_{\kappa \alpha _{1}}(\theta _{0})(\tilde{\alpha _{1}}-\alpha _{10}), \end{aligned}$$
(4.20)

where D denotes the second order derivative matrix, namely

$$\begin{aligned} D_{\kappa \alpha _{1}}=\begin{pmatrix} d_{\alpha _{2}\alpha _{1}}\\ d_{\alpha _{3}\alpha _{1}} \end{pmatrix} \quad \text {and}\quad D_{\kappa \kappa }=\begin{pmatrix} d_{\alpha _{2}\alpha _{2}} &{} d_{\alpha _{2}\alpha _{3}}\\ d_{\alpha _{3}\alpha _{2}} &{} d_{\alpha _{3}\alpha _{3}} \end{pmatrix}. \end{aligned}$$

Similarly, expanding \(l_{\alpha _{1}}(\tilde{\theta })\) by Taylor series expansion we obtain

$$\begin{aligned} \frac{1}{\sqrt{n}}l_{\alpha _{1}}(\tilde{\theta })&\overset{a}{=}\frac{1}{\sqrt{n}}l_{\alpha _{1}}(\theta _{0}) + \frac{1}{\sqrt{n}}\frac{\partial ^{2}l(\theta _{0})}{\partial \alpha _{1} \partial \kappa ^{'}}(\tilde{\kappa }-\kappa _{0}) + \frac{1}{\sqrt{n}}\frac{\partial ^{2}l(\theta _{0})}{\partial \alpha _{1} \partial \alpha ^{'}_{1}}(\tilde{\alpha _{1}}-\alpha _{10})\nonumber \\ \implies \sqrt{n}d_{\alpha _{1}}(\tilde{\theta })&\overset{a}{=}\sqrt{n}d_{\alpha _{1}}(\theta _{0}) - D_{\alpha _{1}\kappa }(\theta _{0})(\tau _{2}^{'},\,\tau _{3}^{'})^{'} +\sqrt{n}D_{\alpha _{1}\alpha _{1}}(\theta _{0})(\tilde{\alpha _{1}}-\alpha _{10}). \end{aligned}$$
(4.21)

Combining Eqs. (4.20) and (4.21), we get

$$\begin{aligned} \sqrt{n}d_{\kappa }(\tilde{\theta })&\overset{a}{=}(-J_{\kappa \alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}},\,I_{q+r})\begin{pmatrix} \sqrt{n}\,d_{\alpha _{1}}(\theta _{0})\\ \sqrt{n}\,d_{\kappa }(\theta _{0})\end{pmatrix} + \begin{pmatrix} J_{\alpha _{2}\cdot \alpha _{1}} &{} J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}\\ J_{\alpha _{3}\alpha _{2}\cdot \alpha _{1}} &{} J_{\alpha _{3}\cdot \alpha _{1}}\end{pmatrix}\begin{pmatrix} \tau _{2}\\ \tau _{3} \end{pmatrix}, \end{aligned}$$
(4.22)

where \(J_{\kappa \alpha _{1}}=\begin{pmatrix} J_{\alpha _{2}\alpha _{1}}\\ J_{\alpha _{3}\alpha _{1}} \end{pmatrix}\). Substituting \(\tau _{2}=0\) in Eq. (4.22) gives the distribution of \(\sqrt{n}d_{\kappa }(\tilde{\theta })\) under \(H^{\alpha _{2}}_{0}:\alpha _{20}=\alpha ^{*}_{2}\) and \(H^{\alpha _{3}}_{A}:\alpha _{30}=\alpha ^{*}_{3}+\tau _{3}/\sqrt{n}\) as

$$\begin{aligned} \sqrt{n}d_{\kappa }(\tilde{\theta })\overset{a}{=}(-J_{\kappa \alpha _{1}}J^{-1}_{\alpha _{1}\alpha _{1}},\,I_{q+r})\begin{pmatrix} \sqrt{n}\,d_{\alpha _{1}}(\theta _{0})\\ \sqrt{n}\,d_{\kappa }(\theta _{0}) \end{pmatrix} + \begin{pmatrix} J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}\tau _{3} \\ J_{\alpha _{3}\cdot \alpha _{1}} \tau _{3} \end{pmatrix} \end{aligned}$$

implying

$$\begin{aligned} \sqrt{n}d_{\kappa }(\tilde{\theta })\overset{d}{\rightarrow }N\Big [\begin{pmatrix} J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}\tau _{3}\\ J_{\alpha _{3}\cdot \alpha _{1}}\tau _{3} \end{pmatrix},\,\begin{pmatrix} J_{\alpha _{2}\cdot \alpha _{1}} &{} J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}\\ J_{\alpha _{3}\alpha _{2}\cdot \alpha _{1}} &{} J_{\alpha _{3}\cdot \alpha _{1}} \end{pmatrix}\Big ]. \end{aligned}$$
(4.23)

The adjusted score w.r.t. \(\alpha _{2}\) can be written as

$$\begin{aligned} \sqrt{n} d^{*}_{\alpha _{2}}(\tilde{\theta })&=(I_{q},\,-J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}J^{-1}_{\alpha _{3}\cdot \alpha _{1}})\sqrt{n}\,d_{\kappa }(\tilde{\theta }). \end{aligned}$$

The distribution of the adjusted score is obtained from the distribution in Eq. (4.23) as

$$\begin{aligned} \sqrt{n}d^{*}_{\alpha _{2}}(\tilde{\theta })\overset{d}{\rightarrow }N\big [0,\,J_{\alpha _{2}\cdot \alpha _{1}}-J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}J^{-1}_{\alpha _{3}\cdot \alpha _{1}}J_{\alpha _{3}\alpha _{2}\cdot \alpha _{1}}\big ]. \end{aligned}$$

Thus, for \(RS^{*}_{\alpha _{2}}\), which is based on \(d^{*}_{\alpha _{2}}\), we have the following distribution under \(H^{\alpha _{2}}_{0}:\alpha _{20}=\alpha ^{*}_{2}\), irrespective of the value of \(\alpha _{3}\):

$$\begin{aligned} RS^{*}_{\alpha _{2}}\overset{d}{\rightarrow }\chi ^{2}_{q}. \end{aligned}$$

Finally, the last part of Proposition 1 states:

Under \(H^{\alpha _{2}}_{A}:\alpha _{20}=\alpha ^{*}_{2}+\tau _{2}/\sqrt{n}\) and irrespective of the value of \(\alpha _{3}\)

$$\begin{aligned} RS^{*}_{\alpha _{2}}\overset{d}{\rightarrow }\chi ^{2}_{q}(\nu _{4}), \end{aligned}$$

where \(\nu _{4}\equiv \nu _{4}(\tau _{2})=\tau _{2}^{'}J_{\alpha _{2}\cdot \alpha _{1}}\tau _{2}-\tau _{2}^{'}J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}J^{-1}_{\alpha _{3}\cdot \alpha _{1}}J_{\alpha _{3}\alpha _{2}\cdot \alpha _{1}}\tau _{2}\) is the non-centrality parameter. To prove this, note that, as in (4.23), the distribution of \(\sqrt{n}d_{\kappa }(\tilde{\theta })\) under \(H^{\alpha _{2}}_{A}\) and \(H^{\alpha _{3}}_{0}\) can be obtained by substituting \(\tau _{3}=0\) in (4.22) as

$$\begin{aligned} \sqrt{n}d_{\kappa }(\tilde{\theta })\overset{d}{\rightarrow }N\Big [\begin{pmatrix} J_{\alpha _{2}\cdot \alpha _{1}}\tau _{2}\\ J_{\alpha _{3}\alpha _{2}\cdot \alpha _{1}}\tau _{2} \end{pmatrix},\,\begin{pmatrix} J_{\alpha _{2}\cdot \alpha _{1}} &{} J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}\\ J_{\alpha _{3}\alpha _{2}\cdot \alpha _{1}} &{} J_{\alpha _{3}\cdot \alpha _{1}} \end{pmatrix}\Big ]. \end{aligned}$$
(4.24)

Thus, the distribution of \(d^{*}_{\alpha _{2}}\) can be obtained from (4.24) as

$$\begin{aligned} \sqrt{n}d^{*}_{\alpha _{2}}(\tilde{\theta })&\overset{d}{\rightarrow }N\big [(J_{\alpha _{2}\cdot \alpha _{1}}-J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}J^{-1}_{\alpha _{3}\cdot \alpha _{1}}J_{\alpha _{3}\alpha _{2}\cdot \alpha _{1}})\tau _{2},\,J_{\alpha _{2}\cdot \alpha _{1}}-J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}J^{-1}_{\alpha _{3}\cdot \alpha _{1}}J_{\alpha _{3}\alpha _{2}\cdot \alpha _{1}}\big ], \end{aligned}$$

which implies

$$\begin{aligned} RS^{*}_{\alpha _{2}}\overset{d}{\rightarrow }\chi ^{2}_{q}(\nu _{4}), \end{aligned}$$

where \(\nu _{4}\equiv \nu _{4}(\tau _{2})=\tau _{2}^{'}(J_{\alpha _{2}\cdot \alpha _{1}}-J_{\alpha _{2}\alpha _{3}\cdot \alpha _{1}}J^{-1}_{\alpha _{3}\cdot \alpha _{1}}J_{\alpha _{3}\alpha _{2}\cdot \alpha _{1}})\tau _{2}\).
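Computationally, both \(RS_{\alpha _{2}}\) and its adjusted version reduce to a few Schur-complement operations on the blocks of \(J\) and the restricted score. The Python sketch below illustrates the construction; the positive-definite matrix standing in for \(J(\tilde{\theta })\) and the score vector are arbitrary placeholders, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
p, q, r, n = 3, 2, 2, 400                  # dims of alpha_1, alpha_2, alpha_3
m = p + q + r
A = rng.normal(size=(m, m))
J = A @ A.T / m + np.eye(m)                # stand-in positive-definite information matrix
d = rng.normal(size=m) / np.sqrt(n)        # stand-in restricted score d(theta~)
i1, i2, i3 = slice(0, p), slice(p, p + q), slice(p + q, m)

inv11 = np.linalg.inv(J[i1, i1])
J2_1 = J[i2, i2] - J[i2, i1] @ inv11 @ J[i1, i2]    # J_{alpha2 . alpha1}
J23_1 = J[i2, i3] - J[i2, i1] @ inv11 @ J[i1, i3]   # J_{alpha2 alpha3 . alpha1}
J3_1 = J[i3, i3] - J[i3, i1] @ inv11 @ J[i1, i3]    # J_{alpha3 . alpha1}

# unadjusted RS statistic, Eq. (4.19)
RS = n * d[i2] @ np.linalg.inv(J2_1) @ d[i2]

# Bera-Yoon adjusted score and statistic
d_star = d[i2] - J23_1 @ np.linalg.inv(J3_1) @ d[i3]
V_star = J2_1 - J23_1 @ np.linalg.inv(J3_1) @ J23_1.T
RS_star = n * d_star @ np.linalg.inv(V_star) @ d_star
```

Under the joint null both statistics are asymptotically \(\chi ^{2}_{q}\); the adjustment subtracts from \(d_{\alpha _{2}}\) the part predictable from \(d_{\alpha _{3}}\), which is what buys robustness to local departures in \(\alpha _{3}\).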

1.2 Proof of Corollaries

In this section we derive the non-centrality parameters of the tests in the context of our model, as given in Corollaries 2 and 3.

Proof of Corollary 2

The first part of Corollary 2 states that

Under \(H^{\lambda }_{A}:\lambda _{0}=\frac{\tau _{2}}{\sqrt{n}}\) and \(H^{\delta }_{A}:\delta _{0}=\frac{\tau _{3}}{\sqrt{n}}\)

$$\begin{aligned} RS_{\lambda }\overset{d}{\rightarrow } \chi ^{2}_{1}(\nu _{9}), \end{aligned}$$

where \(\nu _{9}\equiv \nu _{9}(\tau _{2},\,\tau _{3})=\tau ^{2}_{2}J_{\lambda \cdot \psi } + 2\tau _{2}\tau _{3}J_{\lambda \delta \cdot \psi } + \tau ^{2}_{3}\frac{J^{2}_{\lambda \delta \cdot \psi }}{J_{\lambda \cdot \psi }}\). Comparing with our general set-up in Sect. 3, here we have \(\alpha _{2}\equiv \lambda\), \(\alpha _{3}\equiv \delta\) and the set of nuisance parameters as \(\alpha _{1}\equiv \psi =(\phi ,\,\beta ^{'},\,\gamma ^{'},\,\sigma _{\xi }^{2},\,\sigma _{2}^{2})^{'}\). Therefore, substituting these in \(\nu _{3}\) of Proposition 1 we get

$$\begin{aligned} \nu _{9}&=\tau ^{'}_{2}J_{\lambda \cdot \psi }\tau _{2} + 2\tau ^{'}_{2}J_{\lambda \delta \cdot \psi }\tau _{3} + \tau ^{'}_{3}J^{'}_{\lambda \delta \cdot \psi }J^{-1}_{\lambda \cdot \psi }J_{\lambda \delta \cdot \psi }\tau _{3}\\&=\tau ^{2}_{2}J_{\lambda \cdot \psi } + 2\tau _{2}\tau _{3}J_{\lambda \delta \cdot \psi } + \tau ^{2}_{3}\frac{J^{2}_{\lambda \delta \cdot \psi }}{J_{\lambda \cdot \psi }}. \end{aligned}$$

The second part of Corollary 2 states

Under \(H^{\lambda }_{A}:\lambda _{0}=\frac{\tau _{2}}{\sqrt{n}}\)

$$\begin{aligned} RS^{*}_{\lambda }\overset{d}{\rightarrow }\chi ^{2}_{1}(\nu _{10}), \end{aligned}$$

where \(\nu _{10}\equiv \nu _{10}(\tau _{2})=\tau ^{2}_{2}[J_{\lambda \cdot \psi }-\frac{J^{2}_{\lambda \delta \cdot \psi }}{J_{\delta \cdot \psi }}]\).

The proof for the non-centrality parameter directly follows from \(\nu _{4}\) in the last part of Proposition 1.

$$\begin{aligned} \nu _{10}&=\tau ^{'}_{2}J_{\lambda \cdot \psi }\tau _{2} - \tau ^{'}_{2}J_{\lambda \delta \cdot \psi }J^{-1}_{\delta \cdot \psi }J_{\lambda \delta \cdot \psi }\tau _{2} \\&=\tau ^{2}_{2}\big [J_{\lambda \cdot \psi }-\frac{J^{2}_{\lambda \delta \cdot \psi }}{J_{\delta \cdot \psi }}\big ]. \end{aligned}$$

\(\square\)

Proof of Corollary 3

As in Corollary 2, we obtain the non-centrality parameters in Corollary 3 by substituting \(\alpha _{2}\equiv \delta\), \(\alpha _{3}\equiv \lambda\), and \(\alpha _{1}\equiv \psi =(\phi ,\,\beta ^{'},\,\gamma ^{'},\,\sigma _{\xi }^{2},\,\sigma _{2}^{2})^{'}\).

The first part of Corollary 3 says

Under \(H^{\delta }_{A}:\delta _{0}=\frac{\tau _{3}}{\sqrt{n}}\) and \(H^{\lambda }_{A}:\lambda _{0}=\frac{\tau _{2}}{\sqrt{n}}\)

$$\begin{aligned} RS_{\delta }\overset{d}{\rightarrow } \chi ^{2}_{1}(\nu _{11}), \end{aligned}$$

where \(\nu _{11}\equiv \nu _{11}(\tau _{2},\,\tau _{3})=\tau ^{2}_{3}J_{\delta \cdot \psi } + 2\tau _{2}\tau _{3}J_{\lambda \delta \cdot \psi } + \tau ^{2}_{2}\frac{J^{2}_{\lambda \delta \cdot \psi }}{J_{\delta \cdot \psi }}\).

Following Proposition 1 we have

$$\begin{aligned} \nu _{11}&=\tau ^{'}_{3}J_{\delta \cdot \psi }\tau _{3} + 2\tau ^{'}_{3}J_{\delta \lambda \cdot \psi }\tau _{2} + \tau ^{'}_{2}J^{'}_{\delta \lambda \cdot \psi }J^{-1}_{\delta \cdot \psi }J_{\delta \lambda \cdot \psi }\tau _{2}\\&=\tau ^{2}_{3}J_{\delta \cdot \psi } + 2\tau _{3}\tau _{2}J_{\delta \lambda \cdot \psi } + \tau ^{2}_{2}\frac{J^{2}_{\delta \lambda \cdot \psi }}{J_{\delta \cdot \psi }}. \end{aligned}$$

The second part of Corollary 3 states

Under \(H^{\delta }_{A}:\delta _{0}=\frac{\tau _{3}}{\sqrt{n}}\) we have

$$\begin{aligned} RS^{*}_{\delta }\overset{d}{\rightarrow }\chi ^{2}_{1}(\nu _{12}), \end{aligned}$$

where \(\nu _{12}\equiv \nu _{12}(\tau _{3})=\tau ^{2}_{3}[J_{\delta \cdot \psi }-\frac{J^{2}_{\lambda \delta \cdot \psi }}{J_{\lambda \cdot \psi }}]\).

Again from Proposition 1 we directly obtain

$$\begin{aligned} \nu _{12}&=\tau ^{'}_{3}J_{\delta \cdot \psi }\tau _{3} - \tau ^{'}_{3}J_{\delta \lambda \cdot \psi }J^{-1}_{\lambda \cdot \psi }J_{\delta \lambda \cdot \psi }\tau _{3} \\&=\tau ^{2}_{3}\big [J_{\delta \cdot \psi }-\frac{J^{2}_{\delta \lambda \cdot \psi }}{J_{\lambda \cdot \psi }}\big ]. \end{aligned}$$

\(\square\)

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Koley, M., Bera, A.K. Testing for spatial dependence in a spatial autoregressive (SAR) model in the presence of endogenous regressors. J Spat Econometrics 3, 11 (2022). https://doi.org/10.1007/s43071-022-00026-7

