Abstract
The panel data of interest consist of a moderately or relatively large number of panels, while each panel contains a small number of observations. This paper establishes testing procedures to detect a possible common change in the means of the panels. To this end, we consider a ratio type test statistic and derive its asymptotic distribution under the no change null hypothesis. Moreover, we prove the consistency of the test under the alternative. The main advantage of this approach is that the variance of the observations neither has to be known nor estimated. On the other hand, the correlation structure still needs to be estimated. To overcome this issue, a bootstrap technique is proposed as a completely data driven approach without any tuning parameters. The validity of the bootstrap algorithm is shown. As a by-product of the developed tests, we introduce a common break point estimate and prove its consistency. The results are illustrated through a simulation study. An application of the procedure to actuarial data is presented.
1 Introduction
The problem of an unknown common change in the means of the panels is studied here, where the panel data consist of \(N\) panels and each panel contains \(T\) observations over time. The break size may differ from panel to panel, while the change occurs at some unknown common time \(\tau \in \{1,\ldots ,T\}\). The panels are considered to be independent, but this restriction can be weakened. On the other hand, observations within each panel are usually not independent. It is supposed that a common unknown dependence structure is present across the panels.
Tests for change point detection in panel data have been proposed only for the case where the panel size \(T\) is sufficiently large, i.e., \(T\rightarrow \infty \) from an asymptotic point of view, cf. Chan et al. (2013) or Horváth and Hušková (2012). However, change point estimation has already been studied for finite \(T\) not depending on the number of panels \(N\), see Bai (2010). The remaining task is to develop testing procedures to decide whether a common change point is present in the panels or not, while taking into account that the length \(T\) of each observation regime is fixed and can be relatively small.
1.1 Motivation
Structural changes in panel data—especially common breaks in means—are a widespread phenomenon. Our primary motivation comes from the non-life insurance business, where associations in many countries uniting several insurance companies collect the claim amounts paid by every insurance company each year. Such a database of cumulative claim payments can be viewed as panel data, where insurance company \(i=1,\ldots ,N\) provides the total claim amount \(Y_{i,t}\) paid in year \(t=1,\ldots ,T\) into the common database. The members of the association can consequently profit from the joint database.
For the whole association it is important to know whether a possible change in the claim amounts occurred during the observed time horizon. Usually, the time period is relatively short, e.g., 10–15 years. To be more specific, a widely used and very standard actuarial method for predicting future claim amounts—called chain ladder—assumes a kind of stability of the historical claim amounts. The formal necessary and sufficient condition is derived in Pešta and Hudecová (2012). The present paper shows how to test for possible historical instability.
2 Panel change point model
Let us consider the panel change point model
\[
Y_{i,t}=\mu _i+\delta _i\,\mathbb {1}\{t>\tau \}+\sigma \varepsilon _{i,t},\qquad i=1,\ldots ,N,\ t=1,\ldots ,T,
\]
where \(\sigma >0\) is an unknown variance-scaling parameter and \(T\) is fixed, not depending on \(N\). The possible common change point time is denoted by \(\tau \in \{1,\ldots ,T\}\). A situation where \(\tau =T\) corresponds to no change in the means of the panels. The means \(\mu _i\) are panel-individual. The amount of the break in mean, which can also differ for every panel, is denoted by \(\delta _i\). Furthermore, it is assumed that the sequences of panel disturbances \(\{\varepsilon _{i,t}\}_t\) are independent and that within each panel the errors form a weakly stationary sequence with a common correlation structure. This can be formalized in the following assumption.
Assumption A1
The vectors \([\varepsilon _{i,1},\ldots ,\varepsilon _{i,T}]^{\top }\) existing on a probability space \((\varOmega ,\mathcal {F},\mathsf {P})\) are \(iid\) for \(i=1,\ldots ,N\) with \(\mathsf {E}\varepsilon _{i,t}=0\) and \(\mathsf {Var}\,\varepsilon _{i,t}=1\), having the autocorrelation function
\[
\rho _t=\mathsf {Corr}\,(\varepsilon _{i,s},\varepsilon _{i,s+t})=\mathsf {Cov}\,(\varepsilon _{i,s},\varepsilon _{i,s+t}),
\]
which is independent of the lag \(s\), the cumulative autocorrelation function
\[
r(t)=\mathsf {Var}\,\sum _{s=1}^{t}\varepsilon _{i,s},
\]
and the shifted cumulative correlation function
\[
R(t,v)=\mathsf {Cov}\,\Bigl (\sum _{s=1}^{t}\varepsilon _{i,s},\sum _{u=t+1}^{v}\varepsilon _{i,u}\Bigr ),\qquad t<v,
\]
for all \(i=1,\ldots ,N\) and \(t,v=1,\ldots ,T\).
The sequence \(\{\varepsilon _{i,t}\}_{t=1}^T\) can be viewed as part of a weakly stationary process. Note that the dependent errors within each panel do not necessarily need to be linear processes; for example, GARCH processes are allowed as error sequences as well. The assumption of independent panels can indeed be relaxed, but it would make the setup much more complex, and probabilistic tools for dependent data would need to be used (e.g., suitable versions of the central limit theorem). Nevertheless, assuming that the claim amounts of different insurance companies are independent is reasonable. Moreover, the assumption of a common homoscedastic variance parameter \(\sigma \) can be generalized by introducing weights \(w_{i,t}\), which are supposed to be known. In actuarial practice specifically, this would mean normalizing the total claim amount by the premium received, since bigger insurance companies are expected to have higher variability in the total claim amounts paid.
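To make the setup concrete, the model and one admissible error structure can be simulated as follows. This is a minimal sketch: the AR(1) errors, the normal innovations, and all parameter values are illustrative choices consistent with Assumption A1, not prescriptions from the text.

```python
import numpy as np

def simulate_panel(N, T, tau, mu, delta, sigma=1.0, phi=0.3, rng=None):
    """Simulate Y[i, t] = mu_i + delta_i * 1{t > tau} + sigma * eps_{i,t}.
    The errors within each panel follow a weakly stationary AR(1) process
    with unit variance (an illustrative choice); panels are independent."""
    rng = np.random.default_rng(rng)
    # innovations scaled so that the stationary variance of eps is 1
    innov = rng.normal(0.0, np.sqrt(1.0 - phi**2), size=(N, T))
    eps = np.empty((N, T))
    eps[:, 0] = rng.normal(0.0, 1.0, size=N)  # stationary start
    for t in range(1, T):
        eps[:, t] = phi * eps[:, t - 1] + innov[:, t]
    shift = (np.arange(1, T + 1) > tau).astype(float)  # indicator 1{t > tau}
    return mu[:, None] + delta[:, None] * shift[None, :] + sigma * eps

# one panel data set with a common break after tau = 5
Y = simulate_panel(N=50, T=10, tau=5, mu=np.zeros(50), delta=np.full(50, 2.0), rng=1)
```

Each row of `Y` is one panel; the first five columns have the pre-change means, the remaining columns are shifted by the panel-specific break sizes.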
It is required to test the null hypothesis of no change in the means
\[
H_0:\ \tau =T
\]
against the alternative that at least one panel has a change in mean
\[
H_1:\ \tau <T\ \text { and }\ \exists \, i\in \{1,\ldots ,N\}:\ \delta _i\ne 0.
\]
3 Test statistic and asymptotic results
We propose a ratio type statistic to test \(H_0\) against \(H_1\), because this type of statistic does not require estimation of the nuisance variance parameter. Generally, this is due to the fact that the variance parameter simply cancels out from the numerator and the denominator of the statistic. Admittedly, the common variance could be estimated from all the panels, of which we possess a sufficient number. Nevertheless, we aim to construct a valid and completely data driven testing procedure without intermediate estimation steps and plug-in estimates for nuisance parameters. A bootstrap add-on will serve this purpose, as is seen later on.
For surveys on ratio type test statistics, we refer to Chen and Tian (2014), Csörgő and Horváth (1997), Horváth et al. (2009), Liu et al. (2008), and Madurkayová (2011). Our particular panel change point test statistic is
where \(\bar{Y}_{i,t}\) is the average of the first \(t\) observations in panel \(i\) and \(\widetilde{Y}_{i,t}\) is the average of the last \(T-t\) observations in panel \(i\), i.e.,
\[
\bar{Y}_{i,t}=\frac{1}{t}\sum _{s=1}^{t}Y_{i,s},\qquad \widetilde{Y}_{i,t}=\frac{1}{T-t}\sum _{s=t+1}^{T}Y_{i,s}.
\]
An alternative way of testing for a change in the panel means would be to use CUSUM type statistics, for example, a maximum or minimum of a sum (not a ratio) of properly standardized or modified sums from our test statistic \(\mathcal {R}_N(T)\). The theory that follows can be appropriately rewritten for such cases.
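In code, a statistic of this ratio type can be sketched as follows. Hedged: the inner maxima over partial sums and the range \(t=2,\ldots ,T-2\) are our assumptions in the spirit of ratio statistics, not the paper's verbatim definition; note how any common scale \(\sigma \) cancels between numerator and denominator.

```python
import numpy as np

def ratio_statistic(Y):
    """One plausible ratio-type statistic: for each candidate time t, the
    numerator aggregates (over panels) squared partial sums of deviations
    from the mean of the first t observations, the denominator does the same
    for the last T - t observations, and the maximum ratio over t is taken."""
    N, T = Y.shape
    best = -np.inf
    for t in range(2, T - 1):                       # t = 2, ..., T - 2
        left = Y[:, :t] - Y[:, :t].mean(axis=1, keepdims=True)
        right = Y[:, t:] - Y[:, t:].mean(axis=1, keepdims=True)
        # per-panel maxima of squared partial sums within each segment
        num = np.sum(np.max(np.cumsum(left, axis=1) ** 2, axis=1))
        den = np.sum(np.max(np.cumsum(right[:, ::-1], axis=1) ** 2, axis=1))
        best = max(best, num / den)
    return best
```

Two structural properties are immediate from the formula: multiplying all observations by a constant leaves the statistic unchanged (the variance cancels), and so does adding a panel-specific constant (the panel-individual means \(\mu _i\) cancel).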
Firstly, we derive the behavior of the test statistics under the null hypothesis.
Theorem 1
(Under null) Under hypothesis \(H_0\) and Assumption A1
where \(Z_t:=X_T-X_t\) and \([X_1,\ldots ,X_T]^{\top }\) is a multivariate normal random vector with zero mean and covariance matrix \(\varvec{\varLambda }=\{\lambda _{t,v}\}_{t,v=1}^{T,T}\) such that
The limiting distribution does not depend on the variance nuisance parameter \(\sigma \), but it does depend on the unknown correlation structure of the panel change point model, which has to be estimated for testing purposes. Its estimation is described in Sect. 4.1. Furthermore, Theorem 1 is only an intermediate theoretical step towards the bootstrap test, where the correlation structure need not be known. That is why the presence of unknown quantities in the asymptotic distribution is not troublesome.
Note that in the case of independent observations within the panel, the correlation structure and, hence, the covariance matrix \(\varvec{\varLambda }\) simplify, since \(r(t)=t\) and \(R(t,v)=0\).
Next, we show how the test statistic behaves under the alternative.
Assumption A2
\(\lim _{N\rightarrow \infty }\frac{1}{\sqrt{N}}\left| \sum _{i=1}^N \delta _i\right| =\infty \).
Theorem 2
(Under alternative) If \(\tau \le T-3\), then under Assumptions A1, A2 and alternative \(H_1\)
\[
\mathcal {R}_{N}(T)\xrightarrow [N\rightarrow \infty ]{\ \mathsf {P}\ }\infty .
\]
Assumption A2 is satisfied, for instance, if \(0<\delta \le \delta _i\,\forall i\) (a common lower change point threshold) and \(\delta \sqrt{N}\rightarrow \infty ,\, N\rightarrow \infty \). Another suitable example of \(\delta _i\)s satisfying Assumption A2 is \(0<\delta _i=KN^{-1/2+\eta }\) for some \(K>0\) and \(\eta >0\); or \(\delta _i=Ci^{\alpha -1}\sqrt{N}\) may be used as well, where \(\alpha \ge 0\) and \(C>0\). The assumption \(\tau \le T-3\) means that there are at least three observations in the panel after the change point. It is also possible to redefine the test statistic by interchanging the numerator and the denominator of \(\mathcal {R}_{N}(T)\). Then, Theorem 2 for the modified test statistic would require three observations before the change point, i.e., \(\tau \ge 3\).
Theorem 2 says that in presence of a structural change in the panel means, the test statistic explodes above all bounds. Hence, the procedure is consistent and the asymptotic distribution from Theorem 1 can be used to construct the test.
4 Change point estimation
Despite the fact that the aim of the paper is to establish testing procedures for detection of a panel mean change, it is necessary to construct a consistent estimate for a possible change point. There are two reasons for that: Firstly, the estimation of the covariance matrix \(\varvec{\varLambda }\) from Theorem 1 requires panels as vectors with elements having common mean (i.e., without a jump). Secondly, the bootstrap procedure, introduced later on, requires centered residuals to be resampled.
A consistent estimate of the change point in panel data is proposed in Bai (2010), but under the assumption that a change has certainly occurred. In our situation, we do not know whether a change occurs or not. Therefore, we modify the estimate proposed by Bai (2010) in the following way. If the panel means change somewhere inside \(\{2,\ldots ,T-1\}\), the estimate consistently selects this change. If there is no change in the panel means, the estimate points to the very last time point \(T\) with probability going to one. In other words, the value \(T\) of the change point estimate means no change. This is in contrast with Bai (2010), where \(T\) is not attainable.
Let us define the estimate of \(\tau \):
Now, we show the desired property of consistency for the proposed change point estimate under the following assumptions.
Assumption C1
\(L<\lim _{N\rightarrow \infty }\frac{1}{N}\sum _{i=1}^N\delta _i^2<\infty \), where \(L=-\infty \) if \(\tau =T\) and \(L=\max _{t=\tau +1,\ldots ,T}\frac{\sigma ^2t^2}{\tau (t-\tau )}\left( \frac{r(\tau )}{\tau ^2}-\frac{r(t)}{t^2}\right) \) otherwise.
Assumption C2
\(\mathsf {E}\varepsilon _{1,t}^4<\infty ,\,t\in \{1,\ldots ,T\}\).
Theorem 3
(Change point estimate consistency) Suppose that \(\tau \ne 1\) and the sequence \(\{r(t)/t^2\}_{t=2}^T\) is decreasing. Then under Assumptions A1, C1, and C2
Assumption C1 assures that the change sizes are large enough compared to the variability of the random noise in the panels and to the strength of the dependencies within the panels. On the one hand, Assumption C1 implies the usual assumption \(\lim _{N\rightarrow \infty }\frac{1}{\sqrt{N}}\sum _{i=1}^N\delta _i^2=\infty \) in change point analysis, cf. Bai (2010) or Horváth and Hušková (2012). On the other hand, Assumption C1 assures that \(\lim _{N\rightarrow \infty }\frac{1}{N^2}\sum _{i=1}^N\delta _i^2=0\), a condition not present when the panel size \(T\) is unbounded, i.e., \(T\rightarrow \infty \). Here, this second part is needed to control the asymptotic boundedness of the variability of \(\frac{1}{t}\sum _{i=1}^N\sum _{s=1}^t(Y_{i,s}-\bar{Y}_{i,t})^2\), because finite \(T\) alone cannot provide this control.
Similarly as in the previous section, Assumption C1 is satisfied for \(0<\delta \le \delta _i<\Delta ,\forall i\) (a common lower and upper bound for the change size) and suitable \(\sigma \) and \(r(t)\). Assumptions A2 and C1 are generally incomparable. The monotonicity assumption from Theorem 3 is not very restrictive at all. For example, in the case of independent observations within the panel, this assumption is automatically fulfilled, since \(\{1/t\}_{t=2}^T\) is decreasing. Moreover, the weaker the dependency within the panel, the faster the decrease of \(r(t)/t^2\).
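The monotonicity condition can be checked numerically for concrete error models. For a unit-variance AR(1) sequence with coefficient \(\phi \), \(\rho _s=\phi ^s\) and hence \(r(t)=t+2\sum _{s=1}^{t-1}(t-s)\phi ^s\); the sketch below verifies that \(\{r(t)/t^2\}_{t=2}^T\) is decreasing for \(\phi =0.3\) (the value used later in the simulations) and reduces to \(r(t)=t\) in the independent case.

```python
import numpy as np

def r_cumulative(phi, T):
    """r(t) = Var(sum_{s=1}^t eps_s) for a unit-variance AR(1) error
    sequence, using rho(s) = phi**s; phi = 0 gives the iid case r(t) = t."""
    r = np.empty(T)
    for t in range(1, T + 1):
        s = np.arange(1, t)
        r[t - 1] = t + 2.0 * np.sum((t - s) * phi ** s)
    return r

T = 10
ratios = r_cumulative(0.3, T) / np.arange(1, T + 1) ** 2
# monotonicity of {r(t)/t^2} for t = 2, ..., T, as required by Theorem 3
decreasing = bool(np.all(np.diff(ratios[1:]) < 0))
```

The same check with \(\phi =0\) returns the sequence \(\{1/t\}\), matching the remark about independent observations.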
One can check the proof of Theorem 3 and see that Assumption C1 can be replaced by the more restrictive assumptions \(\lim _{N\rightarrow \infty }\frac{1}{N}\sum _{i=1}^N\delta _i^2=\infty \) and \(\lim _{N\rightarrow \infty }\frac{1}{N^2}\sum _{i=1}^N\delta _i^2=0\). The first assumption might be considered too strong, because a common value \(\delta =\delta _i\) for all \(i\) does not fulfill it.
Various competing consistent estimates of a possible change point can be suggested, e.g., the maximizer of \(\sum _{i=1}^N\left[ \sum _{s=1}^t(Y_{i,s}-\bar{Y}_{i,T})\right] ^2\). To show the consistency, one needs to postulate different assumptions on the cumulative autocorrelation function and shifted cumulative correlation function compared to Theorem 3 and this may be rather complex.
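The competing estimate mentioned above, the maximizer of \(\sum _{i=1}^N\left[ \sum _{s=1}^t(Y_{i,s}-\bar{Y}_{i,T})\right] ^2\), is straightforward to implement. Restricting the maximizer to \(t<T\), where the objective is not identically zero, is our implementation choice.

```python
import numpy as np

def cusum_estimate(Y):
    """Competing change point estimate: the maximizer over t of
    sum_i [ sum_{s<=t} (Y[i, s] - bar Y_{i,T}) ]^2, where bar Y_{i,T} is the
    overall mean of panel i (the formula given in the text)."""
    dev = Y - Y.mean(axis=1, keepdims=True)   # Y_{i,s} - bar Y_{i,T}
    partial = np.cumsum(dev, axis=1)          # inner sums over s = 1, ..., t
    objective = np.sum(partial ** 2, axis=0)  # aggregate over panels
    # at t = T the partial sums vanish identically, so maximize over t < T
    return int(np.argmax(objective[:-1]) + 1)
```

On noiseless data with a break, the estimate recovers the change point exactly; for example, three panels that jump from 0 to 1 after \(t=4\) give the estimate 4.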
4.1 Estimation of the correlation structure
Since the panels are considered to be independent and the number of panels may be sufficiently large, one can estimate the correlation structure of the errors \([\varepsilon _{1,1},\ldots ,\varepsilon _{1,T}]^{\top }\) empirically. We base the errors’ estimates on the residuals centered by the segment means at the estimated change point \(\widehat{\tau }_N\),
\[
\widehat{e}_{i,t}=\begin{cases} Y_{i,t}-\bar{Y}_{i,\widehat{\tau }_N}, & t\le \widehat{\tau }_N,\\ Y_{i,t}-\widetilde{Y}_{i,\widehat{\tau }_N}, & t>\widehat{\tau }_N. \end{cases}
\]
Then, the empirical version of the autocorrelation function is
Consequently, the kernel estimation of the cumulative autocorrelation function and the shifted cumulative correlation function is adopted in line with Andrews (1991):
where \(h>0\) stands for the window size and \(\kappa \) belongs to a class of kernels given by
Since the variance parameter \(\sigma \) is not present in the limiting distribution of Theorem 1, it neither has to be estimated nor known. Nevertheless, one can use \(\widehat{\sigma }^2:=\frac{1}{NT}\sum _{i=1}^{N} \sum _{s=1}^{T}\widehat{e}_{i,s}^2\).
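A sketch of the correlation structure estimation may look as follows. Hedged: the pooled empirical autocorrelation and the plug-in form \(\widehat{r}(t)=t+2\sum _{s=1}^{t-1}(t-s)\kappa (s/h)\widehat{\rho }(s)\) are our reading of the Andrews-style kernel estimator described above, and the Parzen kernel is one member of the kernel class.

```python
import numpy as np

def parzen(x):
    """Parzen kernel (Andrews 1991)."""
    ax = abs(x)
    if ax <= 0.5:
        return 1.0 - 6.0 * ax**2 + 6.0 * ax**3
    if ax <= 1.0:
        return 2.0 * (1.0 - ax) ** 3
    return 0.0

def estimate_r(residuals, h=3.0, kernel=parzen):
    """Kernel estimate of r(t) = Var(sum_{s=1}^t eps_s) from panel residuals,
    pooling the empirical autocorrelations over all N panels."""
    N, T = residuals.shape
    denom = np.sum(residuals**2)
    # pooled empirical autocorrelation rho_hat(s); rho_hat(0) = 1
    rho = np.array([np.sum(residuals[:, :T - s] * residuals[:, s:]) / denom
                    for s in range(T)])
    r_hat = np.empty(T)
    for t in range(1, T + 1):
        s = np.arange(1, t)
        weights = np.array([kernel(v / h) for v in s])  # smoothing window h
        r_hat[t - 1] = t + 2.0 * np.sum((t - s) * weights * rho[1:t])
    return r_hat
```

For residuals with no empirical autocorrelation the estimate collapses to \(\widehat{r}(t)=t\), the independent case mentioned after Theorem 1.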
5 Bootstrap and hypothesis testing
A wide range of literature has been published on bootstrapping in change point problems, e.g., Hušková and Kirch (2012) or Hušková et al. (2008). We build the bootstrap test on resampling with replacement of the row vectors \(\{[\widehat{e}_{i,1},\ldots ,\widehat{e}_{i,T}]\}_{i=1,\ldots ,N}\) corresponding to the panels. This provides bootstrapped row vectors \(\{[\widehat{e}_{i,1}^*,\ldots ,\widehat{e}_{i,T}^*]\}_{i=1,\ldots ,N}\). Then, the bootstrapped residuals \(\widehat{e}_{i,t}^*\) are centered by their conditional expectation \(\frac{1}{N}\sum _{i=1}^N\widehat{e}_{i,t}\), yielding
The bootstrap test statistic is just a modification of the original statistic \(\mathcal {R}_N(T)\), where the original observations \(Y_{i,t}\) are replaced by their bootstrap counterparts \(\widehat{Y}_{i,t}^*\):
such that
An algorithm for the bootstrap is illustratively shown in Procedure 1 and its validity will be proved in Theorem 4.
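A compact sketch of the whole bootstrap procedure follows. Hedged: taking the bootstrap panels to be the centered resampled residuals, and reading the bootstrap critical value off the empirical \((1-\alpha )\)-quantile, is our interpretation of the procedure described in the text; the function names are ours.

```python
import numpy as np

def bootstrap_critical_value(Y, tau_hat, stat, B=500, alpha=0.05, rng=None):
    """Bootstrap sketch: (1) residuals are the observations centered by the
    before/after-change segment means at tau_hat; (2) whole panel rows are
    resampled with replacement; (3) the resampled residuals are centered by
    the column means (1/N) sum_i e_hat[i, t]; (4) the statistic `stat` is
    recomputed on each bootstrap sample."""
    rng = np.random.default_rng(rng)
    N, T = Y.shape
    e = Y.copy().astype(float)
    e[:, :tau_hat] -= e[:, :tau_hat].mean(axis=1, keepdims=True)
    if tau_hat < T:                               # tau_hat = T means no change
        e[:, tau_hat:] -= e[:, tau_hat:].mean(axis=1, keepdims=True)
    col_means = e.mean(axis=0)                    # conditional expectation
    stats = np.empty(B)
    for b in range(B):
        rows = rng.integers(0, N, size=N)         # resample panel rows
        stats[b] = stat(e[rows] - col_means)      # centered bootstrap panels
    return float(np.quantile(stats, 1.0 - alpha))

# purely illustrative usage with a toy statistic (max absolute column mean)
toy_stat = lambda Z: float(np.max(np.abs(Z.mean(axis=0))))
Y_demo = np.random.default_rng(0).normal(size=(40, 8))
cv = bootstrap_critical_value(Y_demo, tau_hat=8, stat=toy_stat, B=200, rng=1)
```

In the testing procedure proper, `stat` would be the panel test statistic itself and the null would be rejected whenever its observed value exceeds the returned critical value.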
5.1 Validity of the resampling procedure
The idea behind bootstrapping is to mimic the original distribution of the test statistic in some sense with the distribution of the bootstrap test statistic, conditionally on the original data denoted by \(\mathbb {Y}\equiv \{Y_{i,t}\}_{i,t=1}^{N,T}\).
First of all, two simple and just technical assumptions are needed.
Assumption B1
\(\{\varepsilon _{i,t}\}_t\) possesses the lagged cumulative correlation function
Assumption B2
\(\lim _{N\rightarrow \infty }\mathsf {P}[\widehat{\tau }_N=\tau ]=1\).
Assumption B1 is not really an assumption; it merely introduces notation. Notice that \(S(t,v,1)\equiv R(t,v)\). Assumption B2 is satisfied for our estimate proposed in (3), if the assumptions of Theorem 3 hold. Assumption B2 is stated in this broader form because we want to allow any other consistent estimate of \(\tau \) to be used instead.
Note that it is not known whether the common change in the panel means occurred or not. In other words, one does not know whether the data come from the null or the alternative hypothesis. Therefore, the following theorem holds under \(H_0\) as well as under \(H_1\).
Theorem 4
(Bootstrap justification) Under Assumptions A1, B1, B2, and C2
where \(\mathcal {Z}_t:=\mathcal {X}_T-\mathcal {X}_t\) and \([\mathcal {X}_1,\ldots ,\mathcal {X}_T]^{\top }\) is a multivariate normal random vector with zero mean and covariance matrix \(\varvec{\varGamma }=\left\{ \gamma _{t,v}(\tau )\right\} _{t,v=1}^{T,T}\) such that
and
The validity of the bootstrap test is assured by Theorem 4. Indeed, the conditional asymptotic distribution of the bootstrap test statistic is a functional of a multivariate normal distribution under the null as well as under the alternative. It does not converge to infinity (in probability) under the alternative. Therefore, it can be used to correctly reject the null in favor of the alternative, provided that \(N\) is sufficiently large. Moreover, the following theorem states that the conditional distribution of the bootstrap test statistic and the unconditional distribution of the original test statistic coincide asymptotically. That is why the bootstrap test should approximately keep the same level as the original test based on the asymptotics from Theorem 1.
Theorem 5
(Bootstrap test consistency) Under Assumptions A1, B2, C2 and hypothesis \(H_0\), the asymptotic distribution of \(\mathcal {R}_{N}(T)\) from Theorem 1 and the asymptotic distribution of \(\mathcal {R}_{N}^*(T)|\mathbb {Y}\) from Theorem 4 coincide.
Now, the simulated (empirical) distribution of the bootstrap test statistic can be used to calculate the bootstrap critical value, which will be compared to the value of the original test statistic in order to reject the null or not.
Finally, note that one cannot consider local alternatives in this setup, because \(\tau \) has a discrete and finite support.
6 Simulations
A simulation experiment was performed to study the finite sample properties of the asymptotic and bootstrap test statistics for a common change in panel means. In particular, the interest lies in the empirical sizes of the proposed tests under the null hypothesis and in the empirical rejection rate (power) under the alternative. Random samples of panel data (\(5000\) each time) are generated from the panel change point model (1). The panel size is set to \(T=10\) and \(T=25\) in order to demonstrate the performance of the testing approaches in case of small and intermediate panel length. The number of panels considered is \(N=50\) and \(N=200\).
The correlation structure within each panel is modeled via random vectors generated from iid, AR(1), and GARCH(1,1) sequences. The considered AR(1) process has coefficient \(\phi =0.3\). In the case of the GARCH(1,1) process, we use coefficients \(\alpha _0=1\), \(\alpha _1=0.1\), and \(\beta _1=0.2\), which according to Lindner (2009, Example 1) gives a strictly stationary process. In all three sequences, the innovations are obtained as iid random variables from a standard normal \(\mathsf {N}(0,1)\) or Student \(t_5\) distribution. Simulation scenarios are produced as all possible combinations of the settings mentioned above.
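For reference, the GARCH(1,1) error sequence with the stated coefficients can be generated as follows. This is a sketch; the burn-in length and the rescaling to unit variance, which matches \(\mathsf {Var}\,\varepsilon _{i,t}=1\) in Assumption A1, are our choices.

```python
import numpy as np

def garch11(T, alpha0=1.0, alpha1=0.1, beta1=0.2, burn=200, rng=None):
    """Generate one GARCH(1,1) error sequence eps_t = sigma_t * z_t with
    sigma_t^2 = alpha0 + alpha1 * eps_{t-1}^2 + beta1 * sigma_{t-1}^2
    and iid N(0,1) innovations z_t; the coefficients are those of the
    simulation study (strictly stationary, cf. Lindner 2009, Example 1)."""
    rng = np.random.default_rng(rng)
    z = rng.normal(size=burn + T)
    eps = np.empty(burn + T)
    sig2 = alpha0 / (1.0 - alpha1 - beta1)   # start at the stationary variance
    for t in range(burn + T):
        eps[t] = np.sqrt(sig2) * z[t]
        sig2 = alpha0 + alpha1 * eps[t] ** 2 + beta1 * sig2
    # discard burn-in and rescale to unit variance, as Assumption A1 requires
    return eps[burn:] / np.sqrt(alpha0 / (1.0 - alpha1 - beta1))

e = garch11(25, rng=3)   # one panel-length error sequence for T = 25
```

Student \(t_5\) innovations would be obtained by replacing the normal draws for `z` with (suitably standardized) \(t_5\) draws.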
When using the asymptotic distribution from Theorem 1, the covariance matrix is estimated as proposed in Sect. 4.1 using the Parzen kernel
\[
\kappa (x)=\begin{cases} 1-6x^2+6|x|^3, & 0\le |x|\le 1/2,\\ 2(1-|x|)^3, & 1/2<|x|\le 1,\\ 0, & \text {otherwise.} \end{cases}
\]
Several values of the smoothing window width \(h\) are tried from the interval [2, 5] and all of them work fine providing comparable results. To simulate the asymptotic distribution of the test statistics, 2000 multivariate random vectors are generated using the pre-estimated covariance matrix.
The bootstrap approach does not require estimating the covariance structure. The number of bootstrap replications used is 2000. To assess the theoretical results under \(H_0\) numerically, Table 1 provides the empirical specificity (one minus size) of both the asymptotic and the bootstrap version of the panel change point test, where the significance level is \(\alpha =5\,\%\).
It may be seen that both approaches (using the asymptotic and the bootstrap distribution) are close to the theoretical specificity of .95. As expected, the best results are achieved in the case of independence within the panel, because there is no information overlap between consecutive observations. The precision of not rejecting the null increases as the number of panels grows and as the panels become longer.
The performance of both testing procedures under \(H_1\) in terms of the empirical rejection rates is shown in Table 2, where the change point is set to \(\tau =\lfloor T/2 \rfloor \) and the change sizes \(\delta _i\) are independently uniform on [1, 3] in \(33\,\%\), \(66\,\%\) or in all panels.
One can conclude that the power of both tests increases as the panel size and the number of panels increase, which is straightforward and expected. It should be noted that numerical instability issues may appear for larger \(T\) when generating from a \(T\)-variate normal distribution. Moreover, higher power is obtained when a larger portion of panels is subject to a change in mean. The test power drops when switching from independent observations within the panel to dependent ones. Innovations with heavier tails (i.e., \(t_5\)) yield smaller power than innovations with lighter tails. Generally, the bootstrap outperforms the classical asymptotics in all scenarios.
Let us mention that for finite sections of processes with a stronger dependence structure than considered in the simulation scenarios, Assumption C1 need not be fulfilled. For example, Assumption C1 is violated for AR(1) with coefficient \(\phi =0.9\), \(\delta _i=2\), \(\sigma =1\), standard normal or Student \(t_5\) innovations, and \(\tau =5\) for \(T=10\) or \(\tau =12\) for \(T=25\). Here, the dependency under the considered variability is too strong compared to the change size. It is rather difficult to detect possible changes in such a setup.
Finally, an early change point is discussed very briefly. We stay with standard normal innovations, iid observations within the panel, the size of changes \(\delta _i\) being independently uniform on \([1,3]\) in all panels, and the change point is \(\tau =3\) in case of \(T=10\) and \(\tau =5\) for \(T=25\). The empirical sensitivities of both tests for small values of \(\tau \) are shown in Table 3.
When the change point is not in the middle of the panel, the power of the test generally decreases. The source of this decrease is that the left or right part of the panel contains fewer observations with constant mean, which reduces the precision of the correlation estimation in the case of the asymptotic test and of the change point estimation in the case of the bootstrap test. Nevertheless, the bootstrap test again outperforms the asymptotic version and, moreover, provides solid results even for early or late change points (the late change points are not numerically demonstrated here).
7 Real data analysis
As mentioned in the introduction, our primary motivation for testing the panel mean change comes from the insurance business. The data set is provided by the National Association of Insurance Commissioners (NAIC) database, see Meyers and Shi (2011). We concentrate on the ‘Commercial auto/truck liability/medical’ insurance line of business. The data collect records from \(N=157\) insurance companies (one extreme insurance company was omitted from the analysis). Each insurance company provides \(T=10\) yearly total claim amounts starting from year 1988 up to year 1997. Figure 1 graphically shows series of claim amounts for 20 selected insurance companies (a plot with all 157 panels would be cluttered).
The data are considered as panel data in the way that each insurance company corresponds to one panel, which is formed by the company’s yearly total claim amounts. The length of the panel is quite short. This is very typical in insurance business, because considering longer panels may invoke incomparability between the early claim amounts and the late ones due to changing market or policies’ conditions over time.
We want to test whether or not a change in the claim amounts occurred in a common year, assuming that the claim amounts are approximately constant in the years before and after the possible change for every insurance company. Our ratio type test statistic gives \(\mathcal {R}_{157}(10)=39.9\). The asymptotic critical value is 52.4 and the bootstrap critical value equals 203.1. These values mean that we do not reject the hypothesis of no change in panel means in both cases. The striking difference between the two critical values may come from the inefficient correlation structure estimation (since \(T=10\) is quite short) or from violation of the model assumptions.
That is why we also try taking logarithms of the claim amounts and considering the log amounts as the panel data observations. Nevertheless, we again do not reject the hypothesis of no change in the panel means (i.e., means of the log amounts). Additionally, one can normalize the claim amounts by the premium received by company \(i\) in year \(t\), that is, consider panel data \(Y_{i,t}/p_{i,t}\), where \(p_{i,t}\) is the mentioned premium. This may stabilize the series’ variability, which corresponds to the assumption of a common variance. In spite of that, we again do not reject the null (neither by the asymptotic test, nor by the bootstrap one). For the sake of completeness, our estimate of the panel change point gives \(\widehat{\tau }_N=10\), meaning no change in the panels.
8 Conclusions
In this paper, we consider the change point problem in panel data with fixed panel size. Occurrence of common breaks in panel means is tested. We introduce a ratio type test statistic and derive its asymptotic properties. Under the null hypothesis of no change, the test statistic weakly converges to a functional of the multivariate normal random vector with zero mean and covariance structure depending on the intra-panel covariances. As shown in the paper, these covariances can be estimated and, consequently, used for testing whether a change in means occurred or not. This is indeed feasible, because the test statistic under the alternative converges to infinity in probability.
The secondary aim of the paper lies in proposing a consistent change point estimate, which is directly used for bootstrapping the test statistic. We establish the asymptotic behavior of the bootstrap version of the test statistic, regardless of whether the data come from the null or the alternative hypothesis. Moreover, the asymptotic distribution of the bootstrap test statistic coincides with the limiting distribution of the original test statistic. This justifies the bootstrap method. One of the main goals is to obtain a completely data driven approach for testing whether the means remain the same during the observation period or not. The ratio type test statistic allows us to omit variance estimation, and the bootstrap technique avoids estimation of the correlation structure. Hence, neither nuisance nor smoothing parameters are present in the whole testing process, which makes it very simple for practical use. Furthermore, the underlying stochastic theory requires relatively simple assumptions, which are not too restrictive.
A simulation study illustrates that even for small panel size, both presented approaches—based on traditional asymptotics and on bootstrapping—work fine. One may judge that both methods keep the significance level under the null, while various simulation scenarios are considered. Besides that, the power of the test is slightly higher in case of the bootstrap. Finally, the proposed methods are applied to insurance data, for which the change point analysis in panel data provides an appealing approach.
8.1 Discussion
First of all, it has to be noted that a non-ratio CUSUM type test statistic can be used instead of the ratio type one, but this requires estimating the variance of the observations. The statements of the theorems and the proofs would become even less complicated. However, omitting the bootstrap can be unreliable in short panels from a computational point of view, since the bootstrap overcomes the issue of estimating the correlation structure.
Furthermore, our setup can be modified by considering large panel size, i.e., \(T\rightarrow \infty \). Consequently, the whole theory leads to convergences to functionals of Gaussian processes with a covariance structure derived in a very similar fashion as for fixed \(T\). However, our motivation is to develop techniques for fixed and relatively small panel size.
Dependent panels may be taken into account and the presented work might be generalized for some kind of asymptotic independence over the panels or prescribed dependence among the panels. Nevertheless, our incentive is determined by a problem from non-life insurance, where the association of insurance companies consists of a relatively high number of insurance companies. Thus, the portfolio of yearly claims is so diversified, that the panels corresponding to insurance companies’ yearly claims may be viewed as independent and neither natural ordering nor clustering has to be assumed.
References
Andrews DWK (1991) Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica 59(3):817–858
Bai J (2010) Common breaks in means and variances for panel data. J Econom 157(1):78–92
Billingsley P (1986) Probability and measure, 2nd edn. Wiley, New York
Chan J, Horváth L, Hušková M (2013) Change-point detection in panel data. J Stat Plan Inference 143(5):955–970
Chen Z, Tian Z (2014) Ratio tests for variance change in nonparametric regression. Stat J Theor Appl Stat 48(1):1–16
Csörgő M, Horváth L (1997) Limit theorems in change-point analysis. Wiley, Chichester
Horváth L, Horváth Z, Hušková M (2009) Ratio tests for change point detection. In: Balakrishnan N, Peña EA, Silvapulle MJ (eds) Beyond parametrics in interdisciplinary research: Festschrift in honor of professor Pranab K. Sen, vol 1. IMS Collections, Beachwood, pp 293–304
Horváth L, Hušková M (2012) Change-point detection in panel data. J Time Ser Anal 33(4):631–648
Hušková M, Kirch C (2012) Bootstrapping sequential change-point tests for linear regression. Metrika 75(5):673–708
Hušková M, Kirch C, Prášková Z, Steinebach J (2008) On the detection of changes in autoregressive time series, II. Resampling procedures. J Stat Plan Inference 138(6):1697–1721
Katz ML (1963) Note on the Berry–Esseen theorem. Ann Math Stat 34(3):1107–1108
Lindner AM (2009) Stationarity, mixing, distributional properties and moments of GARCH(p, q)-processes. In: Andersen TG, Davis RA, Kreiss JP, Mikosch T (eds) Handbook of financial time series. Springer, Berlin, pp 481–496
Liu Y, Zou C, Zhang R (2008) Empirical likelihood ratio test for a change-point in linear regression model. Commun Stat Theory Methods 37(16):2551–2563
Madurkayová B (2011) Ratio type statistics for detection of changes in mean. Acta Univ Carol Math Phys 52(1):47–58
Meyers GG, Shi P (2011) Loss reserving data pulled from NAIC Schedule P. http://www.casact.org/research/index.cfm?fa=loss_reserves_data. [Online; Updated September 01, 2011; Accessed June 10, 2014]
Pešta M, Hudecová Š (2012) Asymptotic consistency and inconsistency of the chain ladder. Insur Math Econ 51(2):472–479
Acknowledgments
The authors thank two anonymous referees and the Associate Editor for the suggestions that improved this paper. This paper was written with the support of the Czech Science Foundation Project GAČR No. P201/13/12994P.
Appendices
Appendix 1: Supporting theorems
Suppose that \(\{\varvec{\xi }_n\}_{n=1}^{\infty }\) is a sequence of random variables/vectors defined on a probability space \((\varOmega ,\mathcal {F},\mathsf {P})\). A bootstrap version of \(\varvec{\xi }\equiv [\varvec{\xi }_1,\ldots ,\varvec{\xi }_n]^{\top }\) is its (randomly) resampled sequence with replacement, denoted by \(\varvec{\xi }^*\equiv [\varvec{\xi }_1^*,\ldots ,\varvec{\xi }_n^*]^{\top }\), of the same length, where for each \(i\in \{1,\ldots ,n\}\) it holds that \(\mathsf {P}_{\varvec{\xi }}^*[\varvec{\xi }_i^*=\varvec{\xi }_j]\equiv \mathsf {P}[\varvec{\xi }_i^*=\varvec{\xi }_j|\varvec{\xi }] =1/n,\,j=1,\ldots ,n\). In the sequel, \(\mathsf {P}_{\varvec{\xi }}^*\) denotes the conditional probability given \({\varvec{\xi }}\). Hence, \(\varvec{\xi }_i^*\) has a discrete uniform distribution on \(\{\varvec{\xi }_1,\ldots ,\varvec{\xi }_n\}\) for every \(i=1,\ldots ,n\). The conditional expectation and variance given \({\varvec{\xi }}\) are denoted by \(\mathsf {E}_{\mathsf {P}_{\varvec{\xi }}^*}\) and \(\mathsf {Var}\,_{\mathsf {P}_{\varvec{\xi }}^*}\), respectively.
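This resampling scheme is straightforward to implement; a minimal sketch in Python (numpy and the univariate case are used purely for illustration):

```python
import numpy as np

def bootstrap_sample(xi, rng):
    """Draw a bootstrap version xi* of the sample xi: each xi*_i is chosen
    uniformly, with replacement, from {xi_1, ..., xi_n}."""
    n = len(xi)
    idx = rng.integers(0, n, size=n)  # P*[xi*_i = xi_j] = 1/n for every j
    return xi[idx]

rng = np.random.default_rng(0)
xi = rng.normal(size=1000)
xi_star = bootstrap_sample(xi, rng)

# Conditionally on xi, xi*_i is discrete uniform on {xi_1, ..., xi_n}, so
# E*[xi*_i] is the sample mean and Var*[xi*_i] the (biased) sample variance.
cond_mean = xi.mean()
cond_var = ((xi - xi.mean()) ** 2).mean()
```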
If a statistic has an approximate normal distribution, one may ask how close the bootstrap distribution is to the original one asymptotically. A tool for assessing this closeness is a bootstrap central limit theorem for triangular arrays.
Theorem 6
(Bootstrap CLT for triangular arrays) Let \(\{\xi _{n,k_n}\}_{n=1}^{\infty }\) be a triangular array of zero mean random variables on the same probability space such that the elements of the vector \([\xi _{n,1},\ldots ,\xi _{n,k_n}]^{\top }\) are iid for every \(n\in \mathbb {N}\) satisfying
and \(k_n\rightarrow \infty \) as \(n\rightarrow \infty \). Suppose that \(\varvec{\xi }^*\equiv [\xi _{n,1}^*,\ldots ,\xi _{n,k_n}^*]^{\top }\) is the bootstrapped version of \(\varvec{\xi }\equiv [\xi _{n,1},\ldots ,\xi _{n,k_n}]^{\top }\) and denote
If
then
Theorem 7
(Bootstrap multivariate CLT for triangular arrays) Let \(\{\varvec{\xi }_{n,k_n}\}_{n=1}^{\infty }\) be a triangular array of zero mean \(q\)-dimensional random vectors on the same probability space such that the elements of the vector sequence \(\{\varvec{\xi }_{n,1},\ldots ,\varvec{\xi }_{n,k_n}\}\) are iid for every \(n\in \mathbb {N}\) satisfying
where \(\varvec{\xi }_{n,1}\equiv [\xi _{n,1}^{(1)},\ldots ,\xi _{n,1}^{(q)}]^{\top }\in \mathbb {R}^q,\,n\in \mathbb {N}\) and \(k_n\rightarrow \infty \) as \(n\rightarrow \infty \). Assume that \(\varvec{\varXi }^*\equiv [\varvec{\xi }_{n,1}^*,\ldots ,\varvec{\xi }_{n,k_n}^*]^{\top }\) is the bootstrapped version of \(\varvec{\varXi }\equiv [\varvec{\xi }_{n,1},\ldots ,\varvec{\xi }_{n,k_n}]^{\top }\). Denote
If
then
Appendix 2: Proofs
Proof (of Theorem 1)
Let us define
Using the multivariate Lindeberg–Lévy CLT for a sequence of \(T\)-dimensional iid random vectors \(\{[\sum _{s=1}^1\varepsilon _{i,s},\ldots , \sum _{s=1}^T\varepsilon _{i,s}]^{\top }\}_{i\in \mathbb {N}}\), we have under \(H_0\)
since \(\mathsf {Var}\,[\sum _{s=1}^1\varepsilon _{1,s},\ldots ,\sum _{s=1}^T\varepsilon _{1,s}]^{\top }=\varvec{\varLambda }\). Indeed, the \(t\)-th diagonal element of the covariance matrix \(\varvec{\varLambda }\) is
and the upper off-diagonal element on position \((t,v)\) is
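The form of \(\varvec{\varLambda }\) can be checked by simulation; a minimal sketch, under the simplifying assumption that the errors within a panel are iid with variance \(\sigma ^2\), in which case the \((t,v)\) element of \(\varvec{\varLambda }\) reduces to \(\sigma ^2\min (t,v)\) (under serial dependence the cumulative autocorrelations enter instead):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, sigma = 100_000, 5, 1.0

# N independent panels of T iid errors; row i holds the partial sums
# (sum_{s<=1} eps_{i,s}, ..., sum_{s<=T} eps_{i,s}).
eps = rng.normal(scale=sigma, size=(N, T))
partial_sums = np.cumsum(eps, axis=1)

Lambda_hat = np.cov(partial_sums, rowvar=False)  # empirical T x T covariance
t = np.arange(1, T + 1)
Lambda_iid = sigma**2 * np.minimum.outer(t, t)   # Cov = sigma^2 * min(t, v)
```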
Moreover, let us define the reverse analogue to \(U_N(t)\), i.e.,
Hence,
and, consequently,
Using the Cramér–Wold device, we end up with
\(\square \)
Proof (of Theorem 2)
Let \(t=\tau +1\). Then, under alternative \(H_1\)
where \(\bar{\varepsilon }_{i,\tau +1}=\frac{1}{\tau +1}\sum _{v=1}^{\tau +1}\varepsilon _{i,v}\).
Since there is no change after \(\tau +1\) and \(\tau \le T-3\), by Theorem 1 we have
\(\square \)
Proof (of Theorem 3)
Let us define \(S_N^{(i)}(t):=\frac{1}{t}\sum _{s=1}^t(Y_{i,s}-\bar{Y}_{i,t})^2\) and, consequently, \(S_N(t):=\frac{1}{N}\sum _{i=1}^N S_N^{(i)}(t)\). Then,
where \(\bar{\varepsilon }_{i,t}=\frac{1}{t}\sum _{s=1}^t\varepsilon _{i,s}\). By the definition of the cumulative autocorrelation function, we have for \(2\le t\le \tau \)
In the other case when \(t>\tau \), one can calculate
Realize that \(S_N^{(i)}(t)-\mathsf {E}S_N^{(i)}(t)\) are independent with zero mean for fixed \(t\) and \(i=1,\ldots ,N\). Due to Assumption C2, for \(2\le t\le \tau \) it holds
where \(C_1(t,\sigma )>0\) is some constant not depending on \(N\). If \(t>\tau \), then
where \(C_j(t,\tau ,\sigma )>0\) does not depend on \(N\) for \(j=2,3,4\).
The Chebyshev inequality provides \(S_N(t)-\mathsf {E}S_N(t)=\mathcal {O}_{\mathsf {P}}\left( \sqrt{\mathsf {Var}\,S_N(t)}\right) \) as \(N\rightarrow \infty \). According to Assumption C1 and the Cauchy–Schwarz inequality, we have
Since the index set \(\{1,\ldots ,T\}\) is finite and \(\tau \) is finite as well, then
where \(K_j(\sigma )>0\) are constants not depending on \(N\) for \(j=1,2,3,4\). Thus, we also have uniform stochastic boundedness, i.e.,
Adding and subtracting, one has
The above inequality holds for each \(t\in \{2,\ldots ,T\}\) and, particularly, it holds for \(\widehat{\tau }_N\). Note that \(\widehat{\tau }_N=\arg \max _tS_N(t)\). Hence, \(S_N(\tau )-S_N(\widehat{\tau }_N)\le 0\). Therefore,
If \(\widehat{\tau }_N>\tau \), then the left-hand side of (9) is \(\mathcal {O}_{\mathsf {P}}(1)\) as \(N\rightarrow \infty \), whereas the right-hand side is unbounded because of Assumption C1. So, if \(\widehat{\tau }_N\le \tau \), then
which, due to the monotonicity of \(r(t)/t^2\), yields \(\mathsf {P}[\widehat{\tau }_N=\tau ]\rightarrow 1\) as \(N\rightarrow \infty \). \(\square \)
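The break point estimate \(\widehat{\tau }_N=\arg \max _tS_N(t)\) used in the proof above is easy to compute; a minimal sketch (the data-generating step, with iid normal errors and a common break at \(\tau =3\), is purely illustrative and does not reproduce Assumptions C1–C2 under which consistency is established):

```python
import numpy as np

def S_N(Y):
    """S_N(t) = (1/N) sum_i (1/t) sum_{s<=t} (Y_{i,s} - Ybar_{i,t})^2,
    returned as {t: S_N(t)} for t = 2, ..., T."""
    T = Y.shape[1]
    out = {}
    for t in range(2, T + 1):
        seg = Y[:, :t]
        out[t] = np.mean((seg - seg.mean(axis=1, keepdims=True)) ** 2)
    return out

def tau_hat(Y):
    """Common break point estimate: the maximizer of S_N(t)."""
    s = S_N(Y)
    return max(s, key=s.get)

# Illustrative panels: common break after time tau, panel-specific shifts.
rng = np.random.default_rng(3)
N, T, tau = 500, 10, 3
Y = rng.normal(size=(N, T))
Y[:, tau:] += rng.normal(loc=2.0, scale=0.5, size=(N, 1))
est = tau_hat(Y)
```

For \(t\le \tau \) the statistic stays near the within-panel error variance, while segments reaching past the break inflate it by the squared shifts.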
Proof (of Theorem 4)
Let us define \(\widehat{\epsilon }_{i,t}:=\sigma ^{-1}\sum _{s=1}^t\widehat{e}_{i,s}\), \(\widehat{\epsilon }_{i,t}^*:=\sigma ^{-1}\sum _{s=1}^t\widehat{e}_{i,s}^*\),
and
Realize that \(\widehat{\epsilon }_{i,t}\) depends on \(\widehat{\tau }_N\) and, hence, on \(N\). Thus, \(\widehat{\epsilon }_{i,t}\equiv \widehat{\epsilon }_{i,t}(N)\). Since Assumption C2 holds, the bootstrap multivariate CLT for triangular arrays (Theorem 7), applied to the \(T\)-dimensional vectors \(\varvec{\xi }_{N,i}=[\widehat{\epsilon }_{i,1}(N),\ldots ,\widehat{\epsilon }_{i,T}(N)]^{\top }\) with \(k_N=N\), gives
where \(\varvec{\varGamma }_N=\mathsf {Var}\,[\widehat{\epsilon }_{i,1},\ldots , \widehat{\epsilon }_{i,T}]^{\top }\).
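The construction of the \(T\)-dimensional vectors entering Theorem 7 amounts to panel-wise partial sums of residuals, resampled as whole panels; a minimal sketch (the residual matrix here is a random stand-in, and \(\sigma \) is taken as known purely for illustration):

```python
import numpy as np

def cumulative_residuals(e_hat, sigma):
    """epsilon_hat_{i,t} = sigma^{-1} sum_{s<=t} e_hat_{i,s}: scaled
    partial sums of the estimated residuals, computed row by row."""
    return np.cumsum(e_hat, axis=1) / sigma

def bootstrap_panels(e_hat, rng):
    """Resample whole panels (rows) with replacement, so the temporal
    dependence within each panel is preserved."""
    N = e_hat.shape[0]
    return e_hat[rng.integers(0, N, size=N)]

rng = np.random.default_rng(4)
N, T, sigma = 300, 8, 1.0
e_hat = rng.normal(size=(N, T))  # stand-in for the estimated residuals
eps_hat = cumulative_residuals(e_hat, sigma)                     # xi_{N,i}
eps_star = cumulative_residuals(bootstrap_panels(e_hat, rng), sigma)
```

Resampling rows rather than individual observations is what keeps the common dependence structure over time intact within each bootstrapped panel.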
Now, it is sufficient to realize that \([\widehat{U}_N(1),\ldots ,\widehat{U}_N(T)]^{\top }\) has an approximate multivariate normal distribution with zero mean and covariance matrix \(\varvec{\varGamma }=\lim _{N\rightarrow \infty }\varvec{\varGamma }_N\). Using the law of total variance,
Since \(\lim _{N\rightarrow \infty }\mathsf {P}[\widehat{\tau }_N=\tau ]=1\) and \(\mathsf {E}[\widehat{e}_{i,t}|\widehat{\tau }_N=\tau ]=0\), then
Similarly for the covariance: applying the law of total covariance, we have
Note that
where
Taking into account the definitions of \(r(t)\), \(R(t,v)\), and \(S(t,v,d)\) together with some simple algebra, we obtain that \(\mathsf {Var}\,[\widehat{\epsilon }_{i,t}|\widehat{\tau }_N=\tau ]=\gamma _{t,t}(\tau )\) and \(\mathsf {Cov}\,\left( \widehat{\epsilon }_{i,t},\widehat{\epsilon }_{i,v}|\widehat{\tau }_N=\tau \right) =\gamma _{t,v}(\tau )\) for \(t<v\), where the elements \(\gamma _{t,t}(\tau )\) and \(\gamma _{t,v}(\tau )\) are as in the statement of Theorem 4.
Then the sum in the numerator of \(\mathcal {R}_N^*(T)\) can alternatively be rewritten as
Concerning the denominator of \(\mathcal {R}_N^*(T)\), one performs a calculation similar to that for \(V_N(t)\) in the proof of Theorem 1, i.e., defines \(\widehat{V}_N(t)\) and \(\widehat{V}_N^*(t)\) from \(\widehat{U}_N(t)\) and \(\widehat{U}_N^*(t)\) in the same way as \(V_N(t)\) is obtained from \(U_N(t)\). Applying the Cramér–Wold theorem completes the proof. \(\square \)
Proof (of Theorem 5)
Recall the notation from the proof of Theorem 4. Under \(H_0\), B2, and C2 it holds
Then in view of (4),
\(\square \)
Proof (of Theorem 6)
The Lyapunov condition (Billingsley 1986, p. 371) for a triangular array of random variables \(\{\xi _{n,k_n}\}_{n=1}^{\infty }\) is satisfied due to (5) and (6), i.e., for \(\omega =2\):
Therefore, the CLT for \(\{\xi _{n,k_n}\}_{n=1}^{\infty }\) holds and
Now, to prove the theorem, it suffices to show the following three statements:
-
(i)
\(\sup _{x\in \mathbb {R}}\left| \mathsf {P}_{\varvec{\xi }}^*\left[ \frac{\sqrt{k_n}}{\sqrt{\mathsf {Var}\,_{\mathsf {P}_{\varvec{\xi }}^*}\xi _{n,1}^*}}\left( \bar{\xi }_n^*-\mathsf {E}_{\mathsf {P}_{\varvec{\xi }}^*}\bar{\xi }_n^*\right) \le x\right] \!-\!\int _{-\infty }^x\frac{1}{\sqrt{2\pi }}\exp \left\{ -\frac{t^2}{2}\right\} \text{ d }t\right| \xrightarrow [n\rightarrow \infty ]{\mathsf {P}}0\);
-
(ii)
\(\mathsf {Var}\,_{\mathsf {P}_{\varvec{\xi }}^*}\xi _{n,1}^*-\varsigma _n^2\xrightarrow [n\rightarrow \infty ]{\mathsf {P}}0\);
-
(iii)
\(\mathsf {E}_{\mathsf {P}_{\varvec{\xi }}^*}\bar{\xi }_n^*=\bar{\xi }_n,\, [\mathsf {P}]-a.s.\)
Proving (iii) is trivial, because \(\mathsf {E}_{\mathsf {P}_{\varvec{\xi }}^*}\bar{\xi }_n^*=\mathsf {E}_{\mathsf {P}_{\varvec{\xi }}^*}\xi _{n,1}^*=k_n^{-1}\sum _{i=1}^{k_n}\xi _{n,i}=\bar{\xi }_n,\, [\mathsf {P}]\)-\(a.s.\)
Let us calculate the conditional variance of the bootstrapped variable \(\xi _{n,1}^*\): \(\mathsf {Var}\,_{\mathsf {P}_{\varvec{\xi }}^*}\xi _{n,1}^*=\mathsf {E}_{\mathsf {P}_{\varvec{\xi }}^*}\xi _{n,1}^{*2}-(\mathsf {E}_{\mathsf {P}_{\varvec{\xi }}^*}\xi _{n,1}^*)^2=k_n^{-1}\sum _{i=1}^{k_n}\xi _{n,i}^2-\left( k_n^{-1}\sum _{i=1}^{k_n}\xi _{n,i}\right) ^2,\, [\mathsf {P}]\)-\(a.s.\) The weak law of large numbers together with (5) provides
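The two closed-form identities just derived can be verified numerically; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
xi = rng.normal(size=50)
n = len(xi)

# P-a.s. identities from the proof, conditional on xi:
e_star = xi.mean()                          # E* xi*_1 = (1/n) sum xi_i
v_star = (xi**2).mean() - xi.mean() ** 2    # Var* xi*_1

# Monte Carlo check: many independent draws of xi*_1 given xi.
draws = xi[rng.integers(0, n, size=200_000)]
```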
and
The last application of the WLLN is justified, because (5) implies
Thus (ii) is proved.
The Berry–Esseen–Katz theorem (see Katz 1963) with \(g(x)=|x|^{\epsilon },\,\epsilon >0\), applied to the bootstrapped sequence of iid (with respect to \(\mathsf {P}_{\varvec{\xi }}^*\)) random variables \(\{\xi _{n,i}^*\}_{i=1}^{k_n}\), results in
for all \(n\in \mathbb {N}\), where \(C>0\) is an absolute constant.
The Jensen and Minkowski inequalities provide an upper bound for the numerator on the right-hand side of (10):
The right-hand side of the previously derived upper bound is uniformly bounded in probability \(\mathsf {P}\), because of Markov’s inequality and (5). Indeed, for fixed \(\eta >0\)
and
Since \(\mathsf {E}_{\mathsf {P}_{\varvec{\xi }}^*}|\xi _{n,1}^*-\mathsf {E}_{\mathsf {P}_{\varvec{\xi }}^*}\xi _{n,1}^*|^{2+\epsilon }\) is bounded in probability \(\mathsf {P}\) uniformly over \(n\), and the denominator of the right-hand side of (10) is uniformly bounded away from zero due to (6), the left-hand side of (10) converges to zero in probability \(\mathsf {P}\) as \(n\) tends to infinity. So, (i) is proved as well. \(\square \)
Proof (of Theorem 7)
According to the Cramér–Wold theorem, it is sufficient to verify that all assumptions of the one-dimensional bootstrap CLT for triangular arrays (Theorem 6) hold for any linear combination of the elements of the random vector \(\varvec{\xi }_{n,1},\,n\in \mathbb {N}\).
For arbitrary fixed \(\mathbf {t}\in \mathbb {R}^q\) using the Jensen inequality, we get
Hence, assumption (7) implies assumption (5) for the random variables \(\{\mathbf {t}^{\top }\varvec{\xi }_{n,k_n}\}_{n\in \mathbb {N}}\).
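One standard form of the Jensen step (the constant may differ from the paper's display) uses convexity of \(x\mapsto |x|^{2+\epsilon }\):

```latex
\bigl|\mathbf{t}^{\top}\varvec{\xi}_{n,1}\bigr|^{2+\epsilon}
  = q^{2+\epsilon}\,\Bigl|\frac{1}{q}\sum_{j=1}^{q} t_j\,\xi_{n,1}^{(j)}\Bigr|^{2+\epsilon}
  \le q^{1+\epsilon}\sum_{j=1}^{q} |t_j|^{2+\epsilon}\,\bigl|\xi_{n,1}^{(j)}\bigr|^{2+\epsilon}.
```

Taking expectations and the supremum over \(n\), the componentwise moment bound in (7) then yields (5) for the linear combinations \(\mathbf {t}^{\top }\varvec{\xi }_{n,1}\).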
Similarly, assumption (8) implies assumption (6) for such an arbitrary linear combination, i.e., positive definiteness of the matrix \(\varvec{\varGamma }\) yields
\(\square \)
Peštová, B., Pešta, M. Testing structural changes in panel data with small fixed panel size and bootstrap. Metrika 78, 665–689 (2015). https://doi.org/10.1007/s00184-014-0522-8