1 Introduction

We study the problem of an unknown common change in the means of the panels, where the panel data consist of \(N\) panels and each panel contains \(T\) observations over time. The change may have a different magnitude in each panel, but it occurs at some unknown common time \(\tau \in \{1,\ldots ,T\}\). The panels are considered to be independent, although this restriction can be weakened. Observations within each panel, however, are usually not independent; it is supposed that a common unknown dependence structure is present over the panels.

Tests for change point detection in panel data have been proposed only in the case where the panel size \(T\) is sufficiently large, i.e., \(T\) tends to infinity from an asymptotic point of view, cf. Chan et al. (2013) or Horváth and Hušková (2012). However, change point estimation has already been studied for finite \(T\) not depending on the number of panels \(N\), see Bai (2010). The remaining task is to develop testing procedures that decide whether a common change point is present in the panels, while taking into account that the length \(T\) of each observation regime is fixed and can be relatively small.

1.1 Motivation

Structural changes in panel data—especially common breaks in means—are widespread phenomena. Our primary motivation comes from the non-life insurance business, where associations in many countries unite several insurance companies and collect the claim amounts paid by each company every year. Such a database of cumulative claim payments can be viewed as panel data, where insurance company \(i=1,\ldots ,N\) contributes the total claim amount \(Y_{i,t}\) paid in year \(t=1,\ldots ,T\) to the common database. The members of the association can consequently profit from the joint database.

For the whole association it is important to know whether a possible change in the claim amounts occurred during the observed time horizon. Usually, the time period is relatively short, e.g., 10–15 years. To be more specific, a widely used and very standard actuarial method for predicting future claim amounts—called chain ladder—assumes a certain stability of the historical claim amounts. The formal necessary and sufficient condition is derived in Pešta and Hudecová (2012). This paper shows how to test for such a possible historical instability.

2 Panel change point model

Let us consider the panel change point model

$$\begin{aligned} Y_{i,t}=\mu _{i}+{\delta _{i}}{\mathcal {I}}\{t>\tau \}+\sigma \varepsilon _{i,t},\quad 1\le i\le N,\, 1\le t\le T; \end{aligned}$$
(1)

where \(\sigma >0\) is an unknown variance-scaling parameter and \(T\) is fixed, not depending on \(N\). The possible common change point is denoted by \(\tau \in \{1,\ldots ,T\}\), where \(\tau =T\) corresponds to no change in the means of the panels. The means \(\mu _i\) are panel-individual, and the amount of the break in mean, which can also differ for every panel, is denoted by \(\delta _i\). Furthermore, it is assumed that the sequences of panel disturbances \(\{\varepsilon _{i,t}\}_t\) are independent and that, within each panel, the errors form a weakly stationary sequence with a common correlation structure. This is formalized in the following assumption.

Assumption A1

The vectors \([\varepsilon _{i,1},\ldots ,\varepsilon _{i,T}]^{\top }\) existing on a probability space \((\varOmega ,\mathcal {F},\mathsf {P})\) are \(iid\) for \(i=1,\ldots ,N\) with \(\mathsf {E}\varepsilon _{i,t}=0\) and \(\mathsf {Var}\,\varepsilon _{i,t}=1\), having the autocorrelation function

$$\begin{aligned} \rho _t=\mathsf {Corr}\,\left( \varepsilon _{i,s},\varepsilon _{i,s+t}\right) = \mathsf {Cov}\,\left( \varepsilon _{i,s},\varepsilon _{i,s+t}\right) ,\quad \forall s\in \{1,\ldots ,T-t\}, \end{aligned}$$

which is independent of \(s\) (so that \(t\) plays the role of the lag), the cumulative autocorrelation function

$$\begin{aligned} r(t)=\mathsf {Var}\,\sum _{s=1}^t \varepsilon _{i,s}=\sum _{|s|<t}(t-|s|)\rho _s, \end{aligned}$$

and the shifted cumulative correlation function

$$\begin{aligned} R(t,v)=\mathsf {Cov}\,\left( \sum _{s=1}^t\varepsilon _{i,s}, \sum _{u=t+1}^v\varepsilon _{i,u}\right) =\sum _{s=1}^t\sum _{u=t+1}^v\rho _{u-s},\quad t<v \end{aligned}$$

for all \(i=1,\ldots ,N\) and \(t,v=1,\ldots ,T\).

The sequence \(\{\varepsilon _{i,t}\}_{t=1}^T\) can be viewed as part of a weakly stationary process. Note that the dependent errors within each panel need not be linear processes; for example, GARCH processes are allowed as error sequences as well. The assumption of independent panels can indeed be relaxed, but it would make the setup much more complex, since probabilistic tools for dependent data (e.g., suitable versions of the central limit theorem) would then be needed. Nevertheless, assuming that the claim amounts of different insurance companies are independent is reasonable. Moreover, the assumption of a common homoscedastic variance parameter \(\sigma \) can be generalized by introducing weights \(w_{i,t}\), which are supposed to be known. In actuarial practice, this would mean normalizing the total claim amount by the premium received, since bigger insurance companies are expected to have higher variability in the total claim amounts paid.

It is required to test the null hypothesis of no change in the means

$$\begin{aligned} H_0:\,\tau =T \end{aligned}$$

against the alternative that at least one panel has a change in mean

$$\begin{aligned} H_1:\,\tau <T\quad \text{ and }\quad \exists i\in \{1,\ldots ,N\}:\,\delta _i\ne 0. \end{aligned}$$

3 Test statistic and asymptotic results

We propose a ratio type statistic to test \(H_0\) against \(H_1\), because this type of statistic does not require estimation of the nuisance variance parameter: the variance parameter simply cancels out from the numerator and denominator of the statistic. Alternatively, the common variance could be estimated from all the panels, of which we possess a sufficient number. Nevertheless, we aim to construct a valid and completely data driven testing procedure without intermediate estimation and without plugging in estimates of nuisance parameters. A bootstrap add-on will serve this purpose, as seen later on.

For surveys on ratio type test statistics, we refer to Chen and Tian (2014), Csörgő and Horváth (1997), Horváth et al. (2009), Liu et al. (2008), and Madurkayová (2011). Our particular panel change point test statistic is

$$\begin{aligned} \mathcal {R}_N(T)=\max _{t=2,\ldots ,T-2}\frac{\max _{s=1,\ldots ,t}\left| \sum _{i=1}^N\left[ \sum _{r=1}^s\left( Y_{i,r}-\bar{Y}_{i,t}\right) \right] \right| }{\max _{s=t,\ldots ,T-1}\left| \sum _{i=1}^N\left[ \sum _{r=s+1}^T\left( Y_{i,r}-\widetilde{Y}_{i,t}\right) \right] \right| }, \end{aligned}$$

where \(\bar{Y}_{i,t}\) is the average of the first \(t\) observations in panel \(i\) and \(\widetilde{Y}_{i,t}\) is the average of the last \(T-t\) observations in panel \(i\), i.e.,

$$\begin{aligned} \bar{Y}_{i,t}=\frac{1}{t}\sum _{s=1}^t Y_{i,s}\quad \quad \text{ and }\quad \quad \widetilde{Y}_{i,t}=\frac{1}{T-t}\sum _{s=t+1}^T Y_{i,s}. \end{aligned}$$
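For illustration, the statistic \(\mathcal {R}_N(T)\) can be evaluated directly from an \(N\times T\) data matrix. The following Python sketch follows the definition above term by term; the function and variable names are ours, not part of the paper.

```python
import numpy as np

def ratio_statistic(Y):
    """Compute the ratio type test statistic R_N(T) for an N x T panel matrix Y.

    Illustrative sketch only; Y[i, t] holds observation t+1 of panel i+1
    (0-based indexing, while the formulas in the text are 1-based).
    """
    N, T = Y.shape
    ratios = []
    for t in range(2, T - 1):  # t = 2, ..., T-2 in the 1-based notation
        # numerator: max over s = 1,...,t of |sum_i sum_{r<=s} (Y_ir - mean of first t obs)|
        num = max(
            abs(np.sum(Y[:, :s].sum(axis=1) - s * Y[:, :t].mean(axis=1)))
            for s in range(1, t + 1)
        )
        # denominator: max over s = t,...,T-1 of |sum_i sum_{r>s} (Y_ir - mean of last T-t obs)|
        den = max(
            abs(np.sum(Y[:, s:].sum(axis=1) - (T - s) * Y[:, t:].mean(axis=1)))
            for s in range(t, T)
        )
        ratios.append(num / den)
    return max(ratios)
```

Note that the panel-wise inner sums are vectorized over \(i\), so only the two maxima over \(s\) and the maximum over \(t\) remain as explicit loops.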

An alternative way of testing for a change in the panel means would be to use CUSUM type statistics, for example a maximum or minimum of a sum (rather than a ratio) of properly standardized or modified partial sums from our test statistic \(\mathcal {R}_N(T)\). The theory that follows can be rewritten appropriately for such cases.

Firstly, we derive the behavior of the test statistic under the null hypothesis.

Theorem 1

(Under null) Under hypothesis \(H_0\) and Assumption A1

$$\begin{aligned} \mathcal {R}_N(T)\xrightarrow [N\rightarrow \infty ]{\fancyscript{D}}\max _{t=2,\ldots ,T-2}\frac{\max _{s=1,\ldots ,t}\left| X_s-\frac{s}{t}X_t\right| }{\max _{s=t,\ldots ,T-1}\left| Z_s-\frac{T-s}{T-t}Z_t\right| }, \end{aligned}$$

where \(Z_t:=X_T-X_t\) and \([X_1,\ldots ,X_T]^{\top }\) is a multivariate normal random vector with zero mean and covariance matrix \(\varvec{\varLambda }=\{\lambda _{t,v}\}_{t,v=1}^{T,T}\) such that

$$\begin{aligned} \lambda _{t,t}=r(t)\quad \text{ and }\quad \lambda _{t,v}=r(t)+R(t,v),\,\, t<v. \end{aligned}$$

The limiting distribution does not depend on the variance nuisance parameter \(\sigma \), but it does depend on the unknown correlation structure of the panel change point model, which has to be estimated for testing purposes; its estimation is described in Sect. 4.1. Furthermore, Theorem 1 is just a theoretical mid-step towards the bootstrap test, where the correlation structure need not be known. That is why the presence of unknown quantities in the asymptotic distribution is not troublesome.

Note that in the case of independent observations within the panel, the correlation structure and, hence, the covariance matrix \(\varvec{\varLambda }\) simplify to \(r(t)=t\) and \(R(t,v)=0\).
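Once \(r\) and \(R\) are known (or estimated), the limit law of Theorem 1 can be simulated: build \(\varvec{\varLambda }\), draw multivariate normal vectors, and evaluate the max-ratio functional. The sketch below, with names of our own choosing, uses the independent case \(r(t)=t\), \(R(t,v)=0\) mentioned above.

```python
import numpy as np

def limit_distribution_sample(T, r, R, size=2000, seed=0):
    """Draw from the limit law of Theorem 1 given callables r(t) and R(t, v)."""
    rng = np.random.default_rng(seed)
    # covariance matrix Lambda: lambda_{t,t} = r(t); lambda_{t,v} = r(t) + R(t,v) for t < v
    lam = np.empty((T, T))
    for t in range(1, T + 1):
        lam[t - 1, t - 1] = r(t)
        for v in range(t + 1, T + 1):
            lam[t - 1, v - 1] = lam[v - 1, t - 1] = r(t) + R(t, v)
    X = rng.multivariate_normal(np.zeros(T), lam, size=size)  # rows: [X_1,...,X_T]
    Z = X[:, [T - 1]] - X                                     # Z_t = X_T - X_t
    out = np.empty(size)
    for k in range(size):
        out[k] = max(
            max(abs(X[k, s - 1] - s / t * X[k, t - 1]) for s in range(1, t + 1)) /
            max(abs(Z[k, s - 1] - (T - s) / (T - t) * Z[k, t - 1]) for s in range(t, T))
            for t in range(2, T - 1))
    return out

# independent observations within the panel: r(t) = t, R(t, v) = 0
sample = limit_distribution_sample(10, r=lambda t: t, R=lambda t, v: 0.0)
crit = np.quantile(sample, 0.95)  # simulated asymptotic critical value
```

The empirical \(95\,\%\) quantile of `sample` then plays the role of the asymptotic critical value, exactly as done in the simulations of Sect. 6.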

Next, we show how the test statistic behaves under the alternative.

Assumption A2

\(\lim _{N\rightarrow \infty }\frac{1}{\sqrt{N}}\left| \sum _{i=1}^N \delta _i\right| =\infty \).

Theorem 2

(Under alternative) If \(\tau \le T-3\), then under Assumptions A1, A2 and alternative \(H_1\)

$$\begin{aligned} \mathcal {R}_{N}(T)\xrightarrow [N\rightarrow \infty ]{\mathsf {P}}\infty . \end{aligned}$$
(2)

Assumption A2 is satisfied, for instance, if \(0<\delta \le \delta _i\) for all \(i\) (a common lower bound on the change size) and \(\delta \sqrt{N}\rightarrow \infty \) as \(N\rightarrow \infty \). Another suitable example of \(\delta _i\)s satisfying Assumption A2 is \(\delta _i=KN^{-1/2+\eta }\) for some \(K>0\) and \(\eta >0\); or \(\delta _i=Ci^{\alpha -1}\sqrt{N}\) may be used as well, where \(\alpha \ge 0\) and \(C>0\). The assumption \(\tau \le T-3\) means that there are at least three observations in the panel after the change point. It is also possible to redefine the test statistic by interchanging the numerator and the denominator of \(\mathcal {R}_{N}(T)\); Theorem 2 for the modified test statistic would then require three observations before the change point, i.e., \(\tau \ge 3\).

Theorem 2 says that in presence of a structural change in the panel means, the test statistic explodes above all bounds. Hence, the procedure is consistent and the asymptotic distribution from Theorem 1 can be used to construct the test.

4 Change point estimation

Despite the fact that the aim of the paper is to establish testing procedures for detecting a change in panel means, it is necessary to construct a consistent estimate of a possible change point. There are two reasons for this: firstly, the estimation of the covariance matrix \(\varvec{\varLambda }\) from Theorem 1 requires panels whose elements have a common mean (i.e., without a jump); secondly, the bootstrap procedure introduced later on requires centered residuals for resampling.

A consistent estimate of the change point in panel data is proposed in Bai (2010), but under the assumption that a change has certainly occurred. In our situation, we do not know whether a change occurs or not. Therefore, we modify the estimate proposed by Bai (2010) in the following way: if the panel means change somewhere inside \(\{2,\ldots ,T-1\}\), the estimate selects this change consistently; if there is no change in the panel means, the estimate points to the very last time point \(T\) with probability going to one. In other words, the value \(T\) of the change point estimate means no change. This is in contrast with Bai (2010), where \(T\) is not reachable.

Let us define the estimate of \(\tau \):

$$\begin{aligned} \widehat{\tau }_N:=\arg \max _{t=2,\ldots ,T} \frac{1}{t}\sum _{i=1}^N\sum _{s=1}^t(Y_{i,s}-\bar{Y}_{i,t})^2. \end{aligned}$$
(3)
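The estimator (3) is a finite maximization over \(t\) and can be computed directly; the following minimal Python sketch (names are ours) returns a 1-based index in \(\{2,\ldots ,T\}\), where the value \(T\) means that no change is detected.

```python
import numpy as np

def estimate_tau(Y):
    """Change point estimate (3) for an N x T panel matrix Y; sketch only."""
    N, T = Y.shape
    best_t, best_val = None, -np.inf
    for t in range(2, T + 1):  # t = 2, ..., T (1-based)
        # within-panel deviations from the mean of the first t observations
        centered = Y[:, :t] - Y[:, :t].mean(axis=1, keepdims=True)
        val = (centered ** 2).sum() / t
        if val > best_val:
            best_t, best_val = t, val
    return best_t
```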

Now, we show the desired property of consistency for the proposed change point estimate under the following assumptions.

Assumption C1

\(L<\lim _{N\rightarrow \infty }\frac{1}{N}\sum _{i=1}^N\delta _i^2<\infty \), where \(L=-\infty \) if \(\tau =T\) and \(L=\max _{t=\tau +1,\ldots ,T}\frac{\sigma ^2t^2}{\tau (t-\tau )}\left( \frac{r(\tau )}{\tau ^2}-\frac{r(t)}{t^2}\right) \) otherwise.

Assumption C2

\(\mathsf {E}\varepsilon _{1,t}^4<\infty ,\,t\in \{1,\ldots ,T\}\).

Theorem 3

(Change point estimate consistency) Suppose that \(\tau \ne 1\) and the sequence \(\{r(t)/t^2\}_{t=2}^T\) is decreasing. Then under Assumptions A1, C1, and C2

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathsf {P}[\widehat{\tau }_N=\tau ]=1. \end{aligned}$$

Assumption C1 assures that the sizes of the changes are large enough compared to the variability of the random noise in the panels and to the strength of the dependencies within the panels. On the one hand, Assumption C1 implies the usual assumption \(\lim _{N\rightarrow \infty }\frac{1}{\sqrt{N}}\sum _{i=1}^N\delta _i^2=\infty \) of change point analysis, cf. Bai (2010) or Horváth and Hušková (2012). On the other hand, Assumption C1 assures that \(\lim _{N\rightarrow \infty }\frac{1}{N^2}\sum _{i=1}^N\delta _i^2=0\), a requirement not present when the panel size \(T\) is considered unbounded, i.e., \(T\rightarrow \infty \). Here, this second part is needed to control the asymptotic boundedness of the variability of \(\frac{1}{t}\sum _{i=1}^N\sum _{s=1}^t(Y_{i,s}-\bar{Y}_{i,t})^2\), because a finite \(T\) cannot do that on its own.

Similarly as in the previous section, Assumption C1 is satisfied for \(0<\delta \le \delta _i<\Delta ,\,\forall i\) (common lower and upper bounds for the change amounts) and suitable \(\sigma \) and \(r(t)\). Assumptions A2 and C1 are generally incomparable. The monotonicity assumption of Theorem 3 is not very restrictive at all. For example, in the case of independent observations within the panel, this assumption is automatically fulfilled, since \(\{1/t\}_{t=2}^T\) is decreasing. Moreover, the weaker the dependency within the panel, the faster the decrease of \(r(t)/t^2\).

Inspecting the proof of Theorem 3, one can see that Assumption C1 may be replaced by the more restrictive assumptions \(\lim _{N\rightarrow \infty }\frac{1}{N}\sum _{i=1}^N\delta _i^2=\infty \) and \(\lim _{N\rightarrow \infty }\frac{1}{N^2}\sum _{i=1}^N\delta _i^2=0\). The first assumption might be considered too strong, because a common value \(\delta =\delta _i\) for all \(i\) does not fulfill it.

Various competing consistent estimates of a possible change point can be suggested, e.g., the maximizer of \(\sum _{i=1}^N\left[ \sum _{s=1}^t(Y_{i,s}-\bar{Y}_{i,T})\right] ^2\). To show consistency, one would need to postulate different assumptions on the cumulative autocorrelation function and the shifted cumulative correlation function than in Theorem 3, and this may be rather complex.

4.1 Estimation of the correlation structure

Since the panels are considered independent and the number of panels may be sufficiently large, one can estimate the correlation structure of the errors \([\varepsilon _{1,1},\ldots ,\varepsilon _{1,T}]^{\top }\) empirically. We base the error estimates on the residuals

$$\begin{aligned} \widehat{e}_{i,t}:=\left\{ \begin{array}{ll} Y_{i,t}-\bar{Y}_{i,\widehat{\tau }_N},&{}\quad t\le \widehat{\tau }_N,\\ Y_{i,t}-\widetilde{Y}_{i,\widehat{\tau }_N},&{}\quad t>\widehat{\tau }_N. \end{array} \right. \end{aligned}$$
(4)

Then, the empirical version of the autocorrelation function is

$$\begin{aligned} \widehat{\rho }_t:=\frac{1}{\widehat{\sigma }^2 NT}\sum _{i=1}^N\sum _{s=1}^{T-t}\widehat{e}_{i,s}\widehat{e}_{i,s+t}. \end{aligned}$$

Consequently, kernel estimates of the cumulative autocorrelation function and of the shifted cumulative correlation function are adopted in line with Andrews (1991):

$$\begin{aligned} \widehat{r}(t)&=\sum _{|s|<t}(t-|s|)\kappa \left( \frac{s}{h}\right) \widehat{\rho }_s,\\ \widehat{R}(t,v)&=\sum _{s=1}^t\sum _{u=t+1}^v\kappa \left( \frac{u-s}{h}\right) \widehat{\rho }_{u-s},\quad t<v; \end{aligned}$$

where \(h>0\) stands for the window size and \(\kappa \) belongs to a class of kernels given by

$$\begin{aligned}&\left\{ \kappa (\cdot ):\,\mathbb {R}\rightarrow [-1,1]\,\big |\,\kappa (0)=1,\,\kappa (x)=\kappa (-x),\,\forall x,\,\int _{-\infty }^{+\infty }\kappa ^2(x)\text{ d }x<\infty ,\right. \\&\quad \quad \quad \left. \kappa (\cdot )\, \text{ is } \text{ continuous } \text{ at } 0 \text{ and } \text{ at } \text{ all } \text{ but } \text{ a } \text{ finite } \text{ number } \text{ of } \text{ other } \text{ points }\right\} . \end{aligned}$$

Since the variance parameter \(\sigma \) is not present in the limiting distribution of Theorem 1, it need neither be estimated nor known. Nevertheless, one can use \(\widehat{\sigma }^2:=\frac{1}{NT}\sum _{i=1}^{N} \sum _{s=1}^{T}\widehat{e}_{i,s}^2\) in the empirical autocorrelation function above.
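The estimation steps of this subsection can be sketched as follows. The Parzen kernel coded here is the choice later employed in the simulations of Sect. 6; the window size \(h\) and all function names are illustrative assumptions of ours.

```python
import numpy as np

def parzen_kernel(x):
    """Parzen kernel, a member of the kernel class above (used again in Sect. 6)."""
    ax = abs(x)
    if ax <= 0.5:
        return 1 - 6 * ax**2 + 6 * ax**3
    if ax <= 1:
        return 2 * (1 - ax)**3
    return 0.0

def estimate_correlation_structure(Y, tau_hat, h=3.0):
    """Estimate rho_t, r(t), and R(t, v) from the residuals (4); sketch only.

    tau_hat is the (1-based) change point estimate; h > 0 is the window size.
    """
    N, T = Y.shape
    # residuals (4): center by the pre-change mean up to tau_hat, post-change mean after
    e = np.empty_like(Y, dtype=float)
    e[:, :tau_hat] = Y[:, :tau_hat] - Y[:, :tau_hat].mean(axis=1, keepdims=True)
    if tau_hat < T:
        e[:, tau_hat:] = Y[:, tau_hat:] - Y[:, tau_hat:].mean(axis=1, keepdims=True)
    sigma2 = (e**2).sum() / (N * T)
    # empirical autocorrelations rho_hat_t, t = 0, ..., T-1
    rho = np.array([(e[:, :T - t] * e[:, t:]).sum() / (sigma2 * N * T)
                    for t in range(T)])
    # r_hat(t) = sum_{|s|<t} (t - |s|) kappa(s/h) rho_s
    def r_hat(t):
        return sum((t - abs(s)) * parzen_kernel(s / h) * rho[abs(s)]
                   for s in range(-t + 1, t))
    # R_hat(t, v) = sum_{s<=t} sum_{t<u<=v} kappa((u - s)/h) rho_{u-s}, for t < v
    def R_hat(t, v):
        return sum(parzen_kernel((u - s) / h) * rho[u - s]
                   for s in range(1, t + 1) for u in range(t + 1, v + 1))
    return rho, r_hat, R_hat
```

Note that \(\widehat{\rho }_0=1\) by construction, and \(\widehat{r}(1)=\kappa (0)\widehat{\rho }_0=1\), which provides a quick sanity check of an implementation.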

5 Bootstrap and hypothesis testing

A wide range of literature has been published on bootstrapping in change point problems, e.g., Hušková and Kirch (2012) or Hušková et al. (2008). We build the bootstrap test on resampling with replacement of the row vectors \(\{[\widehat{e}_{i,1},\ldots ,\widehat{e}_{i,T}]\}_{i=1,\ldots ,N}\) corresponding to the panels, which provides bootstrapped row vectors \(\{[\widehat{e}_{i,1}^*,\ldots ,\widehat{e}_{i,T}^*]\}_{i=1,\ldots ,N}\). Then, the bootstrapped residuals \(\widehat{e}_{i,t}^*\) are centered by their conditional expectation \(\frac{1}{N}\sum _{i=1}^N\widehat{e}_{i,t}\), yielding

$$\begin{aligned} \widehat{Y}_{i,t}^*:=\widehat{e}_{i,t}^*-\frac{1}{N} \sum _{i=1}^N\widehat{e}_{i,t}. \end{aligned}$$

The bootstrap test statistic is just a modification of the original statistic \(\mathcal {R}_N(T)\), where the original observations \(Y_{i,t}\) are replaced by their bootstrap counterparts \(\widehat{Y}_{i,t}^*\):

$$\begin{aligned} \mathcal {R}_{N}^*(T)=\max _{t=2,\ldots ,T-2}\frac{\max _{s=1,\ldots ,t}\left| \sum _{i=1}^N\left[ \sum _{r=1}^s\left( \widehat{Y}_{i,r}^*-\bar{\widehat{Y}}_{i,t}^*\right) \right] \right| }{\max _{s=t,\ldots ,T-1}\left| \sum _{i=1}^N\left[ \sum _{r=s+1}^T\left( \widehat{Y}_{i,r}^*- \widetilde{\widehat{Y}}_{i,t}^*\right) \right] \right| }, \end{aligned}$$

such that

$$\begin{aligned} \bar{\widehat{Y}}_{i,t}^*=\frac{1}{t}\sum _{s=1}^t \widehat{Y}_{i,s}^*\quad \text{ and }\quad \widetilde{\widehat{Y}}_{i,t}^*= \frac{1}{T-t}\sum _{s=t+1}^T \widehat{Y}_{i,s}^*. \end{aligned}$$

An algorithm for the bootstrap is shown in Procedure 1; its validity is proved in Theorem 4.
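A possible implementation of the resampling scheme can be sketched as follows: recompute the change point estimate (3), form the residuals (4), resample whole panels with replacement, center them, and evaluate the bootstrap statistic. All names, the default number of replications, and the compact re-implementations of \(\mathcal {R}_N(T)\) and (3) inside the sketch are our illustrative choices.

```python
import numpy as np

def ratio_stat(Y):
    """R_N(T) from Sect. 3 (compact re-implementation for this sketch)."""
    N, T = Y.shape
    def num(t):
        return max(abs(np.sum(Y[:, :s].sum(1) - s * Y[:, :t].mean(1)))
                   for s in range(1, t + 1))
    def den(t):
        return max(abs(np.sum(Y[:, s:].sum(1) - (T - s) * Y[:, t:].mean(1)))
                   for s in range(t, T))
    return max(num(t) / den(t) for t in range(2, T - 1))

def bootstrap_critical_value(Y, B=2000, alpha=0.05, seed=0):
    """Procedure 1 as a sketch: resample panels, center, recompute the statistic."""
    rng = np.random.default_rng(seed)
    N, T = Y.shape
    # change point estimate (3)
    tau = max(range(2, T + 1),
              key=lambda t: ((Y[:, :t] - Y[:, :t].mean(1, keepdims=True))**2).sum() / t)
    # residuals (4); tau == T means centering by the overall panel mean
    e = Y - Y.mean(1, keepdims=True) if tau == T else np.hstack([
        Y[:, :tau] - Y[:, :tau].mean(1, keepdims=True),
        Y[:, tau:] - Y[:, tau:].mean(1, keepdims=True)])
    stats = []
    for _ in range(B):
        e_star = e[rng.integers(0, N, size=N)]  # resample whole panels (rows)
        Y_star = e_star - e.mean(axis=0)        # center by the conditional expectation
        stats.append(ratio_stat(Y_star))
    return np.quantile(stats, 1 - alpha)        # bootstrap critical value
```

The null is then rejected whenever `ratio_stat(Y)` exceeds the returned bootstrap critical value.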


5.1 Validity of the resampling procedure

The idea behind bootstrapping is to mimic the original distribution of the test statistic in some sense with the distribution of the bootstrap test statistic, conditionally on the original data denoted by \(\mathbb {Y}\equiv \{Y_{i,t}\}_{i,t=1}^{N,T}\).

First of all, two simple and just technical assumptions are needed.

Assumption B1

\(\{\varepsilon _{i,t}\}_t\) possesses the lagged cumulative correlation function

$$\begin{aligned} S(t,v,d)=\mathsf {Cov}\,\left( \sum _{s=1}^t\varepsilon _{i,s},\sum _{u=t+d}^v\varepsilon _{i,u}\right) =\sum _{s=1}^t\sum _{u=t+d}^v\rho _{u-s},\quad \forall i\in \mathbb {N}. \end{aligned}$$

Assumption B2

\(\lim _{N\rightarrow \infty }\mathsf {P}[\widehat{\tau }_N=\tau ]=1\).

Assumption B1 is not really an assumption; it merely introduces notation. Notice that \(S(t,v,1)\equiv R(t,v)\). Assumption B2 is satisfied by the estimate proposed in (3) if the assumptions of Theorem 3 hold. Assumption B2 is postulated in this broader form because we want to allow any other consistent estimate of \(\tau \) to be used instead.

Recall that it is not known whether the common change in panel means occurred or not; that is, one does not know whether the data come from the null or the alternative hypothesis. Therefore, the following theorem holds under \(H_0\) as well as under \(H_1\).

Theorem 4

(Bootstrap justification) Under Assumptions A1, B1, B2, and C2

$$\begin{aligned} \mathcal {R}_{N}^*(T)|\mathbb {Y}\xrightarrow [N\rightarrow \infty ]{\fancyscript{D}}\max _{t=2,\ldots ,T-2}\frac{\max _{s=1,\ldots ,t}\left| \mathcal {X}_s-\frac{s}{t}\mathcal {X}_t\right| }{\max _{s=t,\ldots ,T-1}\left| \mathcal {Z}_s-\frac{T-s}{T-t}\mathcal {Z}_t\right| }\quad \text{ in } \text{ probability } \mathsf {P}, \end{aligned}$$

where \(\mathcal {Z}_t:=\mathcal {X}_T-\mathcal {X}_t\) and \([\mathcal {X}_1,\ldots ,\mathcal {X}_T]^{\top }\) is a multivariate normal random vector with zero mean and covariance matrix \(\varvec{\varGamma }=\left\{ \gamma _{t,v}(\tau )\right\} _{t,v=1}^{T,T}\) such that

$$\begin{aligned} \gamma _{t,t}(\tau )=\left\{ \begin{array}{l} r(t)+\frac{t^2}{\tau ^2}r(\tau )-\frac{2t}{\tau }[r(t)+R(t,\tau )],\\ \quad t<\tau ;\\ 0,\quad t=\tau ;\\ r(t-\tau )+\frac{(t-\tau )^2}{(T-\tau )^2} r(T-\tau )-\frac{2(t-\tau )}{T-\tau }\left[ r(t-\tau )+R(t-\tau ,T-\tau )\right] ,\\ \quad t>\tau ; \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} \gamma _{t,v}(\tau )=\left\{ \begin{array}{l} 0,\quad t=\tau \text{ or } v=\tau ,\\ r(t)+R(t,v)+\frac{tv}{\tau ^2}r(\tau )-\frac{v}{\tau }[r(t)+R(t,\tau )]\\ \quad -\frac{t}{\tau }[r(v)+R(v,\tau )],\quad t<v<\tau ;\\ S(t,v,\tau +1-t)+\frac{t(v-\tau )}{\tau (T-\tau )}R(\tau ,T)\\ \quad -\frac{v-\tau }{T-\tau }S(t,T,\tau +1-t)-\frac{t}{\tau }R(\tau ,v),\quad t<\tau <v;\\ r(t-\tau )+R(t-\tau ,v-\tau )+\frac{(t-\tau )(v-\tau )}{(T-\tau )^2}r(T-\tau )\\ \quad -\frac{v-\tau }{T-\tau }[r(t-\tau )+R(t-\tau ,T-\tau )]\\ \quad -\frac{t-\tau }{T-\tau }[r(v-\tau )+R(v-\tau ,T-\tau )],\quad \tau <t<v. \end{array} \right. \end{aligned}$$

The validity of the bootstrap test is assured by Theorem 4. Indeed, the conditional asymptotic distribution of the bootstrap test statistic is a functional of a multivariate normal distribution under the null as well as under the alternative; it does not converge to infinity (in probability) under the alternative. That is why it can be used for correctly rejecting the null in favor of the alternative, provided \(N\) is sufficiently large. Moreover, the following theorem states that under the null the conditional distribution of the bootstrap test statistic and the unconditional distribution of the original test statistic coincide. That is the reason why the bootstrap test should approximately keep the same level as the original test based on the asymptotics from Theorem 1.

Theorem 5

(Bootstrap test consistency) Under Assumptions A1, B2, C2 and hypothesis \(H_0\), the asymptotic distribution of \(\mathcal {R}_{N}(T)\) from Theorem 1 and the asymptotic distribution of \(\mathcal {R}_{N}^*(T)|\mathbb {Y}\) from Theorem 4 coincide.

Now, the simulated (empirical) distribution of the bootstrap test statistic can be used to calculate the bootstrap critical value, which will be compared to the value of the original test statistic in order to reject the null or not.

Finally, note that one cannot think about any local alternative in this setup, because \(\tau \) has a discrete and finite support.

6 Simulations

A simulation experiment was performed to study the finite sample properties of the asymptotic and bootstrap test statistics for a common change in panel means. In particular, the interest lies in the empirical sizes of the proposed tests under the null hypothesis and in the empirical rejection rate (power) under the alternative. Random samples of panel data (\(5000\) each time) are generated from the panel change point model (1). The panel size is set to \(T=10\) and \(T=25\) in order to demonstrate the performance of the testing approaches in case of small and intermediate panel length. The number of panels considered is \(N=50\) and \(N=200\).

The correlation structure within each panel is modeled via random vectors generated from iid, AR(1), and GARCH(1,1) sequences. The considered AR(1) process has coefficient \(\phi =0.3\). In case of the GARCH(1,1) process, we use coefficients \(\alpha _0=1\), \(\alpha _1=0.1\), and \(\beta _1=0.2\), which according to Lindner (2009, Example 1) gives a strictly stationary process. In all three cases, the innovations are iid random variables from a standard normal \(\mathsf {N}(0,1)\) or Student \(t_5\) distribution. Simulation scenarios are produced as all possible combinations of the settings mentioned above.
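One of these scenarios can be generated as follows. The sketch simulates model (1) with AR(1) panel errors whose normal innovations are scaled so that \(\mathsf {Var}\,\varepsilon _{i,t}=1\), in accordance with Assumption A1; the panel-individual means \(\mu _i\) are drawn arbitrarily for illustration, and all names are ours.

```python
import numpy as np

def simulate_panel(N, T, tau, delta, phi=0.3, sigma=1.0, seed=0):
    """Generate data from model (1) with AR(1) panel errors; one scenario sketch.

    tau = T corresponds to no change; delta may be a scalar or a length-N array.
    """
    rng = np.random.default_rng(seed)
    # stationary AR(1): eps_t = phi * eps_{t-1} + innovation, kept at unit variance
    innov_sd = np.sqrt(1 - phi**2)
    eps = np.empty((N, T))
    eps[:, 0] = rng.standard_normal(N)
    for t in range(1, T):
        eps[:, t] = phi * eps[:, t - 1] + innov_sd * rng.standard_normal(N)
    mu = rng.standard_normal(N)  # panel-individual means mu_i (arbitrary here)
    Y = mu[:, None] + sigma * eps
    # add the break delta_i to observations t > tau (1-based), i.e. columns tau:
    delta = np.broadcast_to(np.asarray(delta, dtype=float), (N,))
    Y[:, tau:] += delta[:, None]
    return Y
```

For the alternative with \(\delta _i\) uniform on \([1,3]\), one would pass `delta=rng.uniform(1, 3, size=N)`.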

When using the asymptotic distribution from Theorem 1, the covariance matrix is estimated as proposed in Sect. 4.1 using the Parzen kernel

$$\begin{aligned} \kappa _{P}(x)=\left\{ \begin{array}{ll} 1-6x^2+6|x|^3, &{}\quad 0\le |x|\le 1/2;\\ 2(1-|x|)^3, &{}\quad 1/2\le |x|\le 1;\\ 0, &{}\quad \text{ otherwise }. \end{array} \right. \end{aligned}$$

Several values of the smoothing window width \(h\) were tried in the interval \([2,5]\), and all of them work fine, providing comparable results. To simulate the asymptotic distribution of the test statistic, 2000 multivariate random vectors are generated using the pre-estimated covariance matrix.

The bootstrap approach does not require estimating the covariance structure. The number of bootstrap replications used is 2000. To assess the theoretical results under \(H_0\) numerically, Table 1 provides the empirical specificity (one minus size) of the tests for both the asymptotic and the bootstrap version of the panel change point test, where the significance level is \(\alpha =5\,\%\).

Table 1 Empirical specificity (\(1-\)size) of the test under \(H_0\) using the asymptotic and the bootstrap critical values, considering a significance level of \(5\,\%\)

It may be seen that both approaches (using the asymptotic and the bootstrap distribution) are close to the theoretical specificity of 0.95. As expected, the best results are achieved in case of independence within the panel, because there is no information overlap between consecutive observations. The precision of not rejecting the null increases as the number of panels grows and as the panels become longer.

The performance of both testing procedures under \(H_1\) in terms of the empirical rejection rates is shown in Table 2, where the change point is set to \(\tau =\lfloor T/2 \rfloor \) and the change sizes \(\delta _i\) are independently uniform on [1, 3] in \(33\,\%\), \(66\,\%\) or in all panels.

Table 2 Empirical sensitivity (power) of the test under \(H_1\) using the asymptotic and the bootstrap critical values, considering a significance level of \(5\,\%\)

One can conclude that the power of both tests increases as the panel size and the number of panels increase, which is straightforward and expected. It should be noted that numerical instability issues may appear for larger \(T\) when generating from a \(T\)-variate normal distribution. Moreover, higher power is obtained when a larger portion of the panels is subject to a change in mean. The test power drops when switching from independent observations within the panel to dependent ones. Innovations with heavier tails (i.e., \(t_5\)) yield smaller power than innovations with lighter tails. Generally, the bootstrap outperforms the classical asymptotics in all scenarios.

Let us mention that for finite sections of processes with a stronger dependence structure than considered in the simulation scenarios, Assumption C1 need not be fulfilled. For example, Assumption C1 is violated for AR(1) with coefficient \(\phi =0.9\), \(\delta _i=2\), \(\sigma =1\), standard normal or Student \(t_5\) innovations, and \(\tau =5\) for \(T=10\) or \(\tau =12\) for \(T=25\). Here, the dependency is too strong compared to the change size under the considered variability, and it is rather difficult to detect possible changes in such a setup.

Finally, an early change point is discussed very briefly. We stay with standard normal innovations, iid observations within the panel, and change sizes \(\delta _i\) independently uniform on \([1,3]\) in all panels; the change point is \(\tau =3\) in case of \(T=10\) and \(\tau =5\) for \(T=25\). The empirical sensitivities of both tests for these small values of \(\tau \) are shown in Table 3.

Table 3 Empirical sensitivity of the test for small values of \(\tau \) under \(H_1\) using the asymptotic and the bootstrap critical values, considering a significance level of \(5\,\%\)

When the change point is not in the middle of the panel, the power of the test generally falls. The source of this decrease is that the left or right part of the panel contains fewer observations with constant mean, which reduces the precision of the correlation estimation in case of the asymptotic test and of the change point estimation in case of the bootstrap test. Nevertheless, the bootstrap test again outperforms the asymptotic version and, moreover, provides solid results even for early or late change points (the late change points are not demonstrated numerically here).

7 Real data analysis

As mentioned in the introduction, our primary motivation for testing the panel mean change comes from the insurance business. The data set is provided by the National Association of Insurance Commissioners (NAIC) database, see Meyers and Shi (2011). We concentrate on the ‘Commercial auto/truck liability/medical’ insurance line of business. The data collect records from \(N=157\) insurance companies (one extreme insurance company was omitted from the analysis). Each insurance company provides \(T=10\) yearly total claim amounts starting from year 1988 up to year 1997. Figure 1 graphically shows series of claim amounts for 20 selected insurance companies (a plot with all 157 panels would be cluttered).

Fig. 1 Development of yearly total claim amounts for 20 selected insurance companies

The data are treated as panel data in the sense that each insurance company corresponds to one panel, formed by the company's yearly total claim amounts. The length of the panel is quite short, which is very typical in the insurance business, because longer panels may suffer from incomparability between early and late claim amounts due to changing market and policy conditions over time.

We want to test whether or not a change in the claim amounts occurred in a common year, assuming that the claim amounts are approximately constant in the years before and after the possible change for every insurance company. Our ratio type test statistic gives \(\mathcal {R}_{157}(10)=39.9\), while the asymptotic critical value is 52.4 and the bootstrap critical value equals 203.1. Hence, we do not reject the hypothesis of no change in the panel means in either case. The striking difference between the two critical values may come from inefficient estimation of the correlation structure (since \(T=10\) is quite short) or from a violation of the model assumptions.

That is why we also take logarithms of the claim amounts and consider the log amounts as the panel data observations. Nevertheless, we again do not reject the hypothesis of no change in the panel means (i.e., the means of the log amounts). In addition, one can normalize the claim amounts by the premium received by company \(i\) in year \(t\), i.e., consider the panel data \(Y_{i,t}/p_{i,t}\), where \(p_{i,t}\) is the mentioned premium. This may stabilize the series' variability, which corresponds to the assumption of a common variance. In spite of that, we again do not reject the null (neither by the asymptotic test nor by the bootstrap one). For completeness, our estimate of the panel change point gives \(\widehat{\tau }_N=10\), meaning no change in the panels.

8 Conclusions

In this paper, we consider the change point problem in panel data with fixed panel size. Occurrence of common breaks in panel means is tested. We introduce a ratio type test statistic and derive its asymptotic properties. Under the null hypothesis of no change, the test statistic weakly converges to a functional of the multivariate normal random vector with zero mean and covariance structure depending on the intra-panel covariances. As shown in the paper, these covariances can be estimated and, consequently, used for testing whether a change in means occurred or not. This is indeed feasible, because the test statistic under the alternative converges to infinity in probability.

The secondary aim of the paper lies in proposing a consistent change point estimate, which is then used for bootstrapping the test statistic. We establish the asymptotic behavior of the bootstrap version of the test statistic regardless of whether the data come from the null or the alternative hypothesis. Moreover, the asymptotic distribution of the bootstrap test statistic coincides with the original test statistic's limiting distribution, which justifies the bootstrap method. One of the main goals is a completely data driven procedure for testing whether the means remain the same during the observation period. The ratio type test statistic allows us to omit variance estimation, and the bootstrap technique avoids estimation of the correlation structure. Hence, neither nuisance nor smoothing parameters are present in the testing process, which makes it very simple for practical use. Furthermore, the underlying stochastic theory requires relatively simple assumptions, which are not too restrictive.

A simulation study illustrates that even for small panel size, both presented approaches—based on traditional asymptotics and on bootstrapping—work fine. One may judge that both methods keep the significance level under the null, while various simulation scenarios are considered. Besides that, the power of the test is slightly higher in case of the bootstrap. Finally, the proposed methods are applied to insurance data, for which the change point analysis in panel data provides an appealing approach.

8.1 Discussion

First of all, it has to be noted that a non-ratio CUSUM type test statistic can be used instead of the ratio type one, but this requires estimating the variance of the observations. The statements of the theorems and the proofs would then become even less complicated. However, omitting the bootstrap can be unreliable in short panels from a computational point of view, because it is the bootstrap that overcomes the issue of estimating the correlation structure.

Furthermore, our setup can be modified by considering a large panel size, i.e., \(T\rightarrow \infty \). The whole theory then leads to convergence to functionals of Gaussian processes with a covariance structure derived in a very similar fashion as for fixed \(T\). However, our motivation is to develop techniques for fixed and relatively small panel sizes.

Dependent panels may also be taken into account, and the presented work might be generalized to some kind of asymptotic independence over the panels or a prescribed dependence among the panels. Nevertheless, our incentive comes from a problem in non-life insurance, where the association consists of a relatively large number of insurance companies. The portfolio of yearly claims is thus so diversified that the panels corresponding to the companies' yearly claims may be viewed as independent, and neither natural ordering nor clustering has to be assumed.