*Risks*
**2017**,
*5*(1),
7;
doi:10.3390/risks5010007

Article

Change Point Estimation in Panel Data without Boundary Issue

^{1} Department of Medical Informatics and Biostatistics, Institute of Computer Science, The Czech Academy of Sciences, Pod Vodárenskou věží 271/2, 18207 Prague 8, Czech Republic

^{2} Department of Probability and Mathematical Statistics, Faculty of Mathematics and Physics, Charles University, Sokolovská 83, 18675 Prague 8, Czech Republic

^{†} These authors contributed equally to this work.

^{*} Author to whom correspondence should be addressed.

Academic Editor:
Qihe Tang

Received: 28 August 2016 / Accepted: 17 January 2017 / Published: 22 January 2017

## Abstract

Panel data of our interest consist of a moderate number of panels, while the panels contain a small number of observations. An estimator of common breaks in panel means without a boundary issue for this kind of scenario is proposed. In particular, the novel estimator is able to detect a common break point even when the change happens immediately after the first time point or just before the last observation period. Another advantage of the elaborated change point estimator is that it results in the last observation in situations with no structural breaks. The consistency of the change point estimator in panel data is established. The results are illustrated through a simulation study. As a by-product of the developed estimation technique, a theoretical utilization for correlation structure estimation, hypothesis testing and bootstrapping in panel data is demonstrated. A practical application to non-life insurance is presented, as well.

Keywords: change point; estimation; consistency; panel data; short panels; boundary issue; structural change; bootstrap; non-life insurance; change in claim amounts

MSC: 62F10; 62F40; 62H12; 62H15; 62E20; 62P05

JEL: C330; C130; C120; C150; G220

## 1. Introduction and Main Aims

The problem of an unknown common change in means of the panels is studied, where the panel data consist of N panels, and each panel contains T observations over time. Various values of the change are possible for each panel at some unknown common time $\tau =1,\dots ,T$. The panels are considered to be independent, although this restriction can be weakened. On the other hand, within the panels, the observations are generally not assumed to be independent. This is in accordance with typical assumptions that one can make about real data. A common dependence structure is supposed to be present over the panels. Our main goal is to construct an estimator of a possible change point, which is consistent even in the case of no structural break.

#### 1.1. Current State of the Art

Tests for change point detection in panel data have been proposed by [1] for sufficiently large panel sizes T, i.e., the limiting results were derived under the assumption that T increases over all limits. Testing procedures for the change in panel means with T fixed, which can be relatively small or moderate, were considered in [2]. The change point estimation in panel data for fixed, as well as for unbounded T was studied by [3]. However, the panel change point estimator in [3] is derived only for a situation where one knows for sure that the change in means occurred within the given time period. This restriction can become insurmountable for some further uses of the change point estimator, as will be demonstrated later in this paper. In [4], a consistent change point estimator was introduced, requiring no definite knowledge about the existence of the change point in the given panel data. In the case of no change being present, the estimator picks the last observation, which means that no structural break is identified. However, this estimator has several disadvantages. It assumes a certain kind of homoscedasticity in the panels. Further, it does not take into account the possibility that the change may occur right after the first time point. It also assumes conditions that may be viewed as too complicated with regard to verification and model checking. The remaining task is, therefore, to develop a change point estimator that is consistent regardless of the change’s presence or absence. Moreover, such an estimator would gain from allowing heteroscedasticity in the panels, giving it a broader scope of applications. Besides that, the applicability of the estimator is enhanced by simple consistency conditions and by the absence of a boundary issue. The boundary issue means that the change point can neither be detected nor estimated when it lies close to the beginning or to the end of the observation regime.

Further on, Kim [5,6] dealt with the change point estimator under cross-sectional dependence in the panels modeled by a common factor and extended the estimation problem to more complicated types of structural changes. The first and second order asymptotics that can be used to derive consistent confidence intervals for the time of change in panel data were established by [7]. There, the panel length T was considered unbounded and depending on the number of panels N. However, there is some literature on the short-panel change point framework in which weighting functions, such as those we employ later on, are also suggested; cf. [8].

#### 1.2. Motivation in Non-Life Insurance

Structural changes in panel data, especially common breaks in means, are wide-spread phenomena. Our primary motivation comes from the non-life insurance business, where associations in many countries uniting several insurance companies collect claim amounts paid by every insurance company each year. Such a database of cumulative claim payments can be viewed as panel data, where insurance company $i=1,\dots ,N$ provides the total claim amount ${Y}_{i,t}$ paid in year $t=1,\dots ,T$ into the common database. The members of the association can consequently profit from the joint database.

For the whole association, it is important to know whether a possible change in the claim amounts occurred during the observed time horizon. Usually, the time period is relatively short, e.g., 10–15 years. To be more specific, a widely-used and very standard actuarial method for predicting future claim amounts, called chain ladder, assumes a kind of stability of the historical claim amounts. The formal necessary and sufficient condition is derived in [9]. This paper shows a way to detect a possible historical instability.

#### 1.3. Structure of the Paper

The remainder of the paper is organized as follows. Section 2 introduces an abrupt change point model together with stochastic assumptions. An estimator for the change point in panel means is proposed in Section 3. Consequently, the consistency of the considered change point estimator is derived, which covers the first main theoretical contribution. Section 4 contains a simulation study that illustrates the finite sample performance of the estimator. It numerically emphasizes the advantages and disadvantages of the proposed approach. The second main theoretical contribution lies in the panel correlation structure estimation and in the bootstrap add-on justification for hypothesis testing, all provided in Section 5. A practical application of the developed approach to an actuarial problem is presented in Section 6. Proofs are given in the Appendix.

## 2. Abrupt Change in Panel Data

Let us consider the panel change point model

$${Y}_{i,t}={\mu}_{i}+{\delta}_{i}\mathcal{I}\{t>\tau \}+{\sigma}_{i}{\epsilon}_{i,t},\phantom{\rule{1.em}{0ex}}1\le i\le N,\phantom{\rule{0.166667em}{0ex}}1\le t\le T,$$

where ${\sigma}_{i}>0$ are unknown variance-scaling panel-specific parameters and T is fixed, not depending on N. The possible common change point time is denoted by $\tau \in \{1,\dots ,T\}$. A situation where $\tau =T$ corresponds to no change in means of the panels. The means ${\mu}_{i}$ are panel-specific. The amount of the break in the mean, which can also differ for every panel, is denoted by ${\delta}_{i}$. There is at most one change per panel in Model (1), and the type of change in the panel mean is abrupt.

Furthermore, it is assumed that the sequences of panel disturbances ${\left\{{\epsilon}_{i,t}\right\}}_{t}$ are independent. At the same time, the errors within each panel form a weakly-stationary sequence with a common correlation structure. This can be formalized in the following assumption.

**Assumption 1.**

The vectors ${[{\epsilon}_{i,1},\dots ,{\epsilon}_{i,T}]}^{\top}$ existing on a probability space $(\Omega ,\mathcal{F},\mathsf{P})$ are iid for $i=1,\dots ,N$ with $\mathsf{E}{\epsilon}_{i,t}=0$ and $\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{\epsilon}_{i,t}=1$, having the autocorrelation function

$${\rho}_{t}=\mathsf{Corr}\left({\epsilon}_{i,s},{\epsilon}_{i,s+t}\right)=\mathsf{Cov}\left({\epsilon}_{i,s},{\epsilon}_{i,s+t}\right),\phantom{\rule{1.em}{0ex}}\forall s\in \{1,\dots ,T-t\},$$

which is independent of the lag s, the cumulative autocorrelation function

$$r\left(t\right)=\mathsf{Var}\sum _{s=1}^{t}{\epsilon}_{i,s}=\sum _{\left|s\right|<t}(t-\left|s\right|){\rho}_{s},$$

and the shifted cumulative correlation function

$$R(t,v)=\mathsf{Cov}\left(\sum _{s=1}^{t}{\epsilon}_{i,s},\sum _{u=t+1}^{v}{\epsilon}_{i,u}\right)=\sum _{s=1}^{t}\sum _{u=t+1}^{v}{\rho}_{u-s},\phantom{\rule{1.em}{0ex}}t<v,$$

for all $i=1,\dots ,N$ and $t,v=1,\dots ,T$. The covariance matrix $\mathbf{\Lambda}:=\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{\left[{\sum}_{s=1}^{1}{\epsilon}_{1,s},\dots ,{\sum}_{s=1}^{T}{\epsilon}_{1,s}\right]}^{\top}$ is non-singular.

The sequence ${\left\{{\epsilon}_{i,t}\right\}}_{t=1}^{T}$ can be viewed as a part of a weakly-stationary process. Note that the within-panel dependent errors do not necessarily need to be linear processes. GARCH processes are a plausible alternative, for instance.
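To make these correlation quantities concrete, both cumulative functions can be computed directly from an autocorrelation sequence. A minimal sketch, assuming for illustration the AR(1) correlations ${\rho}_{t}={\varphi}^{t}$ with $\varphi =0.3$ (the process used later in the simulation study); the function names are ours:

```python
def cum_autocorr(rho, t):
    """r(t) = Var(sum_{s=1}^t eps_s) = sum_{|s|<t} (t - |s|) * rho_{|s|}."""
    return sum((t - abs(s)) * rho[abs(s)] for s in range(-(t - 1), t))

def shifted_cum_corr(rho, t, v):
    """R(t, v) = sum_{s=1}^t sum_{u=t+1}^v rho_{u-s}, for t < v."""
    return sum(rho[u - s] for s in range(1, t + 1) for u in range(t + 1, v + 1))

T = 10
rho_ar1 = [0.3**t for t in range(T)]      # AR(1): rho_t = phi^t, rho_0 = 1
rho_iid = [1.0] + [0.0] * (T - 1)         # iid errors: rho_t = 0 for t >= 1

# For iid errors, r(t) reduces to t, as used in the discussion of Assumption 3.
print(cum_autocorr(rho_iid, 5))  # 5.0
```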

The assumption of independent panels can be relaxed. It would, however, make the setup much more complex, cf. [5]. Consequently, probabilistic tools for dependent data need to be used (e.g., suitable versions of the central limit theorem). Nevertheless, assuming that the claim amounts for different insurance companies are independent is reasonable with regard to real-life experience.

**Assumption 2.**

There exist constants $\underline{\sigma},\overline{\sigma}>0$ not depending on N, such that

$$\underline{\sigma}\le {\sigma}_{i}\le \overline{\sigma},\phantom{\rule{1.em}{0ex}}1\le i\le N.$$

The assumption of panel variances bounded both from below and from above allows for heteroscedasticity between the panels. In the case when the equiboundedness cannot be satisfied, the panel model can be generalized by introducing weights ${w}_{i,t}$, which are supposed to be known. Subsequently, claim ratios ${Y}_{i,t}/{w}_{i,t}$ can be modeled. In actuarial practice, this would mean normalizing the total claim amount by the premium received (considered as the weight), since bigger insurance companies are expected to have higher variability in the total claim amounts paid.

## 3. Change Point Estimator

A consistent estimator of the change point in panel data is proposed in [3], but only under the assumption that the change has certainly occurred. In our situation, we do not know whether a change has occurred or not. Therefore, we modify the estimator proposed by [3] in the following way. If the panel means change somewhere inside $\{1,\dots ,T-1\}$, the estimator should select this break point. If there is no change in panel means, the estimator points to the very last time point T with probability going to one. In other words, the value of the change point estimate can be T, meaning no change. This is in contrast to [3], where T is not reachable.

Our estimator of the time of change τ in panel data is defined as

$${\widehat{\tau}}_{N}:=arg\underset{t=1,\dots ,T}{min}\sum _{i=1}^{N}\left\{\frac{1}{w\left(t\right)}\sum _{s=1}^{t}{({Y}_{i,s}-{\overline{Y}}_{i,t})}^{2}+\frac{1}{w(T-t)}\sum _{s=t+1}^{T}{({Y}_{i,s}-{\tilde{Y}}_{i,t})}^{2}\right\},$$

where ${\overline{Y}}_{i,t}$ is the average of the first t observations in panel i and ${\tilde{Y}}_{i,t}$ is the average of the last $T-t$ observations in panel i, i.e.,

$${\overline{Y}}_{i,t}=\frac{1}{t}\sum _{s=1}^{t}{Y}_{i,s}\phantom{\rule{1.em}{0ex}}\mathrm{and}\phantom{\rule{1.em}{0ex}}{\tilde{Y}}_{i,t}=\frac{1}{T-t}\sum _{s=t+1}^{T}{Y}_{i,s}.$$

By convention, the value of an empty sum is zero. A sequence of positive weights ${\left\{w\left(t\right)\right\}}_{t=0}^{T}$ is specified later on.
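The estimator admits a direct implementation. The following NumPy sketch evaluates the objective for every candidate t; the default weights $w\left(t\right)={t}^{2}$, $w\left(0\right)=1$ are the choice used later in the simulation study, and the function name is ours:

```python
import numpy as np

def estimate_change_point(Y, w=None):
    """Estimate the common change point tau from an (N, T) panel array Y.

    Returns a value in 1..T, where T means 'no change'. The default weight
    sequence is w(t) = t^2 for t >= 1 and w(0) = 1.
    """
    N, T = Y.shape
    if w is None:
        w = np.array([1.0] + [float(t) ** 2 for t in range(1, T + 1)])
    crit = np.empty(T)
    for t in range(1, T + 1):
        # within-segment sum of squares before the candidate break ...
        left = Y[:, :t] - Y[:, :t].mean(axis=1, keepdims=True)
        obj = (left ** 2).sum() / w[t]
        # ... and after it (an empty sum is zero by convention)
        if t < T:
            right = Y[:, t:] - Y[:, t:].mean(axis=1, keepdims=True)
            obj += (right ** 2).sum() / w[T - t]
        crit[t - 1] = obj
    return int(np.argmin(crit)) + 1

# Hypothetical example: N=20 panels, T=10, common break after time tau=8
rng = np.random.default_rng(42)
Y = rng.normal(0.0, 0.2, (20, 10))
Y[:, 8:] += 1.0
tau_hat = estimate_change_point(Y)
```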

#### 3.1. Consistency

We postulate additional assumptions on the panel change point model (1) in order to derive the estimator’s consistency. The following conditions take into account that the length T of the observation regime is fixed; that the length T does not depend on the number of panels N; and that the length T can even be relatively small.

**Assumption 3.**

Let $g\left(t\right):=\frac{t}{w\left(t\right)}\left(1-\frac{r\left(t\right)}{{t}^{2}}\right)$ for $t\in \{1,\dots ,T\}$, $g\left(0\right)\equiv 0$, and

$$\begin{aligned}\underset{N\to \infty}{lim}\frac{1}{\sqrt{N}}\left\{\frac{\tau}{\tau +1}\sum _{i=1}^{N}{\delta}_{i}^{2}-\left(g\left(\tau \right)+g(T-\tau )\right)\underset{t=1,\dots ,T}{max}w\left(t\right)\sum _{i=1}^{N}{\sigma}_{i}^{2}\right\}&=\infty ,\\ \underset{N\to \infty}{lim}\frac{1}{\sqrt{N}}\left\{\frac{T-\tau}{T-\tau +1}\sum _{i=1}^{N}{\delta}_{i}^{2}-\left(g\left(\tau \right)+g(T-\tau )\right)\underset{t=1,\dots ,T}{max}w\left(t\right)\sum _{i=1}^{N}{\sigma}_{i}^{2}\right\}&=\infty .\end{aligned}$$

**Assumption 4.**

${lim}_{N\to \infty}\frac{1}{{N}^{2}}{\sum}_{i=1}^{N}{\delta}_{i}^{2}=0$.

**Assumption 5.**

$\mathsf{E}{\epsilon}_{1,t}^{4}<\infty ,\phantom{\rule{0.166667em}{0ex}}t\in \{1,\dots ,T\}$.

**Theorem 1** (Change point estimator consistency).

Under Assumptions 1–5,

$$\underset{N\to \infty}{lim}\mathsf{P}[{\widehat{\tau}}_{N}=\tau ]=1.$$

The formally-postulated estimator’s consistency in Theorem 1 can be practically interpreted: as one observes more and more panels, the probability that the proposed estimator is different from the true unknown change point gets smaller and smaller.

Assumption 3 is not restrictive at all, although it may seem complicated. For example, in the case of independent observations within the panel (i.e., $r\left(t\right)=t$) and the weight function $w\left(t\right)={t}^{q},$ $q\ge 2$ for $t\in \{1,\dots ,T\}$, $w\left(0\right)=1$, the sequence ${\left\{g\left(t\right)\right\}}_{t=2}^{T}$ becomes ${\{{t}^{1-q}-{t}^{-q}\}}_{t=2}^{T}$ and is non-increasing. Then, Assumption 3 is automatically fulfilled if $q=2$ and $\frac{1}{\sqrt{N}}{\sum}_{i=1}^{N}\left({\delta}_{i}^{2}-{T}^{2}{\sigma}_{i}^{2}\right)\to \infty $ as $N\to \infty $. This also gives us an idea of how to choose the weights $w\left(t\right)$. Assumptions 1 and 3, which we impose on the model errors, only pertain to the correlation structure. Hence, our results hold for nearly all stationary time series models of interest, including nonlinear time series like the ARCH and GARCH processes. Moreover, Assumption 3 controls the trade-off between the size of the breaks and the variability of the errors. It may be considered a detectability assumption, because it specifies the signal-to-noise ratio needed for the estimator to be consistent.
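The monotonicity claimed in this example is easy to check numerically; a small sketch, assuming iid errors (so $r\left(t\right)=t$) and $w\left(t\right)={t}^{q}$ with $q=2$:

```python
# g(t) = (t / w(t)) * (1 - r(t) / t^2); with r(t) = t and w(t) = t^q,
# this reduces to t**(1 - q) - t**(-q).
def g(t, q=2):
    return (t / t**q) * (1.0 - t / t**2)

vals = [g(t) for t in range(2, 11)]
# the sequence {g(t)} for t = 2, ..., T is non-increasing, as claimed
assert all(a >= b for a, b in zip(vals, vals[1:]))
```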

Assumptions 3 and 4 are satisfied, for instance, if $0<\delta \le {\delta}_{i}\le \Delta $ for all i’s (a common lower and upper threshold for the means’ shifts), ${\delta}^{2}=\mathcal{O}\left({N}^{\zeta}\right)$, $\zeta >0$ and ${\Delta}^{2}/N\to 0$ as $N\to \infty $ (bearing in mind Assumption 2). Another suitable example of ${\delta}_{i}$’s for Assumptions 3 and 4 can be ${\delta}_{i}=K{i}^{\eta}$ for some $K>0$ and $0<\eta <1/2$. Assumptions 3 and 4 do not require each panel to have a break. Sometimes, a more restrictive assumption can be imposed instead of Assumptions 3 and 4, e.g.,

$$\underset{N\to \infty}{lim}\frac{1}{N}\sum _{i=1}^{N}{\delta}_{i}^{2}=\infty .$$

On the one hand, this assumption might be considered too strong, because a common fixed (not depending on N) value of $\delta ={\delta}_{i}$ for all i’s does not fulfill (3). On the other hand, (3) is satisfied when ${\delta}_{j}^{2}/N\to \infty $ as $N\to \infty $ for some $j\in \mathbb{N}$ and ${\delta}_{i}=0$ for all $i\ne j$. This stands for a situation where none of the panels changes in mean, except one panel having a sufficiently large change in mean with respect to the number of panels. Let us note that one could replace Assumption 3 with the stronger assumption (3), but then the detectability relation between the size of the breaks and the variability of the errors would disappear. One would also lose the guidance on how to choose the weights. Furthermore, Assumptions E1 and E2 from [4] are more restrictive than Assumption 3, which makes the presented approach even more general.

Various competing consistent estimators of a possible change point can be suggested, e.g., the maximizer of ${\sum}_{i=1}^{N}{\left[{\sum}_{s=1}^{t}({Y}_{i,s}-{\overline{Y}}_{i,T})\right]}^{2}$, as in [7]. To show the consistency of this estimator, one needs to postulate different assumptions on the cumulative autocorrelation function, and this may be rather complex.

## 4. Simulation Study

A simulation experiment was performed to study the finite sample properties of the change point estimator for a common abrupt change in panel means. In particular, the interest lies in the empirical distributions of the proposed estimator visualized via histograms. Random samples of panel data (2000 each time) are generated from the panel change point model (1). The panel size is set to $T=10$ in order to demonstrate the performance of the estimator in the case of small panel length. The number of panels considered is $N=2,5,10,20,50$.

The correlation structure within each panel is modeled via random vectors generated from iid, AR(1) and GARCH(1,1) sequences. The considered AR(1) process has coefficient $\varphi =0.3$. In the case of the GARCH(1,1) process, we use coefficients ${\alpha}_{0}=1$, ${\alpha}_{1}=0.1$ and ${\beta}_{1}=0.2$, which, according to ([10], Example 1), give a strictly stationary process. In all three sequences, the innovations are obtained as iid random variables from a standard normal $N(0,1)$ or Student ${t}_{5}$ distribution multiplied by a suitable constant so that the errors possess unit variance (see Assumption 1). The variance-scaling parameters are kept constant for all panels, i.e., ${\sigma}_{i}=\sigma $ for all i. The sequence of weights is chosen as ${\{w\left(t\right)={t}^{2}\}}_{t=1}^{10}$ and $w\left(0\right)=1$. Monte Carlo simulation scenarios are produced as all possible combinations of the above-mentioned settings, and a selection of the results is listed below.
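The data-generating setup just described can be sketched as follows; a simplified illustration covering only the AR(1) case with normal innovations (the function name and interface are ours), with the innovations scaled so that the errors have unit variance, as required by Assumption 1:

```python
import numpy as np

def simulate_panel(N, T, tau, delta, sigma, phi=0.3, rng=None):
    """Generate panel data from the change point model with AR(1) errors.

    The AR(1) innovation standard deviation is sqrt(1 - phi^2), so that the
    stationary errors eps_{i,t} have unit variance. 'delta' may be a scalar
    or a length-N vector of panel-specific break sizes; tau = T means no break.
    """
    rng = np.random.default_rng(rng)
    sig_e = np.sqrt(1.0 - phi**2)
    eps = np.empty((N, T))
    eps[:, 0] = rng.normal(0.0, 1.0, N)          # stationary start
    for t in range(1, T):                        # AR(1) recursion per panel
        eps[:, t] = phi * eps[:, t - 1] + rng.normal(0.0, sig_e, N)
    Y = sigma * eps
    Y[:, tau:] += np.atleast_1d(delta)[:, None]  # break added to times t > tau
    return Y

# Hypothetical example: the Figure 1 setting with a scalar break size
Y = simulate_panel(N=20, T=10, tau=8, delta=1.0, sigma=0.2, rng=1)
```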

Firstly, we examine the impact of the errors’ distribution and the correlation structure on the change point estimator. Figure 1 contains six different structures of model disturbances, where $\tau =8$ (depicted by the dotted vertical line), $N=20$, $\sigma =0.2$, and all of the panels are subject to the break of value ${\delta}_{i}\sim U[0,2]$ (i.e., the breaks are independently and uniformly distributed on $[0,2]$).

It can be concluded that the precision of our change point estimator is satisfactory even for a relatively small number of panels, regardless of the errors’ structure. Furthermore, innovations with heavier tails yield less precise estimates than innovations with lighter tails (compare Figure 1a,c,e versus Figure 1b,d,f). One may notice that the AR(1) errors’ model gives the best precision of the three correlation structures. This should not come as a surprise, because our chosen AR(1) model has a positive autoregression coefficient ($\varphi =0.3$), and therefore, the values of $g\left(\tau \right)$ and $g(T-\tau )$ from Assumption 3 are larger than those corresponding to the iid errors’ structure. Hence, one can say that the detectability Assumption 3 is satisfied more easily. Loosely speaking, the stronger the positive correlations within the panel, the more “deterministic” the behavior of the random noise and the better the estimator’s precision.

Figure 2 demonstrates that the proposed estimator works reasonably well for various locations of the unknown change point. Particularly, six values of the common change point (again, depicted by the dotted vertical line) are chosen ($\tau =1,2,5,8,9,10$) with $N=20$, $\sigma =0.2$; $75\%$ of the panels have a break ${\delta}_{i}\sim U[0,2]$, and the panel disturbances come from AR(1) with $N(0,1)$ innovations. Recall that $\tau =10$ corresponds to the ‘no change’ situation, and the empirical distribution of the estimator concentrates mainly at the last time point, which is coherent with the change point formulation from (1).

In Figure 3, the impact of the number of panels ($N=2,5,10,20$) is investigated for $\tau =9$, $\sigma =0.2$; $50\%$ of the panels have a break ${\delta}_{i}\sim U[0,2]$, and the panel disturbances are AR(1) with ${t}_{5}$ innovations. It is clear that the precision of ${\widehat{\tau}}_{N}$ improves markedly as N increases. A higher number of panels, i.e., $N=50$, was also taken into account, and $100\%$ precision was then achieved. Moreover, longer panels were also simulated (e.g., $T=25$), but these results are not presented here, for the simple reason that the precision of the estimator increases as the panel size gets bigger, which is straightforward and expected.

Figure 4 shows the effect of panel variability on the estimator’s performance. In particular, various values of the variance-scaling parameter are considered ($\sigma =0.1,0.2,0.5,1.0$), where $\tau =1$, $N=10$; all of the panels have a break ${\delta}_{i}\sim U[0,2]$, and the panel disturbances come from GARCH(1,1) with $N(0,1)$ innovations. It can be seen that the less volatile the observations, the more precise the change point estimate. The panel’s variability under the considered dependency can be too high compared to the change size. Then, it would be rather difficult to detect a possible change, as for instance in Figure 4d, which corresponds to $\sigma =1.0$.

In Figure 5, we examine how different portions of panels with a change in mean influence the estimator’s precision. Four cases were considered: a break ${\delta}_{i}\sim U[0,2]$ occurs in $25\%$, $50\%$, $75\%$ or $100\%$ of the panels. Here, $\tau =5$, $N=20$, $\sigma =0.2$; and the panel disturbances are GARCH(1,1) with ${t}_{5}$ innovations. One can conclude that higher precision is obtained when a larger portion of the panels is subject to a change in mean. If only a small number of panels contain a break (for example, Figure 5a), then the change point estimator does not perform well.

## 5. Theoretical Usage in Hypothesis Testing

Estimation of structural breaks can become an important mid-step in many statistical procedures, e.g., estimation of the panel correlation structure or bootstrapping in hypothesis testing for the change point.

A possible theoretical application of the change point estimation can be motivated as follows. It is required to test the null hypothesis of no change in the means:

$${H}_{0}:\phantom{\rule{0.166667em}{0ex}}\tau =T$$

against the alternative that at least one panel has a change in mean:

$${H}_{1}:\phantom{\rule{0.166667em}{0ex}}\tau <T\phantom{\rule{1.em}{0ex}}\mathrm{and}\phantom{\rule{1.em}{0ex}}\exists i\in \{1,\dots ,N\}:\phantom{\rule{0.166667em}{0ex}}{\delta}_{i}\ne 0.$$

Generally, a test statistic ${\mathcal{S}}_{N,T}$ for the change point detection may be constructed as a continuous function of the sums of cumulative residuals, cf. [2]. In particular,

$${\mathcal{S}}_{N,T}\equiv \mathcal{S}\left({\left\{\frac{1}{\sqrt{N}}\sum _{i=1}^{N}\sum _{r=1}^{s}({Y}_{i,r}-{\overline{Y}}_{i,t})\right\}}_{s=1,t=2}^{t-1,T},{\left\{\frac{1}{\sqrt{N}}\sum _{i=1}^{N}\sum _{r=s+1}^{T}({Y}_{i,r}-{\tilde{Y}}_{i,t})\right\}}_{s=t,t=1}^{T-1,T-1}\right),$$

where $\mathcal{S}(\cdot ,\cdot ):\phantom{\rule{0.166667em}{0ex}}{\mathbb{R}}^{T(T-1)/2}\times {\mathbb{R}}^{T(T-1)/2}\to \mathbb{R}$ is continuous. To illustrate, a ratio type test statistic, discussed in [11],

$${\mathcal{S}}_{N,T}=\underset{t=2,\dots ,T-2}{max}\frac{{max}_{s=1,\dots ,t-1}\left|{\sum}_{i=1}^{N}{\sum}_{r=1}^{s}({Y}_{i,r}-{\overline{Y}}_{i,t})\right|}{{max}_{s=t,\dots ,T-1}\left|{\sum}_{i=1}^{N}{\sum}_{r=s+1}^{T}({Y}_{i,r}-{\tilde{Y}}_{i,t})\right|}$$

is generated by the continuous function

$$\mathcal{S}\left({\left\{{a}_{s,t}\right\}}_{s=1,t=2}^{t-1,T},{\left\{{b}_{s,t}\right\}}_{s=t,t=1}^{T-1,T-1}\right)=\underset{t=2,\dots ,T-2}{max}\frac{{max}_{s=1,\dots ,t-1}\left|{a}_{s,t}\right|}{{max}_{s=t,\dots ,T-1}\left|{b}_{s,t}\right|}.$$

The test statistic under the null would typically have a known limiting distribution (up to some unknown parameters), which is a function of a Gaussian vector or process. However, its correlation structure is unknown and needs to be estimated. Alternatively, one can avoid the estimation of the correlation structure (which may be considered a nuisance parameter) by applying a suitable bootstrap procedure. The consistent change point estimator also plays an important role in the validity of the bootstrap algorithm.

#### 5.1. Estimation of Correlation Structure

Since the panels are considered to be independent and the number of panels may be sufficiently large, one can estimate the correlation structure of the errors ${[{\epsilon}_{1,1},\dots ,{\epsilon}_{1,T}]}^{\top}$ empirically. We base the errors’ estimates on the residuals

$${\widehat{e}}_{i,t}:=\left\{\begin{array}{cc}{Y}_{i,t}-{\overline{Y}}_{i,{\widehat{\tau}}_{N}},\hfill & t\le {\widehat{\tau}}_{N},\hfill \\ {Y}_{i,t}-{\tilde{Y}}_{i,{\widehat{\tau}}_{N}},\hfill & t>{\widehat{\tau}}_{N}.\hfill \end{array}\right.$$

One may notice that estimators that cannot attain the last time point are less suitable for the calculation of residuals.

Then, the empirical version of the autocorrelation function is

$${\widehat{\rho}}_{t}:=\frac{1}{N(T-t)}\sum _{i=1}^{N}\frac{1}{{\widehat{\sigma}}_{i}^{2}}\sum _{s=1}^{T-t}{\widehat{e}}_{i,s}{\widehat{e}}_{i,s+t},$$

where ${\widehat{\sigma}}_{i}^{2}:=\frac{1}{T}{\sum}_{s=1}^{T}{\widehat{e}}_{i,s}^{2}$ is the estimate of the variance parameter ${\sigma}_{i}^{2}$. Consequently, the cumulative autocorrelation function and the shifted cumulative correlation function are estimated by

$$\widehat{r}\left(t\right)=\sum _{\left|s\right|<t}(t-\left|s\right|){\widehat{\rho}}_{s}\phantom{\rule{1.em}{0ex}}\mathrm{and}\phantom{\rule{1.em}{0ex}}\widehat{R}(t,v)=\sum _{s=1}^{t}\sum _{u=t+1}^{v}{\widehat{\rho}}_{u-s},\phantom{\rule{1.em}{0ex}}t<v.$$
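The estimation steps of this subsection can be sketched as follows; a NumPy illustration with function names of our own choosing:

```python
import numpy as np

def residuals(Y, tau_hat):
    """Residuals: center each panel segment (before/after tau_hat) by its mean."""
    e = Y.astype(float).copy()
    e[:, :tau_hat] -= e[:, :tau_hat].mean(axis=1, keepdims=True)
    if tau_hat < Y.shape[1]:                     # empty second segment if tau_hat = T
        e[:, tau_hat:] -= e[:, tau_hat:].mean(axis=1, keepdims=True)
    return e

def rho_hat(e):
    """Empirical autocorrelation function rho_hat_t for t = 0, ..., T-1."""
    N, T = e.shape
    sigma2 = (e**2).mean(axis=1)                 # sigma_i^2 estimates
    rho = np.empty(T)
    for t in range(T):
        prods = (e[:, :T - t] * e[:, t:]).sum(axis=1)  # sum_s e_{i,s} e_{i,s+t}
        rho[t] = (prods / sigma2).sum() / (N * (T - t))
    return rho

def r_hat(rho, t):
    """Estimated cumulative autocorrelation r_hat(t)."""
    return sum((t - abs(s)) * rho[abs(s)] for s in range(-(t - 1), t))
```

Note that ${\widehat{\rho}}_{0}=1$ by construction, since the lag-zero products sum to $T{\widehat{\sigma}}_{i}^{2}$ in each panel.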

#### 5.2. Bootstrapping

A wide range of literature has been published on bootstrapping in the change point problem, e.g., [2,12,13]. We build the bootstrap test on resampling with replacement of the row vectors ${\left\{[{\widehat{e}}_{i,1},\dots ,{\widehat{e}}_{i,T}]\right\}}_{i=1,\dots ,N}$ corresponding to the individual panels. This provides bootstrapped row vectors ${\left\{[{\widehat{e}}_{i,1}^{*},\dots ,{\widehat{e}}_{i,T}^{*}]\right\}}_{i=1,\dots ,N}$. Then, the bootstrapped residuals ${\widehat{e}}_{i,t}^{*}$ are centered by their conditional expectation $\frac{1}{N}{\sum}_{i=1}^{N}{\widehat{e}}_{i,t}$, yielding

$${\widehat{Y}}_{i,t}^{*}:={\widehat{e}}_{i,t}^{*}-\frac{1}{N}\sum _{i=1}^{N}{\widehat{e}}_{i,t}.$$

The bootstrap test statistic is just a modification of the original statistic ${\mathcal{S}}_{N,T}$, where the original observations ${Y}_{i,t}$ are replaced by their bootstrap counterparts ${\widehat{Y}}_{i,t}^{*}$:

$${\mathcal{S}}_{N,T}^{*}\equiv \mathcal{S}\left({\left\{\frac{1}{\sqrt{N}}\sum _{i=1}^{N}\sum _{r=1}^{s}({\widehat{Y}}_{i,r}^{*}-{\overline{\widehat{Y}}}_{i,t}^{*})\right\}}_{s=1,t=2}^{t-1,T},{\left\{\frac{1}{\sqrt{N}}\sum _{i=1}^{N}\sum _{r=s+1}^{T}({\widehat{Y}}_{i,r}^{*}-{\tilde{\widehat{Y}}}_{i,t}^{*})\right\}}_{s=t,t=1}^{T-1,T-1}\right),$$

such that

$${\overline{\widehat{Y}}}_{i,t}^{*}=\frac{1}{t}\sum _{s=1}^{t}{\widehat{Y}}_{i,s}^{*}\phantom{\rule{1.em}{0ex}}\mathrm{and}\phantom{\rule{1.em}{0ex}}{\tilde{\widehat{Y}}}_{i,t}^{*}=\frac{1}{T-t}\sum _{s=t+1}^{T}{\widehat{Y}}_{i,s}^{*}.$$
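A single bootstrap draw, resampling whole panel rows and centering as above, can be sketched as follows (a NumPy illustration; the function name is ours):

```python
import numpy as np

def bootstrap_panels(e_hat, rng=None):
    """One bootstrap draw of pseudo-observations Y*_{i,t}.

    Whole rows of the (N, T) residual matrix are resampled with replacement,
    then centered by the column-wise mean of the original residuals
    (their conditional expectation given the data).
    """
    rng = np.random.default_rng(rng)
    N = e_hat.shape[0]
    idx = rng.integers(0, N, size=N)     # panel indices drawn with replacement
    e_star = e_hat[idx]
    return e_star - e_hat.mean(axis=0)   # centered bootstrap pseudo-observations
```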

The idea behind bootstrapping is to mimic the original distribution of the test statistic by the distribution of the bootstrap test statistic, conditionally on the original data denoted by $\mathbb{Y}\equiv {\left\{{Y}_{i,t}\right\}}_{i,t=1}^{N,T}$. Recall that it is not known whether some common change in panel means occurred or not. In other words, one does not know whether the data come from the null or the alternative hypothesis.

**Theorem 2** (Bootstrap justification).

Suppose that Assumptions 1, 2 and 5 hold. Then,

- (i) under ${H}_{0}$, $${\mathcal{S}}_{N,T}\underset{N\to \infty}{\overset{\mathcal{D}}{\to}}\mathcal{L};$$
- (ii) under the additional Assumptions 3 and 4, and under ${H}_{0}$ as well as under ${H}_{1}$, $${\mathcal{S}}_{N,T}^{*}|\mathbb{Y}\underset{N\to \infty}{\overset{\mathcal{D}}{\to}}{\mathcal{L}}^{*}\phantom{\rule{1.em}{0ex}}\text{in probability}\phantom{\rule{0.166667em}{0ex}}\mathsf{P};$$
- (iii) under the additional Assumptions 3 and 4, and under ${H}_{0}$, $\mathcal{L}$ and ${\mathcal{L}}^{*}$ coincide.

The validity of the bootstrap test is assured by Theorem 2. Indeed, the conditional asymptotic distribution of the bootstrap test statistic does not diverge to infinity (in probability) under the alternative. In other words, the second part of Theorem 2 holds under ${H}_{0}$, as well as under ${H}_{1}$. In contrast to its bootstrap version, the original test statistic typically diverges beyond all bounds under the alternative. That is why the bootstrap test statistic can correctly be used to reject the null in favor of the alternative for sufficiently large N. Moreover, Theorem 2 states that the conditional distribution of the bootstrap test statistic and the unconditional distribution of the original test statistic coincide under the null. This is also why the bootstrap test approximately keeps the same level as the original test based on the asymptotics (i.e., the test based on the asymptotic distribution of ${\mathcal{S}}_{N,T}$).

A practical choice of the test statistic ${\mathcal{S}}_{N,T}$ can be obtained from, e.g., [14]:

$${\mathcal{S}}_{N,T}=\underset{t=2,\dots ,T-2}{max}\frac{{\sum}_{s=1}^{t-1}{\left[{\sum}_{i=1}^{N}{\sum}_{r=1}^{s}({Y}_{i,r}-{\overline{Y}}_{i,t})\right]}^{2}}{{\sum}_{s=t}^{T-1}{\left[{\sum}_{i=1}^{N}{\sum}_{r=s+1}^{T}({Y}_{i,r}-{\tilde{Y}}_{i,t})\right]}^{2}}$$

or

$${\mathcal{S}}_{N,T}=\underset{t=2,\dots ,T-2}{max}\frac{{max}_{s=1,\dots ,t-1}{\sum}_{i=1}^{N}{\sum}_{r=1}^{s}({Y}_{i,r}-{\overline{Y}}_{i,t})-{min}_{s=1,\dots ,t-1}{\sum}_{i=1}^{N}{\sum}_{r=1}^{s}({Y}_{i,r}-{\overline{Y}}_{i,t})}{{max}_{s=t,\dots ,T-1}{\sum}_{i=1}^{N}{\sum}_{r=s+1}^{T}({Y}_{i,r}-{\tilde{Y}}_{i,t})-{min}_{s=t,\dots ,T-1}{\sum}_{i=1}^{N}{\sum}_{r=s+1}^{T}({Y}_{i,r}-{\tilde{Y}}_{i,t})}.$$
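The first of these statistics can be computed directly from the panel observations; a NumPy sketch (the function name is ours):

```python
import numpy as np

def ratio_statistic(Y):
    """Sum-of-squares ratio test statistic over candidate break times t."""
    N, T = Y.shape
    best = -np.inf
    for t in range(2, T - 1):                         # t = 2, ..., T-2
        ybar = Y[:, :t].mean(axis=1, keepdims=True)   # averages of first t obs.
        ytil = Y[:, t:].mean(axis=1, keepdims=True)   # averages of last T-t obs.
        num = 0.0
        for s in range(1, t):                         # s = 1, ..., t-1
            num += (Y[:, :s] - ybar).sum() ** 2
        den = 0.0
        for s in range(t, T):                         # s = t, ..., T-1
            den += (Y[:, s:] - ytil).sum() ** 2
        best = max(best, num / den)
    return best
```

Its bootstrap counterpart is obtained by simply feeding the centered pseudo-observations into the same function.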

Theorem 2 assures that the previously-described bootstrap algorithm can be used in hypothesis testing (change point detection) for the above-mentioned test statistics. Now, the simulated (empirical) distribution of the bootstrap test statistic can be used to calculate the bootstrap critical value, which will be compared to the value of the original test statistic in order to reject the null or not.

## 6. Practical Application in Non-Life Insurance

As mentioned in the Introduction, our primary motivation for the change point estimation in panel data comes from the non-life insurance business. The dataset comes from the National Association of Insurance Commissioners (NAIC) database; see [15]. We concentrate on the ‘private passenger auto liability/medical’ insurance line of business. The data collect records from $N=146$ insurance companies. Each insurance company provides $T=10$ yearly total claim amounts, starting from year 1988 up to year 1997. One can consider normalizing the claim amounts by the premium received by company i in year t, that is, considering panel data ${Y}_{i,t}/{p}_{i,t}$, where ${p}_{i,t}$ is the mentioned premium. This may stabilize the series’ variability, which corresponds to Assumption 2 of bounded variances. Figure 6 graphically shows the series of the normalized claim amounts.

The data are treated as panel data in the sense that each insurance company corresponds to one panel, formed by the company's yearly total claim amounts normalized by the earned premium. The length of the panel is quite short. This is very typical in the insurance business, because longer panels may suffer from incomparability between the early and the late claim amounts due to changing market or policy conditions over time.

We want to estimate a possible change in the normalized claim amounts occurring in a common year, assuming that the normalized claim amounts are approximately constant in the years before and after the possible change for every insurance company. Our change point estimator gives ${\widehat{\tau}}_{146}=9$ (i.e., year 1996) using the sequence of weights ${\left\{{t}^{2}\right\}}_{t=1}^{10}$ and $w\left(0\right)=1$, which corresponds to the visibly increased values in the last observed year, 1997, shown in Figure 6.
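The estimator itself admits a compact implementation: ${\widehat{\tau}}_{N}$ minimizes ${S}_{N}\left(t\right)$, the average over panels of the weighted left and right within-segment sums of squares (see Appendix A). The sketch below assumes the weights $w\left(t\right)={t}^{2}$ with $w\left(0\right)=1$ used above; the function name is ours.

```python
import numpy as np

def change_point_estimate(Y, w=lambda t: t * t if t > 0 else 1):
    """tau_hat = argmin_t S_N(t), where S_N(t) averages over panels the
    weighted sums of squared deviations from the left/right segment means."""
    N, T = Y.shape
    def S(t):                                  # t is 1-based
        left = Y[:, :t] - Y[:, :t].mean(axis=1, keepdims=True)
        value = (left ** 2).sum(axis=1) / w(t)
        if t < T:                              # empty right sum at t = T
            right = Y[:, t:] - Y[:, t:].mean(axis=1, keepdims=True)
            value = value + (right ** 2).sum(axis=1) / w(T - t)
        return value.mean()
    return min(range(1, T + 1), key=S)
```

For the NAIC application, `Y` would hold the premium-normalized claim amounts ${Y}_{i,t}/{p}_{i,t}$.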

An interesting finding emerges when one concentrates only on the second half of the observation period; see Figure 7. Shortening the time window to the years 1993–1997 yields the same change position, i.e., year 1996.

Furthermore, the empirical cumulative autocorrelation function can be obtained, with the correlation matrix estimated as proposed in Section 5.1. The resulting sequence of empirical cumulative autocorrelations is ${\left\{\widehat{r}\left(t\right)\right\}}_{t=1}^{10}=\{1.0,2.5,3.6,4.6,5.6,6.6,7.6,8.6,9.6,10.6\}$.
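Since $r\left(t\right)=\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{\sum}_{s=1}^{t}{\epsilon}_{1,s}$ equals the sum of the leading $t\times t$ block of the error correlation matrix, the empirical cumulative autocorrelations can be read off the estimated correlation matrix directly. A minimal sketch (function name ours):

```python
import numpy as np

def cumulative_autocorrelation(corr):
    """r(t) = Var(sum_{s<=t} eps_s) = sum of the leading t x t block of the
    (estimated) correlation matrix, for t = 1, ..., T."""
    T = corr.shape[0]
    return [corr[:t, :t].sum() for t in range(1, T + 1)]
```

For uncorrelated errors, the sequence would simply be $1,2,\dots ,T$; increments exceeding one, as at the beginning of the sequence above, indicate positive within-panel dependence.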

Dependent panels could be taken into account as well, and the presented work might be generalized to allow some kind of asymptotic independence over the panels or a prescribed dependence among them. Nevertheless, our incentive is determined by a problem from non-life insurance, where the association consists of a relatively high number of insurance companies. The portfolio of yearly claims is thus so diversified that the panels corresponding to the insurance companies' yearly claims may be viewed as independent, and neither natural ordering nor clustering has to be assumed.

## 7. Results and Conclusions

The change point problem in panel data with a fixed panel size is considered in this paper. A possible occurrence of common breaks in panel means is estimated. We introduce the change point estimator without the boundary issue, meaning that it can estimate the change close to the extremities of the studied time interval. The consistency of the estimator is proven regardless of the presence/absence of the change in panel means under relatively simple conditions.

The simulation study illustrates that the proposed change point estimator behaves sufficiently well for small panel sizes, a relatively moderate number of panels and various error dependence structures. The theoretical usage of the change point estimator is outlined for the estimation of the within-panel correlation structure and for the validity of bootstrap procedures in hypothesis testing. Finally, the proposed method is applied to an actuarial problem, for which change point analysis in panel data provides an appealing approach.

## Acknowledgments

Institutional support to Barbora Peštová was provided by RVO:67985807. The research of Michal Pešta was supported by the Czech Science Foundation project GAČR No. 15-04774Y. The authors would like to thank three anonymous referees for careful reading of the paper and for providing suggestions that improved this paper.

## Author Contributions

Both authors contributed equally to the invention of the proposed change point estimator, to the derivation of its consistency, to designing and programming the simulation study and to showing its theoretical and practical applications.

## Conflicts of Interest

The authors declare no conflict of interest.

## Appendix A. Proofs

**Proof of Theorem 1.**

Let us define

$${S}_{N}^{(i,L)}\left(t\right):=\frac{1}{w\left(t\right)}\sum _{s=1}^{t}{({Y}_{i,s}-{\bar{Y}}_{i,t})}^{2},\phantom{\rule{1.em}{0ex}}{S}_{N}^{(i,R)}\left(t\right):=\frac{1}{w(T-t)}\sum _{s=t+1}^{T}{({Y}_{i,s}-{\tilde{Y}}_{i,t})}^{2}$$

and, consequently, ${S}_{N}\left(t\right):=\frac{1}{N}{\sum}_{i=1}^{N}\left\{{S}_{N}^{(i,L)}\left(t\right)+{S}_{N}^{(i,R)}\left(t\right)\right\}$. Then,

$${S}_{N}^{(i,L)}\left(t\right)=\left\{\begin{array}{cc}\frac{{\sigma}_{i}^{2}}{w\left(t\right)}{\sum}_{s=1}^{t}{({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})}^{2},\hfill & t\le \tau ,\hfill \\ \frac{1}{w\left(t\right)}\left[{\sum}_{s=1}^{\tau}{({\sigma}_{i}{\epsilon}_{i,s}-{\sigma}_{i}{\bar{\epsilon}}_{i,t}-\frac{t-\tau}{t}{\delta}_{i})}^{2}+{\sum}_{s=\tau +1}^{t}{({\sigma}_{i}{\epsilon}_{i,s}-{\sigma}_{i}{\bar{\epsilon}}_{i,t}+\frac{\tau}{t}{\delta}_{i})}^{2}\right],\hfill & t>\tau ;\hfill \end{array}\right.$$

where ${\bar{\epsilon}}_{i,t}=\frac{1}{t}{\sum}_{s=1}^{t}{\epsilon}_{i,s}$. Similarly,

$$\begin{array}{c}{S}_{N}^{(i,R)}\left(t\right)\hfill \\ \hfill =\left\{\begin{array}{cc}\frac{{\sigma}_{i}^{2}}{w(T-t)}{\sum}_{s=t+1}^{T}{({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})}^{2},\hfill & t>\tau ,\hfill \\ \frac{1}{w(T-t)}\left[{\sum}_{s=t+1}^{\tau}{({\sigma}_{i}{\epsilon}_{i,s}-{\sigma}_{i}{\tilde{\epsilon}}_{i,t}-\frac{T-\tau}{T-t}{\delta}_{i})}^{2}+{\sum}_{s=\tau +1}^{T}{({\sigma}_{i}{\epsilon}_{i,s}-{\sigma}_{i}{\tilde{\epsilon}}_{i,t}+\frac{\tau -t}{T-t}{\delta}_{i})}^{2}\right],\hfill & t\le \tau <T,\hfill \\ \frac{{\sigma}_{i}^{2}}{w(T-t)}{\sum}_{s=t+1}^{T}{({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})}^{2},\hfill & t\le \tau =T;\hfill \end{array}\right.\end{array}$$

where ${\tilde{\epsilon}}_{i,t}=\frac{1}{T-t}{\sum}_{s=t+1}^{T}{\epsilon}_{i,s}$ for $t<T$ and ${\tilde{\epsilon}}_{i,T}\equiv 0$. By the definition of the cumulative autocorrelation function, we have for $2\le t\le \tau $:

$$\mathsf{E}{S}_{N}^{(i,L)}\left(t\right)=\frac{{\sigma}_{i}^{2}}{w\left(t\right)}\sum _{s=1}^{t}\mathsf{E}{({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})}^{2}=\frac{{\sigma}_{i}^{2}}{w\left(t\right)}\sum _{s=1}^{t}\left[1-\frac{2}{t}\sum _{r=1}^{t}\mathsf{E}{\epsilon}_{i,s}{\epsilon}_{i,r}+\frac{1}{{t}^{2}}r\left(t\right)\right]=\frac{{\sigma}_{i}^{2}}{w\left(t\right)}\left(t-\frac{r\left(t\right)}{t}\right).$$

Clearly, ${S}_{N}^{(i,L)}\left(1\right)=0$. In the remaining case when $t>\tau $, one can calculate

$$\begin{array}{cc}\hfill \mathsf{E}{S}_{N}^{(i,L)}\left(t\right)& =\frac{{\sigma}_{i}^{2}}{w\left(t\right)}\left(t-\frac{r\left(t\right)}{t}\right)+\frac{\tau}{w\left(t\right)}{\left(\frac{t-\tau}{t}\right)}^{2}{\delta}_{i}^{2}+\frac{t-\tau}{w\left(t\right)}{\left(\frac{\tau}{t}\right)}^{2}{\delta}_{i}^{2}\hfill \\ & =\frac{{\sigma}_{i}^{2}t}{w\left(t\right)}\left(1-\frac{r\left(t\right)}{{t}^{2}}\right)+\frac{\tau (t-\tau )}{tw\left(t\right)}{\delta}_{i}^{2}.\hfill \end{array}$$

By the definition of an empty sum, ${S}_{N}^{(i,R)}\left(T\right)=0$ and, moreover, ${S}_{N}^{(i,R)}(T-1)=0$. For $T-1>t>\tau $,

$$\begin{array}{cc}\hfill \mathsf{E}{S}_{N}^{(i,R)}\left(t\right)& =\frac{{\sigma}_{i}^{2}}{w(T-t)}\sum _{s=t+1}^{T}\mathsf{E}{({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})}^{2}\hfill \\ & =\frac{{\sigma}_{i}^{2}}{w(T-t)}\sum _{s=t+1}^{T}\left[1-\frac{2}{T-t}\sum _{r=t+1}^{T}\mathsf{E}{\epsilon}_{i,s}{\epsilon}_{i,r}+\frac{1}{{(T-t)}^{2}}r(T-t)\right]\hfill \\ & =\frac{{\sigma}_{i}^{2}}{w(T-t)}\left(T-t-\frac{r(T-t)}{T-t}\right).\hfill \end{array}$$

The same result is obtained for $T=\tau >t$. In the remaining case $T-1\ge \tau \ge t$, such that $T-1>t$,

$$\begin{array}{cc}\hfill \mathsf{E}{S}_{N}^{(i,R)}\left(t\right)& =\frac{{\sigma}_{i}^{2}}{w(T-t)}\left(T-t-\frac{r(T-t)}{T-t}\right)+\frac{\tau -t}{w(T-t)}{\left(\frac{T-\tau}{T-t}\right)}^{2}{\delta}_{i}^{2}+\frac{T-\tau}{w(T-t)}{\left(\frac{\tau -t}{T-t}\right)}^{2}{\delta}_{i}^{2}\hfill \\ & =\frac{{\sigma}_{i}^{2}(T-t)}{w(T-t)}\left(1-\frac{r(T-t)}{{(T-t)}^{2}}\right)+\frac{(\tau -t)(T-\tau )}{(T-t)w(T-t)}{\delta}_{i}^{2}.\hfill \end{array}$$
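The expectation formulas above rest on a covariance identity that can be checked numerically: for any correlation matrix $P$ of $({\epsilon}_{i,1},\dots ,{\epsilon}_{i,t})$, one has ${\sum}_{s=1}^{t}\mathsf{E}{({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})}^{2}=t-r\left(t\right)/t$ with $r\left(t\right)$ the sum of all entries of $P$. A minimal check; the AR(1) structure with $\rho =0.5$ is an assumed, purely illustrative choice.

```python
import numpy as np

# Check: sum_{s<=t} E(eps_s - mean of eps_1..eps_t)^2 = t - r(t)/t, where
# r(t) is the sum of all entries of the t x t error correlation matrix P.
t, rho = 6, 0.5
P = rho ** np.abs(np.subtract.outer(np.arange(t), np.arange(t)))  # AR(1) corr
r_t = P.sum()
lhs = sum(P[s, s] - (2.0 / t) * P[s, :].sum() + r_t / t**2 for s in range(t))
assert np.isclose(lhs, t - r_t / t)
```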

Realize that ${S}_{N}^{(i,L)}\left(t\right)+{S}_{N}^{(i,R)}\left(t\right)-\mathsf{E}{S}_{N}^{(i,L)}\left(t\right)-\mathsf{E}{S}_{N}^{(i,R)}\left(t\right)$ are independent with zero mean for fixed t and $i=1,\dots ,N$, but they are not identically distributed. Due to Assumption 5 and the stochastic Cauchy–Schwarz inequality, for $t\le \tau <T$, it holds

$$\begin{array}{cc}\hfill \mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{S}_{N}\left(t\right)& =\frac{1}{{N}^{2}}\sum _{i=1}^{N}\{\frac{{\sigma}_{i}^{4}}{{w}^{2}\left(t\right)}\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}\left[\sum _{s=1}^{t}{({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})}^{2}\right]\hfill \\ & \phantom{\rule{1.em}{0ex}}+\frac{1}{{w}^{2}(T-t)}\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}[{\sigma}_{i}^{2}\sum _{s=t+1}^{\tau}{({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})}^{2}-2\frac{T-\tau}{T-t}{\sigma}_{i}{\delta}_{i}\sum _{s=t+1}^{\tau}({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})+{\left(\frac{T-\tau}{T-t}\right)}^{2}{\delta}_{i}^{2}\hfill \\ & \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}+{\sigma}_{i}^{2}\sum _{s=\tau +1}^{T}{({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})}^{2}+2\frac{\tau -t}{T-t}{\sigma}_{i}{\delta}_{i}\sum _{s=\tau +1}^{T}({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})+{\left(\frac{\tau -t}{T-t}\right)}^{2}{\delta}_{i}^{2}]\hfill \\ & \phantom{\rule{1.em}{0ex}}+\frac{2{\sigma}_{i}^{2}}{w\left(t\right)w(T-t)}\mathsf{Cov}\phantom{\rule{0.166667em}{0ex}}[\sum _{s=1}^{t}{({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})}^{2},{\sigma}_{i}^{2}\sum _{s=t+1}^{\tau}{({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})}^{2}-2\frac{T-\tau}{T-t}{\sigma}_{i}{\delta}_{i}\sum _{s=t+1}^{\tau}({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})\hfill \\ & \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}+{\left(\frac{T-\tau}{T-t}\right)}^{2}{\delta}_{i}^{2}+{\sigma}_{i}^{2}\sum _{s=\tau +1}^{T}{({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})}^{2}+2\frac{\tau -t}{T-t}{\sigma}_{i}{\delta}_{i}\sum _{s=\tau +1}^{T}({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})+{\left(\frac{\tau -t}{T-t}\right)}^{2}{\delta}_{i}^{2}\left]\right\}\hfill \\ & \le \frac{1}{N}{C}_{1}(t,\tau ,\underline{\sigma},\overline{\sigma})+\frac{1}{{N}^{2}}{C}_{2}(t,\tau ,\underline{\sigma},\overline{\sigma})\sum _{i=1}^{N}{\delta}_{i}^{2}+\frac{1}{{N}^{2}}{C}_{3}(t,\tau ,\underline{\sigma},\overline{\sigma})\left|\sum _{i=1}^{N}{\delta}_{i}\right|,\hfill \end{array}$$

where ${C}_{1}(t,\tau ,\underline{\sigma},\overline{\sigma})>0$, ${C}_{2}(t,\tau ,\underline{\sigma},\overline{\sigma})\ge 0$ and ${C}_{3}(t,\tau ,\underline{\sigma},\overline{\sigma})\ge 0$ are some constants not depending on N. If $t<\tau =T$, then

$$\begin{array}{c}\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{S}_{N}\left(t\right)=\frac{1}{{N}^{2}}\sum _{i=1}^{N}\{\frac{{\sigma}_{i}^{4}}{{w}^{2}\left(t\right)}\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}\left[\sum _{s=1}^{t}{({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})}^{2}\right]+\frac{{\sigma}_{i}^{4}}{{w}^{2}(T-t)}\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}\left[\sum _{s=t+1}^{T}{({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})}^{2}\right]\hfill \\ \hfill +\frac{2{\sigma}_{i}^{4}}{w\left(t\right)w(T-t)}\mathsf{Cov}\phantom{\rule{0.166667em}{0ex}}\left[\sum _{s=1}^{t}{({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})}^{2},\sum _{s=t+1}^{T}{({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})}^{2}\right]\}\le \frac{1}{N}{C}_{4}(t,\tau ,\underline{\sigma},\overline{\sigma}),\end{array}$$

where ${C}_{4}(t,\tau ,\underline{\sigma},\overline{\sigma})>0$ does not depend on N. In the case of $t=\tau =T$, we also have $\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{S}_{N}\left(T\right)\le \frac{1}{N}{C}_{5}(T,\tau ,\underline{\sigma},\overline{\sigma})$, where ${C}_{5}(T,\tau ,\underline{\sigma},\overline{\sigma})>0$ does not depend on N. Finally, if $t>\tau $, then

$$\begin{array}{cc}\hfill \mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{S}_{N}\left(t\right)& =\frac{1}{{N}^{2}}\sum _{i=1}^{N}\{\frac{{\sigma}_{i}^{4}}{{w}^{2}(T-t)}\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}\left[\sum _{s=t+1}^{T}{({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})}^{2}\right]+\frac{1}{{w}^{2}\left(t\right)}\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}[{\sigma}_{i}^{2}\sum _{s=1}^{\tau}{({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})}^{2}\hfill \\ & \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}-2\frac{t-\tau}{t}{\sigma}_{i}{\delta}_{i}\sum _{s=1}^{\tau}({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})+{\left(\frac{t-\tau}{t}\right)}^{2}{\delta}_{i}^{2}+{\sigma}_{i}^{2}\sum _{s=\tau +1}^{t}{({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})}^{2}\hfill \\ & \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}+2\frac{\tau}{t}{\sigma}_{i}{\delta}_{i}\sum _{s=\tau +1}^{t}({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})+{\left(\frac{\tau}{t}\right)}^{2}{\delta}_{i}^{2}]\hfill \\ & \phantom{\rule{1.em}{0ex}}+\frac{2{\sigma}_{i}^{2}}{w(T-t)w\left(t\right)}\mathsf{Cov}\phantom{\rule{0.166667em}{0ex}}[\sum _{s=t+1}^{T}{({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,t})}^{2},{\sigma}_{i}^{2}\sum _{s=1}^{\tau}{({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})}^{2}-2\frac{t-\tau}{t}{\sigma}_{i}{\delta}_{i}\sum _{s=1}^{\tau}({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})\hfill \\ & \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}+{\left(\frac{t-\tau}{t}\right)}^{2}{\delta}_{i}^{2}+{\sigma}_{i}^{2}\sum _{s=\tau +1}^{t}{({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})}^{2}+2\frac{\tau}{t}{\sigma}_{i}{\delta}_{i}\sum _{s=\tau +1}^{t}({\epsilon}_{i,s}-{\bar{\epsilon}}_{i,t})+{\left(\frac{\tau}{t}\right)}^{2}{\delta}_{i}^{2}\left]\right\}\hfill \\ & \le \frac{1}{N}{D}_{1}(t,\tau ,\underline{\sigma},\overline{\sigma})+\frac{1}{{N}^{2}}{D}_{2}(t,\tau ,\underline{\sigma},\overline{\sigma})\sum _{i=1}^{N}{\delta}_{i}^{2}+\frac{1}{{N}^{2}}{D}_{3}(t,\tau ,\underline{\sigma},\overline{\sigma})\left|\sum _{i=1}^{N}{\delta}_{i}\right|,\hfill \end{array}$$

where ${D}_{1}(t,\tau ,\underline{\sigma},\overline{\sigma})>0$, ${D}_{2}(t,\tau ,\underline{\sigma},\overline{\sigma})\ge 0$ and ${D}_{3}(t,\tau ,\underline{\sigma},\overline{\sigma})\ge 0$ do not depend on N. The Chebyshev inequality provides ${S}_{N}\left(t\right)-\mathsf{E}{S}_{N}\left(t\right)={\mathcal{O}}_{\mathcal{P}}\left(\sqrt{\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{S}_{N}\left(t\right)}\right)$ as $N\to \infty $. According to Assumption 4 and the Cauchy–Schwarz inequality, we have

$$\frac{1}{{N}^{2}}\left|\sum _{i=1}^{N}{\delta}_{i}\right|\le \frac{1}{N}\sqrt{\frac{1}{N}\sum _{i=1}^{N}{\delta}_{i}^{2}}\to 0,\phantom{\rule{1.em}{0ex}}N\to \infty .$$

Since the index set $\{1,\dots ,T\}$ is finite and τ is finite, as well, then

$$\underset{1\le t\le T}{max}\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{S}_{N}\left(t\right)\le \frac{1}{N}{K}_{1}(\underline{\sigma},\overline{\sigma})+{K}_{2}(\underline{\sigma},\overline{\sigma})\frac{1}{{N}^{2}}\sum _{i=1}^{N}{\delta}_{i}^{2}+{K}_{3}(\underline{\sigma},\overline{\sigma})\frac{1}{{N}^{2}}\left|\sum _{i=1}^{N}{\delta}_{i}\right|\le \frac{1}{N}{K}_{4}(\underline{\sigma},\overline{\sigma}),$$

where ${K}_{1}(\underline{\sigma},\overline{\sigma})>0$, ${K}_{2}(\underline{\sigma},\overline{\sigma})\ge 0$, ${K}_{3}(\underline{\sigma},\overline{\sigma})\ge 0$ and ${K}_{4}(\underline{\sigma},\overline{\sigma})>0$ are constants not depending on N. Thus, we also have uniform stochastic boundedness, i.e.,

$$\underset{1\le t\le T}{max}|{S}_{N}\left(t\right)-\mathsf{E}{S}_{N}\left(t\right)|={\mathcal{O}}_{\mathcal{P}}\left(\frac{1}{\sqrt{N}}\right),\phantom{\rule{1.em}{0ex}}N\to \infty .$$

Adding and subtracting, one has

$$\begin{array}{cc}& {S}_{N}\left(t\right)-{S}_{N}\left(\tau \right)={S}_{N}\left(t\right)-\mathsf{E}{S}_{N}\left(t\right)-[{S}_{N}\left(\tau \right)-\mathsf{E}{S}_{N}\left(\tau \right)]+\mathsf{E}{S}_{N}\left(t\right)-\mathsf{E}{S}_{N}\left(\tau \right)\hfill \\ & \ge -2\underset{1\le r\le T}{max}|{S}_{N}\left(r\right)-\mathsf{E}{S}_{N}\left(r\right)|+\mathsf{E}{S}_{N}\left(t\right)-\mathsf{E}{S}_{N}\left(\tau \right)\hfill \\ & =-2\underset{1\le r\le T}{max}|{S}_{N}\left(r\right)-\mathsf{E}{S}_{N}\left(r\right)|+\frac{1}{N}\left(\sum _{i=1}^{N}{\sigma}_{i}^{2}\right)[\frac{t}{w\left(t\right)}\left(1-\frac{r\left(t\right)}{{t}^{2}}\right)-\frac{\tau}{w\left(\tau \right)}\left(1-\frac{r\left(\tau \right)}{{\tau}^{2}}\right)\hfill \\ & \phantom{\rule{1.em}{0ex}}+\mathcal{I}\{t<T\}\frac{T-t}{w(T-t)}\left(1-\frac{r(T-t)}{{(T-t)}^{2}}\right)-\mathcal{I}\{\tau <T\}\frac{T-\tau}{w(T-\tau )}\left(1-\frac{r(T-\tau )}{{(T-\tau )}^{2}}\right)]\hfill \\ & \phantom{\rule{1.em}{0ex}}+\frac{1}{N}\left(\sum _{i=1}^{N}{\delta}_{i}^{2}\right)\left[\mathcal{I}\{t>\tau \}\frac{\tau (t-\tau )}{tw\left(t\right)}+\mathcal{I}\{t<\tau \}\frac{(\tau -t)(T-\tau )}{(T-t)w(T-t)}\right]\hfill \end{array}$$

for each $t\in \{1,\dots ,T\}$. In particular, the inequality above holds for $t={\widehat{\tau}}_{N}$. Note that ${\widehat{\tau}}_{N}=arg{min}_{t}{S}_{N}\left(t\right)$; hence, ${S}_{N}\left({\widehat{\tau}}_{N}\right)-{S}_{N}\left(\tau \right)\le 0$. Therefore,

$$\begin{array}{cc}\hfill & 2\sqrt{N}\underset{1\le r\le T}{max}|{S}_{N}\left(r\right)-\mathsf{E}{S}_{N}\left(r\right)|\hfill \\ & \ge \left[\mathcal{I}\{{\widehat{\tau}}_{N}>\tau \}\frac{\tau ({\widehat{\tau}}_{N}-\tau )}{{\widehat{\tau}}_{N}w\left({\widehat{\tau}}_{N}\right)}+\mathcal{I}\{{\widehat{\tau}}_{N}<\tau \}\frac{(\tau -{\widehat{\tau}}_{N})(T-\tau )}{(T-{\widehat{\tau}}_{N})w(T-{\widehat{\tau}}_{N})}\right]\frac{1}{\sqrt{N}}\sum _{i=1}^{N}{\delta}_{i}^{2}\hfill \\ & \phantom{\rule{1.em}{0ex}}+[\frac{{\widehat{\tau}}_{N}}{w\left({\widehat{\tau}}_{N}\right)}\left(1-\frac{r\left({\widehat{\tau}}_{N}\right)}{{\widehat{\tau}}_{N}^{2}}\right)-\frac{\tau}{w\left(\tau \right)}\left(1-\frac{r\left(\tau \right)}{{\tau}^{2}}\right)\hfill \\ & \phantom{\rule{1.em}{0ex}}+\mathcal{I}\{{\widehat{\tau}}_{N}<T\}\frac{T-{\widehat{\tau}}_{N}}{w(T-{\widehat{\tau}}_{N})}\left(1-\frac{r(T-{\widehat{\tau}}_{N})}{{(T-{\widehat{\tau}}_{N})}^{2}}\right)-\mathcal{I}\{\tau <T\}\frac{T-\tau}{w(T-\tau )}\left(1-\frac{r(T-\tau )}{{(T-\tau )}^{2}}\right)]\frac{1}{\sqrt{N}}\sum _{i=1}^{N}{\sigma}_{i}^{2}\hfill \\ & =\mathcal{I}\{{\widehat{\tau}}_{N}>\tau \}\frac{1}{\sqrt{N}}\left\{\frac{\tau ({\widehat{\tau}}_{N}-\tau )}{{\widehat{\tau}}_{N}w\left({\widehat{\tau}}_{N}\right)}\sum _{i=1}^{N}{\delta}_{i}^{2}+\left[g\left({\widehat{\tau}}_{N}\right)-g\left(\tau \right)+g(T-{\widehat{\tau}}_{N})-g(T-\tau )\right]\sum _{i=1}^{N}{\sigma}_{i}^{2}\right\}\hfill \\ & \phantom{\rule{1.em}{0ex}}+\mathcal{I}\{{\widehat{\tau}}_{N}<\tau \}\frac{1}{\sqrt{N}}\left\{\frac{(\tau -{\widehat{\tau}}_{N})(T-\tau )}{(T-{\widehat{\tau}}_{N})w(T-{\widehat{\tau}}_{N})}\sum _{i=1}^{N}{\delta}_{i}^{2}+\left[g\left({\widehat{\tau}}_{N}\right)-g\left(\tau \right)+g(T-{\widehat{\tau}}_{N})-g(T-\tau )\right]\sum _{i=1}^{N}{\sigma}_{i}^{2}\right\}\hfill \\ & \ge \mathcal{I}\{{\widehat{\tau}}_{N}>\tau \}\frac{1}{\sqrt{N}}\left\{\frac{\tau}{w\left({\widehat{\tau}}_{N}\right)}\left(1-\frac{\tau}{{\widehat{\tau}}_{N}}\right)\sum _{i=1}^{N}{\delta}_{i}^{2}-\left[g\left(\tau \right)+g(T-\tau )\right]\sum _{i=1}^{N}{\sigma}_{i}^{2}\right\}\hfill \\ & \phantom{\rule{1.em}{0ex}}+\mathcal{I}\{{\widehat{\tau}}_{N}<\tau \}\frac{1}{\sqrt{N}}\left\{\frac{T-\tau}{w(T-{\widehat{\tau}}_{N})}\left(1-\frac{T-\tau}{T-{\widehat{\tau}}_{N}}\right)\sum _{i=1}^{N}{\delta}_{i}^{2}-\left[g\left(\tau \right)+g(T-\tau )\right]\sum _{i=1}^{N}{\sigma}_{i}^{2}\right\}\hfill \end{array}$$

$$\begin{array}{cc}& \ge \mathcal{I}\{{\widehat{\tau}}_{N}>\tau \}\frac{1}{\sqrt{N}}\left\{\frac{\tau}{(\tau +1){max}_{t=1,\dots ,T}w\left(t\right)}\sum _{i=1}^{N}{\delta}_{i}^{2}-\left[g\left(\tau \right)+g(T-\tau )\right]\sum _{i=1}^{N}{\sigma}_{i}^{2}\right\}\hfill \end{array}$$

$$\begin{array}{cc}& \phantom{\rule{1.em}{0ex}}+\mathcal{I}\{{\widehat{\tau}}_{N}<\tau \}\frac{1}{\sqrt{N}}\left\{\frac{T-\tau}{(T-\tau +1){max}_{t=1,\dots ,T}w\left(t\right)}\sum _{i=1}^{N}{\delta}_{i}^{2}-\left[g\left(\tau \right)+g(T-\tau )\right]\sum _{i=1}^{N}{\sigma}_{i}^{2}\right\},\hfill \end{array}$$

where $g\left(t\right)=\frac{t}{w\left(t\right)}\left(1-\frac{r\left(t\right)}{{t}^{2}}\right)\ge 0$ for $t\in \{1,\dots ,T\}$ and $g\left(0\right)\equiv 0$. Since the left-hand side above is ${\mathcal{O}}_{\mathcal{P}}\left(1\right)$ as $N\to \infty $, we have $\mathcal{I}\{{\widehat{\tau}}_{N}>\tau \}\stackrel{\mathsf{P}}{\to}0$, as well as $\mathcal{I}\{{\widehat{\tau}}_{N}<\tau \}\stackrel{\mathsf{P}}{\to}0$, due to Assumption 3 applied to the last two lower bounds. Hence, $\mathsf{P}[{\widehat{\tau}}_{N}=\tau ]\to 1$ as $N\to \infty $. ☐

**Proof of Theorem 2.**

(i) Let us define

$${U}_{N}\left(t\right):=\sum _{i=1}^{N}\sum _{s=1}^{t}({Y}_{i,s}-{\mu}_{i}).$$

Using the multivariate Lyapunov CLT for a sequence of T-dimensional independent random vectors ${\left\{{\sigma}_{i}{\left[{\sum}_{s=1}^{1}{\epsilon}_{i,s},\dots ,{\sum}_{s=1}^{T}{\epsilon}_{i,s}\right]}^{\top}\right\}}_{i\in \mathbb{N}}$, we have under ${H}_{0}$:

$${\mathbf{\Sigma}}_{N}^{-1/2}{[{U}_{N}\left(1\right),\dots ,{U}_{N}\left(T\right)]}^{\top}\underset{N\to \infty}{\overset{\mathcal{D}}{\to}}{[{X}_{1},\dots ,{X}_{T}]}^{\top},$$

such that

$${\mathbf{\Sigma}}_{N}=\sum _{i=1}^{N}{\sigma}_{i}^{2}\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{\left[\sum _{s=1}^{1}{\epsilon}_{i,s},\dots ,\sum _{s=1}^{T}{\epsilon}_{i,s}\right]}^{\top}={\varsigma}_{N}^{2}\mathbf{\Lambda},$$

where $\mathbf{\Lambda}=\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{\left[{\sum}_{s=1}^{1}{\epsilon}_{1,s},\dots ,{\sum}_{s=1}^{T}{\epsilon}_{1,s}\right]}^{\top}$ is the positive definite covariance matrix with respect to Assumption 1 and $0<N{\underline{\sigma}}^{2}\le {\varsigma}_{N}^{2}:={\sum}_{i=1}^{N}{\sigma}_{i}^{2}\le N{\overline{\sigma}}^{2}$ according to Assumption 2. The limiting T-dimensional random vector ${[{X}_{1},\dots ,{X}_{T}]}^{\top}$ has a multivariate normal distribution with zero mean and the identity covariance matrix. The Lyapunov condition is satisfied due to the Jensen inequality, the Cramér–Wold theorem and Assumption 5, i.e.,

$$\begin{array}{cc}\hfill & {\left({\mathbf{a}}^{\top}{\mathbf{\Sigma}}_{N}\mathbf{a}\right)}^{-\frac{2+\chi}{2}}\sum _{i=1}^{N}\mathsf{E}{\left|{\mathbf{a}}^{\top}{\sigma}_{i}{\left[\sum _{s=1}^{1}{\epsilon}_{i,s},\dots ,\sum _{s=1}^{T}{\epsilon}_{i,s}\right]}^{\top}\right|}^{2+\chi}\hfill \\ & ={\varsigma}_{N}^{-2-\chi}{\left({\mathbf{a}}^{\top}\mathbf{\Lambda}\mathbf{a}\right)}^{-\frac{2+\chi}{2}}\sum _{i=1}^{N}{\sigma}_{i}^{2+\chi}\mathsf{E}{\left|\sum _{t=1}^{T}{a}_{t}\sum _{s=1}^{t}{\epsilon}_{i,s}\right|}^{2+\chi}\hfill \\ & \le {T}^{1+\chi}{\varsigma}_{N}^{-2-\chi}{\left({\mathbf{a}}^{\top}\mathbf{\Lambda}\mathbf{a}\right)}^{-\frac{2+\chi}{2}}\sum _{i=1}^{N}{\sigma}_{i}^{2+\chi}\sum _{t=1}^{T}\mathsf{E}{\left({a}_{t}\sum _{s=1}^{t}{\epsilon}_{i,s}\right)}^{2+\chi}\hfill \\ & \le {T}^{1+\chi}{\varsigma}_{N}^{-2-\chi}{\left({\mathbf{a}}^{\top}\mathbf{\Lambda}\mathbf{a}\right)}^{-\frac{2+\chi}{2}}\sum _{i=1}^{N}{\sigma}_{i}^{2+\chi}\sum _{t=1}^{T}|{a}_{t}{|}^{2+\chi}{t}^{1+\chi}\sum _{s=1}^{t}\mathsf{E}{\left|{\epsilon}_{i,s}\right|}^{2+\chi}\hfill \\ & \le \varrho {\varsigma}_{N}^{-2-\chi}\sum _{i=1}^{N}{\sigma}_{i}^{2+\chi}\le \varrho {\left(N{\underline{\sigma}}^{2}\right)}^{-\frac{2+\chi}{2}}N{\overline{\sigma}}^{2+\chi}=\varrho {\underline{\sigma}}^{-2-\chi}{\overline{\sigma}}^{2+\chi}{N}^{-\frac{\chi}{2}}\to 0,\phantom{\rule{1.em}{0ex}}N\to \infty \hfill \end{array}$$

for arbitrary fixed $\mathbf{0}\ne \mathbf{a}={[{a}_{1},\dots ,{a}_{T}]}^{\top}\in {\mathbb{R}}^{T}$ and some $0<\chi \le 2$, where

$$\varrho ={T}^{1+\chi}{\left({\mathbf{a}}^{\top}\mathbf{\Lambda}\mathbf{a}\right)}^{-\frac{2+\chi}{2}}\sum _{t=1}^{T}|{a}_{t}{|}^{2+\chi}{t}^{1+\chi}\sum _{s=1}^{t}\mathsf{E}{\left|{\epsilon}_{1,s}\right|}^{2+\chi}$$

is a positive constant not depending on N. The t-th diagonal element of the covariance matrix $\mathbf{\Lambda}$ is

$$\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}\sum _{s=1}^{t}{\epsilon}_{1,s}=r\left(t\right)$$

and the upper off-diagonal element on position $(t,v)$ is

$$\mathsf{Cov}\phantom{\rule{0.166667em}{0ex}}\left(\sum _{s=1}^{t}{\epsilon}_{1,s},\sum _{u=1}^{v}{\epsilon}_{1,u}\right)=\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}\sum _{s=1}^{t}{\epsilon}_{1,s}+\mathsf{Cov}\phantom{\rule{0.166667em}{0ex}}\left(\sum _{s=1}^{t}{\epsilon}_{1,s},\sum _{u=t+1}^{v}{\epsilon}_{1,u}\right)=r\left(t\right)+R(t,v),\phantom{\rule{1.em}{0ex}}t<v.$$
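Both quantities, and the identity $\mathsf{Cov}\left({\sum}_{s=1}^{t}{\epsilon}_{1,s},{\sum}_{u=1}^{v}{\epsilon}_{1,u}\right)=r\left(t\right)+R(t,v)$, can be checked numerically from a given autocorrelation sequence ${\left\{{\rho}_{k}\right\}}_{k\ge 0}$. A sketch with an assumed, purely illustrative geometric sequence (function names ours):

```python
import numpy as np

def r(t, rho):
    """r(t) = Var(sum_{s=1..t} eps_s) = sum_{s,u<=t} rho_{|s-u|} (0-based)."""
    return sum(rho[abs(s - u)] for s in range(t) for u in range(t))

def R(t, v, rho):
    """R(t,v) = Cov(sum_{s=1..t} eps_s, sum_{u=t+1..v} eps_u)."""
    return sum(rho[u - s] for s in range(t) for u in range(t, v))

# Assumed, illustrative autocorrelations rho_0, rho_1, ...
rho = [1.0, 0.5, 0.25, 0.125, 0.0625, 0.0, 0.0, 0.0, 0.0, 0.0]
t, v = 3, 5
cov_full = sum(rho[abs(s - u)] for s in range(t) for u in range(v))
assert np.isclose(cov_full, r(t, rho) + R(t, v, rho))
```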

Moreover, let us define the reverse analogue of ${U}_{N}\left(t\right)$, i.e.,

$${V}_{N}\left(t\right):=\sum _{i=1}^{N}\sum _{s=t+1}^{T}({Y}_{i,s}-{\mu}_{i})={U}_{N}\left(T\right)-{U}_{N}\left(t\right).$$

Hence,

$${U}_{N}\left(s\right)-\frac{s}{t}{U}_{N}\left(t\right)=\sum _{i=1}^{N}\left\{\sum _{r=1}^{s}\left[\left({Y}_{i,r}-{\mu}_{i}\right)-\frac{1}{t}\sum _{v=1}^{t}\left({Y}_{i,v}-{\mu}_{i}\right)\right]\right\}=\sum _{i=1}^{N}\sum _{r=1}^{s}\left({Y}_{i,r}-{\bar{Y}}_{i,t}\right)$$

and, consequently,

$${V}_{N}\left(s\right)-\frac{T-s}{T-t}{V}_{N}\left(t\right)=\sum _{i=1}^{N}\left\{\sum _{r=s+1}^{T}\left[\left({Y}_{i,r}-{\mu}_{i}\right)-\frac{1}{T-t}\sum _{v=t+1}^{T}\left({Y}_{i,v}-{\mu}_{i}\right)\right]\right\}=\sum _{i=1}^{N}\sum _{r=s+1}^{T}\left({Y}_{i,r}-{\tilde{Y}}_{i,t}\right),$$

for $t<T$. Then, under ${H}_{0}$:

$${\tilde{\mathbf{\Sigma}}}_{N}^{-1/2}{[{V}_{N}\left(1\right),\dots ,{V}_{N}(T-1)]}^{\top}\underset{N\to \infty}{\overset{\mathcal{D}}{\to}}{[{Z}_{1},\dots ,{Z}_{T-1}]}^{\top},$$

where ${Z}_{t}:={X}_{T}-{X}_{t}$ and

$${\tilde{\mathbf{\Sigma}}}_{N}=\sum _{i=1}^{N}{\sigma}_{i}^{2}\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{\left[\sum _{s=2}^{T}{\epsilon}_{i,s},\dots ,\sum _{s=T}^{T}{\epsilon}_{i,s}\right]}^{\top}={\varsigma}_{N}^{2}\tilde{\mathbf{\Lambda}}$$

for $\tilde{\mathbf{\Lambda}}=\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{\left[{\sum}_{s=2}^{T}{\epsilon}_{1,s},\dots ,{\sum}_{s=T}^{T}{\epsilon}_{1,s}\right]}^{\top}$. Using the continuous mapping theorem, we end up with

$${\mathcal{S}}_{N,T}\underset{N\to \infty}{\overset{\mathcal{D}}{\to}}\mathcal{L}$$

such that the law $\mathcal{L}$ corresponds to the distribution of

$$\mathcal{S}\left({\left\{{X}_{s}-\frac{s}{t}{X}_{t}\right\}}_{s=1,t=1}^{t-1,T},{\left\{{Z}_{s}-\frac{T-s}{T-t}{Z}_{t}\right\}}_{s=t,t=1}^{T-1,T-1}\right).$$

(ii) Let us define ${\widehat{\epsilon}}_{i,t}:={\sum}_{s=1}^{t}{\widehat{e}}_{i,s}$, ${\widehat{\epsilon}}_{i,t}^{*}:={\sum}_{s=1}^{t}{\widehat{e}}_{i,s}^{*}$,

$${\widehat{U}}_{N}\left(t\right):=\sum _{i=1}^{N}\sum _{s=1}^{t}{\widehat{e}}_{i,s}=\sum _{i=1}^{N}{\widehat{\epsilon}}_{i,t},$$

and

$${\widehat{U}}_{N}^{*}\left(t\right):=\sum _{i=1}^{N}\sum _{s=1}^{t}{\widehat{Y}}_{i,s}^{*}=\sum _{i=1}^{N}\sum _{s=1}^{t}\left({\widehat{e}}_{i,s}^{*}-\frac{1}{N}\sum _{i=1}^{N}{\widehat{e}}_{i,s}\right)=\sum _{i=1}^{N}\sum _{s=1}^{t}\left({\widehat{e}}_{i,s}^{*}-{\widehat{e}}_{i,s}\right)=\sum _{i=1}^{N}\left({\widehat{\epsilon}}_{i,t}^{*}-{\widehat{\epsilon}}_{i,t}\right).$$

Realize that ${\widehat{\epsilon}}_{i,t}$ depends on ${\widehat{\tau}}_{N}$ and, hence, on N.
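The last equality is a centering identity for the bootstrap partial sums: subtracting the column means of the residual matrix, summed over all panels, is the same as subtracting the original cumulative residual sums. A toy numerical check, with a hypothetical row-resampling of an arbitrary residual matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, t = 5, 8, 4
e_hat = rng.normal(size=(N, T))                   # residuals e-hat_{i,s}
e_star = e_hat[rng.integers(0, N, size=N), :]     # resampled rows e-hat*_{i,s}
# U*_N(t) computed two ways: via centered bootstrap residuals, and via the
# difference of total cumulative residual sums (the last equality above)
lhs = (e_star[:, :t] - e_hat[:, :t].mean(axis=0)).sum()
rhs = e_star[:, :t].sum() - e_hat[:, :t].sum()
assert np.isclose(lhs, rhs)
```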

Let us calculate ${lim}_{N\to \infty}{\mathbf{\Gamma}}_{i,N}$, where ${\mathbf{\Gamma}}_{i,N}=\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{[{\widehat{\epsilon}}_{i,1},\dots ,{\widehat{\epsilon}}_{i,T}]}^{\top}$. Using the law of total variance,

$$\begin{array}{c}\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{\widehat{\epsilon}}_{i,t}=\mathsf{E}\left[\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}\left\{{\widehat{\epsilon}}_{i,t}\right|{\widehat{\tau}}_{N}\}\right]+\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}\left[\mathsf{E}\left\{{\widehat{\epsilon}}_{i,t}\right|{\widehat{\tau}}_{N}\}\right]=\sum _{\pi =1}^{T}\mathsf{P}[{\widehat{\tau}}_{N}=\pi ]\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}\left[{\widehat{\epsilon}}_{i,t}\right|{\widehat{\tau}}_{N}=\pi ]\hfill \\ \hfill +\sum _{\pi =1}^{T}\mathsf{P}[{\widehat{\tau}}_{N}=\pi ]{\left\{\mathsf{E}\left[{\widehat{\epsilon}}_{i,t}\right|{\widehat{\tau}}_{N}=\pi ]\right\}}^{2}-{\left\{\sum _{\pi =1}^{T}\mathsf{P}[{\widehat{\tau}}_{N}=\pi ]\mathsf{E}\left[{\widehat{\epsilon}}_{i,t}\right|{\widehat{\tau}}_{N}=\pi ]\right\}}^{2}.\end{array}$$

Since ${lim}_{N\to \infty}\mathsf{P}[{\widehat{\tau}}_{N}=\tau ]=1$ and $\mathsf{E}\left[{\widehat{e}}_{i,t}\right|{\widehat{\tau}}_{N}=\tau ]=0$, then

$$\underset{N\to \infty}{lim}\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}{\widehat{\epsilon}}_{i,t}=\underset{N\to \infty}{lim}\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}\left[{\widehat{\epsilon}}_{i,t}\right|{\widehat{\tau}}_{N}=\tau ].$$

Similarly for the covariance, i.e., after applying the law of total covariance, we have

$$\underset{N\to \infty}{lim}\mathsf{Cov}\phantom{\rule{0.166667em}{0ex}}\left({\widehat{\epsilon}}_{i,t},{\widehat{\epsilon}}_{i,v}\right)=\underset{N\to \infty}{lim}\mathsf{Cov}\phantom{\rule{0.166667em}{0ex}}\left({\widehat{\epsilon}}_{i,t},{\widehat{\epsilon}}_{i,v}|{\widehat{\tau}}_{N}=\tau \right).$$

Note that

$$\left({\widehat{e}}_{i,t}|{\widehat{\tau}}_{N}=\tau \right)=\left\{\begin{array}{cc}{\sigma}_{i}({\epsilon}_{i,t}-{\bar{\epsilon}}_{i,\tau}),\hfill & t\le \tau ;\hfill \\ {\sigma}_{i}({\epsilon}_{i,t}-{\tilde{\epsilon}}_{i,\tau}),\hfill & t>\tau ;\hfill \end{array}\right.$$

where ${\bar{\epsilon}}_{i,t}=\frac{1}{t}{\sum}_{s=1}^{t}{\epsilon}_{i,s}$ and ${\tilde{\epsilon}}_{i,t}=\frac{1}{T-t}{\sum}_{s=t+1}^{T}{\epsilon}_{i,s}$. Taking into account the definitions of $r\left(t\right)$ and $R(t,v)$ together with some simple algebra, we obtain that $\mathsf{Var}\phantom{\rule{0.166667em}{0ex}}\left[{\widehat{\epsilon}}_{i,t}\right|{\widehat{\tau}}_{N}=\tau ]={\sigma}_{i}^{2}{\gamma}_{t,t}\left(\tau \right)$ and $\mathsf{Cov}\phantom{\rule{0.166667em}{0ex}}\left({\widehat{\epsilon}}_{i,t},{\widehat{\epsilon}}_{i,v}|{\widehat{\tau}}_{N}=\tau \right)={\sigma}_{i}^{2}{\gamma}_{t,v}\left(\tau \right)$ for $t<v$, such that

$${\gamma}_{t,t}\left(\tau \right)=\left\{\begin{array}{c}r\left(t\right)+\frac{{t}^{2}}{{\tau}^{2}}r\left(\tau \right)-\frac{2t}{\tau}[r\left(t\right)+R(t,\tau )],\phantom{\rule{1.em}{0ex}}t<\tau ;\hfill \\ 0,\phantom{\rule{1.em}{0ex}}t=\tau ;\hfill \\ r(t-\tau )+\frac{{(t-\tau )}^{2}}{{(T-\tau )}^{2}}r(T-\tau )-\frac{2(t-\tau )}{T-\tau}\left[r(t-\tau )+R(t-\tau ,T-\tau )\right],\phantom{\rule{1.em}{0ex}}t>\tau ;\hfill \end{array}\right.$$

and

$${\gamma}_{t,v}\left(\tau \right)=\left\{\begin{array}{c}0,\phantom{\rule{1.em}{0ex}}t=\tau ~\mathrm{or}~v=\tau ,\hfill \\ r\left(t\right)+R(t,v)+\frac{tv}{{\tau}^{2}}r\left(\tau \right)-\frac{v}{\tau}[r\left(t\right)+R(t,\tau )]-\frac{t}{\tau}[r\left(v\right)+R(v,\tau )],\phantom{\rule{1.em}{0ex}}t<v<\tau ;\hfill \\ S(t,v,\tau +1-t)+\frac{t(v-\tau )}{\tau (T-\tau )}R(\tau ,T)-\frac{v-\tau}{T-\tau}S(t,T,\tau +1-t)-\frac{t}{\tau}R(\tau ,v),\phantom{\rule{1.em}{0ex}}t<\tau <v;\hfill \\ r(t-\tau )+R(t-\tau ,v-\tau )+\frac{(t-\tau )(v-\tau )}{{(T-\tau )}^{2}}r(T-\tau )-\frac{v-\tau}{T-\tau}[r(t-\tau )+R(t-\tau ,T-\tau )]\hfill \\ \phantom{\rule{1.em}{0ex}}-\frac{t-\tau}{T-\tau}[r(v-\tau )+R(v-\tau ,T-\tau )],\phantom{\rule{1.em}{0ex}}\tau <t<v;\hfill \end{array}\right.$$

where

$$S(t,v,d)=\mathsf{Cov}\phantom{\rule{0.166667em}{0ex}}\left(\sum _{s=1}^{t}{\epsilon}_{i,s},\sum _{u=t+d}^{v}{\epsilon}_{i,u}\right)=\sum _{s=1}^{t}\sum _{u=t+d}^{v}{\rho}_{u-s},\phantom{\rule{1.em}{0ex}}\forall i\in \mathbb{N}.$$

Thus, ${lim}_{N\to \infty}{\mathbf{\Gamma}}_{i,N}={\sigma}_{i}^{2}\mathbf{\Gamma}\left(\tau \right)$, where the matrix $\mathbf{\Gamma}\left(\tau \right)={\left\{{\gamma}_{t,v}\left(\tau \right)\right\}}_{t,v=1}^{T,T}$ is symmetric and does not depend on i. The matrix $\mathbf{\Gamma}\left(\tau \right)$ is singular. Nevertheless, omitting the τ-th row and the τ-th column from $\mathbf{\Gamma}\left(\tau \right)$, one obtains matrix $\tilde{\mathbf{\Gamma}}\left(\tau \right)$, i.e., $\tilde{\mathbf{\Gamma}}\left(\tau \right):={\mathbf{\Gamma}}_{-\tau ,-\tau}\left(\tau \right)$, which has a full rank of $T-1$ due to Assumption 1 and

$$\left({\widehat{\epsilon}}_{i,t}|{\widehat{\tau}}_{N}=\tau \right)=\left\{\begin{array}{cc}{\sigma}_{i}\left({\sum}_{s=1}^{t}{\epsilon}_{i,s}-t{\bar{\epsilon}}_{i,\tau}\right),\hfill & t<\tau ;\hfill \\ 0,\hfill & t=\tau ;\hfill \\ {\sigma}_{i}\left({\sum}_{s=\tau +1}^{t}{\epsilon}_{i,s}-(t-\tau ){\tilde{\epsilon}}_{i,\tau}\right),\hfill & t>\tau .\hfill \end{array}\right.$$
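In the special case of independent errors (${\rho}_{0}=1$, ${\rho}_{k}=0$ for $k\ge 1$), one has $r\left(t\right)=t$ and $R\equiv S\equiv 0$, so the diagonal entries reduce to ${\gamma}_{t,t}\left(\tau \right)=t\left(1-t/\tau \right)$ for $t<\tau $ and ${\gamma}_{t,t}\left(\tau \right)=(t-\tau )\left(1-(t-\tau )/(T-\tau )\right)$ for $t>\tau $. A numerical check under this i.i.d. assumption, built from the conditional representation of ${\widehat{\epsilon}}_{i,t}$ given directly above (the sizes are illustrative):

```python
import numpy as np

T, tau = 10, 4          # illustrative sizes; i.i.d. standard errors assumed
A = np.zeros((T, T))    # row t-1 maps (eps_1,...,eps_T) to epshat_{i,t}, sigma_i = 1
for t in range(1, T + 1):
    if t < tau:
        A[t - 1, :t] = 1.0
        A[t - 1, :tau] -= t / tau                # subtract t * (left segment mean)
    elif t > tau:
        A[t - 1, tau:t] = 1.0
        A[t - 1, tau:] -= (t - tau) / (T - tau)  # subtract (t - tau) * (right mean)
Gamma = A @ A.T         # Var of (epshat_{i,1}, ..., epshat_{i,T}) for iid errors
assert np.allclose(Gamma[tau - 1], 0) and np.allclose(Gamma[:, tau - 1], 0)
for t in range(1, tau):                          # gamma_{t,t}(tau), t < tau
    assert np.isclose(Gamma[t - 1, t - 1], t * (1 - t / tau))
for t in range(tau + 1, T + 1):                  # gamma_{t,t}(tau), t > tau
    assert np.isclose(Gamma[t - 1, t - 1],
                      (t - tau) * (1 - (t - tau) / (T - tau)))
```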

Let us define random vectors
i.e., they do not contain elements with argument ${\widehat{\tau}}_{N}$. The law of total probability provides
for all $\mathbf{x}\in {\mathbb{R}}^{T-1}$. Since Assumption 5 holds, according to the bootstrap multivariate CLT by [16] (Theorem 2.4) for (conditionally) independent and not identically distributed zero-mean $(T-1)$-dimensional random vectors ${\mathit{\xi}}_{i}=\left[{[{\widehat{\epsilon}}_{i,1},\dots ,{\widehat{\epsilon}}_{i,{\widehat{\tau}}_{N}-1},{\widehat{\epsilon}}_{i,{\widehat{\tau}}_{N}+1},\dots ,{\widehat{\epsilon}}_{i,T}]}^{\top}|{\widehat{\tau}}_{N}=\tau \right]$, we have
for all $\mathbf{x}\in {\mathbb{R}}^{T-1}$. Theorem 1, Relations (A6) and (A7) imply
for all $\mathbf{x}\in {\mathbb{R}}^{T-1}$.

$$\begin{array}{cc}\hfill {\mathbf{U}}_{N}& :={[{U}_{N}\left(1\right),\dots ,{U}_{N}({\widehat{\tau}}_{N}-1),{U}_{N}({\widehat{\tau}}_{N}+1),\dots ,{U}_{N}\left(T\right)]}^{\top},\hfill \\ \hfill {\widehat{\mathbf{U}}}_{N}& :={[{\widehat{U}}_{N}\left(1\right),\dots ,{\widehat{U}}_{N}({\widehat{\tau}}_{N}-1),{\widehat{U}}_{N}({\widehat{\tau}}_{N}+1),\dots ,{\widehat{U}}_{N}\left(T\right)]}^{\top},\hfill \\ \hfill {\widehat{\mathbf{U}}}_{N}^{*}& :={[{\widehat{U}}_{N}^{*}\left(1\right),\dots ,{\widehat{U}}_{N}^{*}({\widehat{\tau}}_{N}-1),{\widehat{U}}_{N}^{*}({\widehat{\tau}}_{N}+1),\dots ,{\widehat{U}}_{N}^{*}\left(T\right)]}^{\top},\hfill \end{array}$$

$$\begin{array}{cc}\hfill & \mathsf{P}\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}^{*}\le \mathbf{x}|\mathbb{Y}\right]-\mathsf{P}\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}\le \mathbf{x}\right]\hfill \\ & =\sum _{\pi =1}^{T}\left\{\mathsf{P}\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}^{*}\le \mathbf{x}|\mathbb{Y},{\widehat{\tau}}_{N}=\pi \right]-\mathsf{P}\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}\le \mathbf{x}|{\widehat{\tau}}_{N}=\pi \right]\right\}\mathsf{P}[{\widehat{\tau}}_{N}=\pi ]\hfill \end{array}$$

$$\mathsf{P}\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}^{*}\le \mathbf{x}|\mathbb{Y},{\widehat{\tau}}_{N}=\tau \right]-\mathsf{P}\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}\le \mathbf{x}|{\widehat{\tau}}_{N}=\tau \right]\underset{N\to \infty}{\overset{\mathsf{P}}{\to}}0$$

$$\mathsf{P}\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}^{*}\le \mathbf{x}|\mathbb{Y}\right]-\mathsf{P}\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}\le \mathbf{x}\right]\underset{N\to \infty}{\overset{\mathsf{P}}{\to}}0$$

Using the law of total probability again, we obtain
The consistency result ${lim}_{N\to \infty}\mathsf{P}[{\widehat{\tau}}_{N}=\tau ]=1$ from Theorem 1 and Equation (A9) give
Since the Lyapunov CLT provides that $\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}|{\widehat{\tau}}_{N}=\tau \right]$ has an approximate multivariate normal distribution with zero mean and identity covariance matrix, Relation (A10) gives that the limiting distribution of ${\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}$ is the same. Note that the Lyapunov condition for $\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}|{\widehat{\tau}}_{N}=\tau \right]$ can be checked in a similar manner as in (A5), i.e.,
for arbitrary fixed $\mathbf{0}\ne \mathbf{b}={[{b}_{1},\dots ,{b}_{T-1}]}^{\top}\in {\mathbb{R}}^{T-1}$, some $0<\chi \le 2$ and some positive constant $\tilde{\varrho}$ not depending on N.

$$\mathsf{P}\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}\le \mathbf{x}\right]=\sum _{\pi =1}^{T}\mathsf{P}\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}\le \mathbf{x}|{\widehat{\tau}}_{N}=\pi \right]\mathsf{P}[{\widehat{\tau}}_{N}=\pi ].$$

$$\mathsf{P}\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}\le \mathbf{x}\right]-\mathsf{P}\left[{\varsigma}_{N}^{-1}{\tilde{\mathbf{\Gamma}}}^{-1/2}\left(\tau \right){\widehat{\mathbf{U}}}_{N}\le \mathbf{x}|{\widehat{\tau}}_{N}=\tau \right]\underset{N\to \infty}{\overset{\mathsf{P}}{\to}}0.$$

$$\begin{array}{cc}& {\varsigma}_{N}^{-2-\chi}{\left({\mathbf{b}}^{\top}\tilde{\mathbf{\Gamma}}\left(\tau \right)\mathbf{b}\right)}^{-\frac{2+\chi}{2}}\hfill \\ & \phantom{\rule{1.em}{0ex}}\times \sum _{i=1}^{N}\mathsf{E}{\left|{\mathbf{b}}^{\top}{\sigma}_{i}{\left[\sum _{s=1}^{1}\left({\epsilon}_{i,s}-{\overline{\epsilon}}_{i,\tau}\right),\dots ,\sum _{s=1}^{\tau -1}\left({\epsilon}_{i,s}-{\overline{\epsilon}}_{i,\tau}\right),\sum _{s=\tau +1}^{\tau +1}\left({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,\tau}\right),\dots ,\sum _{s=\tau +1}^{T}\left({\epsilon}_{i,s}-{\tilde{\epsilon}}_{i,\tau}\right)\right]}^{\top}\right|}^{2+\chi}\hfill \\ & \le \tilde{\varrho}{\varsigma}_{N}^{-2-\chi}\sum _{i=1}^{N}{\sigma}_{i}^{2+\chi}\le \tilde{\varrho}{\left(N{\underline{\sigma}}^{2}\right)}^{-\frac{2+\chi}{2}}N{\overline{\sigma}}^{2+\chi}=\tilde{\varrho}{\underline{\sigma}}^{-2-\chi}{\overline{\sigma}}^{2+\chi}{N}^{-\frac{\chi}{2}}\to 0,\phantom{\rule{1.em}{0ex}}N\to \infty \hfill \end{array}$$
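The final bound above vanishes at the rate ${N}^{-\chi /2}$. A quick numerical illustration (all constants below are arbitrary placeholders for $\chi$, $\underline{\sigma}$, $\overline{\sigma}$ and $\tilde{\varrho}$; only the rate in N matters):

```python
# All constants are arbitrary placeholders; only the N^{-chi/2} rate matters.
chi, sig_lo, sig_hi, varrho = 1.0, 0.1, 0.3, 1.0
bound = lambda N: varrho * sig_lo ** (-2 - chi) * sig_hi ** (2 + chi) * N ** (-chi / 2)
values = [bound(N) for N in (10, 100, 1000, 10_000)]
print(values)  # strictly decreasing toward zero
```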

Bear in mind that
and
Applying the continuous mapping theorem completes the second part of the proof.

$$\sum _{i=1}^{N}\sum _{r=1}^{s}\left({\widehat{Y}}_{i,r}^{*}-{\overline{\widehat{Y}}}_{i,t}^{*}\right)=\sum _{i=1}^{N}\left\{\left[\sum _{r=1}^{s}{\widehat{Y}}_{i,r}^{*}\right]-\frac{s}{t}\sum _{v=1}^{t}{\widehat{Y}}_{i,v}^{*}\right\}={\widehat{U}}_{N}^{*}\left(s\right)-\frac{s}{t}{\widehat{U}}_{N}^{*}\left(t\right)$$

$$\sum _{i=1}^{N}\sum _{r=s+1}^{T}\left({\widehat{Y}}_{i,r}^{*}-{\tilde{\widehat{Y}}}_{i,t}^{*}\right)={\widehat{V}}_{N}^{*}\left(s\right)-\frac{T-s}{T-t}{\widehat{V}}_{N}^{*}\left(t\right).$$
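The two identities linking the bootstrap partial sums to ${\widehat{U}}_{N}^{*}$ and ${\widehat{V}}_{N}^{*}$ are purely algebraic. A minimal check of the first one with stand-in data, assuming ${\widehat{U}}_{N}^{*}\left(k\right)={\sum}_{i}{\sum}_{r\le k}{\widehat{Y}}_{i,r}^{*}$ (a working form consistent with the displayed equality; the names `Y` and `U` are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, s, t = 5, 10, 3, 7
Y = rng.standard_normal((N, T))  # stand-in for the bootstrapped Y*_{i,r}

U = lambda k: Y[:, :k].sum()     # assumed form of U*_N(k): sum over panels of partial sums
# left-hand side: centered partial sums, where the centering term is the mean of Y up to t
lhs = (Y[:, :s].sum(axis=1) - (s / t) * Y[:, :t].sum(axis=1)).sum()
rhs = U(s) - (s / t) * U(t)
print(abs(lhs - rhs))            # zero up to floating point
```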

(iii) Under ${H}_{0}$, Theorem 1 provides
Then, in view of (4),
Finally, it suffices to recall the definitions of ${\mathcal{S}}_{N,T}$ and ${\mathcal{S}}_{N,T}^{*}$ together with (A8). ☐

$$\underset{N\to \infty}{lim}\mathsf{P}[{\widehat{\tau}}_{N}=T]=1.$$

$$\underset{N\to \infty}{lim}\mathsf{P}\left[{\widehat{U}}_{N}\left(s\right)-\frac{s}{t}{\widehat{U}}_{N}\left(t\right)={U}_{N}\left(s\right)-\frac{s}{t}{U}_{N}\left(t\right)\right]=1,\phantom{\rule{1.em}{0ex}}1\le s\le t\le T.$$

## References

1. Horváth, L.; Hušková, M. Change-point Detection in Panel Data. J. Time Ser. Anal. **2012**, 33, 631–648.
2. Peštová, B.; Pešta, M. Testing Structural Changes in Panel Data with Small Fixed Panel Size and Bootstrap. Metrika **2015**, 78, 665–689.
3. Bai, J. Common Breaks in Means and Variances for Panel Data. J. Econom. **2010**, 157, 78–92.
4. Peštová, B.; Pešta, M. Erratum to: Testing Structural Changes in Panel Data with Small Fixed Panel Size and Bootstrap. Metrika **2016**, 79, 237–238.
5. Kim, D. Estimating a common deterministic time trend break in large panels with cross sectional dependence. J. Econom. **2011**, 164, 310–330.
6. Kim, D. Common breaks in time trends for large panel data with a factor structure. Econom. J. **2014**, 17, 301–337.
7. Horváth, L.; Hušková, M.; Rice, G.; Wang, J. Estimation of the time of change in panel data. ArXiv **2015**, arXiv:math.ST/1503.04455.
8. Baltagi, B.H.; Feng, Q.; Kao, C. Estimation of heterogeneous panels with structural breaks. J. Econom. **2016**, 191, 176–195.
9. Pešta, M.; Hudecová, Š. Asymptotic consistency and inconsistency of the chain ladder. Insur. Math. Econ. **2012**, 51, 472–479.
10. Lindner, A.M. Stationarity, mixing, distributional properties and moments of GARCH(p,q)-processes. In Handbook of Financial Time Series; Andersen, T.G., Davis, R.A., Kreiss, J.P., Mikosch, T., Eds.; Springer: Berlin, Germany, 2009; pp. 481–496.
11. Peštová, B.; Pešta, M. Change Point in Panel Data with Small Fixed Panel Size: Ratio and Non-Ratio Test Statistics. ArXiv **2016**, arXiv:math.ME/1608.05670.
12. Hušková, M.; Kirch, C. Bootstrapping sequential change-point tests for linear regression. Metrika **2012**, 75, 673–708.
13. Hušková, M.; Kirch, C.; Prášková, Z.; Steinebach, J. On the detection of changes in autoregressive time series, II. Resampling procedures. J. Stat. Plan. Inference **2008**, 138, 1697–1721.
14. Peštová, B.; Pešta, M. Change Point Detection in Panel Data with Small Fixed Panel Size. In Proceedings ITISE 2016; Valenzuela, O., Rojas, F., Ruiz, G., Pomares, H., Rojas, I., Eds.; University of Granada: Granada, Spain, 2016; pp. 194–205.
15. Meyers, G.G.; Shi, P. Loss Reserving Data Pulled From NAIC Schedule P, 2011. Available online: http://www.casact.org/research/index.cfm?fa=loss_reserves_data (accessed on 10 June 2014).
16. Pešta, M. Total least squares and bootstrapping with application in calibration. Statistics **2013**, 47, 966–991.

**Figure 1.**Histograms of the estimated change points ${\widehat{\tau}}_{N}$ for various structures and distributions of the panel disturbances ($\tau =8$, $T=10$, $N=20$, $\sigma =0.2$; all of the panels are subject to a break of size ${\delta}_{i}\sim U[0,2]$).

**Figure 2.**Histograms of the estimated change points ${\widehat{\tau}}_{N}$ for various values of the change point τ ($T=10$, $N=20$, $\sigma =0.2$, $75\%$ of the panels are subject to a break of size ${\delta}_{i}\sim U[0,2]$, panel disturbances from AR(1) with $N(0,1)$ innovations).

**Figure 3.**Histograms of the estimated change points ${\widehat{\tau}}_{N}$ for various values of N ($\tau =9$, $T=10$, $\sigma =0.2$, $50\%$ of the panels are subject to a break of size ${\delta}_{i}\sim U[0,2]$, panel disturbances from AR(1) with ${t}_{5}$ innovations).

**Figure 4.**Histograms of the estimated change points ${\widehat{\tau}}_{N}$ for various values of σ ($\tau =1$, $T=10$, $N=10$, all of the panels are subject to a break of size ${\delta}_{i}\sim U[0,2]$, panel disturbances from GARCH(1,1) with $N(0,1)$ innovations).

**Figure 5.**Histograms of the estimated change points ${\widehat{\tau}}_{N}$ when various portions of the panels are subject to a break of size ${\delta}_{i}\sim U[0,2]$ ($\tau =5$, $T=10$, $N=20$, $\sigma =0.2$, panel disturbances from GARCH(1,1) with ${t}_{5}$ innovations).

**Figure 6.**Development of the yearly total claim amounts normalized by the earned premium together with the estimated change point ${\widehat{\tau}}_{146}=9$ (corresponding to year 1996).

**Figure 7.**Development of the yearly total claim amounts normalized by the earned premium for the second half of the original observation period, together with the estimated change point ${\widehat{\tau}}_{146}=4$ (corresponding to year 1996).

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).