Abstract
This paper considers a new mean-variance model with strong mixing errors and describes a combination test for a mean shift and a variance change. Under some stationarity and symmetry conditions, the limiting distribution of the combination test is obtained, from which the limiting distributions of the mean change test and the variance change test can be derived. As an application, a three-step algorithm for detecting the change-points is given: the first step tests whether there is at least one change-point, and the second and third steps detect the mean change-point and the variance change-point, respectively. To illustrate our results, some simulations and real-world data analyses are discussed. The analysis shows that our tests not only have high power, but can also identify whether a detected change-point is a mean change-point or a variance change-point. Compared to the existing methods cpt.meanvar and mosum from R packages, the new method has advantages in both recognition capability and accuracy.
Keywords:
change-point of mean; change-point of variance; CUSUM estimator; limit distribution; mixing sequences
MSC:
62M20; 62F05
1. Introduction
In the paper [1], statistical methods of control for manufacturing processes were first proposed. The issue of change-points initially appeared in this context of quality control, where staff typically observe the output of a production line and aim to detect signals deviating from acceptable levels as the data arrive. Since the seminal paper [2], the cumulative sum (CUSUM) test has been one of the most popular methods for detecting a parameter change in statistical models. CUSUM tests of the mean change-point and the variance change-point have played a central role in detecting abnormal signals in quality control, changes in financial time series, and other fields. For example, the authors of the papers [3,4] considered CUSUM tests for the mean change-point model with independent errors and linear processes, respectively; the papers [5,6] investigated CUSUM tests for the variance change-point model with independent normal errors and independent errors, respectively. For further studies, reference can be made to [7,8,9,10] and the sources detailed therein. Combining the change-point of the mean with the change-point of the variance, this paper considers a change-point model with both a mean shift and a variance change. For , we consider a time series following the mean-variance model
where and are the mean and variance parameters, respectively. Since the condition of strong mixing (α-mixing) is quite general in time series [11], we assume the error sequence to be an α-mixing sequence with mean zero and variance one. Let us recall the definition of α-mixing. Let and denote the σ-fields generated by the random variables , . For , we define
Definition 1.
If as , then is called a strong mixing or α-mixing sequence.
In the mean-variance model (1), we monitor the mean change-point with the CUSUM statistic
and monitor the variance change-point with the CUSUM statistic
where .
To improve the power of the statistics of the mean change-point and the variance change-point, we use the combination statistic
where and are defined by (2) and (3), respectively. Here, ⊤ represents the transpose of the vector. Under the assumption of no change in the mean or variance, for all , the model (1) can be summarized in the null hypothesis as
where and . The change-point alternative hypothesis is that there is an integer such that or there is an integer such that .
In this paper, we consider the mean-variance model (1) with α-mixing errors and investigate the limiting distributions of the statistics related to under the null hypothesis (5). For example, if is smaller than a critical value (see details in Remark 2), then there is no evidence of a mean change or a variance change in the mean-variance model (1). Otherwise, if is larger than another critical value (see details in Remark 2), the mean change-point location is suggested by
If is larger than this critical value, the variance change-point location is suggested by
Compared with existing methods for determining change-points, such as cpt.meanvar from the R package changepoint in [12] and mosum from the R package mosum in [13], we will show that our tests not only have high power, but also identify whether a detected change-point is a mean change-point or a variance change-point. Further details are provided in Section 3, Section 4 and Section 5.
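To fix ideas, the following minimal Python sketch illustrates how CUSUM statistics of this type and the corresponding change-point locations could be computed. The function names, the normalization by estimated long-run scales, and the particular max-type combination are illustrative assumptions and should not be read as the exact definitions in (2)–(4), (6) and (7).

```python
import numpy as np

def cusum_statistics(y, sigma1, sigma2):
    """Illustrative CUSUM processes for a mean change and a variance change.

    y      : 1-D array of observations y_1, ..., y_T
    sigma1 : assumed long-run standard deviation of y_t (scale of the mean CUSUM)
    sigma2 : assumed long-run standard deviation of (y_t - ybar)^2 (scale of the variance CUSUM)
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    z = (y - y.mean()) ** 2                    # squared deviations for the variance CUSUM
    k = np.arange(1, T)                        # candidate change locations 1, ..., T-1
    U1 = (np.cumsum(y)[:-1] - k / T * y.sum()) / (np.sqrt(T) * sigma1)
    U2 = (np.cumsum(z)[:-1] - k / T * z.sum()) / (np.sqrt(T) * sigma2)
    return U1, U2

def combination_test(y, sigma1, sigma2):
    """Max-type statistics and argmax change-point locations (illustrative only)."""
    U1, U2 = cusum_statistics(y, sigma1, sigma2)
    G1 = np.max(U1 ** 2)                       # mean change statistic
    G2 = np.max(U2 ** 2)                       # variance change statistic
    G = np.max(U1 ** 2 + U2 ** 2)              # one possible combination statistic
    k_mean = int(np.argmax(U1 ** 2)) + 1       # suggested mean change location, in the spirit of (6)
    k_var = int(np.argmax(U2 ** 2)) + 1        # suggested variance change location, in the spirit of (7)
    return G, G1, G2, k_mean, k_var
```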
In addition to the change-point studies referred to above, many scholars have sought to extend the change-points of the mean and variance for both independent and dependent data. For the mean change-point example, in [14,15,16], CUSUM estimators were investigated with dependent errors; in [17], a weighted CUSUM estimator was studied using an infinite variance process; in [18,19], a self-normalization method was used to test the mean change-point in a time series; in [20,21], data-driven methods were used to investigate the mean shift and variance change; in [22,23,24], CUSUM estimators were discussed regarding the mean change-point with panel data. For the variance change-point example, the Schwarz information criterion (SIC) estimator of variance change was studied with independent normal errors in [25]; ref. [26] extended the CUSUM estimator in [5] with normal data to infinite moving average processes; in [27], a weighted variances test was considered based on independent errors; covariance structure change was studied with linear processes in [28]. In addition, the authors of [29] considered a CUSUM test of parameter changes in a time series model; refs. [30,31] reported changes in a variance inflation factor (VIF) regression model and a linear regression model; the authors of ref. [32] considered changes in parameters using the Shiryaev–Roberts statistics; in ref. [33], the change of covariance structure in multivariate time series was considered; refs. [34,35,36] considered multiple change-points; in ref. [37], the authors investigated a Bayesian method for the change-point; the authors of ref. [38] investigated a new class of weighted CUSUM estimators of the mean change-point; in ref. [39], the least sum of the squared error (LSSE) and maximum log-likelihood (MLL) methods in the estimation of the change-point were examined; in ref. [40], multivariate change-points in a mean vector and/or covariance structure were considered; and ref. [41] discussed a CUSUM estimator in an ARMA–GARCH model. Furthermore, a test for the detection of outliers for continuous distribution data was investigated in [42]; refs. [43,44] investigated change-point problems for a nonstationary time series and for the volatility of conditional heteroscedastic models, respectively.
It is pointed out that the α-mixing sequence is very general in time series. For example, consider an infinite order moving average (MA(∞)) process , where exponentially fast, and is an sequence. If the probability density function of exists (such as for the normal, Cauchy, exponential, and uniform distributions), then is α-mixing with exponentially decaying coefficients. Strictly stationary time series such as autoregressive moving average (ARMA) processes and geometrically ergodic Markov chains are α-mixing processes. For further studies of α-mixing, reference can be made to [45,46] for limit theorems, refs. [47,48] for central limit theorems, refs. [49,50,51,52] for regression models, etc.
The rest of this paper is organized as follows. Some assumptions are provided in Section 2. Under some stationarity and symmetry conditions, the limiting distribution of the combination statistic is obtained under the null hypothesis (5) in Section 2, from which the limiting distributions of the CUSUM statistics for the mean change and for the variance change can be derived. As an application, we give a three-step algorithm to detect the change-points in Section 3: in the first step, we perform the combination test to check whether there is at least one change-point; in the second step, we perform the mean change test to detect the mean change-point; in the third step, we use the variance change test to detect the variance change-point. The simulations in Section 4 show that our method performs better than the methods cpt.meanvar [12] and mosum [13]. We also use three examples of real-world data to detect the mean change-point and variance change-point in Section 5. In addition, some conclusions and future work are discussed in Section 6. Finally, the proofs of the main results are presented in Section 7.
Throughout the paper, as , let and denote the convergence in probability and distribution, respectively. Let denote some positive constants not depending on T, which may be different in various places. If X and Y have the same distribution, we denote it as . In addition, second-order stationarity means that for all and .
2. Main Results
First, we list some assumptions as follows:
Assumption 1.
Consider the model (1), where is a stationary sequence of α-mixing random variables with , for all . In addition, for some , let and .
Assumption 2.
Assumption 3.
Let be a second-order stationary sequence of α-mixing random variables with and . For some , assume that and . In addition, let for all .
Assumption 4.
Let
where and for .
Assumption 5.
Let be a sequence of positive integers satisfying
Remark 1.
The moment conditions and mixing coefficients of the α-mixing sequence in Assumption 1 are used by many researchers; see [46,51], etc. The conditions (8) in Assumption 2 specify the limiting variances for the partial sums of and . The condition (9) in Assumption 2 is a symmetry condition, which requires the limit of the partial sums of the covariances of and to be zero. For example, let denote the joint probability density function of the random variables and for all and . Let be symmetrical, i.e., for all . It is easy to check that , which implies for all and . In addition, we have and for all . Obviously, the bivariate normal distribution can satisfy the conditions of this example. Thus, the condition (9) is satisfied. The second-order stationarity condition in Assumption 3 is provided to obtain and in Assumption 2. They are the long-run variances and in (10). To estimate these long-run variances and , we use Assumption 4 and the sample autocovariance functions to give their estimators and in (16). A similar condition (11) can be seen in [26].
Second, we study the limiting distribution of the combination statistic in (4) under the null hypothesis (5). We denote as the greatest integer not exceeding x. Throughout the paper, let and be two independent standard Brownian bridges, and let ⇒ denote convergence in distribution in the Skorokhod space .
Theorem 1.
Usually, and in (8) are unknown and must be estimated. By the second-order stationarity in Assumption 3, it is easy to obtain the long-run variances and defined by (10). Next, we discuss the estimators of and . Let , . Then, and defined in (10) can be estimated by and , respectively, as
and
where and .
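Since (16) is built from sample autocovariance functions with a truncation lag as in Assumption 5, a simple way to compute such long-run variance estimators in practice is sketched below; the unweighted truncated sum and the rule of thumb for the lag are assumptions for illustration, and (16) may use a specific weighting.

```python
import numpy as np

def long_run_variance(x, lag):
    """Illustrative truncated-sum long-run variance estimator: the lag-0 sample
    autocovariance plus twice the sample autocovariances up to a truncation lag.
    (16) is also based on sample autocovariances but may weight them differently."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    xc = x - x.mean()
    lrv = np.dot(xc, xc) / T                        # lag-0 sample autocovariance
    for j in range(1, lag + 1):
        lrv += 2.0 * np.dot(xc[j:], xc[:-j]) / T    # lag-j sample autocovariance
    return lrv

# Assumed usage: estimate the long-run variance of y_t for the mean CUSUM and of the
# squared deviations (y_t - ybar)^2 for the variance CUSUM, with a slowly growing lag:
#   lag  = int(len(y) ** (1 / 3))
#   s1sq = long_run_variance(y, lag)
#   s2sq = long_run_variance((y - y.mean()) ** 2, lag)
```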
Lemma 1.
For , denote the combination statistic
where , , , , and are defined by (2)–(4) and (16), respectively.
Combining Theorem 1 with Lemma 1, we obtain two corollaries as follows:
Corollary 1.
Corollary 2.
Remark 2.
For , by (11.38) in [53], it holds that
where and are defined in (12). Let α () be the level of significance. For , let be independent standard Brownian bridges for . Then the distribution of
was derived by Kiefer [54] and has a Fourier–Bessel series expansion. It is not easy to calculate the critical values of this distribution analytically. Lee et al. [29] considered the problem of testing for parameter changes in time series models based on CUSUM statistics and obtained the limiting distribution (23). They used the Monte Carlo method to obtain the critical values for different α and l. For example, when , the critical values are calculated as and (see [29]).
If , there is no evidence of a mean change or variance change. Otherwise, we conclude that there is at least a mean change-point or a variance change-point.
Similar to multiple testing problems, by (22), we take the critical value for the distribution of to perform the tests of the mean change-point and the variance change-point, in order to control the type I error. For example, , . If , there is no evidence of a mean change. Otherwise, we conclude that there is a mean change-point, and its time location is defined in (6).
Meanwhile, by (22), the p-value of can be defined by as
where . Similarly, let be the critical value for the distribution of . If , there is no evidence of a variance change. Otherwise, we conclude that there is a variance change-point, and its time location is suggested in (7).
In addition, the p-value of can be defined by (24), where is replaced by and is .
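Since the critical values of these limiting distributions are typically obtained by simulation, the following sketch shows one way to approximate them by the Monte Carlo method, in the spirit of the approach in [29]; the grid size, the number of replications, and the functional sup_t {B_1(t)^2 + ... + B_l(t)^2} are assumptions for illustration.

```python
import numpy as np

def brownian_bridge(n, rng):
    """Simulate a standard Brownian bridge on an n-point grid of (0, 1]."""
    t = np.arange(1, n + 1) / n
    w = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)   # approximate Wiener process W(t)
    return w - t * w[-1]                                  # bridge: B(t) = W(t) - t * W(1)

def mc_critical_value(l=2, alpha=0.05, n=1000, reps=10000, seed=0):
    """Monte Carlo (1 - alpha)-quantile of sup_t {B_1(t)^2 + ... + B_l(t)^2}."""
    rng = np.random.default_rng(seed)
    sups = np.empty(reps)
    for r in range(reps):
        s = np.zeros(n)
        for _ in range(l):
            s += brownian_bridge(n, rng) ** 2
        sups[r] = s.max()
    return float(np.quantile(sups, 1 - alpha))

# For example, mc_critical_value(l=2, alpha=0.05) approximates a critical value for a
# two-component (combination) statistic, and l=1 corresponds to a single CUSUM test.
```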
3. The Three-Step Algorithm
Based on the combination statistic in Corollary 1 and the mean change statistic and variance change statistic in Corollary 2, we give a three-step algorithm to test for changes in the mean and variance in Algorithm 1, i.e., the combination test, the mean change test, and the variance change test. In the following algorithm, we assume that there is at most one mean change-point and one variance change-point in the time series. If there are more change-points, we discuss how to detect them in Remark 3.
Remark 3.
It can be seen that the CUSUM statistic of variance change in (3) contains the sample mean statistic, while the CUSUM statistic of mean change in (2) does not contain the sample variance statistic. One can use
to replace , where and . However, the proofs of the limiting distribution and the consistency of the estimator based on would be complicated. Thus, we use to construct the variance change-point estimator. That is why we perform the mean change test in Step 2 before the variance change test in this paper. To reduce the impact of a mean change on the variance change test, we can construct the modified data if we find a mean change-point. Then, we proceed to Step 3, the variance change test. Since it is assumed that there is at most one mean change-point and one variance change-point in the time series, Algorithm 1 terminates after Step 3. If there are more change-points, we can modify the process by the variance change. For example, based on the data (or the modified data ), we define the modified process as
where , , and . Then, based on the modified data , we can combine the three-step algorithm with iterative methods to detect more change-points. For further details, one can refer to [20,21] and the sources detailed therein.
For further studies of multiple change-point detection, reference can be made to [35] and the sources detailed therein. Next, we discuss measures of accuracy for multiple change-point detection. Based on a time series of observations , assume that there are change-points denoted by . Suppose that a change-point detection method detects change-points denoted by . Following [55,56], the set of correctly detected change-points, the True Positives (TP), is defined as
where m is a margin size with . Then, the Precision, Recall, and F1-score are defined as follows
where denotes the number of elements of the set .
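A minimal sketch of how these accuracy measures could be computed is given below; the matching rule used here (a true change-point is counted as detected if some detected point lies within the margin m) is our reading of (25), and the exact definition there may differ slightly.

```python
def detection_metrics(true_cps, detected_cps, margin):
    """Illustrative Precision, Recall and F1-score in the spirit of (25) and (26)."""
    # TP: true change-points matched by at least one detected point within the margin
    tp = {t for t in true_cps if any(abs(t - d) <= margin for d in detected_cps)}
    precision = len(tp) / len(detected_cps) if detected_cps else 0.0
    recall = len(tp) / len(true_cps) if true_cps else 0.0
    denom = precision + recall
    f1 = 2.0 * precision * recall / denom if denom > 0 else 0.0
    return precision, recall, f1

# Example: detection_metrics([150], [148, 400], margin=10) gives
# Precision = 0.5, Recall = 1.0 and F1-score = 2/3.
```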
Algorithm 1: Three-step algorithm. Step 1: the combination test for whether there is at least one change-point; Step 2: the mean change test; Step 3: the variance change test.
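A rough Python sketch of how the three steps could be wired together is given below, reusing the illustrative functions combination_test and long_run_variance from the earlier sketches; the particular mean adjustment before Step 3 and the use of a common critical value for the two single tests are assumptions for illustration rather than the exact prescriptions of Algorithm 1.

```python
import numpy as np

def three_step_test(y, crit_comb, crit_single, lag):
    """Illustrative driver: Step 1 combination test, Step 2 mean change test,
    Step 3 variance change test on (possibly) mean-adjusted data."""
    y = np.asarray(y, dtype=float)
    s1 = np.sqrt(long_run_variance(y, lag))
    s2 = np.sqrt(long_run_variance((y - y.mean()) ** 2, lag))
    G, G1, G2, k_mean, k_var = combination_test(y, s1, s2)

    result = {"mean_cp": None, "var_cp": None}
    if G <= crit_comb:                          # Step 1: no evidence of any change-point
        return result

    if G1 > crit_single:                        # Step 2: mean change detected
        result["mean_cp"] = k_mean
        y = y.copy()
        # one simple way to remove the detected mean shift before testing the variance
        y[k_mean:] -= y[k_mean:].mean() - y[:k_mean].mean()
        s2 = np.sqrt(long_run_variance((y - y.mean()) ** 2, lag))

    _, _, G2, _, k_var = combination_test(y, s1, s2)   # Step 3: variance change test
    if G2 > crit_single:
        result["var_cp"] = k_var
    return result
```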
4. Simulations
In this section, some simulations illustrate the empirical detection probabilities for the change-point estimators , and defined by (18). For , we consider a mean-variance model as
where and are the mean parameters, and are the variance parameters, and and are the mean change-point location and variance change-point location, respectively. Let be a random vector with and satisfying for some . It is easy to see that are α-mixing random variables with mixing coefficient .
Consider the null hypothesis : and and the alternative hypothesis : or . For simplicity, we consider 4 different cases as follows:
Case 1: and ; Case 2: and ;
Case 3: and ; Case 4: and .
The mean change-point location and variance change-point location will be given later. Denote
The details of our algorithm to detect change-points in the mean-variance model can be found in Section 3.
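For illustration, data from a model of the form (27) could be generated as follows; the AR(1)-type correlation rho^|i-j| for the error vector is an assumption chosen only because such Gaussian sequences are α-mixing, and the exact error construction and parameter values used in our simulations may differ.

```python
import numpy as np

def simulate_mean_variance_model(T, mu1, mu2, s1, s2, k1, k2, rho, rng):
    """Generate one path in the spirit of (27): mean mu1 before k1 and mu2 after,
    standard deviation s1 before k2 and s2 after, with dependent standard errors."""
    idx = np.arange(T)
    cov = rho ** np.abs(idx[:, None] - idx[None, :])   # assumed AR(1)-type correlation matrix
    e = rng.multivariate_normal(np.zeros(T), cov)      # alpha-mixing Gaussian error sequence
    mu = np.where(idx < k1, mu1, mu2)                  # mean shift at k1
    sd = np.where(idx < k2, s1, s2)                    # variance change at k2
    return mu + sd * e

# A Case 4-style sample (both mean and variance change; the values are illustrative):
rng = np.random.default_rng(1)
y = simulate_mean_variance_model(T=300, mu1=0.0, mu2=1.0, s1=1.0, s2=1.5,
                                 k1=150, k2=150, rho=0.3, rng=rng)
```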
First, we consider Cases 1–4 for the mean-variance model (27) based on the multivariate normal distribution. Let with the dependence parameter in . The level of significance is taken as . For , , , and , we obtain the empirical sizes and powers of the estimators , and , denoted by , and , respectively. The results for , and are shown in Table 1. The simulation results are based on 1000 replications.
Table 1.
Empirical sizes and powers of , , based on and the level of significance .
By Table 1, we give some comments here:
- •
- For Case 1: The mean and variance are not changed. It can be seen that the empirical sizes of are around the level of significance , while the empirical sizes and of and are smaller than , respectively.
- •
- For Case 2: The mean is changed, while the variance is not changed. It can be seen that the powers of , of , go to 1 as sample size T increases, while the powers of are smaller than 0.025.
- •
- For Case 3: The variance is changed, while the mean is not changed. It can be seen that the powers of , of , increase to 1 as sample size T increases, while the powers of are around .
- •
- For Case 4: The mean and variance are both changed. We can find that the powers of , of , of , go to 1 as sample size T increases.
Second, we consider the multivariate t distribution. Let and and and be independent. Thus, has a multivariate t distribution denoted by . Similar to Table 1, we replace by and obtain the results of , and in Table 2.
Table 2.
Empirical sizes and powers of , , based on and the level of significance .
Compared to Table 1, we find that the powers of for Cases 2 and 3 in Table 2 are not identical to those in Table 1, but the sizes and powers of and for Cases 1–4 in Table 2 are as good as those in Table 1. It may be that the multivariate t distribution with 5 degrees of freedom has heavier tails, which affects the mean change test.
Thirdly, we discuss the accuracy in terms of Precision, Recall, and F1-score defined by (26) for the change-point Cases 2–4 above. Killick and Eckley [12] studied methods of change-point detection and provided the functions ‘cpt.mean’, ‘cpt.meanvar’ and ‘cpt.var’ in the R package changepoint for the mean change, mean-variance change and variance change, respectively. Recently, Meier et al. [13] provided the R package mosum to detect change-points using moving sum statistics. We refer to the methods based on cpt.meanvar and mosum as the ‘cpt.meanvar’ algorithm and the ‘mosum’ algorithm, respectively. Thus, we compare these two methods with our method presented in Section 3. Using the same settings as in Table 1 and Table 2, we take in (25). When the sample size T is 300, 600 and 900, the bandwidth G in the mosum method is taken as 100, 120 and 150, respectively. Here, G should be less than one half of the sample size (see the R package mosum). Then, we obtain the results of Precision, Recall, and F1-score in Table 3 and Table 4 under the multivariate normal distribution and the multivariate t distribution, respectively.
Table 3.
Precision, Recall and F1-score of two algorithms based on .
Table 4.
Precision, Recall and F1-score of two algorithms based on .
Since the mosum method in [13] is mainly designed to detect mean change-points, the Precision, Recall, and F1-score of the mosum algorithm in Table 3 and Table 4 are worse than those of the cpt.meanvar algorithm and our algorithm under Cases 3 and 4. By Table 3, under the multivariate normal case, the results of our algorithm for Cases 2 and 3 are as good as those of the cpt.meanvar algorithm, while the results of our algorithm for Case 4 are better than those of the cpt.meanvar algorithm. Furthermore, by Table 4, the results of our algorithm are better than those of the cpt.meanvar algorithm under the multivariate t distribution.
5. The Real Data Analysis
In this section, we give three examples of real data to illustrate our three-step test for the change-point detection of mean and variance. The statistics , and can be found in Section 3.
Example 1.
The dataset is the annual flow of the river Nile at Aswan from 1871 to 1970 (see [57]), and it contains 100 observations denoted by , (see Figure 1). It measures the annual discharge at Aswan in units of 10^8 m^3 and is depicted in Figure 1. The sample autocorrelation function (ACF) is also presented in Figure 1. By the right side of Figure 1, the autocorrelation coefficient is relatively large when the lag is small, but it approaches zero as the lag increases. Therefore, the data are consistent with the α-mixing assumption. By Figure 1, it seems that there is a mean change-point in the time series of the annual flow of the river Nile.
Figure 1.
The left side is the time series of the annual flow of the river Nile at Aswan from 1871 to 1970; the right side is the sample ACF for the river Nile.
To judge the existence of change-points, we set the null hypothesis that the annual flow of the river Nile has no change in the mean or variance. We use our three-step algorithm given in Section 3 to find the change-points. Based on , by Step 1, we take , , and obtain . Therefore, we reject the null hypothesis and conclude that there is at least one change-point of mean or variance. By Step 2, we have and the p-value . This means that there is a mean change-point located at . Meanwhile, applying Step 3 to the data modified for the mean change, we obtain and the p-value . This means that there is no evidence of a variance change-point. Consequently, we conclude that there is only one mean change-point in the time series of the annual flow of the river Nile. It is pointed out that change-point 28 corresponds to the year 1898, when the Aswan dam was built; the dam significantly changed the mean annual flow of the river Nile. In addition, Zeileis et al. [58] used the F test to detect the same change-point 28. We also use the cpt.meanvar method (see [12]) and the mosum method with bandwidth (see [13]) from R and obtain the same change-point 28.
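As a rough usage sketch, the Nile series can be run through the illustrative three_step_test function from Section 3; the data source and column name below, the truncation lag, and the Monte Carlo critical values from the mc_critical_value sketch (including the split of the level between the two single tests) are all assumptions for illustration and are not the exact values used in this example.

```python
import numpy as np
import statsmodels.api as sm

# Assumed data source: the Nile annual flow series (1871-1970) shipped with statsmodels;
# the column name 'volume' is an assumption about that dataset's layout.
y = sm.datasets.nile.load_pandas().data["volume"].to_numpy()

lag = int(len(y) ** (1 / 3))                      # truncation lag for the long-run variances
crit_comb = mc_critical_value(l=2, alpha=0.05)    # Monte Carlo critical values (sketch in Remark 2)
crit_single = mc_critical_value(l=1, alpha=0.025)
print(three_step_test(y, crit_comb, crit_single, lag))
```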
Example 2.
The dataset is the prices of AMD stock downloaded using Python, which contains 212 observations from 3 March 2008 to 31 December 2008. Let be the closing price of AMD stock, and the return can be defined as and for . Figure 2 shows the time series of returns of AMD stock and its sample ACF.
Figure 2.
The left side is the time series of returns of AMD stock from March 2008 to December 2008; the right side is the sample ACF for these returns.
By the right side of Figure 2, the time series of returns is consistent with the α-mixing assumption. In addition, by the left side of Figure 2, the returns are centered around zero, but the variance of the returns seems to change. Therefore, we use the three-step algorithm to find the change-points and set the null hypothesis that the return of AMD stock has no change in mean or variance. Based on the sample , by Step 1, we take , , and obtain . By Step 2, we have and . This means that there is no evidence of a mean change-point. By Step 3, we have and . Therefore, we detect a variance change-point located at (on 12 September 2008). On the other hand, for independent normal random variables, the change-point detection of variance was investigated in [5,25]. Using the methods in [5,25], we detect the variance change-points 136 and 137, respectively. We also use the cpt.meanvar method from R and obtain a change-point at 136. However, we do not detect any change-point using the mosum method. The difference between 136 and 137 is only one observation. Furthermore, it is known that Lehman Brothers declared bankruptcy on 15 September 2008, which corresponds to point 137. This event added financial risk to the stock market. Consequently, the variance of the returns of AMD stock began to increase after the bankruptcy of Lehman Brothers.
Example 3.
The dataset is the quarterly US ex-post real interest rate from 1961:Q1 to 1986:Q3 provided by the Citibase data bank (see [59]). The data are also available from the R package strucchange (see [60]) and are denoted by , (see Figure 3). The sample ACF of the quarterly US ex-post real interest rate is also shown in Figure 3.
Figure 3.
The left side is the quarterly US ex-post real interest rate from 1961:Q1 to 1986:Q3; the right side is the sample ACF for these interest rates.
Similarly, by the right side of Figure 3, the time series of the US ex-post real interest rate is consistent with the α-mixing assumption; and by the left side of Figure 3, it seems that there are some change-points of mean or variance. We also use the three-step algorithm to find the change-points and set the null hypothesis that the quarterly US ex-post real interest rate has no change in mean or variance. Based on , by , , and Step 1, we have . This means that there exist some change-points in this time series . By Step 2, we have , . So there exists a mean change-point at (1979:Q4). Then, applying Step 3 to the data modified for the mean change, we obtain and . In other words, there is a variance change-point located at (1973:Q3). Consequently, we detect a mean change-point at 76 and a variance change-point at 51. In addition, we use the mosum method from R with bandwidth and detect two change-points, 47 and 76. Meanwhile, we apply cpt.meanvar from R and find two change-points, 47 and 79. On the other hand, the differences between the change-points , , are small. However, we identify change-point 76 as a mean change-point and point 51 as a variance change-point, while the mosum and cpt.meanvar methods do not specify the types of these change-points. Thus, our method has an advantage over theirs. Furthermore, it is pointed out that the sudden jump in oil prices in 1973 added to the volatility of the US ex-post real interest rate. We also point out that the change in the Federal Reserve’s operating procedures in October 1979 increased the mean of the US ex-post real interest rate (see [59]).
6. Conclusions
Many researchers have studied mean change-point models and variance change-point models and obtained the limiting distributions of the CUSUM statistics of the mean change-point and the variance change-point (see [3,6]). As far as we know, few papers study a change-point model with both a mean change and a variance change. In this paper, we consider the mean-variance change-point model (1) with α-mixing errors. Based on the CUSUM statistics of the mean and the variance, we give the combination statistic in (4). To determine whether there is a change-point of mean or variance, the limiting distributions of the CUSUM statistics and are obtained under the null hypothesis that there is no change in mean or variance. Some consistent estimators and of the long-run variances and are presented in (17) of Lemma 1, respectively. Then, we obtain the limiting distributions of the combination statistic , the mean CUSUM statistic and the variance CUSUM statistic in Corollaries 1 and 2. As an application, we give a three-step algorithm for change-point detection. The first step is to test whether there is at least one change-point. The second and third steps detect the mean change-point and the variance change-point, respectively. To illustrate our three-step test for change-point detection, some simulations and three real data examples are presented in Section 4 and Section 5, respectively. It can be seen that our algorithm has an advantage over the existing methods cpt.meanvar [12] and mosum [13]: our method not only has high power, but can also identify whether a change-point is a mean change-point or a variance change-point. On the other hand, multiple change-point problems for the mean, variance, mean vector, and covariance matrix have gained much attention. In this article, we consider the limiting distribution under the null hypothesis of no change-point; it would also be important to investigate the limiting distribution under the alternative hypotheses. It would also be interesting to study these problems for dependent panel data, high-dimensional data, and other dependent data in future work.
7. Proofs of Main Results
Lemma 2
(Lemma 1 in [49]). Let , where is a measurable function onto and τ and υ are finite positive integers. If is α-mixing with for some , then is also α-mixing with .
Lemma 3
(Proposition 2.5 in [51]). Let , . If and for some and , then
Lemma 4
(Lemma 1.4 in [52]). For some , let be a mean zero α-mixing sequence with for all and . Then
Lemma 5
(Corollary 1 in [47] and Theorem 0 in [48]). For some , let be an α-mixing sequence with for all , and . For , denote and suppose
Then
Furthermore,
where for , and is a Wiener process (standard Brownian motion). Then, for ,
where is a standard Brownian bridge.
Lemma 6.
Proof of Theorem 1.
By and , we have . Then, by Lemma 4 with and , we have
which implies
In addition, we apply (33) in Lemma 6 and obtain that
Proof of Lemma 1.
First, we prove that in (10) converges absolutely. Obviously, implies for some . Then, by the second-order stationarity of the α-mixing sequence , we apply Lemma 3 with , , and , and obtain that
Now, we consider the term in (39). Obviously, by the second-order stationarity of , (14), and , we obtain that
where . Combining with (37), we have
By , we have for some . In addition, by the second-order stationarity of the α-mixing sequence with , and , we apply Lemma 4 and obtain that
which implies
Thus, we have
By Lemma 3, it can be seen that are α-mixing random variables with the same mixing coefficients. Thus, by Lemma 4 with and , we establish that
which implies
Meanwhile, by , the Hölder inequality and , we have
By (42),
Similar to the proof of (44), we have
Similarly,
Next, we prove the right-hand side of (17). Similar to (39), by (10) and (16), it follows that
where as . Similar to the proof of (38), by Lemma 3 with and , it follows
Thus, we have
proving as .
By the null hypothesis defined by (5), we have , . Then, are also α-mixing random variables with the same mixing coefficients. Similar to the proof of (43), by Lemma 4 with and , we obtain
which implies
Next, we consider the term . We can check that
where . Combining with , it can be checked that
According to Lemma 2, it is easy to see that are α-mixing random variables with the same mixing coefficients. Then, similar to the proof of (43), by Lemma 4 with and , we obtain
which implies
Proof of Lemma 6.
By the Cramér–Wold device, it is sufficient to show that
We rewrite , where
Obviously, is also a mean zero sequence of α-mixing random variables with the same mixing coefficients. By the null hypothesis defined by (5) and Assumptions 1 and 2, it is easy to check that
Author Contributions
Supervision W.Y.; software M.G.; writing–original draft preparation, X.S., X.W., and W.Y. All authors have read and agreed to the published version of the manuscript.
Funding
Yang’s work was funded by NSF of Anhui Province (2008085MA14, 2108085MA06), Quality Engineering Project of Anhui University (2023xjzlgc232); Shi’s work was supported by the NSERC Discovery Grant RGPIN 2022-03264, the Interior Universities Research Coalition and the BC Ministry of Health, and the University of British Columbia Okanagan (UBC-O) Vice Principal Research in collaboration with the UBC-O Irving K. Barber Faculty of Science.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Shewhart, W.A. The application of statistics as an aid in maintaining quality of a manufactured product. J. Amer. Statist. Assoc. 1925, 20, 546–548.
- Page, E.S. Continuous inspection schemes. Biometrika 1954, 41, 100–115.
- Antoch, J.; Hušková, M.; Veraverbeke, N. Change-point problem and bootstrap. J. Nonparametr. Stat. 1995, 5, 123–144.
- Bai, J. Least squares estimation of a shift in linear processes. J. Time Series Anal. 1994, 15, 453–472.
- Inclán, C.; Tiao, G. Use of cumulative sums of squares for retrospective detection of changes of variance. J. Amer. Statist. Assoc. 1994, 89, 913–923.
- Gombay, E.; Horváth, L.; Hušková, M. Estimators and tests for change in variances. Statist. Decis. 1996, 14, 145–159.
- Csörgő, M.; Horváth, L. Limit Theorems in Change-Point Analysis; Wiley: Chichester, UK, 1997; pp. 170–180.
- Chen, J.; Gupta, A. Parametric Statistical Change Point Analysis, with Applications to Genetics, Medicine and Finance, 2nd ed.; Birkhäuser: Boston, MA, USA, 2012; pp. 1–30.
- Shiryaev, A. On stochastic models and optimal methods in the quickest detection problems. Theory Probab. Appl. 2009, 53, 385–401.
- Shiryaev, A. Stochastic Disorder Problems; Springer: Berlin/Heidelberg, Germany, 2019; pp. 367–388.
- Rosenblatt, M. A central limit theorem and a strong mixing condition. Proc. Natl. Acad. Sci. USA 1956, 42, 43–47.
- Killick, R.; Eckley, I.A. changepoint: An R Package for Changepoint Analysis. J. Stat. Softw. 2014, 58, 1–19.
- Meier, A.; Kirch, C.; Cho, H. mosum: A Package for Moving Sums in Change-Point Analysis. J. Stat. Softw. 2021, 97, 1–42.
- Kokoszka, P.; Leipus, R. Change-point in the mean of dependent observations. Statist. Probab. Lett. 1998, 40, 385–393.
- Shi, X.P.; Wu, Y.H.; Miao, B.Q. Strong convergence rate of estimators of change-point and its application. Comput. Statist. Data Anal. 2009, 53, 990–998.
- Ding, S.S.; Fang, H.Y.; Dong, X.; Yang, W.Z. The CUSUM statistics of change-point models based on dependent sequences. J. Appl. Stat. 2022, 49, 2593–2611.
- Zhou, J.; Liu, S.Y. Inference for mean change-point in infinite variance AR(p) process. Stat. Probab. Lett. 2009, 79, 6–15.
- Shao, X.; Zhang, X. Testing for change points in time series. J. Amer. Statist. Assoc. 2010, 105, 1228–1240.
- Shao, X. Self-normalization for time series, a review of recent developments. J. Amer. Statist. Assoc. 2015, 110, 1797–1817.
- Tsay, R. Outliers, level shifts and variance changes in time series. J. Forecast. 1988, 7, 1–20.
- Yang, W.Z.; Liu, H.S.; Wang, Y.W.; Wang, X.J. Data-driven estimation of change-points with mean shift. J. Korean Statist. Soc. 2023, 52, 130–153.
- Bai, J. Common breaks in means and variance for panel data. J. Econom. 2010, 157, 78–92.
- Horváth, L.; Hušková, M. Change-point detection in panel data. J. Time Ser. Anal. 2012, 33, 631–648.
- Cho, H. Change-point detection in panel data via double CUSUM statistic. Electron. J. Stat. 2016, 10, 2000–2038.
- Chen, J.; Gupta, A. Testing and locating variance change points with application to stock prices. J. Amer. Statist. Assoc. 1997, 92, 739–747.
- Lee, S.; Park, S. The cusum of squares test for scale changes in infinite order moving average processes. Scand. J. Stat. 2001, 28, 625–644.
- Xu, M.; Wu, Y.; Jin, B. Detection of a change-point in variance by a weighted sum of powers of variances test. J. Appl. Stat. 2019, 46, 664–679.
- Berkes, I.; Gombay, E.; Horvath, L. Testing for changes in the covariance structure of linear processes. J. Stat. Plan. Inf. 2009, 139, 2044–2063.
- Lee, S.; Ha, J.; Na, O. The cusum test for parameter change time series models. Scand. J. Stat. 2003, 30, 781–796.
- Vexler, A. Guaranteed testing for epidemic changes of a linear regression model. J. Stat. Plann. Inference 2006, 136, 3101–3120.
- Jin, B.S.; Wu, Y.H.; Shi, X.P. Consistent two-stage multiple change-point detection in linear models. Canad. J. Statist. 2016, 44, 161–179.
- Gurevich, G. Optimal properties of parametric Shiryaev-Roberts statistical control procedures. Comput. Model. New Technol. 2013, 17, 37–50.
- Aue, A.; Hörmann, S.; Horváth, L.; Reimherr, M. Break detection in the covariance structure of multivariate time series models. Ann. Statist. 2009, 37, 4046–4087.
- Cho, H.; Kirch, C. Two-stage data segmentation permitting multiscale change points, heavy tails and dependence. Ann. Inst. Statist. Math. 2022, 74, 653–684.
- Niu, Y.; Hao, N.; Zhang, H. Multiple change-point detection, a selective overview. Statist. Sci. 2016, 31, 611–623.
- Korkas, K.; Fryzlewicz, P. Multiple change-point detection for non-stationary time series using wild binary segmentation. Statist. Sinica 2017, 27, 287–311.
- Shi, X.P.; Wu, Y.H.; Rao, C.R. Consistent and powerful graph-based change-point test for high-dimensional data. Proc. Natl. Acad. Sci. USA 2017, 114, 3873–3878.
- Shi, X.P.; Wang, X.-S.; Reid, N. A New Class of Weighted CUSUM Statistics. Entropy 2022, 24, 1652.
- Chen, F.; Mamon, R.; Nkurunziza, S. Inference for a change-point problem under a generalised Ornstein-Uhlenbeck setting. Ann. Inst. Statist. Math. 2018, 70, 807–853.
- Zamba, K.D.; Hawkins, D.M. A multivariate change-point model for change in mean vector and/or covariance structure. J. Qual. Technol. 2009, 41, 285–303.
- Oh, H.; Lee, S. On score vector-and residual-based CUSUM tests in ARMA-GARCH models. Stat. Methods Appl. 2018, 27, 385–406.
- Jäntschi, L. A test detecting the outliers for continuous distributions based on the cumulative distribution function of the data being tested. Symmetry 2019, 11, 835.
- William, K.; Isidore, N. Inference for nonstationary time series of counts with application to change-point problems. Ann. Inst. Statist. Math. 2022, 74, 801–835.
- Arrouch, M.S.E.; Elharfaoui, E.; Ngatchou-Wandji, J. Change-Point Detection in the Volatility of Conditional Heteroscedastic Autoregressive Nonlinear Models. Mathematics 2023, 11, 4018.
- Hall, P.; Heyde, C.C. Martingale Limit Theory and Its Application; Academic Press Inc.: New York, NY, USA, 1980.
- Lin, Z.Y.; Lu, C.R. Limit Theory for Mixing Dependent Random Variables; Science Press: Beijing, China, 1997.
- Withers, C.S. Central limit theorems for dependent variables. Z. Wahrsch. Verw. Gebiete 1981, 57, 509–534.
- Herrndorf, N. A Functional Central Limit Theorem for Strongly Mixing Sequences of Random Variables. Z. Wahrsch. Verw. Gebiete 1985, 69, 541–550.
- White, H.; Domowitz, I. Nonlinear regression with dependent observations. Econometrica 1984, 52, 143–162.
- Györfi, L.; Härdle, W.; Sarda, P.; Vieu, P. Nonparametric Curve Estimation from Time Series; Springer: Berlin/Heidelberg, Germany, 1989.
- Fan, J.Q.; Yao, Q.W. Nonlinear Time Series. Nonparametric and Parametric Methods; Springer: New York, NY, USA, 2003.
- Yang, W.Z.; Wang, Y.W.; Hu, S.H. Some probability inequalities of least-squares estimator in non linear regression model with strong mixing errors. Comm. Statist. Theory Methods 2017, 46, 165–175.
- Billingsley, P. Convergence of Probability Measures; John Wiley & Sons, Inc.: New York, NY, USA, 1968.
- Kiefer, J. K-sample analogues of the Kolmogorov-Smirnov and Cramér-v. Mises tests. Ann. Math. Statist. 1959, 30, 420–447.
- Bolboacă, S.D.; Jäntschi, L. Predictivity approach for quantitative structure-property models. Application for blood-brain barrier permeation of diverse drug-like compounds. Int. J. Mol. Sci. 2011, 12, 4348–4364.
- Truong, C.; Oudre, L.; Vayatis, N. Selective review of offline change-point detection methods. Signal Process. 2020, 167, 107299.
- Balke, N. Detecting level shifts in time series. J. Bus. Econom. Statist. 1993, 11, 81–92.
- Zeileis, A.; Kleiber, C.; Krämer, W.; Hornik, K. Testing and dating of structural changes in practice. Comput. Statist. Data Anal. 2003, 44, 109–123.
- Garcia, R.; Perron, P. An analysis of the real interest rate under regime shifts. Rev. Econom. Statist. 1996, 78, 111–125.
- Zeileis, A.; Leisch, F.; Hornik, K.; Kleiber, C. strucchange: An R Package for Testing for Structural Change in Linear Regression Models. J. Stat. Softw. 2002, 7, 1–38.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).