# Time Series Segmentation Based on Stationarity Analysis to Improve New Samples Prediction


## Abstract


## 1. Introduction

- A new proposal for time series segmentation based on stationarity, named ADF-based segmentation.
- A framework to perform segmentation of time series based on stationarity using change detection algorithms (e.g., Page-Hinkley (PH) and ADWIN (ADW)), called Change Detector segmentation, together with three techniques to tune the hyperparameters of the change detection algorithms, called Bigger in Smaller out, Bigger in, and Smaller out.
- An analysis of the improvement in the predictive capacity of time series achieved by segmentation based on stationarity analysis.

## 2. Preliminaries

#### 2.1. Segmentation Process

**Definition 1.** A time series can be defined as a set of sequential data, ordered in time [2]. It can be collected at equally spaced time points, and we use the notation ${y}_{t}$ with $(t=\dots ,-1,0,1,2,\dots )$, i.e., the set of observations is indexed by t, representing the time at which each observation was taken. If the data were not collected at equally spaced times, we index them with $i=1,2,\dots$, so that $({t}_{i}-{t}_{i-1})$ is not necessarily equal to one [26].

**Definition 2.** According to the definition of random processes [27], a discrete-time or continuous-time random process $X\left(t\right)$ is stationary if the joint distribution of any set of samples does not depend on the placement of the time origin. This means that the joint cumulative distribution function of $X\left({t}_{1}\right), X\left({t}_{2}\right), \dots, X\left({t}_{k}\right)$ is the same as that of $X({t}_{1}+\tau ), X({t}_{2}+\tau ), \dots, X({t}_{k}+\tau )$ for all time shifts τ, all k, and all choices of sample times ${t}_{1},\dots ,{t}_{k}$.

#### 2.2. Augmented Dickey–Fuller Test
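The section's central tool is the ADF test. As a rough illustration of the underlying idea (not the test actually used, which adds lagged difference terms, an intercept, and MacKinnon critical values), the sketch below computes the t-statistic of the simplest, non-augmented Dickey–Fuller regression $\Delta y_t = \gamma y_{t-1} + \varepsilon_t$ in pure Python; strongly negative statistics speak against a unit root, i.e., in favor of stationarity. In practice a library routine such as `adfuller` from statsmodels would be used.

```python
import math

def dickey_fuller_t(y):
    """t-statistic of gamma in the no-intercept regression
    diff(y)_t = gamma * y_{t-1} + e_t.
    Strongly negative values suggest stationarity (no unit root).
    Simplified, non-augmented variant for illustration only."""
    lagged = y[:-1]
    diff = [b - a for a, b in zip(y, y[1:])]
    sxx = sum(x * x for x in lagged)
    gamma = sum(x * d for x, d in zip(lagged, diff)) / sxx
    resid = [d - gamma * x for x, d in zip(lagged, diff)]
    n = len(diff)
    s2 = sum(e * e for e in resid) / (n - 1)  # residual variance
    se = math.sqrt(s2 / sxx)                  # standard error of gamma
    return gamma / se

# Toy series: alternating around zero (stationary) vs. a
# deterministic upward trend (non-stationary).
stationary = [(-1) ** t * (1 + 0.001 * t) for t in range(200)]
nonstationary = [0.1 * t for t in range(1, 201)]
```

For the alternating series the statistic is strongly negative (unit root rejected), while for the trending series it is not.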

#### 2.3. Change Detection

**Definition 3.** Change detection is usually applied to a data stream, an infinite sequence of data that can be represented by $S=\{({x}_{1},{y}_{1}),({x}_{2},{y}_{2}),\dots ,({x}_{t},{y}_{t}),\dots \}$. Each instance is a pair $({x}_{t},{y}_{t})$, where ${x}_{t}$ is a d-dimensional vector arriving at time stamp t and ${y}_{t}$ is the class label of ${x}_{t}$ [29]. In the case of univariate time series, the change detection algorithm analyzes each sample X of the time series at time t. It expects that a given point ${X}_{t+n}$, where n is a time shift, has a distribution similar to that of ${X}_{t}$; otherwise, an alarm is triggered.

## 3. Related Work

## 4. Proposed Approach

#### 4.1. Change Detector Segmentation Framework

Algorithm 1: Change Detector Segmentation

#### 4.1.1. Global ADF Test

#### 4.1.2. Windowing

#### 4.1.3. Local ADF Test

#### 4.1.4. Thresholding

#### 4.1.5. Tuning

- **Bigger in Smaller out:** this strategy chooses the hyperparameter configuration that keeps the largest amount of data within the critical intervals and the least amount of data outside these intervals. This approach balances the selection of samples present in the critical interval with the least number of samples outside it.
- **Bigger in:** this strategy chooses the hyperparameter configuration that keeps the largest amount of data within the critical interval. This approach places the greatest emphasis on removing samples at critical intervals.
- **Smaller out:** this strategy chooses the hyperparameter configuration that keeps the least amount of data outside the critical interval. This approach minimizes the selection of samples outside the critical intervals.
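The three strategies can be read as different objective functions over candidate configurations. The sketch below is a hypothetical rendering (the exact scoring rule is not spelled out here): each candidate is summarized by the number of samples it places inside (`n_in`) and outside (`n_out`) the critical intervals, and Bigger in Smaller out is approximated as maximizing their difference.

```python
def select_config(candidates, strategy):
    """Pick a hyperparameter configuration according to one of the
    three tuning strategies. `candidates` is a list of dicts with
    keys 'config', 'n_in' (samples inside critical intervals) and
    'n_out' (samples outside them)."""
    if strategy == "bigger_in_smaller_out":
        key = lambda c: c["n_in"] - c["n_out"]   # balance both goals
    elif strategy == "bigger_in":
        key = lambda c: c["n_in"]                # maximize samples inside
    elif strategy == "smaller_out":
        key = lambda c: -c["n_out"]              # minimize samples outside
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return max(candidates, key=key)["config"]

# Hypothetical candidates for a detector threshold "lambda".
candidates = [
    {"config": {"lambda": 50}, "n_in": 80, "n_out": 30},
    {"config": {"lambda": 100}, "n_in": 60, "n_out": 5},
    {"config": {"lambda": 200}, "n_in": 90, "n_out": 60},
]
```

Each strategy can favor a different candidate: Bigger in picks the configuration covering the most critical-interval samples, Smaller out the one leaking the fewest samples, and the balanced strategy trades the two off.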

#### 4.1.6. Change Detectors

#### 4.1.7. Removing Samples
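The steps named in Sections 4.1.1–4.1.7 compose a pipeline: a global ADF test, windowing, a local ADF test per window, thresholding, hyperparameter tuning, change detection, and sample removal. The sketch below is only a structural illustration of how such steps might compose, with toy stand-ins for the local stationarity test and the tuned change detector; it is not the authors' Algorithm 1.

```python
def segment(series, window, detector, stat_test, threshold):
    """Schematic composition of the framework's steps: window the
    series, run a local stationarity statistic per window, and in
    windows whose statistic crosses the threshold keep only the
    samples the change detector does not flag. `stat_test` and
    `detector` are stand-ins for the local ADF test and the tuned
    PH/ADWIN detector, respectively."""
    kept = []
    for start in range(0, len(series), window):
        chunk = series[start:start + window]
        if stat_test(chunk) <= threshold:
            kept.extend(chunk)                  # window looks stationary
        else:
            kept.extend(x for x in chunk if not detector(x))
    return kept

# Toy run: a flat series with two spikes in the last window.
series = [0.0] * 8 + [9.0, 0.0, 9.0, 0.0]
out = segment(
    series,
    window=4,
    detector=lambda x: x > 5,             # toy change rule
    stat_test=lambda c: max(c) - min(c),  # toy dispersion statistic
    threshold=1.0,
)
```

The two flat windows pass through untouched, while the spiky window has its flagged samples removed.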

#### 4.2. ADF-Based Segmentation

Algorithm 2: ADF-based Segmentation

## 5. Experimental Study

- RQ1: Can time series segmentation based on stationarity analysis assist in the prediction of new samples?
- RQ2: Is stationarity-based segmentation capable of improving prediction and reducing time series sizes across different databases?
- RQ3: How do the segmentation techniques proposed in this work compare with similar existing techniques?
- RQ4: How correlated is stationarity with the results of the segmentation techniques?

#### 5.1. Experimental Setup

#### 5.1.1. Databases

#### 5.1.2. Change Detectors

#### 5.1.3. Metrics
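The headline metric of the experiments is the RMSE; Table 5 reports it relative to the RMSE obtained on the original, unsegmented series, so values below 1 indicate that segmentation reduced the prediction error. A minimal sketch of both quantities:

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Squared Error between observed and predicted values."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def relative_rmse(rmse_segmented, rmse_original):
    """Relative RMSE as reported in Table 5: values below 1 mean the
    segmented series yields lower prediction error than the original."""
    return rmse_segmented / rmse_original
```

For example, `rmse([0, 0], [3, 4])` is the square root of (9 + 16) / 2, about 3.54.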

#### 5.1.4. Prediction Techniques

#### 5.2. OSTS Method

#### 5.3. Experimental Results

#### 5.3.1. Predictions Results

#### 5.3.2. Time Series Size Reduction

#### 5.3.3. Sample Segmentation

#### 5.4. Statistical Analysis

#### 5.5. Stationarity Impact on Segmentation

#### 5.6. Limitations

## 6. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Conflicts of Interest

## Abbreviations

Abbreviation | Meaning
---|---
IoT | Internet of Things
ADF | Augmented Dickey–Fuller
DL | Deep Learning
LSTM | Long Short-Term Memory
TCN | Temporal Convolutional Network
PH | Page-Hinkley
ADW | ADWIN (Adaptive Windowing)
RQ | Research Question
PV | Photovoltaic Plant
MDT | Minimum Daily Temperatures
MS | Monthly Sunspot
RMSE | Root Mean Squared Error

## References

1. Bezerra, V.H.; da Costa, V.G.T.; Barbon Junior, S.; Miani, R.S.; Zarpelão, B.B. IoTDS: A One-Class Classification Approach to Detect Botnets in Internet of Things Devices. Sensors **2019**, 19, 3188.
2. Box, G.E.P.; Jenkins, G.M. Time Series Analysis: Forecasting and Control, 3rd ed.; Prentice Hall PTR: Hoboken, NJ, USA, 1994.
3. Keogh, E.; Chu, S.; Hart, D.; Pazzani, M. Segmenting time series: A survey and novel approach. In Data Mining in Time Series Databases; World Scientific: Singapore, 2004; pp. 1–21.
4. Aminikhanghahi, S.; Cook, D.J. A survey of methods for time series change point detection. Knowl. Inf. Syst. **2017**, 51, 339–367.
5. Barzegar, V.; Laflamme, S.; Hu, C.; Dodson, J. Multi-Time Resolution Ensemble LSTMs for Enhanced Feature Extraction in High-Rate Time Series. Sensors **2021**, 21, 1954.
6. Lee, W.; Ortiz, J.; Ko, B.; Lee, R.B. Time Series Segmentation through Automatic Feature Learning. arXiv **2018**, arXiv:1801.05394.
7. Byakatonda, J.; Parida, B.; Kenabatho, P.K.; Moalafhi, D. Analysis of rainfall and temperature time series to detect long-term climatic trends and variability over semi-arid Botswana. J. Earth Syst. Sci. **2018**, 127, 25.
8. Pavlyshenko, B.M. Machine-learning models for sales time series forecasting. Data **2019**, 4, 15.
9. Shi, B.; Zhang, Y.; Yuan, C.; Wang, S.; Li, P. Entropy analysis of short-term heartbeat interval time series during regular walking. Entropy **2017**, 19, 568.
10. Junior, S.B.; Costa, V.G.T.; Chen, S.H.; Guido, R.C. U-healthcare system for pre-diagnosis of Parkinson’s disease from voice signal. In Proceedings of the IEEE International Symposium on Multimedia (ISM), Taichung, Taiwan, 10–12 December 2018; pp. 271–274.
11. Fonseca, E.S.; Guido, R.C.; Junior, S.B.; Dezani, H.; Gati, R.R.; Pereira, D.C.M. Acoustic investigation of speech pathologies based on the discriminative paraconsistent machine (DPM). Biomed. Signal Process. Control **2020**, 55, 101615.
12. Pena, E.H.; Carvalho, L.F.; Barbon, S., Jr.; Rodrigues, J.J.; Proença, M.L., Jr. Anomaly detection using the correlational paraconsistent machine with digital signatures of network segment. Inf. Sci. **2017**, 420, 313–328.
13. Idrees, S.M.; Alam, M.A.; Agarwal, P. A prediction approach for stock market volatility based on time series data. IEEE Access **2019**, 7, 17287–17298.
14. Mahalakshmi, G.; Sridevi, S.; Rajaram, S. A survey on forecasting of time series data. In Proceedings of the 2016 International Conference on Computing Technologies and Intelligent Data Engineering (ICCTIDE’16), Kovilpatti, India, 7–9 January 2016; pp. 1–8.
15. Siami-Namini, S.; Tavakoli, N.; Namin, A.S. A comparison of ARIMA and LSTM in forecasting time series. In Proceedings of the 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 17–20 December 2018; pp. 1394–1401.
16. Santana, E.J.; Silva, R.P.; Zarpelão, B.B.; Junior, S.B. Photovoltaic Generation Forecast: Model Training and Adversarial Attack Aspects. In Intelligent Systems, Proceedings of the 9th Brazilian Conference, BRACIS 2020, Rio Grande, Brazil, 20–23 October 2020; Cerri, R., Prati, R.C., Eds.; Springer: Berlin/Heidelberg, Germany, 2020; Volume 12320.
17. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. **1997**, 9, 1735–1780.
18. Breed, G.A.; Costa, D.P.; Goebel, M.E.; Robinson, P.W. Electronic tracking tag programming is critical to data collection for behavioral time-series analysis. Ecosphere **2011**, 2, 1–12.
19. Fu, T.C. A review on time series data mining. Eng. Appl. Artif. Intell. **2011**, 24, 164–181.
20. Jamali, S.; Jönsson, P.; Eklundh, L.; Ardö, J.; Seaquist, J. Detecting changes in vegetation trends using time series segmentation. Remote Sens. Environ. **2015**, 156, 182–195.
21. Cheung, Y.W.; Lai, K.S. Lag order and critical values of the augmented Dickey–Fuller test. J. Bus. Econ. Stat. **1995**, 13, 277–280.
22. Carmona-Poyato, A.; Fernández-García, N.; Madrid-Cuevas, F.; Durán-Rosal, A. A new approach for optimal time-series segmentation. Pattern Recognit. Lett. **2020**, 135, 153–159.
23. Bessec, M.; Fouquau, J.; Meritet, S. Forecasting electricity spot prices using time-series models with a double temporal segmentation. Appl. Econ. **2015**, 48, 1–18.
24. Box, G.E.P.; Jenkins, G. Time Series Analysis, Forecasting and Control; Holden-Day, Inc.: Hoboken, NJ, USA, 1990.
25. Bai, S.; Kolter, J.Z.; Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv **2018**, arXiv:1803.01271.
26. Prado, R.; West, M. Time Series Modelling, Inference and Forecasting. 2010. Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.325.8477&rep=rep1&type=pdf (accessed on 31 August 2021).
27. Leon-Garcia, A. Probability and Random Processes for Electrical Engineering; Pearson Education: Upper Saddle River, NJ, USA, 2008.
28. MacKinnon, J.G. Approximate Asymptotic Distribution Functions for Unit-Root and Cointegration Tests. J. Bus. Econ. Stat. **1994**, 12, 167–176.
29. Sun, Y.; Wang, Z.; Liu, H.; Du, C.; Yuan, J. Online ensemble using adaptive windowing for data streams with concept drift. Int. J. Distrib. Sens. Netw. **2016**, 12, 4218973.
30. Ceravolo, P.; Marques Tavares, G.; Junior, S.B.; Damiani, E. Evaluation Goals for Online Process Mining: A Concept Drift Perspective. IEEE Trans. Serv. Comput. **2020**, 1.
31. Cano, A.; Krawczyk, B. Kappa Updated Ensemble for Drifting Data Stream Mining. Mach. Learn. **2020**, 109, 175–218.
32. Suradhaniwar, S.; Kar, S.; Durbha, S.S.; Jagarlapudi, A. Time Series Forecasting of Univariate Agrometeorological Data: A Comparative Performance Evaluation via One-Step and Multi-Step Ahead Forecasting Strategies. Sensors **2021**, 21, 2430.
33. Poghosyan, A.; Harutyunyan, A.; Grigoryan, N.; Pang, C.; Oganesyan, G.; Ghazaryan, S.; Hovhannisyan, N. An Enterprise Time Series Forecasting System for Cloud Applications Using Transfer Learning. Sensors **2021**, 21, 1590.
34. Hooi, B.; Liu, S.; Smailagic, A.; Faloutsos, C. BeatLex: Summarizing and Forecasting Time Series with Patterns. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2017), Skopje, Macedonia, 18–22 September 2017; pp. 3–19.
35. Gahrooei, M.R.; Paynabar, K. Change detection in a dynamic stream of attributed networks. J. Qual. Technol. **2018**, 50, 418–430.
36. Gil-Alana, L.A. Long memory behaviour in the daily maximum and minimum temperatures in Melbourne, Australia. Meteorol. Appl. **2004**, 11, 319–328.
37. Andrews, D.F.; Herzberg, A.M. Monthly Mean Sunspot Numbers. In Data: Springer Series in Statistics; Springer: New York, NY, USA, 1985; Volume 35, pp. 213–216.
38. Bifet, A.; Gavalda, R. Learning from time-changing data with adaptive windowing. In Proceedings of the SIAM International Conference on Data Mining, Minneapolis, MN, USA, 26–28 April 2007; pp. 443–448.
39. Page, E.S. Continuous inspection schemes. Biometrika **1954**, 41, 100–115.
40. Montiel, J.; Read, J.; Bifet, A.; Abdessalem, T. Scikit-Multiflow: A Multi-output Streaming Framework. J. Mach. Learn. Res. **2018**, 19, 1–5.
41. Ho, S.; Xie, M. The use of ARIMA models for reliability forecasting and analysis. Comput. Ind. Eng. **1998**, 35, 213–216.
42. Pena, E.H.; Barbon, S.; Rodrigues, J.J.; Proença, M.L. Anomaly detection using digital signature of network segment with adaptive ARIMA model and Paraconsistent Logic. In Proceedings of the IEEE Symposium on Computers and Communications (ISCC), Funchal, Portugal, 23–26 June 2014; pp. 1–6.
43. Akhter, M.N.; Mekhilef, S.; Mokhlis, H.; Shah, N.M. Review on forecasting of photovoltaic power generation based on machine learning and metaheuristic techniques. IET Renew. Power Gener. **2019**, 13, 1009–1023.
44. Cerqueira, V.; Torgo, L.; Soares, C. Machine Learning vs Statistical Methods for Time Series Forecasting: Size Matters. arXiv **2019**, arXiv:1909.13316.
45. Lea, C.; Vidal, R.; Reiter, A.; Hager, G. Temporal convolutional networks: A unified approach to action segmentation. In Proceedings of the ECCV Workshops, Amsterdam, The Netherlands, 8–10 and 15–16 October 2016; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2016; pp. 47–54.

**Figure 3.** Time series size reduction according to segmentation methods ADF, ADW, PH, and OSTS. (**a**) Database A; (**b**) database B; (**c**) database C; (**d**) database D; (**e**) database E; (**f**) database F; (**g**) database G; (**h**) database H; (**i**) database I; (**j**) database J.

**Figure 4.** Segmented samples of dataset A (chunk from samples 500 to 560). ADF removed 2 samples, PH and ADW removed a single sample, and OSTS removed 17. The stationarity-based segmenters considered the range from 515 to 543 as stable, with no need for segmentation, while the OSTS segmenter performed many segmentations at these points.

**Figure 5.** Segmented samples of dataset C (chunk from samples 520 to 580). ADF removed 19 samples, PH removed 2 samples, ADW removed a single sample, and OSTS removed 18. Unlike the case of dataset A, the ADF segmenter performed many segmentations like OSTS, but in different regions, while the other segmenters considered the region as stable.

**Figure 6.** Comparison of the RMSE values obtained by segmentation techniques for the Naive predictor according to the Nemenyi test. Groups that are not significantly different ($\alpha =0.05$ and $CD=1.04$) are connected.

**Figure 7.** Comparison of the RMSE values obtained by segmentation techniques for the ARIMA predictor according to the Nemenyi test. Groups that are not significantly different ($\alpha =0.05$ and $CD=1.03$) are connected.

**Figure 8.** Comparison of the RMSE values obtained by segmentation techniques for the LSTM predictor according to the Nemenyi test. Groups that are not significantly different ($\alpha =0.05$ and $CD=1.04$) are connected.

**Figure 9.** Comparison of the RMSE values obtained by segmentation techniques for the TCN predictor according to the Nemenyi test. Groups that are not significantly different ($\alpha =0.05$ and $CD=1.03$) are connected.

**Figure 10.** Correlation between stationarity (based on the ADF test) and performance (rRMSE) obtained by the segmentation methods.

**Figure 11.** Correlation between stationarity (based on the ADF test) and the dimension reduction delivered by the segmentation methods.

Reference | Segmentation Technique | Purpose of Segmentation
---|---|---
Carmona-Poyato et al. (2020) [22] | Based on the A* algorithm with optimal polygonal approximations | Data representation, reducing dimensionality with minimum information loss
Lee et al. (2018) [6] | Unsupervised approach based on deep learning | Automatic knowledge extraction
Hooi et al. (2017) [34] | BeatLex, based on patterns to match segments of the time series | Vocabulary-based approach to match segments of the time series in an intuitive and interpretable way
Bessec et al. (2015) [23] | Temporal segmentation based on hourly and seasonal segmentation | Forecast spot prices in France with a double temporal segmentation
Jamali et al. (2015) [20] | Temporal segmentation based on thresholds of time series features | Segment changes in vegetation time series to identify the change type and its characteristics
Keogh et al. (2004) [3] | Sliding windows, Bottom-up, Top-Down, and SWAB | Empirical comparison of time series segmentation algorithms from a data mining perspective

Identifier | Database | Train Interval | ADF Value | Test Interval
---|---|---|---|---
A | PV | November to December | −16.57 | 4 weeks of January
B | PV | November to January | −19.24 | March
C | PV | November to February | −21.81 | March
D | PV | January to February | −16.36 | 4 weeks of March
E | PV | February | −11.57 | 4 weeks of March
F | PV | February | −11.57 | Days in March
G | MDT | 1981 to 1984 | −3.14 | 1985
H | MDT | 1986 to 1989 | −2.59 | 1990
I | MDT | 1981 to 1989 | −4.34 | 1990
J | MS | 1749 to 1899 | −7.04 | 1900 to 1983

Model | Parameter | Experimented Hyperparameters
---|---|---
LSTM | Number of stacked layers | 1, 2, 3
LSTM | Units | 32, 64, 128
LSTM | Dropout | 0
TCN | Number of filters | 32, 64
TCN | Kernel | 2, 3
TCN | Dilations | [1, 4, 12, 48], [1, 2, 4, 8, 12, 24, 48], [1, 4, 16, 32], [1, 2, 4, 8, 16, 32], [1, 3, 6, 12, 24], [1, 2, 6, 12, 24], [1, 2, 4, 8, 16], [1, 4, 16], [1, 2, 4, 8], [1, 4, 8]
TCN | Blocks | 1, 2
TCN | Dropout | 0

Method | Naive | ARIMA | LSTM | TCN
---|---|---|---|---
ADF | 18,923.65 | 13,559.87 | 3193.70 | 3242.92
ADW | 18,923.66 | 13,532.41 | 3177.46 | 3294.75
PH | 18,923.65 | 13,727.95 | 3408.02 | 3844.67
OSTS | 18,923.65 | 14,239.36 | 3482.60 | 3952.74
Original | 20,006.11 | 13,537.78 | 3229.33 | 3276.01

**Table 5.** Relative RMSE of each segmentation technique with respect to the original series, across four predictive techniques, using the ADF-based method, ADW, and PH (i.e., instances of our segmentation framework), and the OSTS segmentation method. Lower errors are in bold and worst average cases are underlined.

Database Identifier | Segmentation Technique | Naive | ARIMA | LSTM | TCN | Average
---|---|---|---|---|---|---
A | ADF | 0.99 | 0.99 | 0.97 | 0.99 | 0.98
 | ADW | 0.99 | 0.99 | 1.00 | 0.99 | 0.99
 | PH | 0.99 | 0.99 | 1.03 | 0.99 | 1.00
 | OSTS | 1.00 | 1.06 | 1.01 | 0.99 | 1.01
B | ADF | 0.99 | 1.00 | 0.97 | 0.99 | 0.98
 | ADW | 1.00 | 0.99 | 0.99 | 0.98 | 0.99
 | PH | 0.99 | 0.99 | 0.98 | 0.99 | 0.98
 | OSTS | 1.08 | 1.12 | 1.00 | 1.02 | 1.05
C | ADF | 1.00 | 0.98 | 0.94 | 0.83 | 0.93
 | ADW | 1.00 | 1.02 | 0.96 | 0.85 | 0.95
 | PH | 1.00 | 1.02 | 0.96 | 0.81 | 0.94
 | OSTS | 0.98 | 1.03 | 0.99 | 1.10 | 1.02
D | ADF | 1.00 | 0.95 | 1.02 | 0.99 | 0.99
 | ADW | 1.00 | 0.98 | 1.03 | 0.97 | 0.99
 | PH | 0.99 | 0.98 | 1.01 | 0.90 | 0.97
 | OSTS | 1.00 | 1.00 | 1.34 | 1.30 | 1.16
E | ADF | 0.99 | 0.99 | 1.00 | 0.98 | 0.99
 | ADW | 1.00 | 0.99 | 1.01 | 0.99 | 0.99
 | PH | 0.99 | 0.99 | 1.02 | 0.96 | 0.99
 | OSTS | 1.00 | 1.01 | 1.26 | 1.34 | 1.15
F | ADF | 1.00 | 1.00 | 0.87 | 0.73 | 0.90
 | ADW | 0.99 | 0.99 | 0.67 | 0.86 | 0.88
 | PH | 0.99 | 1.00 | 0.74 | 1.20 | 0.98
 | OSTS | 0.99 | 1.03 | 0.93 | 1.45 | 1.10
G | ADF | 1.00 | 1.00 | 0.99 | 0.96 | 0.98
 | ADW | 1.00 | 0.99 | 0.99 | 0.92 | 0.97
 | PH | 1.00 | 1.00 | 0.99 | 0.95 | 0.98
 | OSTS | 1.00 | 1.22 | 1.05 | 1.17 | 1.11
H | ADF | 1.00 | 0.87 | 0.95 | 0.99 | 0.95
 | ADW | 1.00 | 0.99 | 0.95 | 0.99 | 0.98
 | PH | 1.00 | 0.93 | 0.96 | 0.96 | 0.96
 | OSTS | 1.00 | 1.24 | 1.00 | 1.12 | 1.11
I | ADF | 1.00 | 1.00 | 0.99 | 0.98 | 0.99
 | ADW | 1.00 | 0.99 | 1.00 | 1.00 | 0.99
 | PH | 1.00 | 0.99 | 0.99 | 0.99 | 0.99
 | OSTS | 1.00 | 0.99 | 1.00 | 1.04 | 1.00
J | ADF | 0.99 | 0.98 | 1.00 | 0.98 | 0.98
 | ADW | 0.99 | 0.98 | 1.00 | 0.97 | 0.98
 | PH | 0.99 | 1.00 | 1.00 | 1.00 | 0.99
 | OSTS | 0.99 | 0.99 | 1.22 | 1.13 | 1.08


© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Silva, R.P.; Zarpelão, B.B.; Cano, A.; Junior, S.B.
Time Series Segmentation Based on Stationarity Analysis to Improve New Samples Prediction. *Sensors* **2021**, *21*, 7333.
https://doi.org/10.3390/s21217333
