# Automatic Grouping in Singular Spectrum Analysis


## Abstract


## 1. Introduction

## 2. Review of SSA

## 3. Theoretical Background

#### 3.1. Distances Based on Matrix Norms

**The Frobenius norm.** The most frequently used matrix norm is the Frobenius norm, defined as
$$\|\mathbf{A}\|_{F}=\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}a_{ij}^{2}}.$$

**The $L_1$-norm.** The $L_1$-norm of the matrix $\mathbf{A}$ is defined as
$$\|\mathbf{A}\|_{L_1}=\sum_{i=1}^{m}\sum_{j=1}^{n}\left|a_{ij}\right|.$$

**The 1-norm.** The 1-norm of the matrix $\mathbf{A}$ is the maximum of the absolute column sums, that is,
$$\|\mathbf{A}\|_{1}=\max_{1\le j\le n}\sum_{i=1}^{m}\left|a_{ij}\right|.$$

**The infinity norm.** The infinity norm of the matrix $\mathbf{A}$ is the maximum of the absolute row sums, that is,
$$\|\mathbf{A}\|_{\infty}=\max_{1\le i\le m}\sum_{j=1}^{n}\left|a_{ij}\right|.$$

**The maximum modulus norm.** In this case, the maximum modulus of all the elements of the matrix $\mathbf{A}$ is computed, that is,
$$\|\mathbf{A}\|_{M}=\max_{\substack{1\le i\le m \\ 1\le j\le n}}\left|a_{ij}\right|.$$

**The 2-norm.** The spectral or 2-norm of the matrix $\mathbf{A}$ is denoted by $\|\mathbf{A}\|_{2}$. It can be shown that
$$\|\mathbf{A}\|_{2}=\text{the largest singular value of }\mathbf{A}=\sqrt{\text{the largest eigenvalue of }\mathbf{A}\mathbf{A}^{T}}.$$
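As a concrete check on these definitions, the norms above can be computed directly with NumPy (a sketch with an arbitrary example matrix; the paper's own computations are done in `R`):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

fro = np.sqrt((A**2).sum())                 # Frobenius norm
l1_entrywise = np.abs(A).sum()              # entrywise L1-norm
one_norm = np.abs(A).sum(axis=0).max()      # 1-norm: max absolute column sum
inf_norm = np.abs(A).sum(axis=1).max()      # infinity norm: max absolute row sum
max_mod = np.abs(A).max()                   # maximum modulus norm
two_norm = np.linalg.svd(A, compute_uv=False)[0]  # 2-norm: largest singular value

# Cross-check against numpy.linalg.norm where it supports the same norm
assert np.isclose(fro, np.linalg.norm(A, 'fro'))
assert np.isclose(one_norm, np.linalg.norm(A, 1))
assert np.isclose(inf_norm, np.linalg.norm(A, np.inf))
assert np.isclose(two_norm, np.linalg.norm(A, 2))
```

Note that the entrywise $L_1$-norm and the maximum modulus norm are not induced operator norms, which is why they are computed element-wise rather than via `numpy.linalg.norm`.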

A central concept for assessing the separability of the components extracted by SSA is the **w**-correlation. It shows the quality of the decomposition and determines how well different components of a time series are separated from each other. The **w**-correlation between two time series ${Y}_{N}^{(1)}$ and ${Y}_{N}^{(2)}$ is defined as follows:
$$\rho_{12}^{(\mathbf{w})}=\frac{\left\langle Y_{N}^{(1)},Y_{N}^{(2)}\right\rangle_{\mathbf{w}}}{\left\Vert Y_{N}^{(1)}\right\Vert_{\mathbf{w}}\left\Vert Y_{N}^{(2)}\right\Vert_{\mathbf{w}}},$$
where $\left\langle Y_{N}^{(1)},Y_{N}^{(2)}\right\rangle_{\mathbf{w}}=\sum_{t=1}^{N}w_{t}\,y_{t}^{(1)}y_{t}^{(2)}$, $\left\Vert Y_{N}^{(i)}\right\Vert_{\mathbf{w}}=\sqrt{\left\langle Y_{N}^{(i)},Y_{N}^{(i)}\right\rangle_{\mathbf{w}}}$, and the weights are $w_{t}=\min\{t,L,N-t+1\}$.

We define the **w**-correlation-based distance between two components ${\mathbf{X}}_{i}$ and ${\mathbf{X}}_{j}$ as ${d}_{ij}=1-\left|{\rho}_{ij}^{(\mathbf{w})}\right|$. It is noteworthy that there is another **w**-correlation-based distance between two components ${\mathbf{X}}_{i}$ and ${\mathbf{X}}_{j}$, defined as ${d}_{ij}=\frac{1}{2}\left(1-{\rho}_{ij}^{(\mathbf{w})}\right)$. This distance measure, which is explained in [29] and employed in the `R` package `Rssa` [38,39,40], is not used in this paper.
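The definition above translates directly into code. A minimal NumPy sketch (the function name `wcorr` and the example series are ours, not from the paper):

```python
import numpy as np

def wcorr(y1, y2, L):
    """w-correlation between two series of equal length N,
    with weights w_t = min(t, L, N - t + 1), t = 1..N."""
    N = len(y1)
    t = np.arange(1, N + 1)
    w = np.minimum(np.minimum(t, L), N - t + 1)
    inner = np.sum(w * y1 * y2)
    return inner / np.sqrt(np.sum(w * y1**2) * np.sum(w * y2**2))

t = np.arange(1, 101)
s1 = np.sin(np.pi * t / 6)
s2 = np.cos(np.pi * t / 6)

# The sine and cosine parts of a single harmonic are nearly
# w-orthogonal, so their w-correlation is close to zero.
rho = wcorr(s1, s2, L=24)
```

A small $\left|\rho^{(\mathbf{w})}\right|$ between two reconstructed components indicates that they are well separated, which is exactly what the distance $d_{ij}=1-|\rho_{ij}^{(\mathbf{w})}|$ exploits for clustering.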

#### 3.2. Hierarchical Clustering Methods

**Divisive**: In this technique, an initial single cluster of objects is divided into two clusters such that the objects in one cluster are far from the objects in the other. The procedure continues by splitting the clusters into smaller and smaller clusters until each object forms a separate cluster [41,42]. This method is implemented in our research via the function `diana` from the `cluster` package [43] of the freely available statistical software `R` [44].

**Agglomerative**: In this method, each individual object is initially treated as a cluster, and the most similar clusters are then merged according to their similarities. This process proceeds by successive fusions until all clusters are fused into a single cluster [41,42]. The agglomerative hierarchical clustering methods applied in this research are as follows [45]. In the update formulas below, $C_j$ denotes the cluster formed by merging clusters $C_k$ and $C_l$, and $n_i$ denotes the number of objects in cluster $C_i$.

**Single:** The distance between two clusters ${C}_{i}$ and ${C}_{j}$, denoted ${D}_{ij}$, is the minimum distance between two points $x$ and $y$, where $x\in {C}_{i}$ and $y\in {C}_{j}$; that is,
$${D}_{ij}=\min_{x\in {C}_{i},\,y\in {C}_{j}}{d}_{xy}.$$

**Complete:** The maximum distance between two points $x$ and $y$ is treated as the distance between the two clusters ${C}_{i}$ and ${C}_{j}$, where $x\in {C}_{i}$ and $y\in {C}_{j}$; that is,
$${D}_{ij}=\max_{x\in {C}_{i},\,y\in {C}_{j}}{d}_{xy}.$$

**Average:** ${D}_{ij}$ is defined as the mean of the distances between all pairs of points $x$ and $y$, where $x\in {C}_{i}$ and $y\in {C}_{j}$:
$${D}_{ij}=\sum_{x\in {C}_{i},\,y\in {C}_{j}}\frac{{d}_{xy}}{{n}_{i}\,{n}_{j}}.$$

**McQuitty:** ${D}_{ij}$ is defined as the mean of the between-cluster dissimilarities:
$${D}_{ij}=\frac{{D}_{ik}+{D}_{il}}{2}.$$

**Median:** ${D}_{ij}$ is defined as follows:
$${D}_{ij}=\frac{{D}_{ik}+{D}_{il}}{2}-\frac{{D}_{kl}}{4}.$$

**Centroid:** ${D}_{ij}$ is defined as the squared Euclidean distance between the centres of gravity of the two clusters; that is,
$${D}_{ij}={\left\Vert {\overline{x}}_{i}-{\overline{x}}_{j}\right\Vert}^{2}.$$

**Ward:** This method is based on minimizing the total within-cluster variance: the pair of clusters whose merger gives the minimum increase in the total within-cluster variance is merged at each step of the analysis [45]. Two algorithms, `ward.D` and `ward.D2`, are available for this method in `R` packages such as `stats` and `NbClust` [45]. When the `ward.D2` algorithm is used, the dissimilarities are squared before the cluster updates.

We use the function `hclust` from the `stats` package of `R` to perform agglomerative hierarchical clustering. More details on hierarchical clustering algorithms can be found in [41,42,46,47].
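The same agglomerative step can be sketched in Python with SciPy, an analogue of R's `hclust` (the dissimilarity matrix below is a toy stand-in for the component distances; it is ours, not from the paper):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy symmetric dissimilarity matrix for 4 elementary components:
# components 0-1 and 2-3 form close pairs, far from each other.
D = np.array([[0.00, 0.10, 0.90, 0.80],
              [0.10, 0.00, 0.85, 0.90],
              [0.90, 0.85, 0.00, 0.05],
              [0.80, 0.90, 0.05, 0.00]])

# linkage expects a condensed distance vector; squareform converts it.
Z = linkage(squareform(D), method='average')   # also: 'single', 'complete', ...
labels = fcluster(Z, t=2, criterion='maxclust')  # cut the dendrogram at 2 groups
```

Cutting the dendrogram at two groups recovers the pairs {0, 1} and {2, 3}, mirroring how the "correct" SSA groups are recovered from a **w**-correlation-based distance matrix.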

The agreement between each resulting grouping and the "correct" grouping is assessed with the `R` function `cluster.stats` from the `fpc` package [51].
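The agreement measure itself is easy to state: the corrected (adjusted) Rand index of Hubert and Arabie [48] compares two partitions through their contingency table, correcting the raw Rand index for chance agreement. A small self-contained Python version (mirroring the `corrected.rand` value reported by `cluster.stats`; a sketch, not the `fpc` source):

```python
from collections import Counter
from math import comb

def corrected_rand(labels_a, labels_b):
    """Adjusted (corrected-for-chance) Rand index between two partitions,
    given as equal-length label sequences."""
    n = len(labels_a)
    pairs = Counter(zip(labels_a, labels_b))            # contingency table cells
    sum_ij = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    total = comb(n, 2)
    expected = sum_a * sum_b / total                    # chance-agreement term
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

The index equals 1 for identical partitions (up to relabelling) and is close to 0 for independent ones, which is what makes it suitable for scoring automatic groupings against the "correct" groups.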

## 4. Simulation Results

- (a) **Exponential:** ${y}_{t}=\exp(0.03t)+{\epsilon}_{t},\quad t=1,2,\dots,100$.
- (b) **Linear:** ${y}_{t}=0.5t+{\epsilon}_{t},\quad t=1,2,\dots,100$.
- (c) **Sine:** ${y}_{t}=\sin(\pi t/6)+{\epsilon}_{t},\quad t=1,2,\dots,100$.
- (d) **Cosine+Cosine:** ${y}_{t}=0.7\cos(\pi t/2)+0.5\cos(\pi t/3)+{\epsilon}_{t},\quad t=1,2,\dots,100$.
- (e) **Exponential×Sine:** ${y}_{t}=\exp(0.03t)\sin(2\pi t/3)+{\epsilon}_{t},\quad t=1,2,\dots,100$.
- (f) **Exponential+Sine:** ${y}_{t}=\exp(0.03t)+\sin(2\pi t/3)+{\epsilon}_{t},\quad t=1,2,\dots,100$.
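For reproducibility, the six simulated series can be generated as follows (a sketch; the function name and the fixed noise level are ours, whereas in the paper the variance of $\epsilon_t$ is set via the SNR):

```python
import numpy as np

def simulate(kind, n=100, sigma=1.0, seed=0):
    """Generate one of the six simulated series (a)-(f):
    the stated signal plus N(0, sigma^2) white noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n + 1)
    signals = {
        'exponential':      np.exp(0.03 * t),
        'linear':           0.5 * t,
        'sine':             np.sin(np.pi * t / 6),
        'cosine+cosine':    0.7 * np.cos(np.pi * t / 2) + 0.5 * np.cos(np.pi * t / 3),
        'exponential*sine': np.exp(0.03 * t) * np.sin(2 * np.pi * t / 3),
        'exponential+sine': np.exp(0.03 * t) + np.sin(2 * np.pi * t / 3),
    }
    return signals[kind] + sigma * rng.standard_normal(n)

y = simulate('sine')
```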

Clustering based on the **w**-correlation and the Frobenius norm results in the best performance, except for the complete method, for which the similarity between the resulting grouping and the "correct" groups decreases as $L$ increases. In addition, clustering based on the ${L}_{1}$, infinity, and 2-norm distances performs well in detecting the "correct" groups for $L=10$; the capability of these distances declines for larger $L$, except for the single method combined with the 2-norm distance.

Clustering based on the **w**-correlation and the Frobenius norm is better than the other distances. In addition, the performance of the ${L}_{1}$, infinity, and 2-norm distances is acceptable only for $L=10$; the capability of these distances decreases for larger $L$.

Clustering based on the **w**-correlation and the Frobenius norm is better than the other distances. As can be seen in these figures, the single, average, and McQuitty methods outperform the other methods when the Frobenius norm is used to measure dissimilarity. Another interesting finding is that, for large $L$ and at each level of the SNR, the single method outperforms the other methods if the ${L}_{1}$, 1-norm, infinity, or 2-norm distances are used, whereas the average and McQuitty methods are better than the other methods for the maximum modulus norm.

The advantages of the **w**-correlation and the Frobenius norm over the other distances are more visible for large SNR. As can be seen in these figures, for large $L$ and at each level of the SNR, the single method outperforms the other methods whenever the maximum modulus norm is not used; with the maximum modulus norm, the average, McQuitty, and centroid methods produce better results.

First, clustering based on the **w**-correlation and the Frobenius norm is better than the other distances, especially for the single, average, and McQuitty clustering methods. Second, the ward.D and ward.D2 methods cannot provide a satisfactory result for large $L$, at any level of the SNR and for all types of distances. It is noteworthy that for this simulated series the ${L}_{1}$ norm shows a good performance, which was not observed for the previous simulated series. It can also be concluded from these figures that the single method is better than the other methods for $SNR<1$; for $SNR>1$ and $L=48$, the capability of the single method in detecting the "correct" groups is considerable when the 2-norm is used.

## 5. Real-World Data

- Seasonally non-adjusted food and vehicle products of France from January 1990 to February 2014. These data, comprising 290 observations, are taken from INSEE (Institut National de la Statistique et des Études Économiques) and were previously used in [52,53,54]. Rather than replicating that information here, readers interested in a summary of the data are referred to [52]. The time series plots for these data, depicted in Figure 8, clearly show a seasonal structure along with a non-linear trend.
- Gross domestic product (GDP) of the United States of America (USA) in billions of dollars from January 1947 to January 2019. This quarterly time series contains 289 observations that are taken from Federal Reserve Economic Data available at https://www.quandl.com/data/FRED/GDP. As shown in Figure 8, the GDP series is non-stationary with a non-linear trend that appears to increase exponentially over time.

These groups can be identified with the help of standard SSA tools such as scatterplots of eigenvectors and the matrix of **w**-correlations. For example, consider the food product time series. The two-dimensional plots of the eigenvectors of this time series are shown in Figure 9. This figure, which shows the scatterplots of successive eigenvectors, is used to detect the harmonic components with different frequencies: each regular $P$-vertex polygon in a scatterplot of eigenvectors indicates a harmonic component with period $P$ [27]. Therefore, to identify a harmonic component it is sufficient to find a regular $P$-vertex polygon, which may appear in a spiral form; the corresponding pair of eigenvectors identifies the harmonic component. Hence, it can be concluded from Figure 9 that the pairs of eigenvectors 2–3, 4–5, 6–7, and 8–9 correspond to the harmonic components of the food product time series. More details on window length selection and group identification can be found in [1,29].
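The pairing of eigenvectors for a harmonic component can also be seen numerically: for a pure sine, the trajectory matrix of the SSA embedding step has exactly two nonzero singular values of nearly equal size, so the harmonic shows up as a paired eigentriple. A small sketch (window length and period are illustrative choices of ours):

```python
import numpy as np

def trajectory_matrix(y, L):
    """L x K Hankel trajectory matrix of the SSA embedding step, K = N - L + 1."""
    N = len(y)
    K = N - L + 1
    return np.column_stack([y[k:k + L] for k in range(K)])

t = np.arange(1, 101)
y = np.sin(np.pi * t / 6)              # pure harmonic with period 12
X = trajectory_matrix(y, L=24)
s = np.linalg.svd(X, compute_uv=False)

# s[0] and s[1] are dominant and nearly equal (the harmonic pair);
# s[2] onwards are numerically zero, since the matrix has rank 2.
```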

For the food product series, clustering with the **w**-correlation-based distance performs best. Additionally, the single method shows good performance, especially for the **w**-correlation, Frobenius, and 2-norm distances, whereas the average method is better than the other methods for the maximum modulus norm.

For the vehicles series, clustering based on the **w**-correlation and Frobenius norms performs best. Note that the centroid linkage performs well only for the **w**-correlation-based distance. As with the food product series, the average method is better than the other methods for the maximum modulus norm, which is in accordance with the simulation results for the harmonic series (cases c and d).

For the GDP series, the best results are obtained with the **w**-correlation and the maximum modulus norm.

## 6. Conclusions

The results of both the simulated and real-world series show that clustering based on the **w**-correlation and the Frobenius norm is better than the other distance measures. The results also support the idea that single linkage combined with the 2-norm can provide satisfactory automatic grouping, whereas the evidence from this study implies that the ward.D and ward.D2 linkages could not detect meaningful groups.

In summary, automatic grouping via hierarchical clustering, with the **w**-correlation and the Frobenius norm as distance measures, can lead to more accurate results.

## Author Contributions

## Funding

## Conflicts of Interest

## References

1. Golyandina, N.; Zhigljavsky, A. Singular Spectrum Analysis for Time Series; SpringerBriefs in Statistics; Springer: New York, NY, USA, 2013.
2. Aydin, S.; Saraoglu, H.M.; Kara, S. Singular Spectrum Analysis of Sleep EEG in Insomnia. J. Med. Syst. **2011**, 35, 457–461.
3. Sanei, S.; Hassani, H. Singular Spectrum Analysis of Biomedical Signals; Taylor & Francis/CRC: Boca Raton, FL, USA, 2016.
4. Hassani, H.; Yeganegi, M.R.; Silva, E.S. A New Signal Processing Approach for Discrimination of EEG Recordings. Stats **2018**, 1, 155–168.
5. Safi, S.M.M.; Pooyan, M.; Nasrabadi, A.M. Improving the performance of the SSVEP-based BCI system using optimized singular spectrum analysis (OSSA). Biomed. Signal Process. Control **2018**, 46, 46–58.
6. Ghodsi, Z.; Silva, E.S.; Hassani, H. Bicoid Signal Extraction with a Selection of Parametric and Nonparametric Signal Processing Techniques. Genom. Proteom. Bioinform. **2015**, 13, 183–191.
7. Movahedifar, M.; Yarmohammadi, M.; Hassani, H. Bicoid signal extraction: Another powerful approach. Math. Biosci. **2018**, 303, 52–61.
8. Carvalho, M.; Rodrigues, P.C.; Rua, A. Tracking the US business cycle with a singular spectrum analysis. Econ. Lett. **2012**, 114, 32–35.
9. Hassani, H.; Silva, E.S.; Gupta, R.; Segnon, M.K. Forecasting the price of gold. Appl. Econ. **2015**, 47, 4141–4152.
10. Silva, E.S.; Ghodsi, Z.; Ghodsi, M.; Heravi, S.; Hassani, H. Cross country relations in European tourist arrivals. Ann. Tour. Res. **2017**, 63, 151–168.
11. Arteche, J.; Garcia-Enriquez, J. Singular Spectrum Analysis for signal extraction in Stochastic Volatility models. Econom. Stat. **2017**, 1, 85–98.
12. Groth, A.; Ghil, M. Synchronization of world economic activity. Chaos Interdiscip. J. Nonlinear Sci. **2017**, 27, 127002.
13. Groth, A.; Ghil, M. Multivariate singular spectrum analysis and the road to phase synchronization. Phys. Rev. E **2011**, 84, 036206.
14. Mahmoudvand, R.; Rodrigues, P.C. Predicting the Brexit Outcome Using Singular Spectrum Analysis. J. Comput. Stat. Model. **2018**, 1, 9–15.
15. Saayman, A.; Klerk, J. Forecasting tourist arrivals using multivariate singular spectrum analysis. Tour. Econ. **2019**, 25, 330–354.
16. Hassani, H.; Rua, A.; Silva, E.S.; Thomakos, D. Monthly forecasting of GDP with mixed-frequency multivariate singular spectrum analysis. Int. J. Forecast. **2019**, 35, 1263–1272.
17. Rocco S., C.M. Singular spectrum analysis and forecasting of failure time series. Reliab. Eng. Syst. Saf. **2013**, 114, 126–136.
18. Muruganatham, B.; Sanjith, M.A.; Krishnakumar, B.; Satya Murty, S.A.V. Roller element bearing fault diagnosis using singular spectrum analysis. Mech. Syst. Signal Process. **2013**, 35, 150–166.
19. Chen, Q.; Dam, T.V.; Sneeuw, N.; Collilieux, X.; Weigelt, M.; Rebischung, P. Singular spectrum analysis for modeling seasonal signals from GPS time series. J. Geodyn. **2013**, 72, 25–35.
20. Hou, Z.; Wen, G.; Tang, P.; Cheng, G. Periodicity of Carbon Element Distribution Along Casting Direction in Continuous-Casting Billet by Using Singular Spectrum Analysis. Metall. Mater. Trans. B **2014**, 45, 1817–1826.
21. Liu, K.; Law, S.S.; Xia, Y.; Zhu, X.Q. Singular spectrum analysis for enhancing the sensitivity in structural damage detection. J. Sound Vib. **2014**, 333, 392–417.
22. Bail, K.L.; Gipson, J.M.; MacMillan, D.S. Quantifying the Correlation Between the MEI and LOD Variations by Decomposing LOD with Singular Spectrum Analysis. In Earth on the Edge: Science for a Sustainable Planet; International Association of Geodesy Symposia; Springer: Berlin/Heidelberg, Germany, 2014; Volume 139, pp. 473–477.
23. Chao, H.-S.; Loh, C.-H. Application of singular spectrum analysis to structural monitoring and damage diagnosis of bridges. Struct. Infrastruct. Eng. **2014**, 10, 708–727.
24. Khan, M.A.R.; Poskitt, D.S. Forecasting stochastic processes using singular spectrum analysis: Aspects of the theory and application. Int. J. Forecast. **2017**, 33, 199–213.
25. Lahmiri, S. Minute-ahead stock price forecasting based on singular spectrum analysis and support vector regression. Appl. Math. Comput. **2018**, 320, 444–451.
26. Poskitt, D.S. On Singular Spectrum Analysis and Stepwise Time Series Reconstruction. J. Time Ser. Anal. **2019**.
27. Golyandina, N.; Nekrutkin, V.; Zhigljavsky, A. Analysis of Time Series Structure: SSA and Related Techniques; Chapman & Hall/CRC: London, UK, 2001.
28. Hassani, H.; Mahmoudvand, R. Singular Spectrum Analysis Using R; Palgrave Macmillan: Basingstoke, UK, 2018.
29. Golyandina, N.; Korobeynikov, A.; Zhigljavsky, A. Singular Spectrum Analysis with R; Springer: Berlin/Heidelberg, Germany, 2018.
30. Golyandina, N. Particularities and commonalities of singular spectrum analysis as a method of time series analysis and signal processing. arXiv **2019**, arXiv:1907.02579v1.
31. Alexandrov, T.; Golyandina, N. Automatic extraction and forecast of time series cyclic components within the framework of SSA. In Proceedings of the 5th St. Petersburg Workshop on Simulation, Saint Petersburg, Russia, 26 June–2 July 2005; pp. 45–50. Available online: http://www.gistatgroup.com/gus/autossa2.pdf (accessed on 2 October 2019).
32. Bilancia, M.; Campobasso, F. Airborne particulate matter and adverse health events: Robust estimation of timescale effects. In Classification as a Tool for Research; Locarek-Junge, H., Weihs, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 481–489.
33. Hassani, H. Singular Spectrum Analysis: Methodology and Comparison. J. Data Sci. **2007**, 5, 239–257.
34. Broomhead, D.; King, G. Extracting qualitative dynamics from experimental data. Physica D **1986**, 20, 217–236.
35. Broomhead, D.; King, G. On the qualitative analysis of experimental dynamical systems. In Nonlinear Phenomena and Chaos; Sarkar, S., Ed.; Adam Hilger: Bristol, UK, 1986; pp. 113–144.
36. Proschan, M.A.; Shaw, P.A. Essentials of Probability Theory for Statisticians; Chapman & Hall/CRC: London, UK, 2016.
37. Golub, G.H.; Van Loan, C.F. Matrix Computations, 4th ed.; The Johns Hopkins University Press: Baltimore, MD, USA, 2013.
38. Korobeynikov, A. Computation- and space-efficient implementation of SSA. Stat. Its Interface **2010**, 3, 357–368.
39. Golyandina, N.; Korobeynikov, A. Basic Singular Spectrum Analysis and forecasting with R. Comput. Stat. Data Anal. **2014**, 71, 934–954.
40. Golyandina, N.; Korobeynikov, A.; Shlemov, A.; Usevich, K. Multivariate and 2D Extensions of Singular Spectrum Analysis with the Rssa Package. J. Stat. Softw. **2015**, 67, 1–78.
41. Kaufman, L.; Rousseeuw, P.J. Finding Groups in Data: An Introduction to Cluster Analysis; Wiley: New York, NY, USA, 1990.
42. Johnson, R.A.; Wichern, D.W. Applied Multivariate Statistical Analysis, 6th ed.; Pearson Education Limited: Harlow, UK, 2013.
43. Maechler, M.; Rousseeuw, P.; Struyf, A.; Hubert, M.; Hornik, K. cluster: Cluster Analysis Basics and Extensions; R Package Version 2.0.7-1; 2018.
44. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2018. Available online: https://www.R-project.org/ (accessed on 2 October 2019).
45. Charrad, M.; Ghazzali, N.; Boiteau, V.; Niknafs, A. NbClust: An R Package for Determining the Relevant Number of Clusters in a Data Set. J. Stat. Softw. **2014**, 61, 1–36.
46. Contreras, P.; Murtagh, F. Hierarchical Clustering. In Handbook of Cluster Analysis; Henning, C., Meila, M., Murtagh, F., Rocci, R., Eds.; Chapman & Hall/CRC: London, UK, 2016; pp. 103–123.
47. Gordon, A.D. Classification, 2nd ed.; Chapman and Hall: London, UK, 1999.
48. Hubert, L.; Arabie, P. Comparing partitions. J. Classif. **1985**, 2, 193–218.
49. Gates, A.J.; Ahn, Y.Y. The impact of random models on clustering similarity. J. Mach. Learn. Res. **2017**, 18, 1–28.
50. Vinh, N.X.; Epps, J.; Bailey, J. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. J. Mach. Learn. Res. **2010**, 11, 2837–2854.
51. Hennig, C. fpc: Flexible Procedures for Clustering; R Package Version 2.1-11.1; 2018. Available online: https://CRAN.R-project.org/package=fpc (accessed on 2 October 2019).
52. Silva, E.S.; Hassani, H.; Heravi, S. Modeling European industrial production with multivariate singular spectrum analysis: A cross-industry analysis. J. Forecast. **2018**, 37, 371–384.
53. Hassani, H.; Heravi, S.; Zhigljavsky, A. Forecasting European industrial production with singular spectrum analysis. Int. J. Forecast. **2009**, 25, 103–118.
54. Heravi, S.; Osborn, D.R.; Birchenhall, C.R. Linear versus neural network forecasts for European industrial production series. Int. J. Forecast. **2004**, 20, 435–446.

| Simulated Series | Correct Groups |
|---|---|
| Exponential | $\{1\},\{2,\dots,L\}$ |
| Linear | $\{1,2\},\{3,\dots,L\}$ |
| Sine | $\{1,2\},\{3,\dots,L\}$ |
| Cosine+Cosine | $\{1,2\},\{3,4\},\{5,\dots,L\}$ |
| Exponential×Sine | $\{1,2\},\{3,\dots,L\}$ |
| Exponential+Sine | $\{1\},\{2,3\},\{4,\dots,L\}$ |

| Real Time Series | L | Groups |
|---|---|---|
| Food product | 144 | $\{1\},\{2,3\},\{4,5\},\{6,7\},\{8,9\},\{10,\dots,144\}$ |
| Vehicles | 144 | $\{1\},\{2,10\},\{3,4\},\{5,6\},\{7,8\},\{9\},\{11,12\},\{13,14\},\{15,\dots,144\}$ |
| GDP | 144 | $\{1\},\{2,\dots,144\}$ |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Kalantari, M.; Hassani, H.
Automatic Grouping in Singular Spectrum Analysis. *Forecasting* **2019**, *1*, 189-204.
https://doi.org/10.3390/forecast1010013
