# Time Series Seasonal Analysis Based on Fuzzy Transforms


## Abstract


## 1. Introduction

The forecasted value $y_t$ is a function of the previous values $y_{t-1}, y_{t-2}, \dots, y_{t-p}$, where $p$ is the number of input nodes.

### 1.1 TSSF Method for Forecasting Analysis

To forecast the value of the parameter $y_0$ at the time $t$, we calculate the $s$th seasonal subset of cardinality $n(s)$ in which the time $t$ is inserted. Then, we consider the $s$th inverse F-transform ${f}_{n(s)}^{F}(t)$. The forecasted value of the parameter is given by the formula ${f}_{n(s)}^{F}(t)+trend(t)$, where $trend(t)$ is the value of the polynomial trend at the time $t$. Figure 2 illustrates the application of the TSSF method for forecasting analysis.

## 2. Direct and Inverse F-Transform

Let $x_1, x_2, \dots, x_n$ be points of $[a,b]$, called nodes, such that $x_1 = a < x_2 < \dots < x_n = b$. The family of fuzzy sets $A_1, \dots, A_n: [a,b] \to [0,1]$, called basic functions, is a fuzzy partition of $[a,b]$ if the following hold:

- $A_i(x_i) = 1$ for every $i = 1, 2, \dots, n$;
- $A_i(x) = 0$ if $x \notin [x_{i-1}, x_{i+1}]$ for $i = 2, \dots, n-1$;
- $A_i(x)$ is a continuous function on $[a,b]$;
- $A_i(x)$ strictly increases on $[x_{i-1}, x_i]$ for $i = 2, \dots, n$ and strictly decreases on $[x_i, x_{i+1}]$ for $i = 1, \dots, n-1$;
- $A_1(x) + \dots + A_n(x) = 1$ for every $x \in [a,b]$.

Moreover, the basic functions $\{A_1(x), \dots, A_n(x)\}$ form a uniform fuzzy partition if $n \ge 3$ and:

- $x_i = a + h \cdot (i-1)$, where $h = (b-a)/(n-1)$ and $i = 1, 2, \dots, n$ (that is, the nodes are equidistant);
- $A_i(x_i - x) = A_i(x_i + x)$ for every $x \in [0,h]$ and $i = 2, \dots, n-1$;
- $A_{i+1}(x) = A_i(x - h)$ for every $x \in [x_i, x_{i+1}]$ and $i = 1, 2, \dots, n-1$.
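As a concrete illustration, the following sketch builds a uniform fuzzy partition from cosine-shaped basic functions (the shape used later in Section 3) and numerically checks the partition-of-unity property; the helper names are illustrative, not from the paper.

```python
import numpy as np

def uniform_cosine_partition(a, b, n):
    """Build n cosine-shaped basic functions forming a uniform fuzzy
    partition of [a, b] (n >= 3), with equidistant nodes and step h."""
    h = (b - a) / (n - 1)
    nodes = a + h * np.arange(n)

    def A(i, x):
        # A_i is supported on [x_i - h, x_i + h] and peaks at the node x_i.
        d = np.abs(np.asarray(x, dtype=float) - nodes[i])
        return np.where(d <= h, 0.5 * (1 + np.cos(np.pi * d / h)), 0.0)

    return nodes, A

nodes, A = uniform_cosine_partition(0.0, 10.0, 5)
xs = np.linspace(0.0, 10.0, 101)
total = sum(A(i, xs) for i in range(5))
print(np.allclose(total, 1.0))  # partition of unity: the A_i sum to 1 everywhere
```

Adjacent cosine basic functions are mirror images shifted by $h$, so their values on each subinterval sum exactly to one, which is why the check passes.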

Now let the function $f$ assume given values in a finite set of points $P = \{p_1, \dots, p_m\}$ of $[a,b]$. If $P$ is sufficiently dense with respect to the given partition $\{A_1, A_2, \dots, A_n\}$, i.e., for each $i \in \{1, \dots, n\}$ there exists an index $j \in \{1, \dots, m\}$ such that $A_i(p_j) > 0$, we define $\{F_1, F_2, \dots, F_n\}$ as the discrete direct F-transform of $f$ with respect to $\{A_1, A_2, \dots, A_n\}$, where each component $F_i$ is given by

$${F}_{i}=\frac{{\displaystyle \sum _{j=1}^{m}f({p}_{j})\cdot {A}_{i}({p}_{j})}}{{\displaystyle \sum _{j=1}^{m}{A}_{i}({p}_{j})}}$$

Then, we define the discrete inverse F-transform of $f$ with respect to $\{A_1, A_2, \dots, A_n\}$ by setting

$${f}_{n}^{F}({p}_{j})={\displaystyle \sum _{i=1}^{n}{F}_{i}\cdot {A}_{i}({p}_{j})}$$
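A minimal numerical sketch of the one-dimensional discrete direct and inverse F-transforms, under the assumption of a uniform cosine partition; function names (`basic`, `direct_ft`, `inverse_ft`) are illustrative.

```python
import numpy as np

def basic(i, x, a, h):
    # Cosine basic function A_i centred at node a + i*h, support width 2h.
    d = np.abs(np.asarray(x, dtype=float) - (a + i * h))
    return np.where(d <= h, 0.5 * (1 + np.cos(np.pi * d / h)), 0.0)

def direct_ft(p, fp, a, b, n):
    # F_i = sum_j f(p_j) A_i(p_j) / sum_j A_i(p_j)
    h = (b - a) / (n - 1)
    W = np.stack([basic(i, p, a, h) for i in range(n)])
    if np.any(W.sum(axis=1) == 0):
        raise ValueError("P is not sufficiently dense w.r.t. the partition")
    return (W * fp).sum(axis=1) / W.sum(axis=1)

def inverse_ft(F, x, a, b):
    # f_n^F(x) = sum_i F_i A_i(x)
    n = len(F)
    h = (b - a) / (n - 1)
    return sum(F[i] * basic(i, x, a, h) for i in range(n))

p = np.linspace(0.0, 2 * np.pi, 100)            # sample points p_1, ..., p_m
F = direct_ft(p, np.sin(p), 0.0, 2 * np.pi, 17)
err = np.max(np.abs(inverse_ft(F, p, 0.0, 2 * np.pi) - np.sin(p)))
print(err < 0.1)   # the inverse F-transform tracks f closely on P
```

Note the density guard: if some $A_i$ receives no sample point, the direct component $F_i$ is undefined, which is exactly the condition under which the forecasting method of Section 3 stops.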

These definitions extend to functions of several variables. Suppose that $f(x_1, x_2, \dots, x_k)$ is defined on $m$ points $p_j = (p_{j1}, p_{j2}, \dots, p_{jk}) \in [a_1,b_1] \times [a_2,b_2] \times \dots \times [a_k,b_k]$ for $j = 1, \dots, m$. We say that $P = \{(p_{11}, p_{12}, \dots, p_{1k}), \dots, (p_{m1}, p_{m2}, \dots, p_{mk})\}$ is sufficiently dense with respect to the fuzzy partitions of $[a_1,b_1], \dots, [a_k,b_k]$, respectively, if for each $k$-tuple $(h_1, \dots, h_k) \in \{1, \dots, n_1\} \times \dots \times \{1, \dots, n_k\}$ there exists a point $p_j = (p_{j1}, p_{j2}, \dots, p_{jk})$ in $P$, $j = 1, \dots, m$, such that ${A}_{1{h}_{1}}({p}_{j1})\cdot \dots \cdot {A}_{k{h}_{k}}({p}_{jk})>0$.

In this case, we define the $(h_1, h_2, \dots, h_k)$-th component ${F}_{{h}_{1}{h}_{2}\dots {h}_{k}}$ of the discrete direct F-transform of $f$ with respect to the basic functions (3) as

$${F}_{{h}_{1}{h}_{2}\dots {h}_{k}}=\frac{{\displaystyle \sum _{j=1}^{m}f({p}_{j})\cdot {A}_{1{h}_{1}}({p}_{j1})\cdot \dots \cdot {A}_{k{h}_{k}}({p}_{jk})}}{{\displaystyle \sum _{j=1}^{m}{A}_{1{h}_{1}}({p}_{j1})\cdot \dots \cdot {A}_{k{h}_{k}}({p}_{jk})}}$$

and the discrete inverse F-transform of $f$ at $p_j = (p_{j1}, p_{j2}, \dots, p_{jk}) \in [a_1,b_1] \times \dots \times [a_k,b_k]$ as

$${f}_{{n}_{1}\dots {n}_{k}}^{F}({p}_{j1},\dots ,{p}_{jk})={\displaystyle \sum _{{h}_{1}=1}^{{n}_{1}}{\displaystyle \sum _{{h}_{2}=1}^{{n}_{2}}\dots {\displaystyle \sum _{{h}_{k}=1}^{{n}_{k}}{F}_{{h}_{1}{h}_{2}\dots {h}_{k}}\cdot {A}_{1{h}_{1}}({p}_{j1})\cdot \dots \cdot {A}_{k{h}_{k}}({p}_{jk})}}}$$

**Theorem 1.** Let $f(x_1, x_2, \dots, x_k)$ be given on the set of points $P = \{(p_{11}, p_{12}, \dots, p_{1k}), (p_{21}, p_{22}, \dots, p_{2k}), \dots, (p_{m1}, p_{m2}, \dots, p_{mk})\} \subseteq [a_1,b_1] \times [a_2,b_2] \times \dots \times [a_k,b_k]$. Then, for every $\varepsilon > 0$, there exist $k$ integers $n_1 = n_1(\varepsilon), \dots, n_k = n_k(\varepsilon)$ and $k$ related fuzzy partitions (3) of $[a_1,b_1], \dots, [a_k,b_k]$, respectively, such that the set $P$ is sufficiently dense with respect to them and, for every $p_j = (p_{j1}, p_{j2}, \dots, p_{jk})$ in $P$, $j = 1, \dots, m$, the following inequality holds:

$$\left|f({p}_{j1},\dots ,{p}_{jk})-{f}_{{n}_{1}\dots {n}_{k}}^{F}({p}_{j1},\dots ,{p}_{jk})\right|<\varepsilon $$
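The theorem can be illustrated numerically in one dimension: as long as the sample stays sufficiently dense, increasing the number $n$ of basic functions drives the error between $f$ and its inverse F-transform on $P$ below any given $\varepsilon$. A sketch, assuming the cosine basic functions and illustrative helper names:

```python
import numpy as np

def basic(i, x, a, h):
    # Cosine basic function centred at node a + i*h, support width 2h.
    d = np.abs(x - (a + i * h))
    return np.where(d <= h, 0.5 * (1 + np.cos(np.pi * d / h)), 0.0)

def ft_max_error(p, fp, a, b, n):
    h = (b - a) / (n - 1)
    W = np.stack([basic(i, p, a, h) for i in range(n)])  # n x m weight matrix
    F = (W * fp).sum(axis=1) / W.sum(axis=1)             # direct F-transform
    return np.max(np.abs(F @ W - fp))                    # inverse vs. the data

p = np.linspace(0.0, 1.0, 200)
fp = np.sin(2 * np.pi * p) + 0.5 * p
errs = [ft_max_error(p, fp, 0.0, 1.0, n) for n in (5, 9, 17, 33)]
shrinking = all(b < a for a, b in zip(errs, errs[1:]))
print(shrinking)   # the maximum error decreases as n grows
```

The 200 sample points keep $P$ sufficiently dense even for the finest partition ($n = 33$ gives step $h = 1/32$, so every basic function still covers several samples).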

## 3. F-Transform Forecasting Method

We consider a dataset of $M$ input–output pairs $(\mathbf{x}^{(j)}, y^{(j)})$, $j = 1, \dots, M$, used to determine a mapping $f$ from the input space $\mathbf{R}^n$ into the output space $\mathbf{R}$. We assume that ${x}_{i}^{(1)},\dots ,{x}_{i}^{(j)},\dots ,{x}_{i}^{(M)}$ lie in $\left[{x}_{i}^{-},{x}_{i}^{+}\right]$ for every $i = 1, \dots, n$, and that $y^{(1)}, y^{(2)}, \dots, y^{(M)}$ lie in $[y^{-}, y^{+}]$. The F-transform forecasting method calculates a function $y=f({x}_{1},\dots ,{x}_{n})$ which approximates the data. Similarly to [32], we create a partition of $n_i$ fuzzy sets for each domain $[{x}_{i}^{-},{x}_{i}^{+}]$; hence, we construct the respective direct and inverse F-transforms (Equations (3) and (4)) to estimate an approximation of $f$. We illustrate this forecasting method in the following steps:

- (1) Give a uniform partition composed of $n_i$ fuzzy sets ($n_i \ge 3$) $\left\{{A}_{i1},\dots ,{A}_{i{n}_{i}}\right\}$ of the domain $[{x}_{i}^{-},{x}_{i}^{+}]$ of each variable $x_i$, $i = 1, \dots, n$. If ${x}_{i1},\dots ,{x}_{is},\dots ,{x}_{i{n}_{i}}$ are the equidistant nodes of $[{x}_{i}^{-},{x}_{i}^{+}]$ with step $h_i$, each basic function $A_{is}$ is defined in the following way for $s = 1, \dots, n_i$:$$\begin{array}{l}{A}_{i1}(x)=\{\begin{array}{ll}0.5\cdot (1+\mathrm{cos}\frac{\pi}{{h}_{i}}(x-{x}_{i1}))& \text{if }x\in [{x}_{i1},{x}_{i2}]\\ 0& \text{otherwise}\end{array}\\ {A}_{is}(x)=\{\begin{array}{ll}0.5\cdot (1+\mathrm{cos}\frac{\pi}{{h}_{i}}(x-{x}_{is}))& \text{if }x\in [{x}_{i(s-1)},{x}_{i(s+1)}]\\ 0& \text{otherwise}\end{array}\\ {A}_{i{n}_{i}}(x)=\{\begin{array}{ll}0.5\cdot (1+\mathrm{cos}\frac{\pi}{{h}_{i}}(x-{x}_{i({n}_{i}-1)}))& \text{if }x\in [{x}_{i({n}_{i}-1)},{x}_{i{n}_{i}}]\\ 0& \text{otherwise}\end{array}\end{array}$$
- (2) If the dataset is not sufficiently dense with respect to the fuzzy partition, i.e., if there exist a variable $x_i$ and a fuzzy set $A_{is}$ of the corresponding fuzzy partition such that ${A}_{is}({x}_{i}^{(r)})=0$ for each $r = 1, \dots, M$, the process stops; otherwise, calculate the ${n}_{1}\cdot {n}_{2}\cdot \dots \cdot {n}_{n}$ components of the direct F-transform of $f$ with Equation (3) by setting $k = n$, ${p}_{j1}={x}_{1}^{(j)}$, …, ${p}_{jn}={x}_{n}^{(j)}$ and ${y}^{(j)}=f({x}_{1}^{(j)},{x}_{2}^{(j)},\dots ,{x}_{n}^{(j)})$, obtaining the following quantity:$${F}_{{h}_{1}{h}_{2}\dots {h}_{n}}=\frac{{\displaystyle \sum _{j=1}^{M}{y}^{(j)}\cdot {A}_{1{h}_{1}}({x}_{1}^{(j)})\cdot \dots \cdot {A}_{n{h}_{n}}({x}_{n}^{(j)})}}{{\displaystyle \sum _{j=1}^{M}{A}_{1{h}_{1}}({x}_{1}^{(j)})\cdot \dots \cdot {A}_{n{h}_{n}}({x}_{n}^{(j)})}}$$
- (3) Calculate the discrete inverse F-transform as$${f}_{{n}_{1}\dots {n}_{n}}^{F}\left({x}_{1}^{(j)},\dots ,{x}_{n}^{(j)}\right)={\displaystyle \sum _{{h}_{1}=1}^{{n}_{1}}{\displaystyle \sum _{{h}_{2}=1}^{{n}_{2}}\dots {\displaystyle \sum _{{h}_{n}=1}^{{n}_{n}}{F}_{{h}_{1}{h}_{2}\dots {h}_{n}}\cdot {A}_{1{h}_{1}}({x}_{1}^{(j)})\cdot \dots \cdot {A}_{n{h}_{n}}({x}_{n}^{(j)})}}}$$
- (4) Calculate the forecasting index as$$MADMEAN=\frac{{\displaystyle \sum _{j=1}^{M}|{f}_{{n}_{1}{n}_{2}\dots {n}_{n}}^{F}({x}_{1}^{(j)},\dots ,{x}_{n}^{(j)})-{y}^{(j)}|}}{{\displaystyle \sum _{j=1}^{M}{y}^{(j)}}}$$
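The steps above can be sketched for a single input variable ($n = 1$), which is the case used later by the TSSF method with time as the input; the function names (`basic`, `ft_fit`) and the synthetic dataset are illustrative.

```python
import numpy as np

def basic(s, x, lo, h):
    # Cosine basic function A_s centred at node lo + s*h (step 1).
    d = np.abs(x - (lo + s * h))
    return np.where(d <= h, 0.5 * (1 + np.cos(np.pi * d / h)), 0.0)

def ft_fit(x, y, n_sets):
    lo, hi = x.min(), x.max()
    h = (hi - lo) / (n_sets - 1)
    W = np.stack([basic(s, x, lo, h) for s in range(n_sets)])
    if np.any(W.sum(axis=1) == 0):            # step (2): stop if not dense
        raise ValueError("dataset not sufficiently dense")
    F = (W * y).sum(axis=1) / W.sum(axis=1)   # step (2): direct F-transform
    approx = F @ W                            # step (3): inverse F-transform
    madmean = np.abs(approx - y).sum() / y.sum()   # step (4)
    return approx, madmean

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 300))
y = 20.0 + 3.0 * np.sin(x) + rng.normal(0.0, 0.3, 300)  # positive-valued data
approx, madmean = ft_fit(x, y, 12)
print(madmean < 0.05)
```

In the full method, when MADMEAN exceeds a chosen threshold, the partition is refined (larger `n_sets`) and the fit repeated, as formalised in Appendix A.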

## 4. TSSF Method

We consider a time series of values of a parameter $y_0$ measured at different times. The dataset is formed by M measure pairs as $\left({t}^{(0)},{y}_{0}^{(0)}\right),\left({t}^{(1)},{y}_{0}^{(1)}\right),\dots ,\left({t}^{(M)},{y}_{0}^{(M)}\right)$.

We fit a polynomial trend to the data and subtract it, obtaining the de-trended values $y^{(j)}$, with ${y}^{(j)}={y}_{0}^{(j)}-trend({t}^{(j)})$. After de-trending the dataset, we partition it into S subsets, where S is the seasonal period. Each subset represents the seasonal fluctuation with respect to the trend.
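A minimal de-trending sketch: fit a least-squares polynomial trend and subtract it from the measures. The degree is a modelling choice (the experiments in this paper use ninth-degree fits); the synthetic series below is illustrative.

```python
import numpy as np

def detrend(t, y0, degree):
    coeffs = np.polyfit(t, y0, degree)   # highest-degree coefficient first
    trend = np.polyval(coeffs, t)
    return y0 - trend, coeffs            # fluctuations y^(j) and trend coeffs

t = np.arange(400.0)
y0 = 15.0 + 0.01 * t + 5.0 * np.sin(2 * np.pi * t / 30.0)  # trend + season
y, coeffs = detrend(t, y0, degree=1)
print(abs(y.mean()) < 1e-8)   # least-squares residuals average to zero
```

The residuals `y` are then split into the S seasonal subsets before the F-transforms are computed.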

The sth seasonal subset contains $M_s$ pairs, expressing the fluctuation measures of the parameter $y_0$ at different times: $({t}^{(1)}, {y}^{(1)}), ({t}^{(2)}, {y}^{(2)}), \dots, ({t}^{({M}_{s})}, {y}^{({M}_{s})})$, where ${y}^{(j)}$ is given by the original measure ${y}_{0}^{(j)}$ at the time ${t}^{(j)}$ minus the trend calculated at that time. The formulae of the corresponding one-dimensional direct and inverse F-transforms, considering $n$ basic functions, are given as

$${F}_{h}=\frac{{\displaystyle \sum _{j=1}^{{M}_{s}}{y}^{(j)}\cdot {A}_{h}({t}^{(j)})}}{{\displaystyle \sum _{j=1}^{{M}_{s}}{A}_{h}({t}^{(j)})}}\qquad h=1,\dots ,n$$

$${f}_{n(s)}^{F}({t}^{(j)})={\displaystyle \sum _{h=1}^{n}{F}_{h}\cdot {A}_{h}({t}^{(j)})}$$

We also calculate the MADMEAN$_s$ index for the sth fluctuation subset of data, given as

$$MADMEA{N}_{s}=\frac{{\displaystyle \sum _{j=1}^{{M}_{s}}\left|{f}_{n(s)}^{F}({t}^{(j)})-{y}^{(j)}\right|}}{{\displaystyle \sum _{j=1}^{{M}_{s}}{y}^{(j)}}}$$

To forecast the value of the parameter $y_0$ at the time $t$ in the sth seasonal period, we add to ${f}_{n(s)}^{F}(t)$ the trend calculated at the time $t$, i.e., $trend(t)$, obtaining the following value:

$${\tilde{y}}_{0}(t)={f}_{n(s)}^{F}(t)+trend(t)$$

- Root Mean Square Error (RMSE) is defined as:$$\mathrm{RMSE}={\left(\frac{{\displaystyle \sum _{j=1}^{M}{\left({\tilde{y}}_{0}({t}^{(j)})-{y}_{0}^{(j)}\right)}^{2}}}{M}\right)}^{1/2}$$
- Mean Absolute Percentage Error (MAPE) is defined as:$$\mathrm{MAPE}=\left[\frac{1}{M}{\displaystyle \sum _{j=1}^{M}\left|\frac{{\tilde{y}}_{0}({t}^{(j)})-{y}_{0}^{(j)}}{{y}_{0}^{(j)}}\right|}\right]\times 100$$
- Mean Absolute Deviation (MAD) is defined as:$$\mathrm{MAD}=\frac{{\displaystyle \sum _{j=1}^{M}\left|{\tilde{y}}_{0}({t}^{(j)})-{y}_{0}^{(j)}\right|}}{M}$$
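The three accuracy indices above translate directly into plain functions of the forecasted and observed values; the small worked example is illustrative.

```python
import numpy as np

def rmse(forecast, actual):
    # Root of the mean squared forecast error.
    return float(np.sqrt(np.mean((forecast - actual) ** 2)))

def mape(forecast, actual):
    # Mean absolute relative error, as a percentage.
    return float(np.mean(np.abs((forecast - actual) / actual)) * 100.0)

def mad(forecast, actual):
    # Mean absolute forecast error.
    return float(np.mean(np.abs(forecast - actual)))

actual = np.array([20.0, 22.0, 25.0, 24.0])
forecast = np.array([21.0, 21.0, 26.0, 24.0])
print(mad(forecast, actual))    # → 0.75
```

MAPE divides by the observed values, so it is unreliable when the series crosses zero; this is precisely the case where the MADMEAN ratio is preferred (cf. [34]).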

## 5. Experiments on Time Series Data

- (1) Average seasonal variation method [3] (labeled as avgSV): this method calculates the mean seasonal variation for each seasonal period and adds it to the trend value.
- (2) ARIMA method (labeled as ARIMA).
- (3) F-transform prediction method [29], applied to the complete dataset (labeled as F-transforms).

## 6. Conclusions

## Author Contributions

## Conflicts of Interest

## Appendix A

- (1) Calculate the trend using a polynomial fitting
- (2) Subtract the trend value from the data, obtaining a new dataset
- (3) Partition the dataset into subsets: each subset contains the data measured in a season
- (4) FOR each seasonal subset
- (5) n := 3
- (6) Set the fuzzy partition (8)
- (7) IF the subset is sufficiently dense with respect to the fuzzy partition THEN
- (8) Calculate the direct F-transform (15)
- (9) Calculate the inverse F-transform (16)
- (10) Calculate the MADMEAN index
- (11) IF MADMEAN > Threshold THEN
- (12) n := n + 1
- (13) Go to (6)
- (14) END IF
- (15) END IF
- (16) NEXT
- (17) STORE the direct F-transform components
- (18) END

## Appendix B

- a9 = 1.34428E-33
- a8 = −2.82732E-28
- a7 = 2.32945E-23
- a6 = −9.06821E-19
- a5 = 1.45202E-14
- a4 = 0
- a3 = 0
- a2 = −0.066427015
- a1 = 0
- a0 = 17,699,903.37

- a9 = 1.45368E-33
- a8 = −3.06035E-28
- a7 = 2.52386E-23
- a6 = −9.83437E-19
- a5 = 1.57619E-14
- a4 = 0
- a3 = 0
- a2 = −0.072310129
- a1 = 0
- a0 = 19,302,936.01

- a9 = 4.40678E-32
- a8 = −1.11212E-26
- a7 = 1.14529E-21
- a6 = −5.94437E-17
- a5 = 1.42772E-12
- a4 = 0
- a3 = −0.000762063
- a2 = 13.0645932
- a1 = 0
- a0 = −1160591164

- a9 = 1.53589E-31
- a8 = −4.0454E-26
- a7 = 4.41907E-21
- a6 = −2.5338E-16
- a5 = 7.78147E-12
- a4 = −1.06189E-07
- a3 = 0
- a2 = 0
- a1 = 519825.0932
- a0 = −7088111243
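A trend polynomial given by a coefficient list like those above (a9 down to a0) can be evaluated with a Horner scheme; the coefficient values in this sketch are illustrative placeholders, not the fitted values from the tables.

```python
import numpy as np

# Coefficients listed from the highest degree (a9) down to the constant (a0),
# which is the order np.polyval expects.
coeffs = [1.3e-33, -2.8e-28, 2.3e-23, -9.1e-19, 1.45e-14,
          0.0, 0.0, -0.0664, 0.0, 1.77e7]

def trend(t):
    # Evaluates a9*t^9 + a8*t^8 + ... + a1*t + a0 via Horner's method.
    return np.polyval(coeffs, t)

print(trend(0.0) == coeffs[-1])   # at t = 0 only the constant a0 contributes
```

With coefficients spanning roughly forty orders of magnitude, the time variable must be kept in the same units used during fitting, or the high-degree terms dominate spuriously.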

## References

1. Abraham, B.; Ledolter, J. Statistical Methods for Forecasting; John Wiley & Sons: New York, NY, USA, 1983; p. 445. ISBN 978-0-471-86764-3.
2. Armstrong, J.S. (Ed.) Principles of Forecasting: A Handbook for Researchers and Practitioners; Springer: Berlin, Germany, 2001; p. 841.
3. Box, G.E.P.; Jenkins, G.M.; Reinsel, G.C. Time Series Analysis: Forecasting and Control, 5th ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 2015; p. 712. ISBN 978-1-118-67502-1.
4. Chatfield, C. Time Series Forecasting; Chapman & Hall/CRC: Boca Raton, FL, USA, 2001; p. 267. ISBN 1-58488-063-5.
5. Hyndman, R.J.; Athanasopoulos, G. Forecasting: Principles and Practice; OTexts: Melbourne, Australia, 2013; p. 291. ISBN 9780987507105.
6. Lu, W.Z.; Wang, W.J. Potential assessment of the Support Vector Machine method in forecasting ambient air pollutant trends. Chemosphere **2005**, 59, 693–701.
7. Makridakis, S.G.; Wheelwright, S.C.; Hyndman, R.J. Forecasting: Methods and Applications, 3rd ed.; J. Wiley & Sons: New York, NY, USA, 1998; p. 656. ISBN 978-0-471-53233-0.
8. Zhang, G.P.; Qi, M. Neural network forecasting for seasonal and trend time series. Eur. J. Oper. Res. **2005**, 160, 501–514.
9. Pankratz, A. Forecasting with Dynamic Regression Models; Wiley: Hoboken, NJ, USA, 2012; p. 392.
10. Miller, K.; Smola, A.J.; Ratsch, G.; Scholkopf, B.; Kohlmorgen, J.; Vapnik, V. Predicting time series with support vector machines. In Proceedings of the 7th International Conference on Artificial Neural Networks, Lausanne, Switzerland, 8–10 October 1997; Lecture Notes in Computer Science; Springer: Berlin, Germany, 1997; Volume 1327, pp. 999–1004.
11. Pai, P.F.; Lin, K.P.; Lin, C.S.; Chang, P.T. Time series forecasting by a seasonal support vector regression model. Expert Syst. Appl. **2010**, 37, 4261–4265.
12. Mohandes, M.A.; Halawani, T.O.; Rehman, S.; Hussain, A.A. Support vector machines for wind speed prediction. Renew. Energy **2004**, 29, 939–947.
13. Ittig, P.T. A seasonal index for business. Decis. Sci. **1997**, 28, 335–355.
14. Hong, W.C.; Pai, P.F. Potential assessment of the support vector regression technique in rainfall forecasting. Water Resour. Manag. **2007**, 21, 495–513.
15. Crone, S.F.; Hibon, M.; Nikolopoulos, K. Advances in forecasting with neural networks? Empirical evidence from the NN3 competition on time series prediction. Int. J. Forecast. **2011**, 27, 635–660.
16. Hamzacebi, C. Improving artificial neural networks performance in seasonal time series forecasting. Inf. Sci. **2008**, 178, 4550–4559.
17. Zhang, G.P.; Kline, D.M. Quarterly time-series forecasting with neural networks. IEEE Trans. Neural Netw. **2007**, 18, 1800–1814.
18. Zhang, G.; Patuwo, B.E.; Hu, M.Y. Forecasting with artificial neural networks: The state of the art. Int. J. Forecast. **1998**, 14, 35–62.
19. Zhang, G.P. Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing **2003**, 50, 159–175.
20. Faraway, J.; Chatfield, C. Time series forecasting with neural networks: A comparative study using the airline data. J. R. Stat. Soc. Ser. C Appl. Stat. **1998**, 47, 231–250.
21. Kihoro, J.M.; Otieno, R.O.; Wafula, C. Seasonal time series forecasting: A comparative study of ARIMA and ANN models. Afr. J. Sci. Technol. **2004**, 5, 41–49.
22. Khandelwal, I.; Adhikari, R.; Verma, G. Time series forecasting using hybrid ARIMA and ANN models based on DWT decomposition. Procedia Comput. Sci. **2015**, 48, 173–179.
23. Štepnicka, M.; Cortez, P.; Peralta Donate, J.; Štepnickova, L. Forecasting seasonal time series with computational intelligence: On recent methods and the potential of their combinations. Expert Syst. Appl. **2013**, 40, 1981–1992.
24. Kumar, A.; Kumar, D.; Jarial, S.K. A hybrid clustering method based on improved artificial bee colony and fuzzy C-means algorithm. Int. J. Artif. Intell. **2017**, 15, 40–60.
25. Medina, J.; Ojeda-Aciego, M. Multi-adjoint t-concept lattices. Inf. Sci. **2010**, 180, 712–725.
26. Nowaková, J.; Prílepok, M.; Snášel, V. Medical image retrieval using vector quantization and fuzzy S-tree. J. Med. Syst. **2017**, 41, 1–16.
27. Pozna, C.; Minculete, N.; Precup, R.; Kòczy, L.T.; Ballagi, A. Signatures: Definitions, operators and applications to fuzzy modeling. Fuzzy Sets Syst. **2012**, 201, 86–104.
28. Perfilieva, I. Fuzzy transforms: Theory and applications. Fuzzy Sets Syst. **2006**, 157, 993–1023.
29. Di Martino, F.; Loia, V.; Sessa, S. Fuzzy transforms method in prediction data analysis. Fuzzy Sets Syst. **2011**, 180, 146–163.
30. Wang, L.X.; Mendel, J.M. Generating fuzzy rules by learning from examples. IEEE Trans. Syst. Man Cybern. **1992**, 22, 1414–1427.
31. Di Martino, F.; Loia, V.; Perfilieva, I.; Sessa, S. An image coding/decoding method based on direct and inverse fuzzy transforms. Int. J. Approx. Reason. **2008**, 48, 110–131.
32. Di Martino, F.; Loia, V.; Sessa, S. Fuzzy transforms method and attribute dependency in data analysis. Inf. Sci. **2010**, 180, 493–505.
33. Novák, V.; Pavliska, V.; Perfilieva, I.; Štepnicka, M. F-transform and fuzzy natural logic in time series analysis. In Proceedings of the 8th Conference of the European Society for Fuzzy Logic and Technology (EUSFLAT), Milan, Italy, 11–13 September 2013; Atlantis Press: Amsterdam, The Netherlands, 2013; pp. 40–47.
34. Kolassa, S.; Schütz, W. Advantages of the MADMEAN ratio over the MAPE. Foresight **2007**, 6, 40–43.
35. Goodrich, R.L. The Forecast Pro methodology. Int. J. Forecast. **2000**, 16, 533–535.

**Figure 4.**Trend of the mean temperature in the months of July and August (from 1 July 2003 to 16 August 2015) obtained by using a ninth-degree polynomial fitting.

**Figure 5.** Trend of the max temperature in the months of July and August (from 1 July 2003 to 16 August 2015) obtained by using a ninth-degree best-fit polynomial.

**Figure 6.** (**a**) Results obtained for the mean temperature by using the avgSV method; (**b**) results obtained for the mean temperature by using the ARIMA method; (**c**) results obtained for the mean temperature by using the F-transforms method; and (**d**) results obtained for the mean temperature by using the TSSF method.

**Figure 7.** (**a**) Results for the max temperature in the months of July and August under the avgSV method; (**b**) results for the max temperature in the months of July and August under the ARIMA method; (**c**) results for the max temperature in the months of July and August under the F-transforms method; and (**d**) results for the max temperature in the months of July and August under the TSSF method.

**Figure 9.** (**a**) Results obtained for the variation of the min temperature by using the avgSV method; (**b**) results obtained for the variation of the min temperature by using the ARIMA method; (**c**) results obtained for the variation of the min temperature by using the F-transforms method; and (**d**) results obtained for the variation of the min temperature by using the TSSF method.

**Figure 10.** Trend of the mean temperature obtained by using a ninth-degree best-fit polynomial (Chiavari-Caperana weather station dataset).

**Figure 11.** (**a**) Results obtained for the variation of the mean temperature by using the avgSV method; (**b**) results obtained for the variation of the mean temperature by using the ARIMA method; (**c**) results obtained for the variation of the mean temperature by using the F-transforms method; and (**d**) results obtained for the variation of the mean temperature by using the TSSF method.

Forecasting Method | RMSE | MAPE | MAD | MADMEAN
---|---|---|---|---
avgSV | 1.78 | 5.60% | 1.42 | 5.50
ARIMA | 1.55 | 4.89% | 1.24 | 4.80
F-transforms | 1.61 | 4.96% | 1.28 | 5.05
TSSF | 1.37 | 4.29% | 1.09 | 4.22

Forecasting Method | RMSE | MAPE | MAD | MADMEAN
---|---|---|---|---
avgSV | 2.21 | 5.74% | 1.74 | 5.70
ARIMA | 1.93 | 5.03% | 1.53 | 4.99
F-transforms | 1.98 | 5.17% | 1.56 | 5.13
TSSF | 1.76 | 4.57% | 1.39 | 4.53

Forecasting Method | RMSE | MAPE | MAD | MADMEAN
---|---|---|---|---
avgSV | 2.97 | - | 2.34 | 18.66
ARIMA | 1.25 | - | 0.97 | 7.46
F-transforms | 1.56 | - | 1.09 | 8.73
TSSF | 0.87 | - | 0.66 | 5.26

Parameter | Training Dataset Dimension | Test Dataset Dimension | Seasonal Period | RMSE (avgSV) | RMSE (ARIMA) | RMSE (F-transf.) | RMSE (TSSF) | RMSE (SVM) | RMSE (ADANN)
---|---|---|---|---|---|---|---|---|---
Temp. mean | 757 | 15 | Week | 1.83 | 1.58 | 1.62 | 1.39 | 0.88 | 0.87
Temp. max | 757 | 15 | Week | 2.20 | 1.92 | 1.99 | 1.68 | 1.31 | 1.26
Temp. min | 4354 | 212 | Month | 2.90 | 1.27 | 1.54 | 0.94 | 0.96 | 1.01

**Table 5.**RMSE, MAPE, MAD, MADMEAN indices for the mean temperature (Chiavari-Caperana weather station).

Forecasting Method | RMSE | MAPE | MAD | MADMEAN
---|---|---|---|---
avgSV | 2.87 | 27.74% | 2.14 | 16.19
ARIMA | 1.13 | 12.37% | 0.89 | 5.95
F-transforms | 1.38 | 15.18% | 1.10 | 7.30
TSSF | 0.73 | 7.96% | 0.58 | 3.83

Parameter | Training Dataset Dimension | Test Dataset Dimension | Seasonal Period | RMSE (avgSV) | RMSE (ARIMA) | RMSE (F-transf.) | RMSE (TSSF) | RMSE (SVM) | RMSE (ADANN)
---|---|---|---|---|---|---|---|---|---
Temp. mean | 3822 | 78 | Month | 2.85 | 1.15 | 1.40 | 0.85 | 0.88 | 0.87

**Table 7.**RMSE in six methods for the mean temperature in various stations in Genova district (Italy).

Station | RMSE (avgSV) | RMSE (ARIMA) | RMSE (F-transf.) | RMSE (TSSF) | RMSE (SVM) | RMSE (ADANN)
---|---|---|---|---|---|---
Alpe Gorreto | 2.98 | 1.20 | 1.49 | 0.84 | 0.81 | 0.83
Campo Ligure | 2.74 | 1.09 | 1.34 | 0.76 | 0.71 | 0.76
Barbagelata | 3.25 | 1.30 | 1.57 | 0.89 | 0.84 | 0.90
Camogli | 3.39 | 1.38 | 1.68 | 0.95 | 0.88 | 0.86
Campo ligure | 3.02 | 1.20 | 1.49 | 0.83 | 0.77 | 0.79
Carlasco | 2.91 | 1.15 | 1.42 | 0.80 | 0.77 | 0.76
Chiavari | 2.78 | 1.12 | 1.39 | 0.78 | 0.73 | 0.77
Genova Bolzaneto | 2.95 | 1.16 | 1.41 | 0.81 | 0.77 | 0.75
Genova Pegli | 3.34 | 1.29 | 1.64 | 0.94 | 0.89 | 0.88
Panesi | 3.20 | 1.29 | 1.56 | 0.87 | 0.84 | 0.83
Rapallo | 2.71 | 1.08 | 1.33 | 0.75 | 0.78 | 0.84
Rovegno | 2.94 | 1.18 | 1.45 | 0.82 | 0.82 | 0.80
Tigliolo | 3.06 | 1.24 | 1.52 | 0.85 | 0.80 | 0.85
Viganego | 3.17 | 1.28 | 1.57 | 0.88 | 0.82 | 0.83

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Di Martino, F.; Sessa, S.
Time Series Seasonal Analysis Based on Fuzzy Transforms. *Symmetry* **2017**, *9*, 281.
https://doi.org/10.3390/sym9110281
