Algorithms and Applications of Time Series Analysis

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (15 January 2021) | Viewed by 22903

Special Issue Editor


Dr. Lukasz Machura
Guest Editor
Institute of Physics, Faculty of Science and Technology, University of Silesia, Katowice, Poland
Interests: statistical physics; time series analysis; complex systems

Special Issue Information

Dear Colleagues,

We live in a world of big data, where activities as diverse as sales, politics, economics, and research are driven and directed by an ever-growing data stream. In fact, by the time you finish reading this summary, more than 1 MB of data will have been created for every person on Earth, potentially containing useful information. The effort devoted to time series analysis is meant to answer a simple question: How does the past influence the future? Ever-growing computational power, together with faster and more robust algorithms, may find the answer one day.

This Special Issue of Algorithms, entitled “Algorithms and Applications of Time Series Analysis”, will be devoted mainly (but not limited) to problems of analyzing data organized chronologically in time. We invite you to submit your latest research on both theoretical issues and practical applications, such as time and frequency domain analysis, stationarity, modeling, classification, complexity, and pattern recognition, to name but a few.

Dr. Lukasz Machura
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • time series analysis
  • forecasting
  • classification
  • pattern recognition
  • statistical modeling
  • information entropy
  • stationarity
  • complexity
  • wavelets

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

12 pages, 994 KiB  
Article
Digital Twins in Solar Farms: An Approach through Time Series and Deep Learning
by Kamel Arafet and Rafael Berlanga
Algorithms 2021, 14(5), 156; https://doi.org/10.3390/a14050156 - 18 May 2021
Cited by 22 | Viewed by 5120
Abstract
The generation of electricity through renewable energy sources increases every day, with solar energy being one of the fastest-growing. The emergence of information technologies such as Digital Twins (DT) in the field of the Internet of Things and Industry 4.0 allows substantial development in automatic diagnostic systems. The objective of this work is to obtain the DT of a Photovoltaic Solar Farm (PVSF) with a deep-learning (DL) approach. To build such a DT, sensor-based time series are properly analyzed and processed. The resulting data are used to train a DL model (e.g., autoencoders) in order to detect anomalies of the physical system in its DT. Results show a reconstruction error of around 0.1, a recall score of 0.92, and an Area Under the Curve (AUC) of 0.97. Therefore, this paper demonstrates that the DT can reproduce the behavior of the physical system as well as efficiently detect its anomalies.
(This article belongs to the Special Issue Algorithms and Applications of Time Series Analysis)
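The anomaly-detection idea described in the abstract — flag windows whose reconstruction error exceeds a threshold learned from normal data — can be sketched as follows. The PCA-based linear "autoencoder", the simulated sensor windows, and the 3-sigma threshold are all illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fit_linear_autoencoder(X, n_components=2):
    """Fit a PCA-based linear 'autoencoder': encode = project onto the
    top principal components, decode = project back. A stand-in for
    the deep autoencoder used in the paper."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components]            # encoder/decoder weights
    return mean, W

def reconstruction_error(X, mean, W):
    Xc = X - mean
    X_hat = Xc @ W.T @ W + mean      # encode, then decode
    return np.linalg.norm(X - X_hat, axis=1)

# Simulated "normal" sensor-based time series windows (hypothetical data)
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(200, 8)) + np.linspace(0, 1, 8)
mean, W = fit_linear_autoencoder(normal)

# Threshold from the training distribution of reconstruction errors
errs = reconstruction_error(normal, mean, W)
threshold = errs.mean() + 3 * errs.std()

anomaly = np.full((1, 8), 5.0)       # clearly abnormal window
is_anomaly = reconstruction_error(anomaly, mean, W)[0] > threshold
```

An anomalous window reconstructs poorly because it lies far from the subspace learned on normal data, so its error exceeds the threshold.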

16 pages, 1120 KiB  
Article
Typhoon Intensity Forecasting Based on LSTM Using the Rolling Forecast Method
by Shijin Yuan, Cheng Wang, Bin Mu, Feifan Zhou and Wansuo Duan
Algorithms 2021, 14(3), 83; https://doi.org/10.3390/a14030083 - 4 Mar 2021
Cited by 35 | Viewed by 5182
Abstract
A typhoon is an extreme weather event with strong destructive force, which can cause major loss of life and economic damage. Thus, it is meaningful to reduce the prediction errors of typhoon intensity forecasting. Artificial and deep neural networks have recently become widely used for typhoon forecasting in order to ensure that typhoon intensity forecasts are accurate and timely. Typhoon intensity forecasting models based on long short-term memory (LSTM) are proposed herein, which forecast typhoon intensity as a time series problem based on historical typhoon data. First, the typhoon intensity forecasting models are trained and tested with processed typhoon data from 2000 to 2014 to find the optimal prediction factors. Then, the models are validated using the optimal prediction factors and compared to a feed-forward neural network (FNN). When applied to typhoons Chan-hom and Soudelor in 2015, the model based on LSTM using the optimal prediction factors shows the best performance and the lowest prediction errors. Thus, the model based on LSTM is practical and meaningful for predicting typhoon intensity within 120 h.
(This article belongs to the Special Issue Algorithms and Applications of Time Series Analysis)
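The rolling forecast method named in the title — predict one step ahead, feed that prediction back into the input window, and repeat — can be sketched as follows. The two-lag linear-extrapolation model is a hypothetical stand-in for the paper's LSTM:

```python
from collections import deque

def rolling_forecast(history, one_step_model, horizon):
    """Rolling (recursive) multi-step forecast: each one-step
    prediction becomes the newest input for the next step."""
    window = deque(history, maxlen=len(history))
    preds = []
    for _ in range(horizon):
        y_hat = one_step_model(list(window))
        preds.append(y_hat)
        window.append(y_hat)   # prediction re-enters the input window
    return preds

# Hypothetical one-step model: two-lag linear trend continuation,
# standing in for a trained LSTM.
def ar2(window):
    return 2 * window[-1] - window[-2]

# Intensity-like series increasing by 5 each time step
history = [10, 15, 20, 25]
forecast = rolling_forecast(history, ar2, horizon=4)
# forecast continues the linear trend: [30, 35, 40, 45]
```

The same loop works unchanged with any one-step predictor; only `one_step_model` would be replaced by the trained network.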

15 pages, 8755 KiB  
Article
The Modeling of Time Series Based on Least Square Fuzzy Cognitive Map
by Guoliang Feng, Wei Lu and Jianhua Yang
Algorithms 2021, 14(3), 69; https://doi.org/10.3390/a14030069 - 24 Feb 2021
Cited by 10 | Viewed by 2104
Abstract
A novel design method for time series modeling and prediction with fuzzy cognitive maps (FCM) is proposed in this paper. The developed model exploits the least square method to learn the weight matrix of the FCM derived from the given historical time series data. A fuzzy c-means clustering algorithm is used to construct the concepts of the FCM. Compared with the traditional FCM, the least square fuzzy cognitive map (LSFCM) is a direct solution procedure without iterative calculations. The LSFCM model is a straightforward, robust, and rapid learning method owing to its reliability and efficiency. In addition, the structure of the LSFCM can be further optimized by refining the positions of the concepts for higher prediction precision, for which an evolutionary optimization algorithm is used to find the optimal concepts. We also discuss in detail the impact of the number of concepts and the activation function parameters on FCM models. Publicly available time series data sets with different statistical characteristics, coming from different areas, are applied to evaluate the proposed modeling approach. The obtained results clearly show the effectiveness of the approach.
(This article belongs to the Special Issue Algorithms and Applications of Time Series Analysis)
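The direct, non-iterative least-squares learning idea can be sketched as follows, assuming the common FCM update rule x(t+1) = sigmoid(W·x(t)). The two-concept example data and this particular linearization are illustrative assumptions, not the paper's exact LSFCM formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(x):
    return np.log(x / (1.0 - x))

def learn_fcm_weights(X_t, X_next):
    """Direct least-squares estimate of an FCM weight matrix W under
    x(t+1) = sigmoid(W @ x(t)). Applying the inverse activation
    (logit) linearizes the problem, so W follows from one ordinary
    least-squares solve -- no iterative training."""
    Z = logit(X_next)                          # sigmoid^-1 of successors
    W_T, *_ = np.linalg.lstsq(X_t, Z, rcond=None)
    return W_T.T

# Hypothetical 2-concept map: generate state pairs from a known W,
# then recover it from the data alone.
rng = np.random.default_rng(1)
W_true = np.array([[0.5, -0.3], [0.2, 0.4]])
X_t = rng.uniform(0.2, 0.8, size=(50, 2))      # concept activations x(t)
X_next = sigmoid(X_t @ W_true.T)               # successors x(t+1)
W_hat = learn_fcm_weights(X_t, X_next)         # recovers W_true
```

Because the generated pairs satisfy the model exactly, the least-squares solve recovers the true weight matrix up to floating-point precision.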

14 pages, 2491 KiB  
Article
A Novel Multi-Dimensional Composition Method Based on Time Series Similarity for Array Pulse Wave Signals Detecting
by Hongjie Zou, Yitao Zhang, Jun Zhang, Chuanglu Chen, Xingguang Geng, Shaolong Zhang and Haiying Zhang
Algorithms 2020, 13(11), 297; https://doi.org/10.3390/a13110297 - 14 Nov 2020
Cited by 3 | Viewed by 2951
Abstract
The pulse wave signal sensed over the radial artery on the wrist is a crucial physiological indicator in disease diagnosis. A sensor array composed of multiple sensors can collect abundant pulse wave information; as a result, it has gradually attracted the attention of practitioners. However, few practical methods exist for obtaining a one-dimensional pulse wave from the sensor array's spatially multi-dimensional signals. The current algorithm, which takes the pulse wave with the highest amplitude as the significant data, suffers from low consistency because the signal acquired each time differs significantly due to the sensor's relative position shift over the test area. This paper proposes a processing method based on time series similarity, which can take full advantage of the sensor array's spatially multi-dimensional characteristics and effectively avoid the above factors' influence. A pulse wave acquisition system (PWAS) containing a micro-electro-mechanical system (MEMS) sensor array is continuously extruded using a stable dynamic pressure input source to simulate the pulse wave acquisition process. Experiments are conducted at multiple test locations with multiple data acquisitions to evaluate the performance of the algorithm. The experimental results show that the newly proposed processing method, using time series similarity as the criterion, has better consistency and stability.
(This article belongs to the Special Issue Algorithms and Applications of Time Series Analysis)
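The similarity-based alternative to max-amplitude channel selection can be illustrated as follows. Pearson correlation as the similarity measure, the simulated four-channel data, and the "most mutually similar channel" selection rule are all assumptions for illustration — the paper's actual multi-dimensional composition method may differ:

```python
import numpy as np

def most_representative_channel(signals):
    """Pick the sensor-array channel whose waveform is, on average,
    most similar to all other channels (Pearson correlation used here
    as an illustrative time series similarity measure).

    signals: array of shape (n_channels, n_samples)."""
    corr = np.corrcoef(signals)       # pairwise channel similarity
    np.fill_diagonal(corr, 0.0)       # ignore self-similarity
    mean_sim = corr.mean(axis=1)
    return int(np.argmax(mean_sim))

# Hypothetical 4-channel acquisition: three consistent pulse-like
# waveforms (one with higher amplitude) plus one noise-dominated channel.
t = np.linspace(0, 2 * np.pi, 200)
pulse = np.sin(t) + 0.3 * np.sin(3 * t)
rng = np.random.default_rng(2)
signals = np.stack([
    pulse + 0.02 * rng.normal(size=t.size),
    pulse * 1.5 + 0.02 * rng.normal(size=t.size),  # highest amplitude
    pulse + 0.02 * rng.normal(size=t.size),
    rng.normal(size=t.size),                       # noisy outlier
])
best = most_representative_channel(signals)  # never the noisy channel
```

Unlike the max-amplitude rule, this criterion is insensitive to amplitude changes caused by the sensor's position shift, since correlation compares waveform shape rather than magnitude.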

20 pages, 3310 KiB  
Article
A Boundary Distance-Based Symbolic Aggregate Approximation Method for Time Series Data
by Zhenwen He, Shirong Long, Xiaogang Ma and Hong Zhao
Algorithms 2020, 13(11), 284; https://doi.org/10.3390/a13110284 - 9 Nov 2020
Cited by 9 | Viewed by 6119
Abstract
A large amount of time series data is being generated every day in a wide range of sensor application domains. The symbolic aggregate approximation (SAX) is a well-known time series representation method, which has a lower bound to Euclidean distance and may discretize continuous time series. SAX has been widely used for applications in various domains, such as mobile data management, financial investment, and shape discovery. However, the SAX representation has a limitation: Symbols are mapped from the average values of segments, but SAX does not consider the boundary distance in the segments. Different segments with similar average values may be mapped to the same symbols, and the SAX distance between them is 0. In this paper, we propose a novel representation named SAX-BD (boundary distance) by integrating the SAX distance with a weighted boundary distance. The experimental results show that SAX-BD significantly outperforms the SAX representation, ESAX representation, and SAX-TD representation.
(This article belongs to the Special Issue Algorithms and Applications of Time Series Analysis)
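The standard SAX representation that the abstract builds on can be sketched as follows. This shows only plain SAX (z-normalization, Piecewise Aggregate Approximation, Gaussian breakpoints), not the weighted boundary distance that SAX-BD adds:

```python
import numpy as np

# Standard SAX breakpoints for an alphabet of size 4 (equiprobable
# regions under the standard normal distribution)
BREAKPOINTS = [-0.6745, 0.0, 0.6745]
ALPHABET = "abcd"

def sax(series, n_segments):
    """Standard SAX: z-normalize, reduce with Piecewise Aggregate
    Approximation (PAA), then map each segment mean to a symbol via
    Gaussian breakpoints. Assumes len(series) divides evenly into
    n_segments. (SAX-BD additionally weights in a boundary distance,
    which this sketch omits.)"""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                  # z-normalize
    paa = x.reshape(n_segments, -1).mean(axis=1)  # segment means
    idx = np.searchsorted(BREAKPOINTS, paa)       # breakpoint bins
    return "".join(ALPHABET[i] for i in idx)

# A low-high-low series maps to a low-high-low symbol word
word = sax([1, 1, 2, 2, 8, 8, 9, 9, 2, 2, 1, 1], n_segments=3)
# word == "ada"
```

The limitation the paper targets is visible here: two segments with the same mean always yield the same symbol, regardless of how their boundary values differ.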
