Advanced AI and Machine Learning Techniques for Time Series Analysis and Pattern Recognition

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 October 2026 | Viewed by 44577

Special Issue Editors


Guest Editor
Istituto Nazionale di Astrofisica INAF IASF Palermo, Via Ugo La Malfa 153, 90146 Palermo, Italy
Interests: artificial intelligence; computer science; machine learning; deep learning; computer vision; high-energy astrophysics

Guest Editor
Istituto Nazionale di Astrofisica INAF IASF Palermo, Via Ugo La Malfa 153, 90146 Palermo, Italy
Interests: software engineering; computer-aided system; semantic analysis; control software system; high-energy astrophysics

Guest Editor Assistant
Istituto Nazionale di Astrofisica INAF IASF Palermo, Via Ugo La Malfa 153, 90146 Palermo, Italy
Interests: artificial intelligence; machine learning; deep learning; high-energy astrophysics

Special Issue Information

Dear Colleagues,

This Special Issue aims to explore cutting-edge artificial intelligence and machine learning approaches for analyzing time series data and recognizing complex patterns across diverse domains. We invite original research articles and comprehensive review papers that advance the theoretical foundations or practical applications of deep learning architectures, transformer models, reinforcement learning, and hybrid AI systems specifically designed for temporal data challenges. Topics of interest include, but are not limited to, the following: novel architectures for multivariate time series forecasting, anomaly detection in streaming data, interpretable models for temporal pattern discovery, transfer learning for limited time series datasets, and AI techniques for real-time decision-making systems. We particularly welcome interdisciplinary submissions demonstrating innovative applications in healthcare, finance, industrial monitoring, environmental science, or smart infrastructure.

Dr. Antonio Pagliaro
Dr. Pierluca Sangiorgi
Guest Editors

Dr. Antonio Alessio Compagnino
Guest Editor Assistant

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • time series analysis
  • pattern recognition
  • predictive modeling

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (14 papers)


Editorial

6 pages, 163 KB  
Editorial
Advanced AI and Machine Learning Techniques for Time Series Analysis and Pattern Recognition
by Antonio Pagliaro, Antonio Alessio Compagnino and Pierluca Sangiorgi
Appl. Sci. 2025, 15(6), 3165; https://doi.org/10.3390/app15063165 - 14 Mar 2025
Cited by 3 | Viewed by 4063
Abstract
Time series analysis and pattern recognition are cornerstones for innovation across diverse domains. In finance, these techniques enable market prediction and risk assessment. Astrophysicists use them to detect various phenomena and analyze data. Environmental scientists track ecosystem changes and pollution patterns, while healthcare professionals monitor patient vitals and disease progression. Transportation systems optimize traffic flow and predict maintenance needs. Energy providers balance grid loads and forecast consumption. Climate scientists model atmospheric changes and extreme weather events. Cybersecurity experts identify threats through anomaly detection in network traffic patterns. This editorial introduces this Special Issue, which explores state-of-the-art AI and machine learning (ML) techniques, including Long Short-Term Memory (LSTM) networks, Transformers, ensemble methods, and AutoML frameworks. We highlight innovative applications in data-driven finance, astrophysical event reconstruction, cloud masking, and healthcare monitoring. Recent advancements in feature engineering, unsupervised learning frameworks for cloud masking, and Transformer-based time series forecasting demonstrate the potential of these technologies. The papers collected in this Special Issue showcase how integrating domain-specific knowledge with computational innovations provides a pathway to achieving higher accuracy in time series analysis across various scientific disciplines.

Research

19 pages, 2067 KB  
Article
Shipping News Sentiment Meets Multiscale Decomposition: A Dual-Gated Deep Model for Baltic Dry Index Forecasting
by Lili Qu, Nan Hong and Jieru Tan
Appl. Sci. 2026, 16(6), 2739; https://doi.org/10.3390/app16062739 - 12 Mar 2026
Viewed by 380
Abstract
Accurate prediction of shipping freight indices, represented by the Baltic Dry Index (BDI), is crucial for operational decision-making and risk management in the shipping industry. Existing models mainly rely on historical time-series data and often overlook the influence of unstructured information such as market sentiment. To address this limitation, this study proposes a dynamic freight rate prediction framework integrating a shipping text sentiment index. First, a shipping news sentiment index is constructed using a RoBERTa-based pre-trained model to quantify the impact of market sentiment on freight rate fluctuations. Second, the BDI series is decomposed and reconstructed through Variational Mode Decomposition (VMD) and Fuzzy C-Means (FCM) clustering to extract multiscale features. Finally, a deep-learning-based multi-step prediction model is developed by incorporating the sentiment index into the forecasting process. Empirical results show that the proposed model significantly outperforms benchmark models without sentiment information in terms of MAE, RMSE, and R², and demonstrates greater robustness under extreme market conditions. These findings provide a novel methodological framework for improving freight rate forecasting accuracy and offer practical decision support for shipping enterprises.
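
The decomposition-and-reconstruction step groups decomposed modes by similarity. The Fuzzy C-Means part can be sketched as follows; this is a minimal numpy implementation with a toy two-feature input, not the authors' code:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means: returns membership matrix U (n x c) and centers."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # each row is a membership distribution
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))         # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Toy example: two well-separated groups of "mode features"
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (10, 2)),
               np.random.default_rng(2).normal(5, 0.1, (10, 2))])
U, centers = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)
```

In the paper's pipeline, the clustered modes would then be summed per group before being fed to the forecaster.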

28 pages, 2622 KB  
Article
Simulation of Reservoir Group Outflow Using LSTM with a Knowledge-Guided Loss Function Coordinated by the MDUPLEX Algorithm
by Qiaoping Liu, Changlu Qiao and Shuo Cao
Appl. Sci. 2026, 16(4), 2125; https://doi.org/10.3390/app16042125 - 22 Feb 2026
Viewed by 350
Abstract
Global climate change and spatiotemporal heterogeneity in water resources exacerbate supply-demand imbalances. Accurate outflow simulation for joint reservoir group operations thus becomes critical for scientific water resources management. Existing data-driven models like the Long Short-Term Memory (LSTM) lack the robust integration of physical constraints. Traditional mechanistic methods, by contrast, lack generality and stability under complex hydrological conditions. To address this limitation, we propose MDUPLEX-KG-LSTM—a physically constrained data-driven model for reservoir outflow simulation. The model incorporates multi-round DUPLEX (MDUPLEX) data partitioning, which ensures statistical homogeneity across training, validation, and test datasets. It also features a Knowledge-Guided (KG) loss function that embeds core physical constraints: water balance, dead water level, flood season restricted water level, and inter-reservoir re-regulation mechanisms. Additionally, it adopts an LSTM network optimized via Particle Swarm Optimization (PSO) for enhanced predictive performance. We validate the model using daily hydrological data from 2010 to 2025 for three reservoirs in the Wujiaqu Irrigation District of Xinjiang, China. The model exhibits exceptional stability and predictive accuracy across key evaluation metrics: Nash–Sutcliffe Efficiency (NSE) ≥ 0.82, Pearson correlation coefficient (r) > 0.94, Root Mean Square Error (RMSE) ≤ 1.50 m³/s, and Water Balance Index (WBI) ≤ 0.016. It outperforms conventional data-driven and mechanistic models in extreme flow simulation scenarios. It also eliminates unphysical negative outflow values in all predictive results. The model achieves 100% compliance with flood control standards and an irrigation guarantee rate of no less than 86%. This study advances the development of physically constrained data-driven modeling for water resources engineering. It provides reliable methodological support for the intelligent operation of reservoir groups in smart water conservancy systems. The model also balances training cost and inference efficiency effectively. It demonstrates verified scalability for reservoir groups of varying scales, fully meeting the operational deployment requirements of smart water systems.
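
The knowledge-guided idea is to add physics penalties to a standard error term. A minimal numpy sketch, with made-up weights and a simplified water-balance residual rather than the authors' exact formulation:

```python
import numpy as np

def kg_loss(q_pred, q_obs, inflow, storage_change, w_phys=10.0):
    """Data-fit MSE plus two illustrative physics penalties:
    (1) outflow must be non-negative; (2) mass balance: inflow - outflow ≈ dS/dt."""
    mse = np.mean((q_pred - q_obs) ** 2)
    neg_penalty = np.mean(np.maximum(-q_pred, 0.0) ** 2)     # penalize negative outflow
    balance_res = inflow - q_pred - storage_change           # water-balance residual
    balance_penalty = np.mean(balance_res ** 2)
    return mse + w_phys * (neg_penalty + balance_penalty)

q_obs = np.array([2.0, 3.0, 1.5])                  # observed outflow (m³/s, toy values)
inflow = np.array([2.5, 3.2, 1.8])
ds = inflow - q_obs                                # storage change consistent with balance
good = kg_loss(q_obs, q_obs, inflow, ds)           # physically consistent prediction
bad = kg_loss(np.array([-1.0, 3.0, 1.5]), q_obs, inflow, ds)  # negative outflow
```

A prediction that violates the constraints is penalized far more heavily than one that merely misses the data, which is what steers the network away from unphysical outputs.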

25 pages, 3917 KB  
Article
Hierarchical Attention Fused CNN-LSTM Using Structured 2D Indicator Matrices for Stock Trading Action Detection
by Hao Feng, Xian Li, Dongjie Zhao and Hui Kong
Appl. Sci. 2026, 16(4), 1672; https://doi.org/10.3390/app16041672 - 7 Feb 2026
Viewed by 539
Abstract
Accurate detection of trading actions (buy, sell, and hold) is critical for portfolio optimization and risk management in volatile stock markets. However, existing approaches often suffer from deficiencies in feature representation, spatiotemporal modeling, and class balancing, which limit their effectiveness. To address these issues, we propose HA-CL, a deep learning framework that integrates a hierarchical attention mechanism with CNN-LSTM. Specifically, technical indicators are encoded into a structured 2D matrix to preserve the inherent characteristics of stocks. Features extracted by ResNet are processed by a channel-wise LSTM equipped with an attention core to adaptively fuse spatial, temporal, and channel-level importance. To mitigate class imbalance, we design a customized extrema labeling strategy augmented with extrema oversampling, an importance-aware focal loss, and a heuristic action recalibration. Experiments on 63 Chinese A-share stocks show that HA-CL achieves an average accuracy of 68.89% with an annualized return of 111.01%, substantially outperforming all baselines. Risk-adjusted return metrics such as the Sharpe Ratio and the Maximum Drawdown further validate its robustness across market conditions. Together, they highlight the potential of HA-CL to translate complex market patterns into profitable trading actions.
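
The class-imbalance remedy builds on focal loss, which down-weights easy examples. A sketch of the standard multi-class form the paper extends (the class weights and probabilities below are illustrative, not the paper's):

```python
import numpy as np

def focal_loss(probs, labels, alpha, gamma=2.0):
    """Multi-class focal loss: scales cross-entropy by (1 - p_t)^gamma so confident,
    easy examples contribute little, and by per-class weights alpha."""
    p_t = probs[np.arange(len(labels)), labels]   # probability of the true class
    a_t = alpha[labels]
    return np.mean(-a_t * (1 - p_t) ** gamma * np.log(p_t + 1e-12))

probs = np.array([[0.7, 0.2, 0.1],     # hold / buy / sell probabilities (toy)
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
labels = np.array([0, 1, 2])
alpha = np.array([0.2, 0.4, 0.4])      # rarer buy/sell classes weighted up
loss_easy = focal_loss(probs, labels, alpha)               # mostly correct batch
loss_hard = focal_loss(probs, np.array([1, 0, 0]), alpha)  # misclassified batch
```

The paper's "importance-aware" variant would further modulate the weights by sample importance, which is not reproduced here.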

16 pages, 940 KB  
Article
A Reinforcement Learning Framework for Fraud Detection in Highly Imbalanced Financial Data
by Alkis Papanastassiou, Benedetta Camaiani, Piergiulio Lenzi and Riccardo Crupi
Appl. Sci. 2026, 16(1), 252; https://doi.org/10.3390/app16010252 - 26 Dec 2025
Viewed by 1391
Abstract
Anomaly detection in financial transactions is a challenging task, primarily due to severe class imbalance and the adaptive behavior of fraudulent activities. This paper presents a reinforcement learning framework for fraud detection (RLFD) to address this problem. We train a deep Q-network (DQN) agent with a long short-term memory (LSTM) encoder to process sequences of financial events and identify anomalies. On a proprietary, highly imbalanced dataset, 10-fold cross-validation highlights a distinct trade-off in performance. While a gradient boosted trees (GBT) baseline demonstrates superior global ranking capabilities (higher ROC and PR AUC), the RLFD agent successfully learns a high-recall policy directly from the reward signal, meeting operational needs for rare event detection. Importantly, a dynamic orthogonality analysis proves that the two models detect distinct subsets of fraudulent activity. The RLFD agent consistently identifies unique fraudulent transactions that the tree-based model misses, regardless of the decision threshold. Even at high-confidence operating points, the RLFD agent accounts for nearly 30% of the detected anomalies. These results suggest that while tree-based models offer high precision for static patterns, RL-based agents capture sequential anomalies that are otherwise missed, supporting a hybrid, parallel deployment strategy.
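
At the core of DQN training are Bellman targets built from the reward signal. A minimal sketch with toy numbers; the actual reward design encoding fraud and miss costs is the paper's, not shown here:

```python
import numpy as np

def dqn_targets(q_next, rewards, dones, gamma=0.99):
    """Bellman targets used to train a DQN: r + gamma * max_a' Q(s', a'),
    with bootstrapping cut off at terminal transitions."""
    return rewards + gamma * (1 - dones) * q_next.max(axis=1)

# A batch of 3 transitions (illustrative values)
q_next = np.array([[0.2, 1.0], [0.5, 0.4], [0.0, 0.0]])  # Q(s', a') from target net
rewards = np.array([1.0, -0.1, 5.0])
dones = np.array([0.0, 0.0, 1.0])      # the last transition ends the episode
targets = dqn_targets(q_next, rewards, dones)            # -> [1.99, 0.395, 5.0]
```

The network's predicted Q-values are then regressed toward these targets; in the paper, the state is an LSTM encoding of the recent event sequence.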

25 pages, 13024 KB  
Article
Hybrid Frequency–Temporal Modeling with Transformer for Long-Term Satellite Telemetry Prediction
by Zhuqing Chen, Jiasen Yang, Zhongkang Yin, Yijia Wu, Lei Zhong, Qingyu Jia and Zhimin Chen
Appl. Sci. 2025, 15(21), 11585; https://doi.org/10.3390/app152111585 - 30 Oct 2025
Viewed by 1641
Abstract
Reliable forecasting of satellite telemetry is critical for spacecraft health management and mission planning. However, conventional data-driven methods often struggle to effectively capture both the long-term dependencies and local dynamics inherent in telemetry data. To tackle these challenges, we introduce FFT1D-Dual, a hybrid Transformer framework that unifies frequency-domain and temporal-domain modeling, effectively capturing both long-term dependencies and local features in telemetry data to enable more accurate satellite forecasting. The encoder replaces computationally expensive self-attention with a novel Dual-Path Mixer encoder that combines one-dimensional Fast Fourier Transform (FFT) and temporal convolutions, adaptively fused via a learnable channel-wise gating mechanism. A standard attention-based decoder with dynamic positional encodings preserves temporal reasoning capability. Experiments on real-world satellite telemetry datasets demonstrate that FFT1D-Dual outperforms baselines in most short- and long-term horizons on three representative telemetry variables while maintaining consistently lower error growth in long-horizon predictions. Ablation studies confirm that the combination of frequency-domain modeling and dual-path fusion jointly contributes to these gains. The proposed approach provides an efficient solution for accurate long-term forecasting in complex satellite telemetry scenarios.
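
The dual-path idea can be illustrated in numpy: a frequency path that keeps low FFT bins (global structure) and a temporal path that smooths locally, fused by a gate. In the real model the gate is learned per channel; here it is a constant, and all parameters are illustrative:

```python
import numpy as np

def dual_path_mixer(x, keep_frac=0.25, kernel=5, gate=0.5):
    """Frequency path: keep the lowest fraction of rFFT bins (global structure).
    Temporal path: moving-average convolution (local dynamics). Gated fusion."""
    spec = np.fft.rfft(x)
    cutoff = max(1, int(len(spec) * keep_frac))
    spec[cutoff:] = 0.0                          # drop high-frequency bins
    freq_path = np.fft.irfft(spec, n=len(x))
    k = np.ones(kernel) / kernel                 # simple temporal convolution
    temp_path = np.convolve(x, k, mode="same")
    return gate * freq_path + (1 - gate) * temp_path

t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * 3 * t)                # slow "telemetry" component
x = clean + 0.1 * np.random.default_rng(0).normal(size=256)
y = dual_path_mixer(x)
```

Both paths suppress high-frequency noise from different angles, which is why their fusion tracks the underlying signal better than the raw input.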

19 pages, 974 KB  
Article
Short-Duration Monofractal Signals for Heart Failure Characterization Using CNN-ELM Models
by Juan L. López, José A. Vásquez-Coronel, David Morales-Salinas, Daniel Toral Acosta, Romeo Selvas Aguilar and Ricardo Chapa Garcia
Appl. Sci. 2025, 15(21), 11453; https://doi.org/10.3390/app152111453 - 27 Oct 2025
Viewed by 856
Abstract
Monofractal analysis offers a promising framework for characterizing cardiac dynamics, particularly in the early detection of heart failure. However, most existing approaches rely on long-duration physiological signals and do not explore the classification of disease severity. In this study, we propose a hybrid CNN-ELM model trained exclusively on synthetic monofractal time series of short length (128 to 512 samples), aiming to assess its ability to distinguish between healthy individuals and varying degrees of heart failure defined by the NYHA functional classification. Our results show that Hurst exponent distributions reflect the progressive loss of complexity in cardiac rhythms as heart failure severity increases. The model successfully classified both binary (healthy vs. sick) and multiclass (NYHA I–IV) scenarios by grouping Hurst exponent values (H = 0.1 to H = 0.9) into clinical categories, achieving peak accuracy ranges of 97.3–98.9% for binary classification and 96.2–98.8% for multiclass classification across signal lengths of 128, 256, and 512 samples. Importantly, the CNN-ELM architecture demonstrated fast training times and robust generalization, outperforming previous approaches based solely on support vector machines. These findings highlight the clinical potential of monofractal indices as non-invasive biomarkers of cardiovascular health and support the use of short synthetic signals for scalable, low-cost screening applications. Future work will extend this framework to multifractal and real-world clinical data and explore its integration into intelligent diagnostic systems.
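
The Hurst exponent that anchors this analysis can be estimated by classical rescaled-range (R/S) analysis. A minimal sketch, not the paper's estimator:

```python
import numpy as np

def hurst_rs(x, min_chunk=16):
    """Estimate the Hurst exponent as the slope of log(R/S) vs log(window size),
    averaging R/S over non-overlapping windows at each scale."""
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            w = x[start:start + size]
            z = np.cumsum(w - w.mean())          # cumulative deviation profile
            r = z.max() - z.min()                # range
            s = w.std()                          # scale
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
# Uncorrelated noise has H ≈ 0.5 (R/S carries a small positive bias at finite lengths)
h_noise = hurst_rs(rng.normal(size=4096))
```

Persistent series (H > 0.5) and anti-persistent series (H < 0.5) shift this slope accordingly, which is the property the classifier exploits.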

22 pages, 2356 KB  
Article
A Study on Metal Futures Price Prediction Based on Piecewise Cubic Bézier Filtering for TCN
by Qingliang Zhao, Hongding Li, Qiangqiang Zhang and Yiduo Wang
Appl. Sci. 2025, 15(17), 9792; https://doi.org/10.3390/app15179792 - 6 Sep 2025
Cited by 1 | Viewed by 1382
Abstract
This study develops an effective forecasting model for metal futures prices with enhanced capability in trend identification and abrupt change detection, aiming to improve decision-making in both financial and industrial contexts. A hybrid framework is proposed that integrates non-uniform piecewise cubic Bézier curves with a temporal convolutional network (TCN). The Bézier–Hurst (BH) decomposition extracts multi-scale trend components, which are then processed by a TCN to capture long-range dependencies. Empirical results show that the model outperforms LSTM, standard TCN, Bézier–TCN, and WD-TCN, achieving higher accuracy in trend detection and abrupt change response. This integration of Bézier-based decomposition with TCN offers a novel and robust tool for forecasting, providing valuable support for risk control and strategic planning in commodity markets.
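
The building block of the filtering stage is the cubic Bézier curve. A single segment evaluated by De Casteljau's algorithm is sketched below; the paper fits non-uniform piecewise segments, and the control points here are illustrative:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameters t via De Casteljau's algorithm."""
    t = np.asarray(t)[:, None]
    a = (1 - t) * p0 + t * p1        # first level of linear interpolation
    b = (1 - t) * p1 + t * p2
    c = (1 - t) * p2 + t * p3
    d = (1 - t) * a + t * b          # second level
    e = (1 - t) * b + t * c
    return (1 - t) * d + t * e       # point on the curve

# Control points loosely following a noisy price segment (illustrative values)
pts = np.array([[0.0, 100.0], [1.0, 104.0], [2.0, 101.0], [3.0, 105.0]])
curve = cubic_bezier(*pts, np.linspace(0, 1, 50))
```

Because the curve interpolates only the segment endpoints while the inner control points shape it, chaining segments yields a smooth trend that filters out high-frequency price noise before the TCN sees it.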

22 pages, 3060 KB  
Article
Analysis of the Time Series of Compressed Air Flow and Pressure and Determining Criteria for Diagnosing Causes of Pressure Drop in Pneumatic Systems
by Tanya Titova and Rosen Kosturkov
Appl. Sci. 2025, 15(17), 9536; https://doi.org/10.3390/app15179536 - 29 Aug 2025
Viewed by 1827
Abstract
This article explores the possibility of diagnosing unwanted pressure drops in pneumatic systems. The proposed algorithm aims to distinguish the causes and location of their occurrence. The diagnostics clearly distinguish pressure drops caused by supply lines from those caused in the main or branch lines of an industrial pneumatic system. Pressure drops in pneumatic systems are one of the main causes of increased energy consumption, so detecting and locating their causes is essential for the energy and resource optimization of pneumatic systems. This article proposes an approach that uses the time diagrams of two measurable variables—flow rate and pressure—at the inlet of the end consumer (machine). Based on constant monitoring and a correlation relationship between the two time series, we determined indicators for detecting and locating unwanted pressure drops. To verify the proposed approach and analysis, we observed 16 real production machines and lines.
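
The correlation relationship between the two series can be monitored in sliding windows. A sketch with an illustrative criterion (a strongly negative flow-pressure correlation at the consumer inlet suggesting a demand-driven supply-line drop); the thresholds and data are not the paper's:

```python
import numpy as np

def rolling_corr(flow, pressure, window=50):
    """Pearson correlation between flow and pressure in non-overlapping windows."""
    out = []
    for i in range(0, len(flow) - window + 1, window):
        f, p = flow[i:i + window], pressure[i:i + window]
        out.append(np.corrcoef(f, p)[0, 1])
    return np.array(out)

rng = np.random.default_rng(3)
flow = rng.normal(10, 1, 200)                            # l/min, toy values
pressure = 6.0 - 0.3 * flow + rng.normal(0, 0.05, 200)   # bar: drop grows with demand
corr = rolling_corr(flow, pressure)
```

A persistently strong negative correlation indicates the pressure drop tracks consumption, while drops uncorrelated with flow would point elsewhere in the line.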

20 pages, 4093 KB  
Article
CNN Input Data Configuration Method for Fault Diagnosis of Three-Phase Induction Motors Based on D-Axis Current in D-Q Synchronous Reference Frame
by Yeong-Jin Goh
Appl. Sci. 2025, 15(15), 8380; https://doi.org/10.3390/app15158380 - 28 Jul 2025
Viewed by 1143
Abstract
This study proposes a novel approach to input data configuration for the fault diagnosis of three-phase induction motors. Convolutional neural network (CNN)-based diagnostic methods often employ three-phase current signals and apply various image transformation techniques, such as RGB mapping, wavelet transforms, and short-time Fourier transform (STFT), to construct multi-channel input data. While such approaches outperform 1D-CNNs or grayscale-based 2D-CNNs due to their rich informational content, they require multi-channel data and involve an increased computational complexity. Accordingly, this study transforms the three-phase currents into the D-Q synchronous reference frame and utilizes the D-axis current (Id) for image transformation. The Id is used to generate input data using the same image processing techniques, allowing for a direct performance comparison under identical CNN architectures. Experiments were conducted under consistent conditions using both three-phase-based and Id-based methods, each applied to RGB mapping, DWT, and STFT. The classification accuracy was evaluated using a ResNet50-based CNN. Results showed that the Id-STFT achieved the highest performance, with a validation accuracy of 99.6% and a test accuracy of 99.0%. While the RGB representation of three-phase signals has traditionally been favored for its information richness and diagnostic performance, this study demonstrates that a high-performance CNN-based fault diagnosis is achievable even with grayscale representations of a single current.
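
The D-Q transformation itself is the standard Park transform: for a balanced machine it collapses three sinusoidal phase currents into near-constant Id and Iq components. A numpy sketch (amplitude-invariant form; signal parameters are illustrative):

```python
import numpy as np

def park_dq(ia, ib, ic, theta):
    """Three-phase currents -> D-Q synchronous reference frame (amplitude-invariant)."""
    d = (2 / 3) * (ia * np.cos(theta)
                   + ib * np.cos(theta - 2 * np.pi / 3)
                   + ic * np.cos(theta + 2 * np.pi / 3))
    q = -(2 / 3) * (ia * np.sin(theta)
                    + ib * np.sin(theta - 2 * np.pi / 3)
                    + ic * np.sin(theta + 2 * np.pi / 3))
    return d, q

t = np.linspace(0, 0.1, 1000)
theta = 2 * np.pi * 50 * t                     # synchronous angle at 50 Hz
ia = 10 * np.cos(theta)                        # balanced 10 A three-phase set
ib = 10 * np.cos(theta - 2 * np.pi / 3)
ic = 10 * np.cos(theta + 2 * np.pi / 3)
i_d, i_q = park_dq(ia, ib, ic, theta)          # Id ≈ 10 A constant, Iq ≈ 0
```

Faults break this balance, so their signatures appear as ripples on the otherwise flat Id, which is what the grayscale image transforms then capture.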

32 pages, 4711 KB  
Article
Anomaly Detection in Elderly Health Monitoring via IoT for Timely Interventions
by Cosmina-Mihaela Rosca and Adrian Stancu
Appl. Sci. 2025, 15(13), 7272; https://doi.org/10.3390/app15137272 - 27 Jun 2025
Cited by 5 | Viewed by 5049
Abstract
As people age, more careful health monitoring becomes increasingly important. The article presents the development and implementation of an integrated system for monitoring the health of elderly individuals using Internet of Things (IoT) technology and a wearable bracelet that continuously collects vital data. The device integrates MAX30100 sensors for heart rate monitoring and an MPU-6050 for step counting and sleep quality analysis (deep and superficial sleep). The collected metrics, including average heart rate (AR), minimum (mR) and maximum (MR) heart rate, number of steps (S), deep sleep time (DST), and superficial sleep time (SST), are processed in real time through a health anomaly detection algorithm (HADA) based on dimensionality reduction using principal component analysis (PCA). The system is connected to the Azure cloud infrastructure, ensuring secure data transmission, preprocessing, and the automatic generation of alerts for prompt medical interventions. Studies conducted over two years demonstrated a sensitivity of 100% and an accuracy of 98.5%, with a tendency to generate additional alerts to avoid overlooking critical events. The results outline the importance of personalizing the analysis, adapting algorithms to individual characteristics, and the system's potential to prevent medical complications and improve the quality of life for elderly individuals.
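
A common PCA-based anomaly scheme, which the HADA idea resembles at a high level, scores each sample by its reconstruction error after projection onto the top principal components. A simplified sketch with synthetic "vitals", not the authors' algorithm:

```python
import numpy as np

def pca_anomaly_scores(X, n_comp=2):
    """Score each sample by reconstruction error after projecting onto the top
    principal components; large errors flag candidate anomalies."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    pcs = vt[:n_comp]                          # top principal directions
    recon = Xc @ pcs.T @ pcs + mu              # project and reconstruct
    return np.linalg.norm(X - recon, axis=1)

rng = np.random.default_rng(0)
# 100 days of 6 correlated "vitals"; one day gets an abnormal deviation
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 6)) + rng.normal(0, 0.05, (100, 6))
X[42] += np.array([8.0, 0, 0, -6.0, 0, 0])     # injected anomaly
scores = pca_anomaly_scores(X)
```

Thresholding these scores per user is where the personalization the paper emphasizes would come in.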

10 pages, 16733 KB  
Article
Coal Mine Water Inflow Prediction Model Based on Multi-Factor Pearson Correlation Analysis
by Liang Ma, Zaibing Liu, Weiming Chen, Junjie Hu, Hongjian Ye, Tao Fan and Lin An
Appl. Sci. 2025, 15(12), 6600; https://doi.org/10.3390/app15126600 - 12 Jun 2025
Cited by 4 | Viewed by 1001
Abstract
Since geological structures around coal mines are complex, sudden water inflow seriously threatens coal mining safety. To improve the accuracy of predicting coal mine water inflow, a multi-source dataset is collected to develop a coal mine water inflow prediction model based on multi-factor Pearson correlation analysis, where a convolutional neural network and a bidirectional long short-term memory neural network are adopted to extract features from time-series data. To validate the performance of the prediction model, a case study is conducted, where the predicted water inflow closely matches the measured water inflow. Meanwhile, compared to other prediction models, the proposed model predicts the magnitude and development trend of coal mine water inflow over the next 8 h more accurately, with a mean absolute percentage error of 5.76% and a correlation coefficient of 0.922.

15 pages, 394 KB  
Article
Time Series Anomaly Detection Using Signal Processing and Deep Learning
by Jana Backhus, Aniruddha Rajendra Rao, Chandrasekar Venkatraman and Chetan Gupta
Appl. Sci. 2025, 15(11), 6254; https://doi.org/10.3390/app15116254 - 2 Jun 2025
Cited by 7 | Viewed by 8197
Abstract
In this paper, we propose a two-step approach for time series anomaly detection that combines signal processing techniques with deep learning methods. In the first step, we apply a bandpass filter to the time series data to reduce noise and highlight relevant frequency components, which enhances the underlying signal. In the second step, we utilize a Functional Neural Network Autoencoder for anomaly detection, leveraging its ability to capture non-linear temporal relationships in the data. By learning a compact latent representation and remapping the filtered time series, the Autoencoder effectively identifies deviations from normal patterns, allowing us to detect anomalies. Our experiments on several benchmark datasets demonstrate that bandpass filtering consistently improves the performance of deep learning methods, including the Functional Neural Network Autoencoder, by refining the input data. Our proposed approach improves anomaly detection performance by up to 20%, particularly in time series with intricate structures, highlighting its potential for practical applications in multiple domains.
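
The first step can be illustrated with an idealized FFT bandpass: zero out the spectral bins outside the band of interest and invert. A sketch; the band limits and signal are illustrative, and the paper's filter design may differ:

```python
import numpy as np

def bandpass_fft(x, fs, low, high):
    """Idealized bandpass: zero rFFT bins outside [low, high] Hz and invert."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    spec[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spec, n=len(x))

fs = 100.0                                     # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t)             # in-band component (5 Hz)
noise = 0.8 * np.sin(2 * np.pi * 30 * t)       # out-of-band interference (30 Hz)
filtered = bandpass_fft(signal + noise, fs, low=2.0, high=10.0)
```

The autoencoder in the second step then only has to model the retained band, which is what makes its reconstruction errors sharper anomaly indicators.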

Review

24 pages, 598 KB  
Review
A Review of Anomaly Detection in Spacecraft Telemetry Data
by Asma Fejjari, Alexis Delavault, Robert Camilleri and Gianluca Valentino
Appl. Sci. 2025, 15(10), 5653; https://doi.org/10.3390/app15105653 - 19 May 2025
Cited by 18 | Viewed by 12827
Abstract
Telemetry data play a pivotal role in ensuring the success of spacecraft missions and safeguarding the integrity of spacecraft systems. Therefore, the timely detection and subsequent notification of any abnormal events related to the functionality of spacecraft subsystems are crucial to ensure their safe operation. In recent years, several anomaly detection methods have been developed to monitor spacecraft telemetry data and detect anomalies. This manuscript provides a comprehensive literature review of the existing anomaly detection methods for spacecraft telemetry data. It outlines the challenges faced by such systems, highlights the strengths and limitations of each anomaly detection method, and assesses and compares the performance of these approaches in detecting anomalies. Initial results show that GCN and TCN models have achieved promising precision of up to 94%. The paper concludes with a series of recommendations and potential research directions.
