Search Results (107)

Search Parameters:
Keywords = time series GAN

16 pages, 994 KiB  
Article
Reliability Evaluation of New-Generation Substation Relay Protection Equipment Based on ASFSSA-LSTM-GAN
by Baojiang Tian, Kai Chen, Xingwei Du, Wenyan Duan, Yibo Wang, Jiajia Hu and Hongbo Zou
Processes 2025, 13(7), 2300; https://doi.org/10.3390/pr13072300 - 19 Jul 2025
Viewed by 340
Abstract
To improve the reliability evaluation accuracy of new-generation substation relay protection equipment under small-sample failure-rate data, a Generative Adversarial Network (GAN) whose Long Short-Term Memory (LSTM) generator is optimized by the Adaptive Spiral Flying Sparrow Search Algorithm (ASFSSA) is proposed. Because LSTM is well suited to processing time series, it is embedded into the GAN, with the ASFSSA-optimized LSTM serving as the generator. The trained model expands the original data samples, and the least squares method estimates the distribution model parameters, yielding the reliability function of the relay protection equipment and a prediction of its operating life. The results show that, compared with other methods, the correlation coefficient of the expanded samples is closer to that of the original data and the equipment life estimate is more accurate. The model can serve as a reference for reliability assessment and acceptance testing of the new generation of substation relay protection equipment.
(This article belongs to the Section Energy Systems)
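
The abstract describes the key architectural move: an ASFSSA-tuned LSTM serving as the GAN generator for augmenting scarce failure-rate series. The authors publish no code, so the following PyTorch sketch is illustrative only; the layer sizes, sequence length, and class names are assumptions, and the ASFSSA hyperparameter search is omitted.

```python
import torch
import torch.nn as nn

class LSTMGenerator(nn.Module):
    """Maps a latent noise sequence to a synthetic failure-rate series."""
    def __init__(self, latent_dim=16, hidden_dim=64, out_dim=1):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, out_dim)

    def forward(self, z):                # z: (batch, seq_len, latent_dim)
        h, _ = self.lstm(z)              # temporal features
        return self.proj(h)              # (batch, seq_len, out_dim)

class Discriminator(nn.Module):
    """Scores whether a series is real or generated."""
    def __init__(self, in_dim=1, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        h, _ = self.lstm(x)
        return self.head(h[:, -1])       # logit from the last time step

G, D = LSTMGenerator(), Discriminator()
z = torch.randn(8, 24, 16)               # 8 synthetic samples, 24 time steps
fake = G(z)                               # expanded failure-rate data
print(D(fake).shape)                      # torch.Size([8, 1])
```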

21 pages, 4238 KiB  
Article
Fault Prediction of Hydropower Station Based on CNN-LSTM-GAN with Biased Data
by Bei Liu, Xiao Wang, Zhaoxin Zhang, Zhenjie Zhao, Xiaoming Wang and Ting Liu
Energies 2025, 18(14), 3772; https://doi.org/10.3390/en18143772 - 16 Jul 2025
Viewed by 239
Abstract
Fault prediction for hydropower stations is crucial for the stable operation of generator set equipment, but traditional methods struggle with imbalanced and untrustworthy data. This paper proposes a fault detection method based on a convolutional neural network (CNN) and a long short-term memory (LSTM) network combined with a generative adversarial network (GAN). Firstly, a reliability mechanism based on principal component analysis (PCA) is designed to solve the problem of data bias caused by multiple monitoring devices. Then, the CNN-LSTM network is used to predict time series data, and the GAN is used to expand fault data samples to solve the problem of an unbalanced data distribution. Meanwhile, a multi-scale feature extraction network with time–frequency information is designed to improve the accuracy of fault detection. Finally, a dynamic multi-task training algorithm is proposed to ensure the convergence and training efficiency of the deep models. Experimental results show that, compared with RNN, GRU, SVM, and threshold detection algorithms, the proposed method improves accuracy by 5.5%, 4.8%, 7.8%, and 9.3%, respectively, with at least a 160% improvement in fault recall.
(This article belongs to the Special Issue Optimal Schedule of Hydropower and New Energy Power Systems)
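
As a rough sketch of the prediction backbone named in the abstract, a 1-D CNN feeding an LSTM can be wired up as below. This is not the authors' implementation: the channel counts, window length, and next-step prediction head are assumptions, and the PCA reliability weighting and GAN augmentation stages are omitted.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """1-D CNN for local patterns, LSTM for temporal dependencies."""
    def __init__(self, n_channels=8, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_channels)  # next-step prediction

    def forward(self, x):             # x: (batch, seq_len, n_channels)
        f = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(f)
        return self.head(h[:, -1])    # predicted next reading

model = CNNLSTM()
x = torch.randn(4, 128, 8)            # 4 windows of 128 monitoring samples
print(model(x).shape)                 # torch.Size([4, 8])
```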

28 pages, 7946 KiB  
Article
U-Net Inspired Transformer Architecture for Multivariate Time Series Synthesis
by Shyr-Long Jeng
Sensors 2025, 25(13), 4073; https://doi.org/10.3390/s25134073 - 30 Jun 2025
Viewed by 453
Abstract
This study introduces a Multiscale Dual-Attention U-Net (TS-MSDA U-Net) model for long-term time series synthesis. By integrating multiscale temporal feature extraction and dual-attention mechanisms into the U-Net backbone, the model captures complex temporal dependencies more effectively. The model was evaluated in two distinct applications. In the first, using multivariate datasets from 70 real-world electric vehicle (EV) trips, TS-MSDA U-Net achieved a mean absolute error below 1% across key parameters, including battery state of charge, voltage, acceleration, and torque, representing a two-fold improvement over the baseline TS-p2pGAN. While dual-attention modules provided only modest gains over the basic U-Net, the multiscale design enhanced overall performance. In the second application, the model was used to reconstruct high-resolution signals from low-speed analog-to-digital converter data in a prototype resonant CLLC half-bridge converter. TS-MSDA U-Net successfully learned nonlinear mappings and improved signal resolution by a factor of 36, outperforming the basic U-Net, which failed to recover essential waveform details. These results underscore the effectiveness of transformer-inspired U-Net architectures for high-fidelity multivariate time series modeling in both EV analytics and power electronics.
(This article belongs to the Section Intelligent Sensors)
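
The abstract centers on a U-Net backbone for time series. The two-level 1-D U-Net below is a hedged PyTorch sketch of the encoder, bottleneck, and skip-connected decoder only; the multiscale extraction and dual-attention modules that define TS-MSDA U-Net are omitted, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv1d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv1d(c_out, c_out, 3, padding=1), nn.ReLU())

class UNet1D(nn.Module):
    """Two-level 1-D U-Net: downsample, bottleneck, upsample with skip."""
    def __init__(self, channels):
        super().__init__()
        self.enc = block(channels, 32)
        self.down = nn.MaxPool1d(2)
        self.mid = block(32, 64)
        self.up = nn.ConvTranspose1d(64, 32, 2, stride=2)
        self.dec = block(64, 32)               # 32 skip + 32 upsampled
        self.out = nn.Conv1d(32, channels, 1)

    def forward(self, x):                      # x: (batch, channels, length)
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([e, self.up(m)], dim=1))
        return self.out(d)

net = UNet1D(channels=6)                       # e.g., SoC, voltage, torque, ...
print(net(torch.randn(2, 6, 256)).shape)       # torch.Size([2, 6, 256])
```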

19 pages, 1630 KiB  
Article
Just a Single-Layer CNN for Stochastic Modeling: A Discriminator-Free Approach
by Evangelos Rozos
Hydrology 2025, 12(7), 170; https://doi.org/10.3390/hydrology12070170 - 29 Jun 2025
Viewed by 379
Abstract
The advent of machine learning (ML) has significantly transformed hydrology, particularly the simulation of hydrological flows. However, ML techniques have not been employed to the same extent in stochastic hydrology. In applied sciences, the most common ML-based approach for developing stochastic simulation schemes is the generative adversarial network (GAN), which consists of two sub-models: a generator and a discriminator. Despite their potential, GANs have notable limitations, including high architectural complexity and the requirement to divide observed time series into shorter segments to generate sufficient training examples. This segmentation reduces the effective length of the series, limiting the model’s ability to capture and reproduce long-term dependencies. In this study, we propose a simpler stochastic scheme based on a single convolutional neural network (CNN) used as a generator, replacing the discriminator component of the GAN with a specifically designed cost function. The model is applied to a case study involving measured flow velocity time series and evaluated against traditional stochastic schemes designed for both Markovian and Hurst–Kolmogorov processes. Results show that the CNN-based approach not only offers computational simplicity but also outperforms conventional methods in preserving key statistical characteristics of the observed data.
(This article belongs to the Section Statistical Hydrology)
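
The distinctive step here is replacing the discriminator with a designed cost function. The sketch below illustrates the pattern with a stand-in cost that matches mean, standard deviation, and lag-1 autocorrelation; the paper's actual cost function and network details differ, and every size and name below is an assumption.

```python
import torch
import torch.nn as nn

class CNNGenerator(nn.Module):
    """Single-layer CNN mapping noise to a synthetic flow-velocity series."""
    def __init__(self, kernel=9):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel, padding=kernel // 2)

    def forward(self, z):                      # z: (batch, 1, length)
        return self.conv(z)

def stat_cost(fake, real):
    """Illustrative statistics-matching cost (stand-in for the paper's)."""
    def lag1(x):                               # lag-1 autocorrelation
        x = x - x.mean(dim=-1, keepdim=True)
        return (x[..., :-1] * x[..., 1:]).mean() / x.var()
    return ((fake.mean() - real.mean()) ** 2
            + (fake.std() - real.std()) ** 2
            + (lag1(fake) - lag1(real)) ** 2)

G = CNNGenerator()
opt = torch.optim.Adam(G.parameters(), lr=1e-3)
real = torch.randn(16, 1, 200).cumsum(-1)      # placeholder correlated series
for _ in range(200):                           # note: no discriminator anywhere
    loss = stat_cost(G(torch.randn(16, 1, 200)), real)
    opt.zero_grad()
    loss.backward()
    opt.step()
```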

30 pages, 1351 KiB  
Article
FedSW-TSAD: SWGAN-Based Federated Time Series Anomaly Detection
by Xiuxian Zhang, Hongwei Zhao, Weishan Zhang, Shaohua Cao, Haoyun Sun and Baoyu Zhang
Sensors 2025, 25(13), 4014; https://doi.org/10.3390/s25134014 - 27 Jun 2025
Viewed by 390
Abstract
As distributed sensing technologies evolve, the collection of time series data is becoming increasingly decentralized, which introduces serious challenges for both model training and data privacy protection. In response to this trend, federated time series anomaly detection enables collaborative analysis across distributed sensing nodes without exposing raw data. However, federated anomaly detection suffers from unstable training and poor generalization due to client heterogeneity and the limited expressiveness of single-path detection methods. To address these challenges, this study proposes FedSW-TSAD, a federated time series anomaly detection method based on the Sobolev–Wasserstein GAN (SWGAN). It leverages the Sobolev–Wasserstein constraint to stabilize adversarial training and combines discriminative signals from both reconstruction and prediction modules, thereby improving robustness against diverse anomalies. In addition, FedSW-TSAD adopts a differential privacy mechanism with L2-norm-constrained noise injection, ensuring privacy in model updates under the federated setting. Experimental results on four real-world sensor datasets demonstrate that FedSW-TSAD outperforms existing methods by an average of 14.37% in F1-score while also enhancing gradient privacy under the differential privacy mechanism. This highlights the practical value of FedSW-TSAD for privacy-preserving anomaly detection in sensor-based monitoring systems such as industrial IoT, remote diagnostics, and predictive maintenance.
(This article belongs to the Special Issue AI-Driven Security and Privacy for IIoT Applications)
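
The L2-norm-constrained noise injection mentioned in the abstract follows the familiar clip-then-add-Gaussian-noise pattern. Below is a minimal sketch under that assumption; the clip norm and noise multiplier are placeholders, not the paper's values.

```python
import torch

def privatize_update(update, clip_norm=1.0, noise_mult=0.5):
    """Clip a client's model update to an L2 ball, then add Gaussian noise."""
    flat = torch.cat([p.flatten() for p in update])
    scale = min(1.0, clip_norm / (flat.norm(p=2).item() + 1e-12))
    noisy = []
    for p in update:
        clipped = p * scale                    # enforce the L2-norm constraint
        noisy.append(clipped + torch.randn_like(p) * noise_mult * clip_norm)
    return noisy

# Example: a client's parameter deltas before being sent to the server
update = [torch.randn(64, 32), torch.randn(32)]
private = privatize_update(update)
```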

34 pages, 2216 KiB  
Article
An Optimized Transformer–GAN–AE for Intrusion Detection in Edge and IIoT Systems: Experimental Insights from WUSTL-IIoT-2021, EdgeIIoTset, and TON_IoT Datasets
by Ahmad Salehiyan, Pardis Sadatian Moghaddam and Masoud Kaveh
Future Internet 2025, 17(7), 279; https://doi.org/10.3390/fi17070279 - 24 Jun 2025
Viewed by 507
Abstract
The rapid expansion of Edge and Industrial Internet of Things (IIoT) systems has intensified the risk and complexity of cyberattacks. Detecting advanced intrusions in these heterogeneous and high-dimensional environments remains challenging. As the IIoT becomes integral to critical infrastructure, ensuring security is crucial to prevent disruptions and data breaches. Traditional intrusion detection system (IDS) approaches often fall short against evolving threats, highlighting the need for intelligent and adaptive solutions. While deep learning (DL) offers strong capabilities for pattern recognition, single-model architectures often lack robustness. Thus, hybrid and optimized DL models are increasingly necessary to improve detection performance and address data imbalance and noise. In this study, we propose an optimized hybrid DL framework that combines transformer, generative adversarial network (GAN), and autoencoder (AE) components, referred to as Transformer–GAN–AE, for robust intrusion detection in Edge and IIoT environments. To enhance the training and convergence of the GAN component, we integrate an improved chimp optimization algorithm (IChOA) for hyperparameter tuning and feature refinement. The proposed method is evaluated using three recent and comprehensive benchmark datasets, WUSTL-IIoT-2021, EdgeIIoTset, and TON_IoT, widely recognized as standard testbeds for IIoT intrusion detection research. Extensive experiments are conducted to assess the model’s performance against several state-of-the-art techniques, including a standard GAN, convolutional neural network (CNN), deep belief network (DBN), time-series transformer (TST), bidirectional encoder representations from transformers (BERT), and extreme gradient boosting (XGBoost). Evaluation metrics include accuracy, recall, AUC, and run time. Results demonstrate that the proposed Transformer–GAN–AE framework outperforms all baseline methods, achieving a best accuracy of 98.92%, along with superior recall and AUC values. The integration of IChOA enhances GAN stability and accelerates training by optimizing hyperparameters. Together with the transformer for temporal feature extraction and the AE for denoising, the hybrid architecture effectively addresses complex, imbalanced intrusion data. The proposed optimized Transformer–GAN–AE model demonstrates high accuracy and robustness, offering a scalable solution for real-world Edge and IIoT intrusion detection.
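
As a hedged illustration of how a transformer encoder and an autoencoder-style reconstruction head can share one model for intrusion detection, the PyTorch sketch below combines the two; it is not the authors' architecture, the GAN branch and IChOA tuning are omitted, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class TransformerAE(nn.Module):
    """Transformer encoder for temporal features + AE-style denoising head."""
    def __init__(self, n_features=32, d_model=64):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.decode = nn.Linear(d_model, n_features)   # reconstruction (AE)
        self.classify = nn.Linear(d_model, 2)          # attack vs. benign

    def forward(self, x):                # x: (batch, seq_len, n_features)
        h = self.encoder(self.embed(x))
        recon = self.decode(h)           # denoising objective
        logits = self.classify(h.mean(dim=1))
        return recon, logits

model = TransformerAE()
x = torch.randn(8, 20, 32)               # 8 traffic windows, 20 steps each
recon, logits = model(x)
```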

26 pages, 3118 KiB  
Article
Evaluation of Different Generative Models to Support the Validation of Advanced Driver Assistance Systems
by Manasa Mariam Mammen, Zafer Kayatas and Dieter Bestle
Appl. Mech. 2025, 6(2), 39; https://doi.org/10.3390/applmech6020039 - 27 May 2025
Viewed by 863
Abstract
Validating the safety and reliability of automated driving systems is a critical challenge in the development of autonomous driving technology. Such systems must reliably replicate human driving behavior across scenarios of varying complexity and criticality. Ensuring this level of accuracy necessitates robust testing methodologies that can systematically assess performance under various driving conditions. Scenario-based testing addresses this challenge by recreating safety-critical situations at varying levels of abstraction, from simulations to real-world field tests. However, conventional parameterized models for scenario generation are often resource intensive, prone to bias from simplifications, and limited in capturing realistic vehicle trajectories. To overcome these limitations, this paper explores AI-based methods for scenario generation, with a focus on the cut-in maneuver. Four different approaches are trained and compared: a Variational Autoencoder enhanced with a convolutional neural network (VAE), a basic Generative Adversarial Network (GAN), a Wasserstein GAN (WGAN), and a Time-Series GAN (TimeGAN). Their performance is assessed with respect to their ability to generate realistic and diverse trajectories for the cut-in scenario using qualitative analysis, quantitative metrics, and statistical analysis. Among the investigated approaches, the VAE demonstrates superior performance, effectively generating realistic and diverse scenarios while maintaining computational efficiency.
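
Since the VAE emerges as the best performer, a minimal VAE over fixed-length trajectories is sketched below; the convolutional enhancement is omitted, and the trajectory length, feature count, and latent size are assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryVAE(nn.Module):
    """Minimal VAE over flattened (x, y, v) trajectories of fixed length."""
    def __init__(self, steps=50, feats=3, latent=8):
        super().__init__()
        d = steps * feats
        self.enc = nn.Sequential(nn.Linear(d, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, latent), nn.Linear(128, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                 nn.Linear(128, d))

    def forward(self, x):                      # x: (batch, steps*feats)
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparam trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = ((recon - x) ** 2).sum(dim=1).mean()                # reconstruction
    kld = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()
    return rec + kld

vae = TrajectoryVAE()
x = torch.randn(32, 150)                 # 32 cut-in trajectories, 50 x 3 values
recon, mu, logvar = vae(x)
loss = vae_loss(recon, x, mu, logvar)
```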

30 pages, 4437 KiB  
Article
Smart Maritime Transportation-Oriented Ship-Speed Prediction Modeling Using Generative Adversarial Networks and Long Short-Term Memory
by Xinqiang Chen, Peishi Wu, Yajie Zhang, Xiaomeng Wang, Jiangfeng Xian and Han Zhang
J. Mar. Sci. Eng. 2025, 13(6), 1045; https://doi.org/10.3390/jmse13061045 - 26 May 2025
Viewed by 719
Abstract
Ship-speed prediction is an emerging research area in marine traffic safety and related fields, where it occupies an important position. At present, the effectiveness of existing time-series forecasting techniques for ship-speed prediction is poor, errors accumulate in long-term forecasting, and such methods are limited in combining ship-speed information with multi-feature data input. To overcome these difficulties and further improve the accuracy of ship-speed prediction, this research proposes a new deep learning framework that predicts ship speed by combining GANs (Generative Adversarial Networks) and LSTM (Long Short-Term Memory). First, the algorithm takes an LSTM network as the generating network and uses the LSTM to mine the spatiotemporal correlation between nodes. Secondly, the complementary characteristics linking the generative network and the discriminant network are used to eliminate the cumulative error of a single neural network in long-term prediction and improve the network’s accuracy in ship-speed determination. Finally, the proposed Generator–LSTM model is used for ship-speed prediction and compared with other models on identical AIS (automatic identification system) ship-speed data from the same scene. The findings indicate that the model achieves high accuracy on typical error metrics, meaning it can reliably predict ship speed. The results of the study will assist maritime traffic participants in better taking precautions to prevent collisions and improve maritime traffic safety.
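
The complementary use of generator and discriminator described above suggests a training step in which the LSTM generator minimizes an adversarial loss plus a supervised forecasting error. The sketch below assumes a generator G that maps a history window to a speed sequence and a discriminator D that scores sequences (for example, shaped like the networks in the first sketch on this page); the exact loss combination is an assumption, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def train_step(G, D, opt_g, opt_d, history, target):
    """One adversarial update; the MSE term ties forecasts to AIS speeds."""
    # Discriminator: separate real continuations from generated ones
    fake = G(history).detach()
    real_logit, fake_logit = D(target), D(fake)
    d_loss = (F.binary_cross_entropy_with_logits(
                  real_logit, torch.ones_like(real_logit))
              + F.binary_cross_entropy_with_logits(
                  fake_logit, torch.zeros_like(fake_logit)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool D while staying close to the true speed sequence
    fake = G(history)
    adv = D(fake)
    g_loss = (F.binary_cross_entropy_with_logits(adv, torch.ones_like(adv))
              + F.mse_loss(fake, target))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```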

25 pages, 3866 KiB  
Article
Fault Detection for Power Batteries Using a Generative Adversarial Network with a Convolutional Long Short-Term Memory (GAN-CNN-LSTM) Hybrid Model
by Shaofan Liu, Tianbao Xie, Yanxin Li and Siyu Liu
Appl. Sci. 2025, 15(11), 5795; https://doi.org/10.3390/app15115795 - 22 May 2025
Viewed by 662
Abstract
With the rapid proliferation of new energy vehicles, the safety of power batteries has attracted increasing attention. As a crucial approach to ensuring system stability, fault detection has become a research focus. However, strong temporal dependencies in battery operation data and the scarcity of fault samples hinder the accuracy and robustness of existing methods. To address these challenges, this paper proposes a deep learning-based fault detection model that integrates a Generative Adversarial Network (GAN) with a Convolutional Long Short-Term Memory (CNN-LSTM) network. The GAN is employed to augment minority-class fault samples, effectively mitigating the class imbalance in the dataset. The CNN-LSTM module then directly processes raw multivariate time-series data, combining the CNN’s capability of extracting local spatial patterns with the LSTM’s strength in modeling temporal dependencies, enabling accurate identification of battery faults. Experiments conducted on real-world datasets collected from electric vehicles demonstrate that the proposed model achieves a Precision of 95.23%, a Recall of 87.23%, and an F1-Score of 91.12% for fault detection. Additionally, it yields an Average Precision (AP) of 97.45% and an Area Under the ROC Curve (AUC) of 99%, significantly outperforming conventional deep learning and machine learning baselines. This study provides a practical and high-performance solution for fault detection in power battery systems, with promising application potential.
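
The reported Precision, Recall, F1-score, AP, and AUC can all be computed with standard scikit-learn calls; the labels and scores below are synthetic placeholders, not the paper's data.

```python
import numpy as np
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             average_precision_score, roc_auc_score)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                  # 1 = fault, 0 = normal
y_score = np.clip(y_true * 0.7 + rng.normal(0.2, 0.25, 1000), 0, 1)
y_pred = (y_score >= 0.5).astype(int)                   # thresholded decision

print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1-score: ", f1_score(y_true, y_pred))
print("AP:       ", average_precision_score(y_true, y_score))
print("AUC:      ", roc_auc_score(y_true, y_score))
```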

22 pages, 17083 KiB  
Article
Volcanic Activity Classification Through Semi-Supervised Learning Applied to Satellite Radiance Time Series
by Francesco Spina, Giuseppe Bilotta, Annalisa Cappello, Marco Spina, Francesco Zuccarello and Gaetana Ganci
Remote Sens. 2025, 17(10), 1679; https://doi.org/10.3390/rs17101679 - 10 May 2025
Viewed by 578
Abstract
Satellite imagery provides a rich source of information that serves as a comprehensive and synoptic tool for the continuous monitoring of active volcanoes, including those in remote and inaccessible areas. The huge influx of such data requires automated systems for efficient processing and interpretation. Early warning systems, designed to process satellite imagery to identify signs of impending eruptions and monitor eruptive activity in near real-time, are essential for hazard assessment and risk mitigation. Here, we propose a machine learning approach for the automatic classification of pixels in SEVIRI images to detect and characterize the eruptive activity of a volcano. In particular, we exploit a semi-supervised GAN (SGAN) model that retrieves the presence of thermal anomalies, volcanic ash plumes, and meteorological clouds in each SEVIRI pixel, yielding time series plots that show the evolution of volcanic activity. The SGAN model was trained and tested using the large amount of data available for Mount Etna (Italy). It was then applied to other volcanoes, specifically Stromboli (Italy), Tajogaite (Spain), and Nyiragongo (Democratic Republic of the Congo), to assess its ability to generalize. The model was validated through visual comparison between the classification results and the corresponding SEVIRI images. Moreover, we evaluated performance using three metrics, namely precision (correctness of positive predictions), recall (ability to find all positive instances), and the F1-score (the model’s overall accuracy), finding an average accuracy of 0.9. Our approach can be extended to other geostationary satellite data and applied worldwide to characterize volcanic activity, allowing the monitoring of even remote volcanoes that are difficult to reach from the ground.
(This article belongs to the Special Issue Satellite Monitoring of Volcanoes in Near-Real Time)
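
A semi-supervised GAN discriminator of the kind described doubles as a classifier with K real classes plus one extra "fake" class. The sketch below assumes per-pixel feature vectors (SEVIRI's 12 spectral channels) and the three target classes from the abstract; the actual network and training details are not from the paper.

```python
import torch
import torch.nn as nn

N_CLASSES = 3   # thermal anomaly, ash plume, meteorological cloud (per pixel)

class SGANDiscriminator(nn.Module):
    """Classifier with K real classes plus one extra 'fake' class."""
    def __init__(self, in_dim=12, hidden=64):   # 12 SEVIRI spectral channels
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, N_CLASSES + 1)  # last logit = fake

    def forward(self, x):
        return self.head(self.body(x))

D = SGANDiscriminator()
pixels = torch.randn(16, 12)                   # 16 SEVIRI pixels
logits = D(pixels)                             # (16, 4): 3 real classes + fake
# Supervised loss on labeled pixels; unlabeled/generated pixels only
# constrain the real-vs-fake split, which is what makes SGAN semi-supervised.
labels = torch.randint(0, N_CLASSES, (16,))
loss = nn.functional.cross_entropy(logits, labels)
```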

35 pages, 4428 KiB  
Article
An Evolutionary Deep Reinforcement Learning-Based Framework for Efficient Anomaly Detection in Smart Power Distribution Grids
by Mohammad Mehdi Sharifi Nevisi, Mehrdad Shoeibi, Francisco Hernando-Gallego, Diego Martín and Sarvenaz Sadat Khatami
Energies 2025, 18(10), 2435; https://doi.org/10.3390/en18102435 - 9 May 2025
Viewed by 618
Abstract
The increasing complexity of modern smart power distribution systems (SPDSs) has made anomaly detection a significant challenge, as these systems generate vast amounts of heterogeneous and time-dependent data. Conventional detection methods often struggle with adaptability, generalization, and real-time decision-making, leading to high false alarm rates and inefficient fault detection. To address these challenges, this study proposes a novel deep reinforcement learning (DRL)-based framework, integrating a convolutional neural network (CNN) for hierarchical feature extraction and a recurrent neural network (RNN) for sequential pattern recognition and time-series modeling. To enhance model performance, we introduce a novel non-dominated sorting artificial bee colony (NSABC) algorithm, which fine-tunes the hyper-parameters of the CNN-RNN structure, including weights, biases, the number of layers, and neuron configurations. This optimization ensures improved accuracy, faster convergence, and better generalization to unseen data. The proposed DRL-NSABC model is evaluated using four benchmark datasets widely recognized in anomaly detection research: smart grid, advanced metering infrastructure (AMI), smart meter, and Pecan Street. A comparative analysis against state-of-the-art deep learning (DL) models, including RL, CNN, RNN, the generative adversarial network (GAN), the time-series transformer (TST), and bidirectional encoder representations from transformers (BERT), demonstrates the superiority of the proposed DRL-NSABC. The model achieved high accuracy across all benchmark datasets: 95.83% on smart grid, 96.19% on AMI, 96.61% on smart meter, and 96.45% on Pecan Street. Statistical t-tests confirm the superiority of DRL-NSABC over the other algorithms, with a variance of 0.00014. Moreover, DRL-NSABC demonstrates the fastest convergence, reaching near-optimal accuracy within the first 100 epochs. By significantly reducing false positives and ensuring rapid anomaly detection with low computational overhead, the proposed framework enables efficient real-world deployment in smart power distribution systems without major infrastructure upgrades and promotes cost-effective, resilient power grid operations.

24 pages, 4656 KiB  
Article
CiTranGAN: Channel-Independent Based-Anomaly Detection for Multivariate Time Series Data
by Xiao Chen, Tongxiang Li, Zuozuo Ma, Jing Chen, Jingfeng Guo and Zhiliang Liu
Electronics 2025, 14(9), 1857; https://doi.org/10.3390/electronics14091857 - 2 May 2025
Viewed by 490
Abstract
Anomaly detection, as a critical task in time series data analysis, plays a pivotal role in ensuring industrial production safety, enhancing the precision of climate predictions, and improving early warning for ocean disasters. However, due to the high dimensionality, redundancy, and non-stationarity inherent in time series data, rapidly and accurately identifying anomalies presents a significant challenge. This paper proposes a novel model, CiTranGAN, which integrates the advantages of the Transformer architecture, generative adversarial networks, and channel-independence strategies. In this model, the channel-independent strategy eliminates cross-channel interference and mitigates distribution drift in high-dimensional data. To mitigate redundancy and enhance multi-scale temporal feature representation, we constructed a feature extraction module that integrates downsampling, convolution, and interaction learning. To overcome the limitations of the traditional attention mechanism in detecting local trend variations, a hybrid dilated causal convolution-based multi-scale self-attention mechanism is proposed. Finally, experiments were conducted on five real-world multivariate time series datasets. Compared with the baseline models, CiTranGAN achieves average improvements of 12.48% in F1-score and 7.89% in AUC. In the ablation studies, the full CiTranGAN outperformed variants lacking the channel-independent mechanism, the downsampling–convolution–interaction learning module, and the multi-scale convolutional self-attention mechanism, with respective average AUC gains of 1.63%, 2.16%, and 3.47%, and corresponding F1-score improvements of 1.70%, 4.33%, and 2.04%. These experimental results demonstrate the rationality and effectiveness of the proposed model.
(This article belongs to the Section Artificial Intelligence)
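
The channel-independent strategy can be illustrated as a wrapper that runs one shared univariate backbone over every channel separately, guaranteeing no cross-channel mixing. The backbone below is a placeholder; CiTranGAN's Transformer and adversarial components are not reproduced.

```python
import torch
import torch.nn as nn

class ChannelIndependent(nn.Module):
    """Apply one shared univariate backbone to every channel separately,
    so no cross-channel mixing can occur (the channel-independent strategy)."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone

    def forward(self, x):                      # x: (batch, seq_len, channels)
        b, t, c = x.shape
        u = x.permute(0, 2, 1).reshape(b * c, t, 1)   # one series per channel
        y = self.backbone(u)                          # (b*c, t, 1)
        return y.reshape(b, c, t, 1).squeeze(-1).permute(0, 2, 1)

# Placeholder pointwise backbone; the real model is far richer
backbone = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
model = ChannelIndependent(backbone)
x = torch.randn(4, 100, 25)                    # 25-channel multivariate series
print(model(x).shape)                          # torch.Size([4, 100, 25])
```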

25 pages, 28786 KiB  
Article
Text-Conditioned Diffusion-Based Synthetic Data Generation for Turbine Engine Sensor Analysis and RUL Estimation
by Luis Pablo Mora-de-León, David Solís-Martín, Juan Galán-Páez and Joaquín Borrego-Díaz
Machines 2025, 13(5), 374; https://doi.org/10.3390/machines13050374 - 30 Apr 2025
Viewed by 834
Abstract
This paper introduces a novel framework for generating synthetic time-series data from turbine engine sensor readings using a text-conditioned diffusion model. The approach begins with dataset preprocessing, including correlation analysis, feature selection, and normalization. Principal Component Analysis (PCA) transforms the normalized signals into three components, mapped to the RGB channels of an image. These components, combined with engine identifiers and cycle information, form compact 19 × 19 × 3 pixel images, later scaled to 512 × 512 × 3 pixels. A variational autoencoder (VAE)-based diffusion model, fine-tuned on these images, leverages text prompts describing engine characteristics to generate high-quality synthetic samples. A reverse transformation pipeline reconstructs synthetic images back into time-series signals, preserving the original engine-specific attributes while removing padding artifacts. The quality of the synthetic data is assessed by training Remaining Useful Life (RUL) estimation models and comparing performance across original, synthetic, and combined datasets. Results demonstrate that synthetic data can be beneficial for model training, particularly in the early epochs when working with limited datasets. Compared to existing approaches, which rely on generative adversarial networks (GANs) or deterministic transformations, the proposed framework offers enhanced data fidelity and adaptability. This study highlights the potential of text-conditioned diffusion models for augmenting time-series datasets in industrial Prognostics and Health Management (PHM) applications.
(This article belongs to the Section Turbomachinery)
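
The preprocessing pipeline (PCA to three components, mapped to RGB and packed into a 19 × 19 × 3 image) can be sketched with numpy and scikit-learn as below; the sensor count, per-component min-max scaling, and zero padding are assumptions about details the abstract leaves open.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder: one engine's normalized sensor history (cycles x sensors)
signals = np.random.rand(300, 14)

pca = PCA(n_components=3)
components = pca.fit_transform(signals)            # (300, 3)

# Rescale each component to [0, 255] and treat the triplets as RGB pixels
lo, hi = components.min(axis=0), components.max(axis=0)
rgb = ((components - lo) / (hi - lo) * 255).astype(np.uint8)

side = 19                                          # 19*19 = 361 >= 300 cycles
img = np.zeros((side * side, 3), dtype=np.uint8)   # zero padding at the tail
img[:len(rgb)] = rgb
img = img.reshape(side, side, 3)                   # the 19 x 19 x 3 image
```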

20 pages, 8096 KiB  
Article
Simulating Intraday Electricity Consumption with ForGAN
by Ralf Korn and Laurena Ramadani
Algorithms 2025, 18(5), 256; https://doi.org/10.3390/a18050256 - 27 Apr 2025
Viewed by 496
Abstract
Sparse data and an unknown conditional distribution of future values are challenges for managing the risks inherent in the evolution of time series. This contribution addresses both aspects by applying ForGAN, a special form of generative adversarial network (GAN), to German electricity consumption data. Electricity consumption time series were selected for their typical combination of (non-linear) seasonal behavior on different time scales and local random effects. The primary objective is to demonstrate that ForGAN can capture such complicated seasonal patterns and generate data with the correct underlying conditional distribution without data preparation such as de-seasonalization. In particular, ForGAN does so without assuming an underlying model for the evolution of the time series and is purely data-based. The training and validation procedures are described in great detail. Specifically, a long iterative interplay between the generator and discriminator is required to obtain convergence of the parameters that determine the conditional distribution from which additional artificial data can be generated. Additionally, extensive quality assessments of the generated data are conducted by comparing histograms, auto-correlation structures, and further features of the real and generated data. As a result, the generated data match the conditional distribution of the next consumption value of the training data well, so the trained ForGAN generator can be used to simulate additional time series of German electricity consumption; this can be seen as a proof of ForGAN’s applicability. Through detailed descriptions of the necessary training and validation steps, a detailed quality check before the actual use of the simulated data, and the intuition and mathematical background behind ForGAN, this contribution aims to demystify the application of GANs and to motivate both theorists and researchers in applied sciences to use them for data generation in similar applications. The proposed framework lays out a plan for doing so.
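
ForGAN is a conditional GAN: the generator receives recent history plus noise, and sampling many noise vectors for one fixed history approximates the conditional distribution of the next value. A minimal sketch under assumed dimensions (window length, noise size, GRU condition encoder):

```python
import torch
import torch.nn as nn

class ForGANGenerator(nn.Module):
    """Conditional generator: (noise, recent history) -> next consumption."""
    def __init__(self, noise_dim=8, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, batch_first=True)  # encode the condition
        self.out = nn.Sequential(nn.Linear(hidden + noise_dim, 32),
                                 nn.ReLU(), nn.Linear(32, 1))

    def forward(self, history, z):        # history: (batch, window, 1)
        _, h = self.rnn(history)
        return self.out(torch.cat([h[-1], z], dim=1))

G = ForGANGenerator()
history = torch.randn(1, 96, 1)            # e.g., one day of 15-min readings
# Sampling many noise vectors for one fixed history approximates the
# conditional distribution of the next value, which is ForGAN's output.
samples = torch.cat([G(history, torch.randn(1, 8)) for _ in range(500)])
print(samples.mean().item(), samples.std().item())
```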

26 pages, 6012 KiB  
Article
High-Risk Test Scenario Generation for Autonomous Vehicles at Roundabouts Using Naturalistic Driving Data
by Dian Ren, Helai Huang, Ye Li and Jieling Jin
Appl. Sci. 2025, 15(8), 4505; https://doi.org/10.3390/app15084505 - 19 Apr 2025
Cited by 1 | Viewed by 1134
Abstract
While autonomous vehicles have the potential to mitigate risks associated with dangerous driving behaviors, the safety and stability of autonomous driving technology in real-world applications still require comprehensive tests. Scenario-based virtual simulation testing has emerged as a crucial approach for testing autonomous vehicles, especially in critical scenarios under complex environments, such as merging scenarios at roundabouts with unique traffic features. However, the lack of high-risk scenarios in real-world traffic presents challenges for simulation tests. To address these challenges, this study proposes a scenario generation framework for complex roundabout environments, focusing on merging areas, which is driven by trajectory data and employs generative deep learning techniques to create adversarial hazardous scenarios. Specifically, leveraging real trajectory data from roundabouts, the framework utilizes a time series generative adversarial network (TimeGAN) to generate realistic safety-critical driving trajectories. By creating specific hazardous scenarios, this strategy broadens the library of test scenarios and speeds up the testing process for autonomous vehicles. The value of the generated scenarios is demonstrated using Simulation of Urban Mobility (SUMO) and CARLA simulations, confirming their usefulness for autonomous driving testing. The TimeGAN model effectively captures the spatial–temporal features of merging scenarios, generating high-quality data that enhance the testing scenario library. The findings of this study help address the scarcity of critical scenarios in virtual testing and accelerate the testing procedure for self-driving vehicles.
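
TimeGAN, per its original formulation, trains four recurrent networks (embedder, recovery, generator, supervisor) plus a discriminator, with a supervised loss that makes the supervisor predict the next latent step. The compact sketch below shows those components and the supervised loss on trajectory-like data; the discriminator, joint training schedule, and all sizes are assumptions or omissions.

```python
import torch
import torch.nn as nn

def rnn_block(i, o):
    """GRU followed by a per-step linear projection."""
    class Block(nn.Module):
        def __init__(self):
            super().__init__()
            self.rnn = nn.GRU(i, 24, batch_first=True)
            self.fc = nn.Linear(24, o)
        def forward(self, x):
            h, _ = self.rnn(x)
            return self.fc(h)
    return Block()

feat, latent = 4, 8                  # (x, y, speed, heading); latent size assumed
embedder = rnn_block(feat, latent)   # real trajectory -> latent sequence
recovery = rnn_block(latent, feat)   # latent sequence -> trajectory
generator = rnn_block(latent, latent)   # noise sequence -> latent sequence
supervisor = rnn_block(latent, latent)  # teaches stepwise temporal dynamics

x = torch.randn(32, 60, feat)            # 32 real merging trajectories
h = embedder(x)
# TimeGAN's supervised loss: the supervisor must predict the next latent step
s_loss = nn.functional.mse_loss(supervisor(h)[:, :-1], h[:, 1:])
synthetic = recovery(generator(torch.randn(32, 60, latent)))  # new scenarios
```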
