Search Results (400)

Search Parameters:
Keywords = real load dataset

32 pages, 9710 KiB  
Article
Early Detection of ITSC Faults in PMSMs Using Transformer Model and Transient Time-Frequency Features
by Ádám Zsuga and Adrienn Dineva
Energies 2025, 18(15), 4048; https://doi.org/10.3390/en18154048 - 30 Jul 2025
Abstract
Inter-turn short-circuit (ITSC) faults in permanent magnet synchronous machines (PMSMs) present a significant reliability challenge in electric vehicle (EV) drivetrains, particularly under non-stationary operating conditions characterized by inverter-driven transients, variable loads, and magnetic saturation. Existing diagnostic approaches, including motor current signature analysis (MCSA) and wavelet-based methods, are primarily designed for steady-state conditions and rely on manual feature selection, limiting their applicability in real-time embedded systems. Furthermore, the lack of publicly available, high-fidelity datasets capturing the transient dynamics and nonlinear flux-linkage behaviors of PMSMs under fault conditions poses an additional barrier to developing data-driven diagnostic solutions. To address these challenges, this study introduces a simulation framework that generates a comprehensive dataset using finite element method (FEM) models, incorporating magnetic saturation effects and inverter-driven transients across diverse EV operating scenarios. Time-frequency features extracted via Discrete Wavelet Transform (DWT) from stator current signals are used to train a Transformer model for automated ITSC fault detection. The Transformer model, leveraging self-attention mechanisms, captures both local transient patterns and long-range dependencies within the time-frequency feature space. This architecture operates without sequential processing, in contrast to recurrent models such as LSTMs or RNNs, enabling efficient inference with a relatively low parameter count, which is advantageous for embedded applications. The proposed model achieves 97% validation accuracy on simulated data, demonstrating its potential for real-time PMSM fault detection. Additionally, the provided dataset and methodology facilitate reproducible research in ITSC diagnostics under realistic EV operating conditions. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Power and Energy Systems)
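
As a rough, hypothetical sketch of the pipeline this abstract describes (DWT time-frequency features feeding a compact Transformer encoder), the snippet below uses PyWavelets and PyTorch. The wavelet choice (db4), window length, model sizes, and the dwt_features/ITSCTransformer names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: DWT band energies per window -> Transformer classifier.
import numpy as np
import pywt
import torch
import torch.nn as nn

def dwt_features(current, wavelet="db4", level=4):
    """One energy value per DWT band of a stator-current window."""
    coeffs = pywt.wavedec(current, wavelet, level=level)
    return np.array([np.mean(c ** 2) for c in coeffs], dtype=np.float32)

class ITSCTransformer(nn.Module):
    def __init__(self, n_bands=5, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_bands, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 2)  # healthy vs. ITSC fault

    def forward(self, x):                  # x: (batch, windows, n_bands)
        z = self.encoder(self.embed(x))    # self-attention over windows
        return self.head(z.mean(dim=1))    # pool over time, then classify

# Toy usage: 16 consecutive windows of a simulated current signal.
windows = [dwt_features(np.random.randn(1024)) for _ in range(16)]
logits = ITSCTransformer()(torch.tensor(np.stack(windows)).unsqueeze(0))
```

Self-attention over the window sequence lets the model relate transient events that are far apart in time without recurrent state, which is the efficiency argument the abstract makes.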

13 pages, 4474 KiB  
Article
Imaging on the Edge: Mapping Object Corners and Edges with Stereo X-Ray Tomography
by Zhenduo Shang and Thomas Blumensath
Tomography 2025, 11(8), 84; https://doi.org/10.3390/tomography11080084 - 29 Jul 2025
Abstract
Background/Objectives: X-ray computed tomography (XCT) is a powerful tool for volumetric imaging, where three-dimensional (3D) images are generated from a large number of individual X-ray projection images. However, collecting the required number of low-noise projection images is time-consuming, limiting its applicability to scenarios requiring high temporal resolution, such as the study of dynamic processes. Inspired by stereo vision, we previously developed stereo X-ray imaging methods that operate with only two X-ray projections, enabling the 3D reconstruction of point and line fiducial markers at significantly faster temporal resolutions. Methods: Building on our prior work, this paper demonstrates the use of stereo X-ray techniques for 3D reconstruction of sharp object corners, eliminating the need for internal fiducial markers. This is particularly relevant for deformation measurement of manufactured components under load. Additionally, we explore model training using synthetic data when annotated real data is unavailable. Results: We show that the proposed method can reliably reconstruct sharp corners in 3D using only two X-ray projections. The results confirm the method’s applicability to real-world stereo X-ray images without relying on annotated real training datasets. Conclusions: Our approach enables stereo X-ray 3D reconstruction using synthetic training data that mimics key characteristics of real data, thereby expanding the method’s applicability in scenarios with limited training resources. Full article
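
The geometric core of stereo X-ray reconstruction is triangulation of matched 2D detections from the two projections. Below is a generic linear (DLT) triangulation sketch in NumPy; the projection matrices and the triangulate helper are textbook stereo geometry, not the authors' code.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover one 3D point from its pixel
    coordinates x1, x2 under 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # least-squares null vector of A
    X = vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Toy usage: two translated views of the point (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([1.0, 2.0, 10.0, 1.0])
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate(P1, P2, x1, x2))  # ~ [1. 2. 10.]
```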

26 pages, 6806 KiB  
Article
Fine Recognition of MEO SAR Ship Targets Based on a Multi-Level Focusing-Classification Strategy
by Zhaohong Li, Wei Yang, Can Su, Hongcheng Zeng, Yamin Wang, Jiayi Guo and Huaping Xu
Remote Sens. 2025, 17(15), 2599; https://doi.org/10.3390/rs17152599 - 26 Jul 2025
Abstract
The Medium Earth Orbit (MEO) spaceborne Synthetic Aperture Radar (SAR) has great coverage ability, which can improve maritime ship target surveillance performance significantly. However, due to the huge computational load required for image formation processing and the severe defocusing caused by ship motions, traditional ship recognition conducted in focused image domains cannot process MEO SAR data efficiently. To address this issue, a multi-level focusing-classification strategy for MEO SAR ship recognition is proposed, which is applied to the range-compressed ship data domain. Firstly, global fast coarse-focusing is conducted to compensate for sailing motion errors. Then, a coarse-classification network is designed to realize major target category classification, based on which local region image slices are extracted. Next, fine-focusing is performed to correct high-order motion errors, followed by fine-classification of the image slices to obtain the final ship class. Equivalent MEO SAR ship images generated from real LEO SAR data are utilized to construct training and testing datasets. Simulated MEO SAR ship data are also used to evaluate the generalization of the whole method. The experimental results demonstrate that the proposed method can achieve high classification precision. Since only local region slices are used during the second-level processing step, the complex computations induced by fine-focusing of the full image can be avoided, thereby significantly improving overall efficiency. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
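
The multi-level strategy is easiest to grasp as control flow. The sketch below is purely schematic: coarse_focus, fine_focus, coarse_classify, and fine_classify are hypothetical stand-ins (with FFT placeholders so the snippet runs), not the paper's range-compressed-domain processing.

```python
import numpy as np

# Hypothetical stand-ins for the paper's processing stages.
def coarse_focus(rc):     return np.abs(np.fft.fft(rc, axis=0))   # fast global focus
def fine_focus(roi):      return np.abs(np.fft.fft(roi, axis=0))  # high-order correction
def coarse_classify(img): return "cargo", [img[:16, :16]]         # category + ROI slices
def fine_classify(sl):    return "container ship"                 # final fine label

def recognize(range_compressed):
    coarse_img = coarse_focus(range_compressed)       # level 1: whole scene
    category, rois = coarse_classify(coarse_img)
    # Level 2: only the local ROI slices are re-focused and re-classified,
    # so full-image fine-focusing is never paid for.
    return category, [fine_classify(fine_focus(r)) for r in rois]

print(recognize(np.random.randn(64, 64)))
```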

20 pages, 2792 KiB  
Article
Capturing High-Frequency Harmonic Signatures for NILM: Building a Dataset for Load Disaggregation
by Farid Dinar, Sébastien Paris and Éric Busvelle
Sensors 2025, 25(15), 4601; https://doi.org/10.3390/s25154601 - 25 Jul 2025
Abstract
Advanced Non-Intrusive Load Monitoring (NILM) research is important to help reduce energy consumption. Very-low-frequency approaches have traditionally faced challenges in separating appliance uses due to low discriminative information. The richer signatures available in high-frequency electrical data include many harmonic orders that have the potential to advance disaggregation. This has been explored to some extent, but not comprehensively, due to the lack of an appropriate public dataset. This paper presents the development of a cost-effective energy monitoring system that scales to multiple measurement points while producing detailed measurements. We detail our approach to creating a NILM dataset comprising both aggregate loads and individual appliance measurements, while ensuring that the dataset is reproducible and accessible. Ultimately, the dataset can be used to validate NILM methods, and we show, using machine learning techniques, that high-frequency features improve disaggregation accuracy compared with traditional methods. This work addresses a critical gap in NILM research by detailing the design and implementation of a data acquisition system capable of generating rich and structured datasets that support precise energy consumption analysis and prepare the essential materials for advanced, real-time energy disaggregation and smart energy management applications. Full article
(This article belongs to the Section Intelligent Sensors)
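
As a toy illustration of harmonic signatures of the kind the abstract describes, the snippet below estimates the magnitudes of the first mains harmonics from a single FFT frame of an aggregate current. The sampling rate, 50 Hz fundamental, and harmonic_signature helper are assumptions, not the paper's acquisition pipeline.

```python
import numpy as np

def harmonic_signature(current, fs, f0=50.0, n_harmonics=15):
    """Magnitudes of the first n harmonics of the mains fundamental f0,
    read off one windowed FFT frame of the aggregate current."""
    n = len(current)
    spectrum = np.abs(np.fft.rfft(current * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return np.array([
        spectrum[np.argmin(np.abs(freqs - k * f0))]   # nearest FFT bin
        for k in range(1, n_harmonics + 1)
    ])

# Toy usage: 0.2 s of a 50 Hz load current with a 3rd-harmonic component.
fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
i_t = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)
print(harmonic_signature(i_t, fs)[:5])
```

Appliances with different electronics (rectifiers, motors, resistive loads) leave distinct harmonic patterns, which is what makes such features discriminative for disaggregation.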

25 pages, 19515 KiB  
Article
Towards Efficient SAR Ship Detection: Multi-Level Feature Fusion and Lightweight Network Design
by Wei Xu, Zengyuan Guo, Pingping Huang, Weixian Tan and Zhiqi Gao
Remote Sens. 2025, 17(15), 2588; https://doi.org/10.3390/rs17152588 - 24 Jul 2025
Abstract
Synthetic Aperture Radar (SAR) provides all-weather, all-time imaging capabilities, enabling reliable maritime ship detection under challenging weather and lighting conditions. However, most high-precision detection models rely on complex architectures and large-scale parameters, limiting their applicability to resource-constrained platforms such as satellite-based systems, where model size, computational load, and power consumption are tightly restricted. Thus, guided by the principles of lightweight design, robustness, and energy efficiency optimization, this study proposes a three-stage collaborative multi-level feature fusion framework to reduce model complexity without compromising detection performance. Firstly, the backbone network integrates depthwise separable convolutions and a Convolutional Block Attention Module (CBAM) to suppress background clutter and extract effective features. Building upon this, a cross-layer feature interaction mechanism is introduced via the Multi-Scale Coordinated Fusion (MSCF) and Bi-EMA Enhanced Fusion (Bi-EF) modules to strengthen joint spatial-channel perception. To further enhance the detection capability, Efficient Feature Learning (EFL) modules are embedded in the neck to improve feature representation. Experiments on the SAR Ship Detection Dataset (SSDD) show that this method, with only 1.6 M parameters, achieves a mean average precision (mAP) of 98.35% in complex scenarios, including inshore and offshore environments. It resolves the trade-off in traditional methods, which cannot satisfy accuracy and hardware resource constraints simultaneously, providing a new technical path for real-time SAR ship detection on satellite platforms. Full article
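
A minimal PyTorch sketch of the depthwise separable convolution underpinning such lightweight backbones (the CBAM attention is omitted for brevity); the block name, channel sizes, and activation are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per channel) followed by a 1x1 pointwise
    conv that mixes channels; far fewer weights than a standard conv."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A 3x3 depthwise-separable layer needs in*k*k + in*out weights instead of
# in*out*k*k, e.g. 64->128 at k=3: 8,768 vs. 73,728 weights.
block = DepthwiseSeparableConv(64, 128)
y = block(torch.randn(1, 64, 56, 56))
```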

38 pages, 9771 KiB  
Article
Global Research Trends in Biomimetic Lattice Structures for Energy Absorption and Deformation: A Bibliometric Analysis (2020–2025)
by Sunny Narayan, Brahim Menacer, Muhammad Usman Kaisan, Joseph Samuel, Moaz Al-Lehaibi, Faisal O. Mahroogi and Víctor Tuninetti
Biomimetics 2025, 10(7), 477; https://doi.org/10.3390/biomimetics10070477 - 19 Jul 2025
Abstract
Biomimetic lattice structures, inspired by natural architectures such as bone, coral, mollusk shells, and Euplectella aspergillum, have gained increasing attention for their exceptional strength-to-weight ratios, energy absorption, and deformation control. These properties make them ideal for advanced engineering applications in aerospace, biomedical devices, and structural impact protection. This study presents a comprehensive bibliometric analysis of global research on biomimetic lattice structures published between 2020 and 2025, aiming to identify thematic trends, collaboration patterns, and underexplored areas. A curated dataset of 3685 publications was extracted from databases including PubMed, Dimensions, Scopus, IEEE, Google Scholar, and ScienceDirect, and merged. After deduplication and cleaning, 2226 full research articles were selected for the bibliometric analysis (excluding review works, conference papers, book chapters, and notes), and CiteSpace, VOSviewer (version 1.6.20), and the Bibliometrix R package (4.5, 64-bit) were used to map co-authorship networks, institutional affiliations, keyword co-occurrence, and citation relationships. A significant increase in the number of publications was found over recent years, reflecting growing interest in this area. The results identify China as the most prolific contributor, with substantial institutional support and active collaboration networks, especially with European research groups. Key research focuses include additive manufacturing, finite element modeling, machine learning-based design optimization, and the performance evaluation of bioinspired geometries. Notably, the integration of artificial intelligence into structural modeling is accelerating a shift toward data-driven design frameworks. However, gaps remain in geometric modeling standardization, fatigue behavior analysis, and the real-world validation of lattice structures under complex loading conditions. This study provides a strategic overview of current research directions and offers guidance for future interdisciplinary exploration. The insights are intended to support researchers and practitioners in advancing next-generation biomimetic materials with superior mechanical performance and application-specific adaptability. Full article
(This article belongs to the Special Issue Nature-Inspired Science and Engineering for Sustainable Future)

21 pages, 2594 KiB  
Article
Extraction of Basic Features and Typical Operating Conditions of Wind Power Generation for Sustainable Energy Systems
by Yongtao Sun, Qihui Yu, Xinhao Wang, Shengyu Gao and Guoxin Sun
Sustainability 2025, 17(14), 6577; https://doi.org/10.3390/su17146577 - 18 Jul 2025
Abstract
Accurate extraction of representative operating conditions is crucial for optimizing systems in renewable energy applications. This study proposes a novel framework that combines the Parzen window estimation method, ideal for nonparametric modeling of wind, solar, and load datasets, with a game theory-based time scale selection mechanism. The novelty of this work lies in integrating probabilistic density modeling with multi-indicator evaluation to derive realistic operational profiles. We first validate the superiority of the Parzen window approach over traditional Weibull and Beta distributions in estimating wind and solar probability density functions. In addition, we analyze the influence of key meteorological parameters such as wind direction, temperature, and solar irradiance on energy production. Using three evaluation metrics, the main result shows that a 3-day representative time scale offers optimal accuracy when determined through game theory methods. Validation with real-world data from Inner Mongolia confirms the robustness of the proposed method, yielding low errors in wind, solar, and load profiles. This study contributes a novel 3-day typical profile extraction method validated on real meteorological data, providing a data-driven foundation for optimizing energy storage systems under renewable uncertainty. This framework supports energy sustainability by ensuring realistic modeling under renewable intermittency. Full article
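
For reference, Parzen window (kernel density) estimation of a wind-speed pdf takes only a few lines with scikit-learn; the Weibull-shaped stand-in data and the fixed bandwidth below are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Stand-in wind-speed samples (m/s) with a roughly Weibull shape.
rng = np.random.default_rng(0)
wind = rng.weibull(2.0, size=1000) * 8.0

# Gaussian Parzen window; bandwidth would normally be tuned, e.g. by CV.
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(wind.reshape(-1, 1))
grid = np.linspace(0, 25, 200).reshape(-1, 1)
pdf = np.exp(kde.score_samples(grid))   # estimated density on the grid
```

Unlike a fitted Weibull or Beta distribution, the kernel estimate imposes no parametric shape, which is the advantage over those baselines that the abstract reports.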

20 pages, 1550 KiB  
Article
Strategy for Precopy Live Migration and VM Placement in Data Centers Based on Hybrid Machine Learning
by Taufik Hidayat, Kalamullah Ramli and Ruki Harwahyu
Informatics 2025, 12(3), 71; https://doi.org/10.3390/informatics12030071 - 15 Jul 2025
Abstract
Data center virtualization has grown rapidly alongside the expansion of application-based services but continues to face significant challenges, such as downtime caused by suboptimal hardware selection, load balancing, power management, incident response, and resource allocation. To address these challenges, this study proposes a combined machine learning method that uses a Markov decision process (MDP) to choose which VMs to migrate, a random forest (RF) to rank the VMs according to load, and NSGA-III to pursue multiple optimization objectives, such as reducing downtime, improving SLA compliance, and increasing energy efficiency. For this model, the GWA-Bitbrains dataset was used, on which it achieved a classification accuracy of 98.77%, a MAPE of 7.69% in predicting migration duration, and an energy efficiency improvement of 90.80%. The results of real-world experiments show that the hybrid machine learning strategy could significantly reduce the data center workload, the total migration time, and the downtime. These results affirm the effectiveness of integrating the MDP, the RF method, and NSGA-III to provide holistic VM placement strategies for large-scale data centers. Full article
(This article belongs to the Section Machine Learning)
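
The RF ranking stage can be pictured as follows: a random forest scores each VM's suitability for migration from simple load features. The three-feature schema, labels, and threshold below are invented for illustration and are not the GWA-Bitbrains layout.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in: per-VM load features (CPU %, memory %, network I/O);
# labels mark VMs that were historically good migration candidates.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = (X[:, 0] > 0.7).astype(int)          # e.g. high-CPU VMs were migrated

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
vm_now = rng.random((10, 3))             # current cluster snapshot
scores = rf.predict_proba(vm_now)[:, 1]  # migration-candidate probability
ranking = np.argsort(scores)[::-1]       # most suitable VMs first
```

In the full strategy, this ranking would feed the MDP's VM-selection step, with NSGA-III then trading off downtime, SLA, and energy across placements.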

35 pages, 11934 KiB  
Article
A Data-Driven Approach for Generating Synthetic Load Profiles with GANs
by Tsvetelina Kaneva, Irena Valova, Katerina Gabrovska-Evstatieva and Boris Evstatiev
Appl. Sci. 2025, 15(14), 7835; https://doi.org/10.3390/app15147835 - 13 Jul 2025
Abstract
The generation of realistic electrical load profiles is essential for advancing smart grid analytics, demand forecasting, and privacy-preserving data sharing. Traditional approaches often rely on large, high-resolution datasets and complex recurrent neural architectures, which can be unstable or ineffective when training data are limited. This paper proposes a data-driven framework based on a lightweight 1D Convolutional Wasserstein GAN with Gradient Penalty (Conv1D-WGAN-GP) for generating high-fidelity synthetic 24 h load profiles. The model is specifically designed to operate on small- to medium-sized datasets, where recurrent models often fail due to overfitting or training instability. The approach leverages the ability of Conv1D layers to capture localized temporal patterns while remaining compact and stable during training. We benchmark the proposed model against vanilla GAN, WGAN-GP, and Conv1D-GAN across four datasets with varying consumption patterns and sizes, including industrial, agricultural, and residential domains. Quantitative evaluations using statistical divergence measures, Real-vs-Synthetic Distinguishability Score, and visual similarity confirm that Conv1D-WGAN-GP consistently outperforms baselines, particularly in low-data scenarios. This demonstrates its robustness, generalization capability, and suitability for privacy-sensitive energy modeling applications where access to large datasets is constrained. Full article
(This article belongs to the Special Issue Innovations in Artificial Neural Network Applications)
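
The gradient penalty that gives WGAN-GP its training stability is compact enough to show directly. This is the standard formulation in PyTorch, assuming a critic that maps a batch of 1D load profiles to scalar scores; it is a generic sketch, not the authors' code.

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP term: (||grad critic(x_hat)||_2 - 1)^2 evaluated on random
    interpolates x_hat between real and generated load profiles."""
    eps = torch.rand(real.size(0), 1, 1)                 # per-sample mix
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    grads = torch.autograd.grad(
        outputs=scores, inputs=x_hat,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]
    return ((grads.view(grads.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()

# Toy usage with a linear critic over 24 h profiles at 15-min resolution.
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(96, 1))
real, fake = torch.randn(8, 1, 96), torch.randn(8, 1, 96)
gp = gradient_penalty(critic, real, fake)
```

The returned term is added to the critic loss with a weight (commonly 10), softly constraining the critic to be 1-Lipschitz without the weight clipping of the original WGAN.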

17 pages, 3854 KiB  
Article
Research on Signal Processing Algorithms Based on Wearable Laser Doppler Devices
by Yonglong Zhu, Yinpeng Fang, Jinjiang Cui, Jiangen Xu, Minghang Lv, Tongqing Tang, Jinlong Ma and Chengyao Cai
Electronics 2025, 14(14), 2761; https://doi.org/10.3390/electronics14142761 - 9 Jul 2025
Abstract
Wearable laser Doppler devices are susceptible to complex noise interferences, such as Gaussian white noise, baseline drift, and motion artifacts, with motion artifacts significantly impacting clinical diagnostic accuracy. Addressing the limitations of existing denoising methods—including traditional adaptive filtering that relies on prior noise information, modal decomposition techniques that depend on empirical parameter optimization and are prone to modal aliasing, wavelet threshold functions that struggle to balance signal preservation with smoothness, and the high computational complexity of deep learning approaches—this paper proposes an ISSA-VMD-AWPTD denoising algorithm. This innovative approach integrates an improved sparrow search algorithm (ISSA), variational mode decomposition (VMD), and adaptive wavelet packet threshold denoising (AWPTD). The ISSA is enhanced through cubic chaotic mapping, butterfly optimization, and sine–cosine search strategies, targeting the minimization of the envelope entropy of modal components for adaptive optimization of VMD's decomposition levels and penalty factors. A correlation coefficient-based selection mechanism is employed to separate target and mixed modes effectively, allowing for the efficient removal of noise components. Additionally, an exponential adaptive threshold function is introduced, combining wavelet packet node energy proportion analysis to achieve efficient signal reconstruction. By leveraging the rapid convergence property of the ISSA (completing parameter optimization within five iterations), the computational load of traditional VMD is reduced while maintaining denoising accuracy. Experimental results demonstrate that for a 200 Hz test signal, the proposed algorithm achieves a signal-to-noise ratio (SNR) of 24.47 dB, an improvement of 18.8% over the VMD method (20.63 dB), and a root-mean-square error (RMSE) of 0.0023, a reduction of 69.3% compared to the VMD method (0.0075). The processing results for measured human blood flow signals achieve an SNR of 24.11 dB, an RMSE of 0.0023, and a correlation coefficient (R) of 0.92, all outperforming other algorithms, such as VMD and WPTD. This study effectively addresses issues related to parameter sensitivity and incomplete noise separation in traditional methods, providing a high-precision and low-complexity real-time signal processing solution for wearable devices. However, the parameter optimization still needs improvement when dealing with large datasets. Full article
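
The fitness that the ISSA minimizes, envelope entropy, is simple to state. A minimal sketch assuming SciPy: envelope_entropy would be evaluated on each VMD mode for every candidate (K, alpha) pair during the search.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_entropy(mode):
    """Shannon entropy of the normalized Hilbert envelope of one mode;
    lower values indicate a cleaner, more structured component."""
    env = np.abs(hilbert(mode))
    p = env / env.sum()
    return -np.sum(p * np.log(p + 1e-12))

# The optimizer would score each candidate (K, alpha) by the smallest
# envelope entropy among the K modes VMD returns, then minimize that score.
t = np.linspace(0, 1, 2000)
print(envelope_entropy(np.sin(2 * np.pi * 200 * t)))
```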

21 pages, 9172 KiB  
Article
Spike-Driven Channel-Temporal Attention Network with Multi-Scale Convolution for Energy-Efficient Bearing Fault Detection
by JinGyo Lim and Seong-Eun Kim
Appl. Sci. 2025, 15(13), 7622; https://doi.org/10.3390/app15137622 - 7 Jul 2025
Abstract
Real-time bearing fault diagnosis necessitates highly accurate, computationally efficient, and energy-conserving models suitable for deployment on resource-constrained edge devices. To address these demanding requirements, we propose the Spike Convolutional Attention Network (SpikeCAN), a novel spike-driven neural architecture tailored explicitly for real-time industrial diagnostics. SpikeCAN utilizes the inherent sparsity and event-driven processing capabilities of spiking neural networks (SNNs), significantly minimizing both computational load and power consumption. The SpikeCAN integrates a multi-dilated receptive field (MDRF) block and a convolution-based spike attention module. The MDRF module effectively captures extensive temporal dependencies from signals across various scales. Simultaneously, the spike-based attention mechanism dynamically extracts spatial-temporal patterns, substantially improving diagnostic accuracy and reliability. We validate SpikeCAN on two public bearing fault datasets: the Case Western Reserve University (CWRU) and the Society for Machinery Failure Prevention Technology (MFPT). The proposed model achieves 99.86% accuracy on the four-class CWRU dataset through five-fold cross-validation and 99.88% accuracy with a conventional 70:30 train–test random split. For the more challenging ten-class classification task on the same dataset, it achieves 97.80% accuracy under five-fold cross-validation. Furthermore, SpikeCAN attains a state-of-the-art accuracy of 96.31% on the fifteen-class MFPT dataset, surpassing existing benchmarks. These findings underscore a significant advancement in fault diagnosis technology, demonstrating the considerable practical potential of spike-driven neural networks in real-time, energy-efficient industrial diagnostic applications. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
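
The event-driven economy of SNNs comes from units like the leaky integrate-and-fire neuron sketched below, which emits sparse binary spikes instead of dense activations. The time constant and threshold are illustrative; this is a generic LIF model, not SpikeCAN's exact neuron.

```python
import numpy as np

def lif_neuron(inputs, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire: the membrane potential leaks toward rest
    with time constant tau, fires a spike and resets on crossing v_th."""
    v, spikes = v_reset, []
    for x in inputs:
        v += dt / tau * (v_reset - v) + x   # leak plus input current
        if v >= v_th:
            spikes.append(1)
            v = v_reset                     # hard reset after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

# Weak random drive yields a sparse spike train: most timesteps cost nothing
# downstream, which is the source of the energy savings claimed above.
spike_train = lif_neuron(np.random.rand(100) * 0.2)
print(spike_train.sum(), "spikes in 100 steps")
```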

22 pages, 3925 KiB  
Article
Optimized Multiple Regression Prediction Strategies with Applications
by Yiming Zhao, Shu-Chuan Chu, Ali Riza Yildiz and Jeng-Shyang Pan
Symmetry 2025, 17(7), 1085; https://doi.org/10.3390/sym17071085 - 7 Jul 2025
Abstract
As a classical statistical method, multiple regression is widely used for forecasting tasks in power, medicine, finance, and other fields. The rise of machine learning has led to the adoption of neural networks, particularly Long Short-Term Memory (LSTM) models, for handling complex forecasting problems, owing to their strong ability to capture temporal dependencies in sequential data. Nevertheless, the performance of LSTM models is highly sensitive to hyperparameter configuration, and traditional manual tuning methods suffer from inefficiency, excessive reliance on expert experience, and poor generalization. To address the challenges of complex hyperparameter spaces and the limitations of manual adjustment, an enhanced sparrow search algorithm (ISSA) with adaptive parameter configuration was developed for LSTM-based multivariate regression frameworks, in which systematic optimization of hidden layer dimensionality, learning rate scheduling, and iterative training thresholds enhances model generalization capability. The SSA is improved in several ways. First, the population is initialized by a reverse learning strategy to increase population diversity. Second, the mechanism for updating producer sparrow positions is improved: different update formulas are selected based on the value of a random number to avoid convergence to the origin and improve search flexibility, and the step factor is dynamically adjusted to improve solution accuracy. Finally, to improve global search capability and escape local optima, Lévy flight is integrated into the position update mechanism for detection and early warning. Experimental evaluations on benchmark functions from the CEC2005 test set demonstrated that the ISSA outperforms PSO, the SSA, and other algorithms in optimization performance. Further validation with power load and real estate datasets revealed that the ISSA-LSTM model achieves superior prediction accuracy compared to existing approaches, achieving an RMSE of 83.102 and an R2 of 0.550 for electric load forecasting, and an RMSE of 18.822 and an R2 of 0.522 for real estate price prediction. Future research will explore the integration of the ISSA with alternative neural architectures such as GRUs and Transformers to assess its flexibility and effectiveness across different sequence modeling paradigms. Full article
(This article belongs to the Section Computer)
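
Of the listed SSA improvements, the reverse (opposition-based) learning initialization is the most self-contained. A minimal sketch assuming minimization over box bounds [lb, ub]; the opposition_init name is invented here.

```python
import numpy as np

def opposition_init(pop_size, dim, lb, ub, fitness, rng=None):
    """Reverse-learning initialization: draw a random population, form its
    opposite (lb + ub - x), and keep the fitter half of the union."""
    rng = rng or np.random.default_rng()
    x = lb + (ub - lb) * rng.random((pop_size, dim))
    x_opp = lb + ub - x                       # opposite points
    union = np.vstack([x, x_opp])
    scores = np.apply_along_axis(fitness, 1, union)
    return union[np.argsort(scores)[:pop_size]]  # best pop_size survive

# Toy usage on a sphere function in [-5, 5]^3.
pop = opposition_init(20, 3, -5.0, 5.0, lambda v: np.sum(v ** 2))
```

Starting from the fitter half of paired opposites spreads the initial population more evenly, which is the diversity benefit the abstract cites.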

15 pages, 2722 KiB  
Article
Predicting the Evolution of Capacity Degradation Histograms of Rechargeable Batteries Under Dynamic Loads via Latent Gaussian Processes
by Daocan Wang, Xinggang Li and Jiahuan Lu
Energies 2025, 18(13), 3503; https://doi.org/10.3390/en18133503 - 2 Jul 2025
Abstract
Accurate prediction of lithium-ion battery capacity degradation under dynamic loads is crucial yet challenging due to limited data availability and high cell-to-cell variability. This study proposes a Latent Gaussian Process (GP) model to forecast the full distribution of capacity fade in the form of high-dimensional histograms, rather than relying on point estimates. The model integrates Principal Component Analysis with GP regression to learn temporal degradation patterns from partial early-cycle data of a target cell, using a fully degraded reference cell. Experiments on the NASA dataset with randomized dynamic load profiles demonstrate that Latent GP enables full-lifecycle capacity distribution prediction using only early-cycle observations. Compared with standard GP, long short-term memory (LSTM), and Monte Carlo Dropout LSTM baselines, it achieves superior accuracy in terms of Kullback–Leibler divergence and mean squared error. Sensitivity analyses further confirm the model’s robustness to input noise and hyperparameter settings, highlighting its potential for practical deployment in real-world battery health prognostics. Full article
(This article belongs to the Section D: Energy Storage and Application)
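
The Latent GP idea, compressing per-cycle capacity histograms with PCA and regressing each latent coordinate over cycles with a GP, sketches naturally in scikit-learn. The histogram shape, three components, RBF-plus-noise kernel, and 50-cycle training cut below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

cycles = np.arange(200)[:, None]
hists = np.random.rand(200, 40)           # stand-in (cycles x histogram bins)

pca = PCA(n_components=3).fit(hists)      # learned from a reference cell
z = pca.transform(hists)                  # low-dim latent trajectories

# Fit one GP per latent coordinate on early-cycle data only, then
# extrapolate over the full life and decode back to histograms.
gps = [GaussianProcessRegressor(RBF() + WhiteKernel()).fit(cycles[:50], z[:50, k])
       for k in range(3)]
z_future = np.column_stack([gp.predict(cycles) for gp in gps])
hist_pred = pca.inverse_transform(z_future)   # full-lifecycle histograms
```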

21 pages, 1476 KiB  
Article
AI-Driven Handover Management and Load Balancing Optimization in Ultra-Dense 5G/6G Cellular Networks
by Chaima Chabira, Ibraheem Shayea, Gulsaya Nurzhaubayeva, Laura Aldasheva, Didar Yedilkhan and Saule Amanzholova
Technologies 2025, 13(7), 276; https://doi.org/10.3390/technologies13070276 - 1 Jul 2025
Abstract
This paper presents a comprehensive review of handover management and load balancing optimization (LBO) in ultra-dense 5G and emerging 6G cellular networks. With the increasing deployment of small cells and the rapid growth of data traffic, these networks face significant challenges in ensuring seamless mobility and efficient resource allocation. Traditional handover and load balancing techniques, primarily designed for 4G systems, are no longer sufficient to address the complexity of heterogeneous network environments that incorporate millimeter-wave communication, Internet of Things (IoT) devices, and unmanned aerial vehicles (UAVs). The review focuses on how recent advances in artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), are being applied to improve predictive handover decisions and enable real-time, adaptive load distribution. AI-driven solutions can significantly reduce handover failures, latency, and network congestion, while improving overall user experience and quality of service (QoS). This paper surveys state-of-the-art research on these techniques, categorizing them according to their application domains and evaluating their performance benefits and limitations. Furthermore, the paper discusses the integration of intelligent handover and load balancing methods in smart city scenarios, where ultra-dense networks must support diverse services with high reliability and low latency. Key research gaps are also identified, including the need for standardized datasets, energy-efficient AI models, and context-aware mobility strategies. Overall, this review aims to guide future research and development in designing robust, AI-assisted mobility and resource management frameworks for next-generation wireless systems. Full article

18 pages, 1463 KiB  
Article
On Predicting Marine Engine Measurements with Synthetic Data in Scarce Dataset
by Sandi Baressi Šegota, Igor Poljak, Nikola Anđelić and Vedran Mrzljak
J. Mar. Sci. Eng. 2025, 13(7), 1289; https://doi.org/10.3390/jmse13071289 - 30 Jun 2025
Abstract
The scarcity of high-quality maritime datasets poses a significant challenge for machine learning (ML) applications in marine engineering, particularly in scenarios where real-world data collection is limited or impractical. This study investigates the effectiveness of synthetic data generation and cross-modeling in predicting operational metrics of LNG carrier engines. A total of 38 real-world data points were collected from port and starboard engines, focusing on four target outputs: mechanical efficiency, fuel consumption, load, and effective power. CopulaGAN, a hybrid generative model combining statistical copulas and generative adversarial networks, was employed to produce synthetic datasets. These were used to train multilayer perceptron (MLP) regression models, which were optimized via grid search and validated through five-fold cross-validation. The results show that synthetic data can yield accurate models, with a mean absolute percentage error (MAPE) below 2% in most cases. The combined synthetic datasets consistently outperformed those generated from single-engine data. Cross-modeling was partially successful, as models trained on starboard data generalized well to port data but not vice versa. The engine load variable remained challenging to predict due to its narrow and low-range distribution. Overall, the study highlights synthetic data as a viable solution for enhancing the performance of ML models in data-scarce maritime applications. Full article
(This article belongs to the Section Ocean Engineering)
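
The modeling stage, MLP regression tuned by grid search with five-fold cross-validation, maps directly onto scikit-learn. The sketch mimics the 38-point setting with synthetic stand-in data; the feature count and parameter grid are assumptions, not the study's exact setup.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in engine data: a few measured channels -> one target (e.g. fuel).
rng = np.random.default_rng(1)
X = rng.random((38, 6))                       # 38 points, as in the study
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.01, 38)

pipe = make_pipeline(StandardScaler(), MLPRegressor(max_iter=5000, random_state=1))
grid = GridSearchCV(
    pipe,
    {"mlpregressor__hidden_layer_sizes": [(8,), (16,), (16, 8)],
     "mlpregressor__alpha": [1e-4, 1e-3, 1e-2]},
    cv=5,
    scoring="neg_mean_absolute_percentage_error",
)
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)   # best config and its MAPE
```

With so few real points, the cross-validated grid search guards against the overfitting that synthetic-data augmentation is meant to mitigate.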
