Search Results (276)

Search Parameters:
Keywords = LSTM–transformer hybrid model

29 pages, 1055 KB  
Article
An Interpretable Multi-Dataset Learning Framework for Breast Cancer Prediction Using Clinical and Biomedical Tabular Data
by Muhammad Ateeb Ather, Abdullah, Zulaikha Fatima, José Luis Oropeza Rodríguez and Grigori Sidorov
Computers 2026, 15(2), 97; https://doi.org/10.3390/computers15020097 - 2 Feb 2026
Abstract
Despite numerous advances in its treatment and management, breast cancer continues to be a source of mortality for millions of female patients across the world each year; reliable diagnostic assistance tools that can detect the disease in its early stages are therefore needed. In addition to the proposed framework, this research performs a comprehensive comparative assessment of traditional machine learning, deep learning, and transformer-based models for breast cancer prediction in a multi-dataset environment. To improve diversity and reduce potential dataset bias, three datasets were combined: biopsy morphology (WDBC), biochemical and metabolic properties (Coimbra), and cytological attributes (WBCO), exposing the model to heterogeneous feature domains and allowing robustness to be evaluated under distributional variation. Building on this evaluation, a hybrid architecture, referred to as the FT-Transformer-Attention-LSTM-SVM framework, was designed that is well suited to processing and analyzing tabular biomedical data. The proposed design achieves 99.90% accuracy in the primary test environment, a mean accuracy of 99.56% under 10-fold cross-validation, and 98.50% accuracy on the WBCO test set, with paired two-sample t-test significance below 0.0001. Feature importance analysis with SHAP and LIME shows that the model's decisions rest on key attributes such as radius, concavity, perimeter, compactness, and texture. An ablation test further confirms the contribution of the FT-Transformer component. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain (3rd Edition))
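
The abstract does not spell out how the FT-Transformer, attention, LSTM, and SVM stages are wired together, so the sketch below is one plausible reading rather than the authors' implementation: an FT-Transformer-style tokenizer turns each numeric feature into a learned token, a Transformer encoder attends over the tokens, an LSTM pools the token sequence, and an sklearn SVM serves as the final decision head. Layer sizes and the synthetic 30-feature (WDBC-like) input are assumptions.

```python
# Hypothetical sketch of an FT-Transformer -> LSTM feature extractor whose
# pooled outputs are classified by an SVM; sizes and pooling are assumptions.
import torch
import torch.nn as nn
from sklearn.svm import SVC

class FTTokenizer(nn.Module):
    """FT-Transformer-style tokenizer: one learned embedding per numeric feature."""
    def __init__(self, n_features: int, d_model: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_features, d_model))
        self.bias = nn.Parameter(torch.zeros(n_features, d_model))

    def forward(self, x):                                 # x: (batch, n_features)
        return x.unsqueeze(-1) * self.weight + self.bias  # (batch, n_features, d_model)

class HybridExtractor(nn.Module):
    def __init__(self, n_features=30, d_model=32, n_heads=4, lstm_hidden=64):
        super().__init__()
        self.tokenizer = FTTokenizer(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lstm = nn.LSTM(d_model, lstm_hidden, batch_first=True)

    def forward(self, x):
        tokens = self.encoder(self.tokenizer(x))  # attention over feature tokens
        _, (h, _) = self.lstm(tokens)             # LSTM summarizes the token sequence
        return h[-1]                              # (batch, lstm_hidden) feature vector

# Synthetic stand-in for WDBC-style tabular data (30 numeric features).
X = torch.randn(200, 30)
y = (X[:, 0] > 0).long()
feats = HybridExtractor()(X).detach().numpy()
svm = SVC(kernel="rbf").fit(feats, y.numpy())     # SVM as the final decision head
print("train accuracy:", svm.score(feats, y.numpy()))
```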

63 pages, 6866 KB  
Review
Efficient Feature Extraction for EEG-Based Classification: A Comparative Review of Deep Learning Models
by Louisa Hallal, Jason Rhinelander, Ramesh Venkat and Aaron Newman
AI 2026, 7(2), 50; https://doi.org/10.3390/ai7020050 - 1 Feb 2026
Abstract
Feature extraction (FE) is an important step in electroencephalogram (EEG)-based classification for brain–computer interface (BCI) systems and neurocognitive monitoring. However, the dynamic and low-signal-to-noise nature of EEG data makes achieving robust FE challenging. Recent deep learning (DL) advances have offered alternatives to traditional manual feature engineering by enabling end-to-end learning from raw signals. In this paper, we present a comparative review of 88 DL models published over the last decade, focusing on EEG FE. We examine convolutional neural networks (CNNs), Transformer-based mechanisms, recurrent architectures including recurrent neural networks (RNNs) and long short-term memory (LSTM), and hybrid models. Our analysis focuses on architectural adaptations, computational efficiency, and classification performance across EEG tasks. Our findings reveal that efficient EEG FE depends more on architectural design than model depth. Compact CNNs offer the best efficiency–performance trade-offs in data-limited settings, while Transformers and hybrid models improve long-range temporal representation at a higher computational cost. Thus, the field is shifting toward lightweight hybrid designs that balance local FE with global temporal modeling. This review aims to guide BCI developers and future neurotechnology research toward efficient, scalable, and interpretable EEG-based classification frameworks. Full article

50 pages, 8269 KB  
Article
A Hybrid Deep Learning Framework for Automated Dental Disorder Diagnosis from X-Ray Images
by A. A. Abd El-Aziz, Mohammed Elmogy, Mahmood A. Mahmood and Sameh Abd El-Ghany
J. Clin. Med. 2026, 15(3), 1076; https://doi.org/10.3390/jcm15031076 - 29 Jan 2026
Abstract
Background: Dental disorders, such as cavities, periodontal disease, and periapical infections, remain major global health issues, often resulting in pain, tooth loss, and systemic complications if not identified early. Traditional diagnostic methods rely heavily on visual inspection and manual interpretation of panoramic X-ray images by dental professionals, making them time-consuming, subjective, and less accessible in resource-limited settings. Objectives: Accurate and timely diagnosis is vital for effective treatment and prevention of disease progression, reducing healthcare costs and patient discomfort. Recent advances in deep learning (DL) have demonstrated remarkable potential to automate and improve the precision of dental diagnostics by objectively analyzing panoramic, periapical, and bitewing X-rays. Methods: In this research, a hybrid feature-fusion framework is proposed. It integrates handcrafted Histogram of Oriented Gradients (HOG) features with deep representations from DenseNet-201 and the Shifted Window (Swin) Transformer models. Sequential dependencies among the fused features were learned utilizing the Long Short-Term Memory (LSTM) classifier. The framework was evaluated on the Dental Radiography Analysis and Diagnosis (DRAD) dataset following preprocessing steps, including resizing, normalization, Contrast Limited Adaptive Histogram Equalization (CLAHE) enhancement, and image cropping. Results: The proposed LSTM-based hybrid model achieved 96.47% accuracy, 91.76% specificity, 94.92% precision, 91.76% recall, and 93.14% F1-score. Conclusions: The proposed framework offers flexibility, interpretability, and strong empirical performance, making it suitable for various image-based recognition applications and serving as a reproducible framework for future research on hybrid feature fusion and sequence-based classification. Full article
(This article belongs to the Special Issue Clinical Advances in Cancer Imaging)
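
A much-simplified sketch of the hybrid feature-fusion idea under stated assumptions: skimage computes the handcrafted HOG descriptor, an untrained DenseNet-201 backbone stands in for both deep branches (the Swin Transformer branch is omitted for brevity), and the fused vector is chunked into a sequence so an LSTM can classify it, as the abstract describes. All shapes, the chunk length, and the four-class head are illustrative.

```python
# Simplified sketch of HOG + deep-feature fusion feeding an LSTM classifier.
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import hog
from torchvision.models import densenet201

img = np.random.rand(224, 224).astype(np.float32)            # stand-in X-ray image
hog_feat = hog(img, pixels_per_cell=(32, 32), cells_per_block=(2, 2))

cnn = densenet201(weights=None).features.eval()              # untrained backbone for the sketch
with torch.no_grad():
    t = torch.from_numpy(img).expand(3, -1, -1).unsqueeze(0)  # (1, 3, 224, 224)
    deep = cnn(t).mean(dim=(2, 3)).squeeze(0)                 # global-average-pooled: (1920,)

fused = torch.cat([torch.from_numpy(hog_feat).float(), deep])
seq = fused[: (len(fused) // 64) * 64].reshape(1, -1, 64)     # chunk fused vector into a sequence

class SeqClassifier(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(64, 128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):
        _, (h, _) = self.lstm(x)                              # sequential dependencies over chunks
        return self.head(h[-1])

print(SeqClassifier()(seq).shape)                             # torch.Size([1, 4])
```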
24 pages, 1710 KB  
Article
Distributed Interactive Simulation Dead Reckoning Based on PLO–Transformer–LSTM
by Ke Yang, Songyue Han, Jin Zhang, Yan Dou and Gang Wang
Electronics 2026, 15(3), 596; https://doi.org/10.3390/electronics15030596 - 29 Jan 2026
Abstract
Distributed Interactive Simulation (DIS) systems are highly sensitive to temporal delays. Conventional Dead Reckoning (DR) algorithms suffer from limited prediction accuracy and are often inadequate in mitigating simulation latency. To address these issues, a heuristic hybrid prediction model based on Polar Lights Optimization (PLO) is proposed. First, the Transformer architecture is modified by removing the decoder attention layer, and its temporal constraints are optimized to adapt to the one-way dependency of DR time series prediction. Then, a hybrid model integrating the modified Transformer and LSTM is designed, where Transformer captures global motion dependencies, and LSTM models local temporal details. Finally, the PLO algorithm is introduced to optimize the hyperparameters, which enhance global search capability and avoid premature convergence in PSO/GA. Furthermore, a closed-loop mechanism integrating error feedback and parameter updating is established to enhance adaptability. Experimental results for complex aerial target maneuvering scenarios show that the proposed model achieves a trajectory prediction R2 value exceeding 0.95, reduces the Mean Squared Error (MSE) by 42% compared with the results for the traditional Extended Kalman Filter (EKF) model, and decreases the state synchronization frequency among simulation nodes by 67%. This model significantly enhances the prediction accuracy of DR and minimizes simulation latency, providing a new technical solution for improving the temporal consistency of DIS. Full article
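
A minimal sketch of the hybrid predictor described above, with illustrative sizes: a causally masked (one-way) Transformer encoder captures global motion dependencies, an LSTM captures local temporal detail, and a linear head fuses both to predict the next state. The PLO hyperparameter search and the error-feedback loop are not shown.

```python
# Encoder-only Transformer + LSTM hybrid for next-state (dead reckoning) prediction.
import torch
import torch.nn as nn

class DRHybrid(nn.Module):
    def __init__(self, state_dim=6, d_model=32, n_heads=4, lstm_hidden=32):
        super().__init__()
        self.proj = nn.Linear(state_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lstm = nn.LSTM(state_dim, lstm_hidden, batch_first=True)
        self.head = nn.Linear(d_model + lstm_hidden, state_dim)

    def forward(self, x):                     # x: (batch, T, state_dim)
        T = x.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(T)  # one-way (causal) attention
        g = self.encoder(self.proj(x), mask=mask)[:, -1]          # global context at last step
        l, _ = self.lstm(x)                                       # local temporal detail
        return self.head(torch.cat([g, l[:, -1]], dim=-1))        # next-state prediction

traj = torch.randn(8, 50, 6)                  # batch of position/velocity histories
print(DRHybrid()(traj).shape)                 # torch.Size([8, 6])
```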

25 pages, 21414 KB  
Article
A Hybrid Variational Mode Decomposition, Transformer-For Time Series, and Long Short-Term Memory Framework for Long-Term Battery Capacity Degradation Prediction of Electric Vehicles Using Real-World Charging Data
by Chao Chen, Guangzhou Lei, Hao Li, Zhuo Chen and Jing Zhou
Energies 2026, 19(3), 694; https://doi.org/10.3390/en19030694 - 28 Jan 2026
Abstract
Considering the nonlinear trends, multi-scale variations, and capacity regeneration phenomena exhibited by battery capacity degradation under real-world conditions, accurately predicting its trajectory remains a critical challenge for ensuring the reliability and safety of electric vehicles. To address this, this study proposes a hybrid prediction framework based on Variational Mode Decomposition and a Transformer–Long Short-Term Memory architecture. Specifically, the proposed Variational Mode Decomposition–Transformer for Time Series–Long Short-Term Memory (VMD–TTS–LSTM) framework first decomposes the capacity sequence using Variational Mode Decomposition. The resulting modal components are then aggregated into high-frequency and low-frequency parts based on their frequency centroids, followed by targeted feature analysis for each part. Subsequently, a simplified Transformer encoder (Transformer for Time Series, TTS) is employed to model high-frequency fluctuations, while a Long Short-Term Memory (LSTM) network captures the long-term degradation trends. Evaluated on charging data from 20 commercial electric vehicles under a long-horizon setting of 20 input steps predicting 100 steps ahead, the proposed method achieves a mean absolute error of 0.9247 and a root mean square error of 1.0151, demonstrating improved accuracy and robustness. The results confirm that the proposed frequency-partitioned, heterogeneous modeling strategy provides a practical and effective solution for battery health prediction and energy management in real-world electric vehicle operation. Full article
(This article belongs to the Topic Electric Vehicles Energy Management, 2nd Volume)
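
A sketch of the frequency-partitioned routing, assuming the third-party vmdpy package (`VMD(f, alpha, tau, K, DC, init, tol)`): VMD decomposes the capacity series, the modes are aggregated into low- and high-frequency parts by their final center frequencies, and each part is handed to its own branch (an LSTM for the trend, a Transformer encoder for the fluctuations). K, the window length, and model sizes are illustrative, not the paper's values.

```python
# VMD decomposition + per-frequency-band model routing (sketch with synthetic data).
import numpy as np
import torch
import torch.nn as nn
from vmdpy import VMD

capacity = np.cumsum(np.random.randn(400)) * -0.01 + 100    # synthetic capacity fade curve
u, _, omega = VMD(capacity, alpha=2000, tau=0, K=5, DC=0, init=1, tol=1e-7)

centers = omega[-1]                                 # final center frequency of each mode
low = u[centers <= np.median(centers)].sum(axis=0)  # trend part -> LSTM
high = u[centers > np.median(centers)].sum(axis=0)  # fluctuation part -> Transformer

def windows(series, n_in=20):
    xs = np.stack([series[i:i + n_in] for i in range(len(series) - n_in)])
    return torch.tensor(xs, dtype=torch.float32).unsqueeze(-1)  # (N, n_in, 1)

lstm = nn.LSTM(1, 32, batch_first=True)             # models the slow degradation trend
enc = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True), num_layers=1)
proj = nn.Linear(1, 32)

trend_h, _ = lstm(windows(low))                     # (N, 20, 32)
fluct_h = enc(proj(windows(high)))                  # (N, 20, 32)
print(trend_h.shape, fluct_h.shape)                 # per-branch features for the output heads
```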

40 pages, 2475 KB  
Review
Research Progress of Deep Learning in Sea Ice Prediction
by Junlin Ran, Weimin Zhang and Yi Yu
Remote Sens. 2026, 18(3), 419; https://doi.org/10.3390/rs18030419 - 28 Jan 2026
Abstract
Polar sea ice is undergoing rapid change, with recent record-low extents in both hemispheres, raising the demand for skillful predictions from days to seasons for navigation, ecosystem management, and climate risk assessment. Accurate sea ice prediction is essential for understanding coupled climate processes, supporting safe polar operations, and informing adaptation strategies. Physics-based numerical models remain the backbone of operational forecasting, but their skill is limited by uncertainties in coupled ocean–ice–atmosphere processes, parameterizations, and sparse observations, especially in the marginal ice zone and during melt seasons. Statistical and empirical models can provide useful baselines for low-dimensional indices or short lead times, yet they often struggle to represent high-dimensional, nonlinear interactions and regime shifts. This review synthesizes recent progress of DL for key sea ice prediction targets, including sea ice concentration/extent, thickness, and motion, and organizes methods into (i) sequential architectures (e.g., LSTM/GRU and temporal Transformers) for temporal dependencies, (ii) image-to-image and vision models (e.g., CNN/U-Net, vision Transformers, and diffusion or GAN-based generators) for spatial structures and downscaling, and (iii) spatiotemporal fusion frameworks that jointly model space–time dynamics. We further summarize hybrid strategies that integrate DL with numerical models through post-processing, emulation, and data assimilation, as well as physics-informed learning that embeds conservation laws or dynamical constraints. Despite rapid advances, challenges remain in generalization under non-stationary climate conditions, dataset shift, and physical consistency (e.g., mass/energy conservation), interpretability, and fair evaluation across regions and lead times. We conclude with practical recommendations for future research, including standardized benchmarks, uncertainty-aware probabilistic forecasting, physics-guided training and neural operators for long-range dynamics, and foundation models that leverage self-supervised pretraining on large-scale Earth observation archives. Full article

21 pages, 6374 KB  
Article
Identification of Microseismic Signals in Coal Mine Rockbursts Based on Hybrid Feature Selection and a Transformer
by Jizhi Zhang, Hongwei Wang and Tianwei Shi
Appl. Sci. 2026, 16(3), 1241; https://doi.org/10.3390/app16031241 - 26 Jan 2026
Abstract
Deep learning algorithms are pivotal in the identification and classification of microseismic signals in mines subjected to impact pressure. However, conventional machine learning techniques often struggle to balance interpretability, computational efficiency, and accuracy. To address these challenges, this paper presents a hybrid feature selection and Transformer-based model for microseismic signal classification. The proposed model employs a hybrid feature selection method for data preprocessing, followed by an enhanced Transformer for signal classification. The study first outlines the underlying principles of the method, then extracts key seismic features—such as zero-crossing rate, maximum amplitude, and dominant frequency—from various microseismic signal types. These features undergo importance and correlation analyses to facilitate dimensionality reduction. Finally, a Transformer-based classification framework is developed and compared against several traditional deep learning models. The results reveal significant differences in the waveforms and spectra of different microseismic signal types. The selected feature parameters exhibit high representativeness and stability. The proposed model achieves an accuracy of 90.86%, outperforming traditional deep learning approaches such as CNN (85.2%) and LSTM (83.7%) by a considerable margin. This approach provides a reliable and efficient solution for the rapid identification of microseismic events in rockburst-prone mines. Full article
(This article belongs to the Special Issue Advanced Technology and Data Analysis in Seismology)
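
A small sketch of the waveform descriptors the abstract names (zero-crossing rate, maximum amplitude, dominant frequency), computed with NumPy under an assumed 1 kHz sampling rate; the importance/correlation-based selection and the Transformer classifier would consume vectors like these.

```python
# Extract the named microseismic waveform features from a 1-D signal.
import numpy as np

def seismic_features(sig: np.ndarray, fs: float = 1000.0) -> dict:
    zcr = np.mean(np.abs(np.diff(np.sign(sig))) > 0)       # zero-crossing rate
    max_amp = np.max(np.abs(sig))                          # maximum amplitude
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    dom = freqs[np.argmax(spec[1:]) + 1]                   # dominant frequency (skip DC)
    return {"zcr": zcr, "max_amp": max_amp, "dom_freq_hz": dom}

wave = np.sin(2 * np.pi * 40 * np.arange(2000) / 1000.0) + 0.1 * np.random.randn(2000)
print(seismic_features(wave))                              # dom_freq_hz should be near 40
```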

30 pages, 430 KB  
Article
An Hour-Specific Hybrid DNN–SVR Framework for National-Scale Short-Term Load Forecasting
by Ervin Čeperić and Kristijan Lenac
Sensors 2026, 26(3), 797; https://doi.org/10.3390/s26030797 - 25 Jan 2026
Abstract
Short-term load forecasting (STLF) underpins the efficient and secure operation of power systems. This study develops and evaluates a hybrid architecture that couples deep neural networks (DNNs) with support vector regression (SVR) for national-scale day-ahead STLF using Croatian load data from 2006 to 2022. The approach employs an hour-specific framework of 24 hybrid models: each DNN learns a compact nonlinear representation for a given hour, while an SVR trained on the penultimate layer activations performs the final regression. Gradient-boosting-based feature selection yields compact, informative inputs shared across all model variants. To overcome limitations of historical local measurements, the framework integrates global numerical weather prediction data from the TIGGE archive with load and local meteorological observations in an operationally realistic setup. In the held-out test year 2022, the proposed hybrid consistently reduced forecasting error relative to standalone DNN-, LSTM- and Transformer-based baselines, while preserving a reproducible pipeline. Beyond using SVR as an alternative output layer, the contributions are as follows: addressing a 17-year STLF task, proposing an hour-specific hybrid DNN–SVR framework, providing a systematic comparison with deep learning baselines under a unified protocol, and integrating global weather forecasts into a practical day-ahead STLF solution for a real power system. Full article
(This article belongs to the Section Cross Data)
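
A minimal sketch of the DNN–SVR coupling for a single hour, with assumed layer sizes and synthetic data: a small MLP is trained end to end, then an sklearn SVR is fit on its penultimate-layer activations and replaces the linear output layer. The full framework would repeat this for each of the 24 hours with gradient-boosting-selected inputs.

```python
# Train a small MLP, then fit an SVR on its penultimate-layer activations.
import torch
import torch.nn as nn
from sklearn.svm import SVR

X = torch.randn(500, 12)                     # selected load/weather features for one hour
y = X[:, 0] * 2 + torch.randn(500) * 0.1     # synthetic hourly load target

body = nn.Sequential(nn.Linear(12, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
head = nn.Linear(32, 1)
opt = torch.optim.Adam(list(body.parameters()) + list(head.parameters()), lr=1e-3)
for _ in range(200):                         # train the DNN end to end first
    opt.zero_grad()
    loss = nn.functional.mse_loss(head(body(X)).squeeze(-1), y)
    loss.backward()
    opt.step()

Z = body(X).detach().numpy()                 # penultimate-layer activations
svr = SVR(kernel="rbf").fit(Z, y.numpy())    # SVR replaces the linear output layer
print("SVR R^2 on training data:", round(svr.score(Z, y.numpy()), 3))
```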

30 pages, 3115 KB  
Article
HST–MB–CREH: A Hybrid Spatio-Temporal Transformer with Multi-Branch CNN/RNN for Rare-Event-Aware PV Power Forecasting
by Guldana Taganova, Jamalbek Tussupov, Assel Abdildayeva, Mira Kaldarova, Alfiya Kazi, Ronald Cowie Simpson, Alma Zakirova and Bakhyt Nurbekov
Algorithms 2026, 19(2), 94; https://doi.org/10.3390/a19020094 - 23 Jan 2026
Abstract
We propose the Hybrid Spatio-Temporal Transformer with Multi-Branch CNN/RNN and Extreme-Event Head (HST–MB–CREH), a hybrid spatio-temporal deep learning architecture for joint short-term photovoltaic (PV) power forecasting and the detection of rare extreme events, to support the reliable operation of renewable-rich power systems. The model combines a spatio-temporal transformer encoder with three convolutional neural network (CNN)/recurrent neural network (RNN) branches (CNN → long short-term memory (LSTM), LSTM → gated recurrent unit (GRU), CNN → GRU) and a dense pathway for tabular meteorological and calendar features. A multitask output head simultaneously performs the regression of PV power and binary classification of extremes defined above the 95th percentile. We evaluate HST–MB–CREH on the publicly available Renewable Power Generation and Weather Conditions dataset with hourly resolutions from 2017 to 2022, using a 5-fold TimeSeriesSplit protocol to avoid temporal leakage and to cover multiple seasons. Compared with tree ensembles (RandomForest, XGBoost), recurrent baselines (Stacked GRU, LSTM), and advanced hybrid/transformer models (Hybrid Multi-Branch CNN–LSTM/GRU with Dense Path and Extreme-Event Head (HMB–CLED) and Spatio-Temporal Multitask Transformer with Extreme-Event Head (STM–EEH)), the proposed architecture achieves the best overall trade-off between accuracy and rare-event sensitivity, with normalized performance of RMSE_z = 0.2159 ± 0.0167, MAE_z = 0.1100 ± 0.0085, mean absolute percentage error (MAPE) = 9.17 ± 0.45%, R2 = 0.9534 ± 0.0072, and AUC_ext = 0.9851 ± 0.0051 across folds. Knowledge extraction is supported via attention-based analysis and permutation feature importance, which highlight the dominant role of global horizontal irradiance, diurnal harmonics, and solar geometry features. The results indicate that hybrid spatio-temporal multitask architectures can substantially improve both the forecast accuracy and robustness to extremes, making HST–MB–CREH a promising building block for intelligent decision-support tools in smart grids with a high share of PV generation. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
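
A stripped-down sketch of the multitask head, with a single GRU standing in for the full spatio-temporal Transformer plus multi-branch encoder: a shared representation feeds both a PV-power regression head and a binary extreme-event head (events above the 95th percentile), trained with an assumed weighted joint loss.

```python
# Multitask output head: joint PV-power regression and extreme-event detection.
import torch
import torch.nn as nn

class MultiTaskPV(nn.Module):
    def __init__(self, n_feats=16, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_feats, hidden, batch_first=True)  # stand-in shared encoder
        self.reg_head = nn.Linear(hidden, 1)       # PV power regression
        self.ext_head = nn.Linear(hidden, 1)       # extreme-event logit

    def forward(self, x):
        _, h = self.encoder(x)
        return self.reg_head(h[-1]).squeeze(-1), self.ext_head(h[-1]).squeeze(-1)

x = torch.randn(32, 24, 16)                        # one day of hourly features per sample
power = torch.rand(32)
extreme = (power > torch.quantile(power, 0.95)).float()  # 95th-percentile event labels

pred, logit = MultiTaskPV()(x)
loss = nn.functional.mse_loss(pred, power) + \
       0.5 * nn.functional.binary_cross_entropy_with_logits(logit, extreme)
print(float(loss))                                 # assumed 0.5 task weight
```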

26 pages, 6505 KB  
Article
Hybrid Wavelet–Transformer–XGBoost Model Optimized by Chaotic Billiards for Global Irradiance Forecasting
by Walid Mchara, Giovanni Cicceri, Lazhar Manai, Monia Raissi and Hezam Albaqami
J. Sens. Actuator Netw. 2026, 15(1), 12; https://doi.org/10.3390/jsan15010012 - 22 Jan 2026
Abstract
Accurate global irradiance (GI) forecasting is essential for improving photovoltaic (PV) energy management, stabilizing renewable power systems, and enabling intelligent control in solar-powered applications, including electric vehicles and smart grids. The highly stochastic and non-stationary nature of solar radiation, influenced by rapid atmospheric fluctuations and seasonal variability, makes short-term GI prediction a challenging task. To overcome these limitations, this work introduces a new hybrid forecasting architecture referred to as WTX–CBO, which integrates a Wavelet Transform (WT)-based decomposition module, an encoder–decoder Transformer model, and an XGBoost regressor, optimized using the Chaotic Billiards Optimizer (CBO) combined with the Adam optimization algorithm. In the proposed architecture, WT decomposes solar irradiance data into multi-scale components, capturing both high-frequency transients and long-term seasonal patterns. The Transformer module effectively models complex temporal and spatio-temporal dependencies, while XGBoost enhances nonlinear learning capability and mitigates overfitting. The CBO ensures efficient hyperparameter tuning and accelerated convergence, outperforming traditional meta-heuristics such as Particle Swarm Optimization (PSO) and Genetic Algorithms (GA). Comprehensive experiments conducted on real-world GI datasets from diverse climatic conditions demonstrate the outperformance of the proposed model. The WTX–CBO ensemble consistently outperformed benchmark models, including LSTM, SVR, standalone Transformer, and XGBoost, achieving improved accuracy, stability, and generalization capability. The proposed WTX–CBO framework is designed as a high-accuracy decision-support forecasting tool that provides short-term global irradiance predictions to enable intelligent energy management, predictive charging, and adaptive control strategies in solar-powered applications, including solar electric vehicles (SEVs), rather than performing end-to-end vehicle or photovoltaic power simulations. Overall, the proposed hybrid framework provides a robust and scalable solution for short-term global irradiance forecasting, supporting reliable PV integration, smart charging control, and sustainable energy management in next-generation solar systems. Full article
(This article belongs to the Special Issue AI and IoT Convergence for Sustainable Smart Manufacturing)
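
A sketch of the decomposition and boosted-regression stages, assuming PyWavelets (`pywt.wavedec`) and the xgboost package: each irradiance window is decomposed into multi-scale wavelet coefficients whose summary statistics feed an XGBoost regressor. The Transformer branch and the CBO/Adam tuning are omitted; the db4 wavelet, decomposition level, and window length are illustrative.

```python
# Wavelet-decomposed window features + XGBoost next-step irradiance regression.
import numpy as np
import pywt
from xgboost import XGBRegressor

def wavelet_features(window: np.ndarray) -> np.ndarray:
    coeffs = pywt.wavedec(window, "db4", level=3)           # [cA3, cD3, cD2, cD1]
    return np.concatenate([(c.mean(), c.std()) for c in coeffs])

rng = np.random.default_rng(0)
t = np.arange(5000)
gi = np.clip(np.sin(2 * np.pi * t / 288), 0, None) + 0.05 * rng.standard_normal(5000)

n_in = 48                                                   # 48-step input window
X = np.stack([wavelet_features(gi[i:i + n_in]) for i in range(len(gi) - n_in)])
y = gi[n_in:]                                               # next-step irradiance target

model = XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)
print("train R^2:", round(model.score(X, y), 3))
```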

33 pages, 550 KB  
Article
Intelligent Information Processing for Corporate Performance Prediction: A Hybrid Natural Language Processing (NLP) and Deep Learning Approach
by Qidi Yu, Chen Xing, Yanjing He, Sunghee Ahn and Hyung Jong Na
Electronics 2026, 15(2), 443; https://doi.org/10.3390/electronics15020443 - 20 Jan 2026
Abstract
This study proposes a hybrid machine learning framework that integrates structured financial indicators and unstructured textual strategy disclosures to improve firm-level management performance prediction. Using corporate business reports from South Korean listed firms, strategic text was extracted and categorized under the Balanced Scorecard (BSC) framework into financial, customer, internal process, and learning and growth dimensions. Various machine learning and deep learning models—including k-nearest neighbors (KNNs), support vector machine (SVM), light gradient boosting machine (LightGBM), convolutional neural network (CNN), long short-term memory (LSTM), autoencoder, and transformer—were evaluated, with results showing that the inclusion of strategic textual data significantly enhanced prediction accuracy, precision, recall, area under the curve (AUC), and F1-score. Among individual models, the transformer architecture demonstrated superior performance in extracting context-rich semantic features. A soft-voting ensemble model combining autoencoder, LSTM, and transformer achieved the best overall performance, leading in accuracy and AUC, while the best single deep learning model (transformer) obtained a marginally higher F1 score, confirming the value of hybrid learning. Furthermore, analysis revealed that customer-oriented strategy disclosures were the most predictive among BSC dimensions. These findings highlight the value of integrating financial and narrative data using advanced NLP and artificial intelligence (AI) techniques to develop interpretable and robust corporate performance forecasting models. In addition, we operationalize information security narratives using a reproducible cybersecurity lexicon and derive security disclosure intensity and weight share features that are jointly evaluated with BSC-based strategic vectors. Full article
(This article belongs to the Special Issue Advances in Intelligent Information Processing)
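
A minimal illustration of the soft-voting mechanics using sklearn's VotingClassifier: class probabilities from the member models are averaged and the argmax is taken. Simple classifiers stand in for the paper's autoencoder, LSTM, and transformer members, which would be wrapped to expose predict_proba.

```python
# Soft-voting ensemble: average member class probabilities, then argmax.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("knn", KNeighborsClassifier()),
                ("svm", SVC(probability=True))],   # members must expose probabilities
    voting="soft",                                 # average predicted probabilities
)
vote.fit(X, y)
print("soft-vote accuracy:", round(vote.score(X, y), 3))
```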

34 pages, 7175 KB  
Article
Hybrid Unsupervised–Supervised Learning Framework for Rainfall Prediction Using Satellite Signal Strength Attenuation
by Popphon Laon, Tanawit Sahavisit, Supavee Pourbunthidkul, Sarut Puangragsa, Pattharin Wichittrakarn, Pattarapong Phasukkit and Nongluck Houngkamhang
Sensors 2026, 26(2), 648; https://doi.org/10.3390/s26020648 - 18 Jan 2026
Abstract
Satellite communication systems experience significant signal degradation during rain events, a phenomenon that can be leveraged for meteorological applications. This study introduces a novel hybrid machine learning framework combining unsupervised clustering with cluster-specific supervised deep learning models to transform satellite signal attenuation into a predictive tool for rainfall prediction. Unlike conventional single-model approaches treating all atmospheric conditions uniformly, our methodology employs K-Means Clustering with the Elbow Method to identify four distinct atmospheric regimes based on Signal-to-Noise Ratio (SNR) patterns from a 12-m Ku-band satellite ground station at King Mongkut’s Institute of Technology Ladkrabang (KMITL), Bangkok, Thailand, combined with absolute pressure and hourly rainfall measurements. The dataset comprises 98,483 observations collected with 30-s temporal resolutions, providing comprehensive coverage of diverse tropical atmospheric conditions. The experimental platform integrates three subsystems: a receiver chain featuring a Low-Noise Block (LNB) converter and Software-Defined Radio (SDR) platform for real-time data acquisition; a control system with two-axis motorized pointing incorporating dual-encoder feedback; and a preprocessing workflow implementing data cleaning, K-Means Clustering (k = 4), Synthetic Minority Over-Sampling Technique (SMOTE) for balanced representation, and standardization. Specialized Long Short-Term Memory (LSTM) networks trained for each identified cluster enable capture of regime-specific temporal dynamics. Experimental validation demonstrates substantial performance improvements, with cluster-specific LSTM models achieving R2 values exceeding 0.92 across all atmospheric regimes. Comparative analysis confirms LSTM superiority over RNN and GRU. Classification performance evaluation reveals exceptional detection capabilities with Probability of Detection ranging from 0.75 to 0.99 and False Alarm Ratios below 0.23. This work presents a scalable approach to weather radar systems for tropical regions with limited ground-based infrastructure, particularly during rapid meteorological transitions characteristic of tropical climates. Full article
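
A compact sketch of the regime-routing idea, with synthetic stand-ins for the SNR/pressure windows: K-Means (k = 4) assigns each window to an atmospheric regime, and a separate LSTM regressor is kept per regime so that only the matching expert sees a given sample. Window length, summary features, and model sizes are assumptions.

```python
# K-Means regime assignment + one specialized LSTM rainfall regressor per regime.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
windows = rng.standard_normal((1000, 30, 2)).astype(np.float32)  # (N, T, [SNR, pressure])

summary = windows.mean(axis=1)                       # per-window summary for clustering
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(summary)

class RainLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(2, 32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)

experts = {k: RainLSTM() for k in range(4)}          # one specialized model per regime
k = labels[0]                                        # route samples to their regime expert
pred = experts[k](torch.from_numpy(windows[labels == k]))
print(k, pred.shape)
```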

20 pages, 5606 KB  
Article
Heart Sound Classification for Early Detection of Cardiovascular Diseases Using XGBoost and Engineered Acoustic Features
by P. P. Satya Karthikeya, P. Rohith, B. Karthikeya, M. Karthik Reddy, Akhil V M, Andrea Tigrini, Agnese Sbrollini and Laura Burattini
Sensors 2026, 26(2), 630; https://doi.org/10.3390/s26020630 - 17 Jan 2026
Abstract
Heart sound-based detection of cardiovascular diseases is a critical task in clinical diagnostics, where early and accurate identification can significantly improve patient outcomes. In this study, we investigate the effectiveness of combining traditional acoustic features and transformer-based Wav2Vec embeddings with advanced machine learning models for multi-class classification of five heart sound categories. Ten engineered acoustic features, i.e., Log Mel, MFCC, delta, delta-delta, chroma, discrete wavelet transform, zero-crossing rate, energy, spectral centroid, and temporal flatness, were extracted as regular features. Four model configurations were evaluated: a hybrid CNN + LSTM and XGBoost trained with either regular features or Wav2Vec embeddings. Models were assessed using a held-out test set with hyperparameter tuning and cross-validation. Results demonstrate that models trained on regular features consistently outperform Wav2Vec-based models, with XGBoost achieving the highest accuracy of 99%, surpassing the hybrid model at 98%. These findings highlight the importance of domain-specific feature engineering and the effectiveness of ensemble learning with XGBoost for robust and accurate heart sound classification, offering a promising approach for early detection and intervention in cardiovascular diseases. Full article
(This article belongs to the Section Biomedical Sensors)
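
A sketch of the engineered-features path, assuming librosa for feature extraction: a few of the listed descriptors (MFCC, zero-crossing rate, spectral centroid) are computed per recording and fed to an XGBoost classifier. Synthetic tones stand in for phonocardiograms, and the 4 kHz sampling rate is an assumption.

```python
# Engineered acoustic features (subset of those listed) + XGBoost classification.
import numpy as np
import librosa
from xgboost import XGBClassifier

def acoustic_features(y: np.ndarray, sr: int = 4000) -> np.ndarray:
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    cent = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    return np.concatenate([mfcc, [zcr, cent]])

rng = np.random.default_rng(0)
sr = 4000
X, y = [], []
for label in (0, 1):                                   # two stand-in heart sound classes
    for _ in range(30):
        f0 = 40 if label == 0 else 90                  # synthetic frequency shift between classes
        sig = np.sin(2 * np.pi * f0 * np.arange(sr) / sr) + 0.2 * rng.standard_normal(sr)
        X.append(acoustic_features(sig.astype(np.float32), sr))
        y.append(label)

clf = XGBClassifier(n_estimators=100).fit(np.array(X), np.array(y))
print("train accuracy:", clf.score(np.array(X), np.array(y)))
```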

25 pages, 3269 KB  
Article
Dynamic Carbon-Aware Scheduling for Electric Vehicle Fleets Using VMD-BSLO-CTL Forecasting and Multi-Objective MPC
by Hongyu Wang, Zhiyu Zhao, Kai Cui, Zixuan Meng, Bin Li, Wei Zhang and Wenwen Li
Energies 2026, 19(2), 456; https://doi.org/10.3390/en19020456 - 16 Jan 2026
Abstract
Accurate perception of dynamic carbon intensity is a prerequisite for low-carbon demand-side response. However, traditional grid-average carbon factors lack the spatio-temporal granularity required for real-time regulation. To address this, this paper proposes a “Prediction-Optimization” closed-loop framework for electric vehicle (EV) fleets. First, a hybrid forecasting model (VMD-BSLO-CTL) is constructed. By integrating Variational Mode Decomposition (VMD) with a CNN-Transformer-LSTM network optimized by the Blood-Sucking Leech Optimizer (BSLO), the model effectively captures multi-scale features. Validation on the UK National Grid dataset demonstrates its superior robustness against prediction horizon extension compared to state-of-the-art baselines. Second, a multi-objective Model Predictive Control (MPC) strategy is developed to guide EV charging. Applied to a real-world station-level scenario, the strategy navigates the trade-offs between user economy and grid stability. Simulation results show that the proposed framework simultaneously reduces economic costs by 4.17% and carbon emissions by 8.82%, while lowering the peak-valley difference by 6.46% and load variance by 11.34%. Finally, a cloud-edge collaborative deployment scheme indicates the engineering potential of the proposed approach for next-generation low-carbon energy management. Full article
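
A toy sketch of the scheduling side only, assuming cvxpy: given a forecast carbon-intensity profile, one convex solve picks per-slot charging power that trades off carbon, cost, and load smoothness subject to an energy demand and a charger limit. The paper's receding-horizon MPC would repeat such a solve each step with updated VMD-BSLO-CTL forecasts; all numbers here are synthetic.

```python
# One carbon/cost/smoothness-weighted charging solve over a 24-hour horizon.
import cvxpy as cp
import numpy as np

T = 24
carbon = 0.4 + 0.2 * np.sin(2 * np.pi * (np.arange(T) - 14) / T)  # kgCO2/kWh forecast
price = 0.1 + 0.05 * np.cos(2 * np.pi * np.arange(T) / T)          # tariff forecast

p = cp.Variable(T, nonneg=True)                     # charging power per hour (kW)
objective = cp.Minimize(carbon @ p                  # carbon term
                        + 0.5 * (price @ p)         # cost term (assumed weight)
                        + 0.01 * cp.sum_squares(cp.diff(p)))  # smoothness term
constraints = [cp.sum(p) == 40,                     # total energy demand (kWh)
               p <= 11]                             # charger power limit (kW)
cp.Problem(objective, constraints).solve()
print(np.round(p.value, 2))                         # charging shifts toward low-carbon hours
```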

23 pages, 5058 KB  
Article
Research on State of Health Assessment of Lithium-Ion Batteries Using Actual Measurement Data Based on Hybrid LSTM–Transformer Model
by Hanyu Zhang and Jifei Wang
Symmetry 2026, 18(1), 169; https://doi.org/10.3390/sym18010169 - 16 Jan 2026
Abstract
An accurate assessment of the state of health (SOH) of lithium-ion batteries (LIBs) is crucial for ensuring the safety and reliability of energy storage systems and electric vehicles. However, existing methods face challenges: physics-based models are computationally complex, traditional data-driven methods rely heavily on manual feature engineering, and single models lack the ability to capture both local and global degradation patterns. To address these issues, this paper proposes a novel hybrid LSTM–Transformer model for LIB SOH estimation using actual measurement data. The model integrates Long Short-Term Memory (LSTM) networks to capture local temporal dependencies with the Transformer architecture to model global degradation trends through self-attention mechanisms. Experimental validation was conducted using eight 18650 Nickel Cobalt Manganese (NCM) LIBs subjected to 750 charge–discharge cycles under room temperature conditions. Sixteen statistical features were extracted from voltage and current data during constant current–constant voltage (CC-CV) phases, with feature selection based on the Pearson correlation coefficient and maximum information coefficient analysis. The proposed LSTM–Transformer model demonstrated superior performance compared to the standalone LSTM and Transformer models, achieving a mean absolute error (MAE) as low as 0.001775, root mean square error (RMSE) of 0.002147, and mean absolute percentage error (MAPE) of 0.196% for individual batteries. Core features including cumulative charge (CC Q), charging time, and voltage slope during the constant current phase showed a strong correlation with the SOH (absolute PCC > 0.8). The hybrid model exhibited excellent generalization across different battery cells with consistent error distributions and nearly overlapping prediction curves with actual SOH trajectories. The symmetrical LSTM–Transformer hybrid architecture provides an accurate, robust, and generalizable solution for LIB SOH assessment, effectively overcoming the limitations of traditional methods while offering potential for real-time battery management system applications. This approach enables health feature learning without manual feature engineering, representing an advancement in data-driven battery health monitoring. Full article
(This article belongs to the Section Engineering and Materials)
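
A minimal sketch of the symmetrical two-branch design named in the title, with assumed sizes: an LSTM branch models local temporal dependencies over per-cycle features, a Transformer-encoder branch models the global degradation trend via self-attention, and a linear head regresses SOH from the fused representations.

```python
# Two-branch LSTM + Transformer-encoder hybrid for SOH regression.
import torch
import torch.nn as nn

class LSTMTransformerSOH(nn.Module):
    def __init__(self, n_feats=16, d_model=32, n_heads=4):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, d_model, batch_first=True)
        self.proj = nn.Linear(n_feats, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(2 * d_model, 1)

    def forward(self, x):                        # x: (batch, cycles, 16 features)
        local, _ = self.lstm(x)                  # local degradation dynamics
        global_trend = self.encoder(self.proj(x))  # global trend via self-attention
        fused = torch.cat([local[:, -1], global_trend.mean(dim=1)], dim=-1)
        return self.head(fused).squeeze(-1)      # SOH estimate per battery

cycles = torch.randn(4, 100, 16)                 # 16 statistical CC-CV features per cycle
print(LSTMTransformerSOH()(cycles).shape)        # torch.Size([4])
```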
