Search Results (193)

Search Parameters:
Keywords = LSTM cells

33 pages, 1704 KB  
Article
AGF-HAM: Adaptive Gated Fusion Hierarchical Attention Model for Explainable Sentiment Analysis
by Mahander Kumar, Lal Khan, Mohammad Zubair Khan and Amel Ali Alhussan
Mathematics 2025, 13(24), 3892; https://doi.org/10.3390/math13243892 - 5 Dec 2025
Viewed by 336
Abstract
The rapid growth of user-generated content in the digital space has increased the need for sentiment and emotion analysis systems that are both accurate and interpretable. This paper presents a new hybrid model, HAM (Hybrid Attention-based Model), which combines Transformer-based contextual embeddings with deep sequential modeling and multi-layer explainability. The framework integrates BERT/RoBERTa encoders, a Bidirectional LSTM, and Graph Attention to capture semantic and aspect-level sentiment correlations. An enhanced Explainability Module, comprising Attention Heatmaps, Aspect-Level Interpretations, and SHAP/Integrated Gradients analysis, further improves model transparency and interpretive reliability. Four benchmark datasets, namely GoEmotions-1, GoEmotions-2, GoEmotions-3, and Amazon Cell Phones and Accessories Reviews, were used for a cross-domain assessment. The 28 emotion labels of GoEmotions were merged into five sentiment-oriented classes to reconcile the differing emotional granularities with the schema of the Amazon dataset. The proposed HAM model achieved a highest accuracy of 96.4% and an F1-score of 94.9%, significantly higher than state-of-the-art baselines such as BERT (89.8%), RoBERTa (91.7%), and RoBERTa+BiLSTM (92.5%). These findings indicate that HAM captures fine-grained emotional detail while remaining interpretable, a step towards open, explainable, and domain-tailored sentiment intelligence systems. Future work will extend the architecture to multimodal fusion, cross-lingual adaptation, and federated learning to improve scalability, generalization, and the ethical application of AI. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
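
As a rough illustration of the sequential-modeling-with-attention component described in the abstract above, the PyTorch sketch below runs a bidirectional LSTM with attention pooling on top of contextual token embeddings (such as BERT/RoBERTa hidden states) and outputs logits for the five merged sentiment classes. It is a minimal toy, not the authors' AGF-HAM/HAM implementation; all layer sizes and names are assumptions:

import torch
import torch.nn as nn

class BiLSTMAttentionClassifier(nn.Module):
    """Toy sketch: BiLSTM over precomputed contextual embeddings + attention pooling."""
    def __init__(self, embed_dim=768, hidden_dim=256, num_classes=5):
        super().__init__()
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)            # scalar score per token
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_embeddings, attention_mask):
        # token_embeddings: (batch, seq_len, embed_dim), e.g. encoder last hidden states
        h, _ = self.bilstm(token_embeddings)                 # (batch, seq_len, 2*hidden)
        scores = self.attn(h).squeeze(-1)                    # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)              # attention over tokens
        context = (weights.unsqueeze(-1) * h).sum(dim=1)     # (batch, 2*hidden)
        return self.classifier(context), weights             # logits + attention "heatmap"

# Example with random tensors standing in for encoder outputs
x = torch.randn(2, 16, 768)
mask = torch.ones(2, 16, dtype=torch.long)
logits, attn = BiLSTMAttentionClassifier()(x, mask)
print(logits.shape, attn.shape)   # torch.Size([2, 5]) torch.Size([2, 16])

The returned attention weights are the kind of per-token signal that an attention-heatmap explanation can visualize.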

18 pages, 2927 KB  
Article
Machine Learning-Based Discovery of Antimicrobial Peptides and Their Antibacterial Activity Against Staphylococcus aureus
by Yuetong Fu, Zeyu Yan, Jingtao Yuan, Yishuai Wang, Wenqiang Zhao, Ziguang Wang, Jingyu Pan, Jing Zhang, Yang Sun and Ling Jiang
Fermentation 2025, 11(12), 669; https://doi.org/10.3390/fermentation11120669 - 28 Nov 2025
Viewed by 567
Abstract
The escalating crisis of antibiotic resistance, particularly concerning foodborne pathogens such as Staphylococcus aureus and its biofilm contamination, has emerged as a major global challenge to food safety and public health. Biofilm formation significantly enhances the pathogen’s resistance to environmental stresses and disinfectants, underscoring the urgent need for novel antimicrobial agents. In this study, we isolated Bacillus strain B673 from the saline–alkali environment of Xinjiang, conducted whole-genome sequencing, and applied antiSMASH analysis to identify ribosomally synthesized and post-translationally modified peptide (RiPP) gene clusters. By integrating an LSTM-Attention-BERT deep learning framework, we screened and predicted nine novel antimicrobial peptide sequences. Using a SUMO-tag fusion tandem strategy, we achieved efficient soluble expression in an E. coli system, and the purified products exhibited remarkable inhibitory activity against Staphylococcus aureus (MIC = 3.13 μg/mL), with inhibition zones larger than those of the positive control. Molecular docking and dynamic simulations demonstrated that the peptides can stably bind to MurE, a key enzyme in cell wall synthesis, with negative binding free energy, suggesting an antibacterial mechanism via MurE inhibition. This study provides promising candidate molecules for the development of anti-drug-resistant agents and establishes an integrated research framework for antimicrobial peptides, spanning gene mining, intelligent screening, efficient expression, and mechanistic elucidation. Full article
(This article belongs to the Special Issue Applied Microorganisms and Industrial/Food Enzymes, 2nd Edition)
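
The screening model itself is not reproduced in the listing; purely to illustrate how candidate peptide sequences can be fed to a recurrent scorer, the hypothetical PyTorch sketch below integer-encodes amino-acid strings and scores them with a small LSTM. It is not the authors' LSTM-Attention-BERT framework; the vocabulary handling, dimensions, and example sequences are arbitrary:

import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"                       # 20 canonical amino acids
AA_TO_IDX = {a: i + 1 for i, a in enumerate(AA)}  # index 0 reserved for padding

def encode(seq, max_len=50):
    idx = [AA_TO_IDX.get(a, 0) for a in seq[:max_len]]
    return torch.tensor(idx + [0] * (max_len - len(idx)))

class PeptideScorer(nn.Module):
    """Toy LSTM mapping a peptide sequence to a single antimicrobial-likelihood logit."""
    def __init__(self, vocab=21, embed=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed, padding_idx=0)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))
        return self.head(h[:, -1])                 # logit from the final time step

batch = torch.stack([encode("GIGKFLHSAKKFGKAFVGEIMNS"),
                     encode("KWKLFKKIEKVGQNIRDGIIKAGPAVAVVGQATQIAK")])
print(torch.sigmoid(PeptideScorer()(batch)))       # two pseudo-probabilities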

16 pages, 2566 KB  
Article
Predictive Fermentation Control of Lactiplantibacillus plantarum Using Deep Learning Convolutional Neural Networks
by Chien-Chang Wu, Jung-Sheng Chen, Yu-Ching Lu, Jain-Shing Wu, Yu-Fen Huang and Chien-Sen Liao
Microorganisms 2025, 13(11), 2601; https://doi.org/10.3390/microorganisms13112601 - 15 Nov 2025
Viewed by 523
Abstract
The fermentation of Lactiplantibacillus plantarum is a complex bioprocess due to the nonlinear and dynamic nature of microbial growth. Traditional monitoring methods often fail to provide early and actionable insights into fermentation outcomes. This study proposes a deep learning-based predictive system using convolutional neural networks (CNNs) to classify fermentation trajectories and anticipate final cell counts based on the first 24 h of process data. A total of 52 fermentation runs were conducted, during which real-time parameters, including pH, temperature, and dissolved oxygen, were continuously recorded and transformed into time-series feature vectors. After rigorous preprocessing and feature selection, the CNN was trained to classify fermentation outcomes into three categories: successful, semi-successful, and failed batches. The model achieved a classification accuracy of 97.87%, outperforming benchmark models such as LSTM and XGBoost. Validation experiments demonstrated the model’s practical utility: early predictions enabled timely manual interventions that effectively prevented batch failures or improved suboptimal fermentations. These findings suggest that deep learning provides a robust and scalable framework for real-time fermentation control, with significant implications for enhancing efficiency and reducing costs in industrial probiotic production. Full article
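
To make the "time-series features in, three outcome classes out" setup concrete, here is a minimal 1D-CNN sketch in PyTorch that takes the three recorded channels (pH, temperature, dissolved oxygen) over the first 24 h and produces class logits for successful / semi-successful / failed batches. The layer sizes, sampling rate, and architecture details are assumptions, not the paper's model:

import torch
import torch.nn as nn

class FermentationCNN(nn.Module):
    """Toy 1D CNN: 3 sensor channels (pH, temperature, DO) -> 3 outcome classes."""
    def __init__(self, in_channels=3, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1),               # collapse the time axis
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                          # x: (batch, 3, time_steps)
        return self.classifier(self.features(x).squeeze(-1))

# e.g. 24 h sampled every 5 min -> 288 time steps per channel
x = torch.randn(4, 3, 288)
print(FermentationCNN()(x).shape)                  # torch.Size([4, 3])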

18 pages, 3122 KB  
Article
Performance Analysis of Offline Data-Driven Methods for Estimating the State of Charge of Metal Hydride Tanks
by Amina Yahia, Djafar Chabane, Salah Laghrouche, Abdoul N’Diaye and Abdesslem Djerdir
Energies 2025, 18(22), 5969; https://doi.org/10.3390/en18225969 - 13 Nov 2025
Viewed by 252
Abstract
This paper proposes an accurate method for estimating the state of charge (SoC) in metal hydride tanks (MHT) to enhance the energy management of hydrogen-powered fuel cell systems. Two data-driven prediction methods, Long Short-Term Memory (LSTM) networks and Support Vector Regression (SVR), are developed and tested on experimental charge/discharge data from a dedicated MHT test bench. Three distinct LSTM architectures are evaluated alongside an SVR model to compare both generalization performance and computational overhead. Results demonstrate that the SVR approach achieves the lowest root mean square error (RMSE), 0.0233% during discharge and 0.0283% during charge, while requiring only 164 ms per inference step for both cycles. The LSTM variants, in contrast, exhibit higher RMSE and significantly higher computational cost, highlighting the superiority of the SVR method for this task. Full article
(This article belongs to the Special Issue Hydrogen Energy Generation, Storage, Transportation and Utilization)
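
For readers less familiar with the SVR side of the comparison, the sketch below shows a standard scikit-learn SVR pipeline regressing SoC from a few tank measurements and reporting RMSE on held-out samples. The feature choice, kernel settings, and synthetic data are assumptions standing in for the test-bench dataset, not the paper's code:

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical stand-in for test-bench data: columns = [tank pressure, temperature, H2 flow]
X = rng.normal(size=(500, 3))
soc = np.clip(0.5 + 0.3 * X[:, 0] - 0.1 * X[:, 1] + 0.05 * rng.normal(size=500), 0.0, 1.0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.001))
model.fit(X[:400], soc[:400])

pred = model.predict(X[400:])
rmse = np.sqrt(mean_squared_error(soc[400:], pred))
print(f"RMSE on held-out samples: {rmse:.4f}")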

17 pages, 5741 KB  
Article
An Explainable Fault Diagnosis Algorithm for Proton Exchange Membrane Fuel Cells Integrating Gramian Angular Fields and Gradient-Weighted Class Activation Mapping
by Xing Shu, Fengyan Yi, Jinming Zhang, Jiaming Zhou, Shuo Wang, Hongtao Gong and Shuaihua Wang
Electronics 2025, 14(22), 4401; https://doi.org/10.3390/electronics14224401 - 12 Nov 2025
Viewed by 321
Abstract
Reliable operation of proton exchange membrane fuel cells (PEMFCs) is crucial for their widespread commercialization, and accurate fault diagnosis is the key to ensuring their long-term stable operation. However, traditional fault diagnosis methods often lack sufficient interpretability, making it difficult for users to trust their diagnostic decisions, and their one-dimensional (1D) feature extraction relies heavily on manually designed features, which are easily affected by noise. This paper proposes a new interpretable fault diagnosis algorithm that integrates the Gramian angular field (GAF) transform, a convolutional neural network (CNN), and gradient-weighted class activation mapping (Grad-CAM) for enhanced fault diagnosis and analysis of proton exchange membrane fuel cells. The algorithm is systematically validated using experimental data to classify three critical health states: normal operation, membrane drying, and hydrogen leakage. The method first converts the 1D sensor signal into a two-dimensional GAF image to capture temporal dependencies and recast the diagnostic problem as an image recognition task. A customized CNN architecture then extracts hierarchical spatiotemporal features for fault classification, while Grad-CAM provides visual explanations by highlighting the most influential regions in the input signal. The results show that the diagnostic accuracy of the proposed model reaches 99.8%, which is 4.18%, 9.43% and 2.46% higher than the baseline models (SVM, LSTM, and CNN), respectively. Furthermore, the explainability analysis using Grad-CAM effectively mitigates the “black box” problem by generating visual heatmaps that pinpoint the key feature regions the model relies on to distinguish different health states. This validates the model’s decision-making rationality and significantly enhances the transparency and trustworthiness of the diagnostic process. Full article
(This article belongs to the Special Issue Advances in Electric Vehicles and Energy Storage Systems)
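
The 1D-to-2D step mentioned above (Gramian angular field) has a simple closed form: rescale the signal to [-1, 1], map each sample to an angle, and build the matrix of pairwise angle sums. The snippet below implements the standard Gramian Angular Summation Field in NumPy; the windowed example signal is just a placeholder, not the paper's sensor data:

import numpy as np

def gramian_angular_field(x, eps=1e-8):
    """Gramian Angular Summation Field of a 1D signal (the 1D -> 2D step described above)."""
    x = np.asarray(x, dtype=float)
    # Rescale to [-1, 1] so arccos is defined
    x_min, x_max = x.min(), x.max()
    x_scaled = 2.0 * (x - x_min) / (x_max - x_min + eps) - 1.0
    x_scaled = np.clip(x_scaled, -1.0, 1.0)
    phi = np.arccos(x_scaled)                         # polar-coordinate angle per sample
    return np.cos(phi[:, None] + phi[None, :])        # GASF image, shape (n, n)

# e.g. a windowed stack-voltage-like signal -> one image per window
signal = np.sin(np.linspace(0, 6 * np.pi, 128)) + 0.05 * np.random.randn(128)
image = gramian_angular_field(signal)
print(image.shape)                                    # (128, 128)

Each window of the sensor signal becomes one such image, which can then be fed to an ordinary 2D CNN classifier.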

35 pages, 811 KB  
Article
A Meta-Learning-Based Framework for Cellular Traffic Forecasting
by Xiangyu Liu, Yuxuan Li, Shibing Zhu, Qi Su and Changqing Li
Appl. Sci. 2025, 15(21), 11616; https://doi.org/10.3390/app152111616 - 30 Oct 2025
Viewed by 522
Abstract
The rapid advancement of 5G/6G networks and the Internet of Things has rendered mobile traffic patterns increasingly complex and dynamic, posing significant challenges to achieving precise cell-level traffic forecasting. Traditional deep learning models, such as LSTM and CNN, rely heavily on substantial datasets. When confronted with new base stations or scenarios with sparse data, they often exhibit insufficient generalisation capabilities due to overfitting and poor adaptability to heterogeneous traffic patterns. To overcome these limitations, this paper proposes a meta-learning framework—GMM-MCM-NF. This framework employs a Gaussian mixture model as a probabilistic meta-learner to capture the latent structure of traffic tasks in the frequency domain. It further introduces a multi-component synthesis mechanism for robust weight initialisation and a negative feedback mechanism for dynamic model correction, thereby significantly enhancing model performance in scenarios with small samples and non-stationary conditions. Extensive experiments on the Telecom Italia Milan dataset demonstrate that GMM-MCM-NF outperforms traditional methods and meta-learning baseline models in prediction accuracy, convergence speed, and generalisation capability. This framework exhibits substantial potential in practical applications such as energy-efficient base station management and resilient resource allocation, contributing to the advancement of mobile networks towards more sustainable and scalable operations. Full article
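
The abstract describes the GMM-MCM-NF components only at a high level. As a loose illustration of the first idea, treating per-cell traffic histories as tasks and modelling their frequency-domain descriptors with a Gaussian mixture so that a new, data-sparse cell can be soft-assigned to known traffic families, one might do something like the scikit-learn sketch below. The feature construction, component count, and synthetic data are assumptions, not the paper's implementation:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Hypothetical stand-in for per-cell traffic histories: 200 cells x 168 hourly samples (one week)
traffic = rng.gamma(shape=2.0, scale=1.0, size=(200, 168)) * (1 + np.sin(np.linspace(0, 14 * np.pi, 168)))

# Frequency-domain task descriptors: magnitude of the first few rFFT coefficients
spectra = np.abs(np.fft.rfft(traffic, axis=1))[:, :16]

# Probabilistic "meta-learner": each mixture component captures one family of traffic patterns
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(spectra)

# A new cell with only sparse data can be soft-assigned to the learned components,
# and the responsibilities used to blend component-specific initialisations.
new_cell = np.abs(np.fft.rfft(traffic[:1] + rng.normal(scale=0.1, size=(1, 168)), axis=1))[:, :16]
print(gmm.predict_proba(new_cell))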

26 pages, 1854 KB  
Review
Machine Learning Techniques for Battery State of Health Prediction: A Comparative Review
by Leila Mbagaya, Kumeshan Reddy and Annelize Botes
World Electr. Veh. J. 2025, 16(11), 594; https://doi.org/10.3390/wevj16110594 - 28 Oct 2025
Cited by 1 | Viewed by 1630
Abstract
Accurate estimation of the state of health (SOH) of lithium-ion batteries is essential for the safe and efficient operation of electric vehicles (EVs). Conventional approaches, including Coulomb counting, electrochemical impedance spectroscopy, and equivalent circuit models, provide useful insights but face practical limitations such as error accumulation, high equipment requirements, and limited applicability across different conditions. These challenges have encouraged the use of machine learning (ML) methods, which can model nonlinear relationships and temporal degradation patterns directly from cycling data. This paper reviews four machine learning algorithms that are widely applied in SOH estimation: support vector regression (SVR), random forest (RF), convolutional neural networks (CNNs), and long short-term memory networks (LSTMs). Their methodologies, advantages, limitations, and recent extensions are discussed with reference to the existing literature. To complement the review, MATLAB-based simulations were carried out using the NASA Prognostics Center of Excellence (PCoE) dataset. Training was performed on three cells (B0006, B0007, B0018), and testing was conducted on an unseen cell (B0005) to evaluate cross-battery generalisation. The results show that the LSTM model achieved the highest accuracy (RMSE = 0.0146, MAE = 0.0118, R2 = 0.980), followed by CNN and RF, both of which provided acceptable accuracy with errors below 2% SOH. SVR performed less effectively (RMSE = 0.0457, MAPE = 4.80%), reflecting its difficulty in capturing sequential dependencies. These outcomes are consistent with findings in the literature, indicating that deep learning models are better suited for modelling long-term battery degradation, while ensemble approaches such as RF remain competitive when supported by carefully engineered features. This review also identifies ongoing and future research directions, including the use of optimisation algorithms for hyperparameter tuning, transfer learning for adaptation across battery chemistries, and explainable AI to improve interpretability. Overall, LSTM and hybrid models that combine complementary methods (e.g., CNN-LSTM) show strong potential for deployment in battery management systems, where reliable SOH prediction is important for safety, cost reduction, and extending battery lifetime. Full article
(This article belongs to the Section Storage Systems)
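
To make the basic SOH-regression setup reviewed above tangible, the sketch below defines SOH as current capacity divided by nominal capacity and fits one of the reviewed model families (a random forest) to simple per-cycle features, testing on later cycles. The synthetic data and feature choices are placeholders, not the MATLAB/NASA PCoE pipeline used in the paper:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)

# Hypothetical per-cycle features for one cell: [mean discharge V, discharge time, mean temperature]
n_cycles = 160
features = np.column_stack([
    3.7 - 0.001 * np.arange(n_cycles) + rng.normal(0, 0.005, n_cycles),
    3600 - 4.0 * np.arange(n_cycles) + rng.normal(0, 20, n_cycles),
    24 + rng.normal(0, 0.5, n_cycles),
])
capacity = 2.0 - 0.004 * np.arange(n_cycles) + rng.normal(0, 0.005, n_cycles)
soh = capacity / 2.0                       # SOH = current capacity / nominal capacity

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(features[:120], soh[:120])          # train on early cycles, test on later cycles
pred = rf.predict(features[120:])
print("MAE:", mean_absolute_error(soh[120:], pred))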

47 pages, 36851 KB  
Article
Comparative Analysis of ML and DL Models for Data-Driven SOH Estimation of LIBs Under Diverse Temperature and Load Conditions
by Seyed Saeed Madani, Marie Hébert, Loïc Boulon, Alexandre Lupien-Bédard and François Allard
Batteries 2025, 11(11), 393; https://doi.org/10.3390/batteries11110393 - 24 Oct 2025
Viewed by 676
Abstract
Accurate estimation of lithium-ion battery (LIB) state of health (SOH) underpins safe operation, predictive maintenance, and lifetime-aware energy management. Despite recent advances in machine learning (ML), systematic benchmarking across heterogeneous real-world cells remains limited, often confounded by data leakage and inconsistent validation. Here, we establish a leakage-averse, cross-battery evaluation framework encompassing 32 commercial LIBs (B5–B56) spanning diverse cycling histories and temperatures (≈4 °C, 24 °C, 43 °C). Models ranging from classical regressors to ensemble trees and deep sequence architectures were assessed under blocked 5-fold GroupKFold splits using RMSE, MAE, R2 with confidence intervals, and inference latency. The results reveal distinct stratification among model families. Sequence-based architectures—CNN–LSTM, GRU, and LSTM—consistently achieved the highest accuracy (mean RMSE ≈ 0.006; per-cell R2 up to 0.996), demonstrating strong generalization across regimes. Gradient-boosted ensembles such as LightGBM and CatBoost delivered competitive mid-tier accuracy (RMSE ≈ 0.012–0.015) yet unrivaled computational efficiency (≈0.001–0.003 ms), confirming their suitability for embedded applications. Transformer-based hybrids underperformed, while approximately one-third of cells exhibited elevated errors linked to noise or regime shifts, underscoring the necessity of rigorous evaluation design. Collectively, these findings establish clear deployment guidelines: CNN–LSTM and GRU are recommended where robustness and accuracy are paramount (cloud and edge analytics), while LightGBM and CatBoost offer optimal latency–efficiency trade-offs for embedded controllers. Beyond model choice, the study highlights data curation and leakage-averse validation as critical enablers for transferable and reliable SOH estimation. This benchmarking framework provides a robust foundation for future integration of ML models into real-world battery management systems. Full article
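
The "blocked 5-fold GroupKFold" evaluation mentioned above is the key leakage-averse ingredient: every cell's cycles stay together in either the training or the test fold. The scikit-learn sketch below shows that mechanic with a synthetic stand-in dataset and an arbitrary regressor; only the grouping logic is meant to mirror the described protocol:

import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)

# Hypothetical dataset: 8 cells x 100 cycles, 4 features per cycle, SOH target
n_cells, n_cycles = 8, 100
X = rng.normal(size=(n_cells * n_cycles, 4))
y = np.repeat(np.linspace(1.0, 0.8, n_cycles)[None, :], n_cells, axis=0).ravel() + 0.01 * X[:, 0]
groups = np.repeat(np.arange(n_cells), n_cycles)   # cell ID for every row

# Blocked 5-fold split: whole cells are held out, so no per-cell leakage into training
cv = GroupKFold(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(cv.split(X, y, groups)):
    model = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
    rmse = np.sqrt(mean_squared_error(y[test_idx], model.predict(X[test_idx])))
    held_out = np.unique(groups[test_idx])
    print(f"fold {fold}: held-out cells {held_out}, RMSE {rmse:.4f}")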

32 pages, 2758 KB  
Article
A Hybrid Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM)–Attention Model Architecture for Precise Medical Image Analysis and Disease Diagnosis
by Md. Tanvir Hayat, Yazan M. Allawi, Wasan Alamro, Salman Md Sultan, Ahmad Abadleh, Hunseok Kang and Aymen I. Zreikat
Diagnostics 2025, 15(21), 2673; https://doi.org/10.3390/diagnostics15212673 - 23 Oct 2025
Viewed by 1417
Abstract
Background: Deep learning (DL)-based medical image classification is becoming increasingly reliable, enabling physicians to make faster and more accurate decisions in diagnosis and treatment. A plethora of algorithms have been developed to classify and analyze various types of medical images. Among them, Convolutional Neural Networks (CNNs) have proven highly effective, particularly in medical image analysis and disease detection. Methods: To further enhance these capabilities, this research introduces MediVision, a hybrid DL-based model that integrates a vision backbone based on CNNs for feature extraction, capturing detailed patterns and structures essential for precise classification. These features are then processed through Long Short-Term Memory (LSTM), which identifies sequential dependencies to better recognize disease progression. An attention mechanism is then incorporated that selectively focuses on salient features detected by the LSTM, improving the model’s ability to highlight critical abnormalities. Additionally, MediVision utilizes a skip connection, merging attention outputs with LSTM outputs, along with a Grad-CAM heatmap to visualize the most important regions of the analyzed medical image and further enhance feature representation and classification accuracy. Results: Tested on ten diverse medical image datasets (Alzheimer’s disease, breast ultrasound, blood cell, chest X-ray, chest CT scans, diabetic retinopathy, kidney diseases, bone fracture multi-region, retinal OCT, and brain tumor), MediVision consistently achieved classification accuracies above 95%, with a peak of 98%. Conclusions: The proposed MediVision model offers a robust and effective framework for medical image classification, improving interpretability, reliability, and automated disease diagnosis. To support research reproducibility, the codes and datasets used in this study have been made publicly available through an open-access repository. Full article
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
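
To show how the pieces named in the abstract (CNN backbone, LSTM, attention, and a skip connection that merges attention and LSTM outputs) can be wired together, here is a hypothetical PyTorch sketch. It is only an illustration of that wiring pattern, not the published MediVision architecture; all layer sizes and the spatial-to-sequence reshaping are assumptions:

import torch
import torch.nn as nn

class ToyMediVisionLike(nn.Module):
    """Toy wiring of CNN features -> LSTM -> attention, with a skip connection that
    concatenates the attention context and the LSTM summary before classification."""
    def __init__(self, num_classes=4, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(                   # tiny CNN stand-in for a vision backbone
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(2 * hidden, num_classes)   # skip connection doubles the width

    def forward(self, images):                           # images: (batch, 3, H, W)
        f = self.backbone(images)                        # (batch, 64, 8, 8)
        seq = f.flatten(2).transpose(1, 2)               # treat the 64 spatial cells as a sequence
        h, _ = self.lstm(seq)                            # (batch, 64, hidden)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=-1)
        context = (w.unsqueeze(-1) * h).sum(dim=1)       # attention-pooled features
        skip = torch.cat([context, h[:, -1]], dim=-1)    # merge attention output with LSTM output
        return self.head(skip)

print(ToyMediVisionLike()(torch.randn(2, 3, 224, 224)).shape)   # torch.Size([2, 4])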

55 pages, 5577 KB  
Article
Innovative Method for Detecting Malware by Analysing API Request Sequences Based on a Hybrid Recurrent Neural Network for Applied Forensic Auditing
by Serhii Vladov, Victoria Vysotska, Vitalii Varlakhov, Mariia Nazarkevych, Serhii Bolvinov and Volodymyr Piadyshev
Appl. Syst. Innov. 2025, 8(5), 156; https://doi.org/10.3390/asi8050156 - 21 Oct 2025
Viewed by 1015
Abstract
This article develops a malware detection method based on a multi-scale recurrent architecture (time-aware multi-scale LSTM) with salience gating and multi-headed attention, integrated with a sequential statistical change detector (CUSUM). The research aim is to create an algorithm capable of effectively detecting malicious activity in behavioural data streams of executable files with minimal delay, while ensuring interpretability of the results for subsequent use in forensic audit and cyber defence systems. To this end, deep learning methods (training LSTM models with explicit handling of time intervals and adaptive attention mechanisms), sequence statistical analysis (CUSUM, Kullback–Leibler divergence, and Wasserstein distances), and regularisation approaches to improve model stability and explainability were used. Experimental evaluation demonstrates the high efficiency of the proposed approach, with the neural network model achieving competitive accuracy, recall, and classification balance, a low level of false positives, and an acceptable detection delay. Analysis of attention and salience profiles confirmed that signals can be interpreted and abnormal events detected early, reducing the experts’ workload and the number of false positives. The study’s contributions are a new hybrid architecture that combines the advantages of recurrent and statistical methods, a formalisation of the theoretical properties of gated cells for long-term memory, and a practical approach to explaining the model’s decisions. The developed method, implemented as a specialised software product, is demonstrated in a forensic audit setting. Full article
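
The CUSUM component referred to above is a classical sequential change detector; a minimal one-sided version over a stream of per-event anomaly scores looks like the NumPy sketch below. The target mean, drift allowance, threshold, and synthetic score stream are hypothetical, not values from the paper:

import numpy as np

def cusum_alarm(scores, target_mean=0.0, drift=0.5, threshold=5.0):
    """One-sided CUSUM: accumulate positive deviations of the score stream above
    (target_mean + drift) and flag the first index where the sum crosses threshold."""
    s = 0.0
    for i, x in enumerate(scores):
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i, s        # alarm index and statistic at alarm time
    return None, s

rng = np.random.default_rng(3)
# Hypothetical per-API-call anomaly scores: benign noise, then a shift after index 150
stream = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(2.5, 1.0, 80)])
print(cusum_alarm(stream))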

20 pages, 3517 KB  
Article
On the Use of Machine Learning Methods for EV Battery Pack Data Forecast Applied to Reconstructed Dynamic Profiles
by Joaquín de la Vega, Jordi-Roger Riba and Juan Antonio Ortega-Redondo
Appl. Sci. 2025, 15(20), 11291; https://doi.org/10.3390/app152011291 - 21 Oct 2025
Viewed by 529
Abstract
Lithium-ion batteries are essential to electric vehicles, so it is crucial to continuously monitor and control their health. However, since today’s battery packs consist of hundreds or thousands of cells, monitoring all of them is challenging. Additionally, the performance of the entire battery pack is often limited by the weakest cell. Therefore, developing effective monitoring techniques that can reliably forecast the remaining time to depletion (RTD) of lithium-ion battery cells is essential for safe and efficient battery management. However, even in robust systems, sensor data can be lost due to electromagnetic interference, microcontroller malfunction, failed contacts, and other issues, and gaps in voltage measurements compromise the accuracy of data-driven forecasts. This work systematically evaluates how different voltage reconstruction methods affect the performance of recurrent neural network (RNN) forecast models trained to predict RTD through quantile regression. The paper uses experimental battery pack data based on the behavior of an electric vehicle under dynamic driving conditions. Artificial gaps of 500 s were introduced at the beginning, middle, and end of each discharge phase, resulting in over 4300 reconstruction cases. Four reconstruction methods were considered: a zero-order hold (ZOH), an autoregressive integrated moving average (ARIMA) model, a gated recurrent unit (GRU) model, and a hybrid unscented Kalman filter (UKF) model. The results reveal that the UKF model, followed by the GRU model, outperforms the alternative reconstruction methods: these models minimize signal degradation and reproduce the original past data signal most closely, achieving the highest coefficient of determination and the lowest error indicators. The reconstructed signals were fed into LSTM and GRU RNNs to estimate RTD, producing confidence intervals and median values for decision-making purposes. Full article
(This article belongs to the Special Issue AI-Based Machinery Health Monitoring)
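
Quantile regression of RTD, as mentioned above, is usually trained with the pinball (quantile) loss, which penalizes under- and over-prediction asymmetrically so that each output head estimates a chosen quantile rather than the mean. A short PyTorch illustration follows; the quantile levels and numbers are arbitrary examples, not the paper's configuration:

import torch

def pinball_loss(pred, target, quantile):
    """Pinball (quantile) loss: asymmetric penalty that makes a regressor estimate
    the given quantile of remaining time to depletion instead of the mean."""
    err = target - pred
    return torch.mean(torch.maximum(quantile * err, (quantile - 1.0) * err))

# e.g. one forecasting head per quantile: 10%, 50% (median), 90%
target = torch.tensor([1200.0, 900.0, 600.0])      # seconds remaining
preds = {0.1: torch.tensor([1000.0, 800.0, 500.0]),
         0.5: torch.tensor([1150.0, 880.0, 590.0]),
         0.9: torch.tensor([1400.0, 1000.0, 700.0])}
for q, p in preds.items():
    print(q, pinball_loss(p, target, q).item())

Training three such heads yields the median forecast plus a lower and upper bound, which is how the confidence intervals mentioned in the abstract can be produced.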

24 pages, 3231 KB  
Article
A Deep Learning-Based Ensemble Method for Parameter Estimation of Solar Cells Using a Three-Diode Model
by Sung-Pei Yang, Fong-Ruei Shih, Chao-Ming Huang, Shin-Ju Chen and Cheng-Hsuan Chiua
Electronics 2025, 14(19), 3790; https://doi.org/10.3390/electronics14193790 - 24 Sep 2025
Viewed by 436
Abstract
Accurate parameter estimation of solar cells is critical for early-stage fault diagnosis in photovoltaic (PV) power systems. A physical model based on three-diode configuration has been recently introduced to improve model accuracy. However, nonlinear and recursive relationships between internal parameters and PV output, along with parameter drift and PV degradation due to long-term operation, pose significant challenges. To address these issues, this study proposes a deep learning-based ensemble framework that integrates outputs from multiple optimization algorithms to improve estimation precision and robustness. The proposed method consists of three stages. First, the collected data were preprocessed using some data processing techniques. Second, a PV power generation system is modeled using the three-diode structure. Third, several optimization algorithms with distinct search behaviors are employed to produce diverse estimations. Finally, a hybrid deep learning model combining convolutional neural networks (CNNs) and long short-term memory (LSTM) networks is used to learn from these results. Experimental validation on a 733 kW PV power generation system demonstrates that the proposed method outperforms individual optimization approaches in terms of prediction accuracy and stability. Full article
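
The three-diode model referred to above is an implicit current–voltage relation: the cell current equals the photocurrent minus three diode terms and a shunt term, each of which depends on the current itself. The sketch below writes that equation as a residual and solves it numerically; the physical parameter values are illustrative placeholders, not fitted values from the 733 kW plant study:

import numpy as np
from scipy.optimize import brentq

def three_diode_current(V, Iph, I0, a, Rs, Rsh, T=298.15):
    """Solve the implicit three-diode equation for the cell current I at voltage V.
    I0 = (I01, I02, I03) saturation currents, a = (a1, a2, a3) ideality factors."""
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = k * T / q                                   # thermal voltage

    def residual(I):
        diodes = sum(I0j * (np.exp((V + I * Rs) / (aj * Vt)) - 1.0)
                     for I0j, aj in zip(I0, a))
        return Iph - diodes - (V + I * Rs) / Rsh - I

    return brentq(residual, -1.0, Iph + 1.0)         # bracket chosen generously

# Illustrative (not fitted) parameters for a single cell
I = three_diode_current(V=0.5, Iph=8.0, I0=(1e-9, 1e-8, 1e-7), a=(1.0, 1.5, 2.0),
                        Rs=0.005, Rsh=100.0)
print(f"I at 0.5 V: {I:.3f} A")

Parameter estimation then amounts to searching over (Iph, I01..I03, a1..a3, Rs, Rsh) so that the modelled I–V curve matches measured data, which is the role of the optimization algorithms and the CNN–LSTM ensemble described in the abstract.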

22 pages, 1206 KB  
Article
Genetic Algorithm-Based Hybrid Deep Learning Framework for Stability Prediction of ABO3 Perovskites in Solar Cell Applications
by Samad Wali, Muhammad Irfan Khan, Miao Zhang and Abdul Shakoor
Energies 2025, 18(19), 5052; https://doi.org/10.3390/en18195052 - 23 Sep 2025
Cited by 1 | Viewed by 963
Abstract
The intrinsic structural stability of ABO3 perovskite materials is a pivotal factor determining their efficiency and durability in photovoltaic applications. However, accurately predicting stability, commonly measured by the energy above hull metric, remains challenging due to the complex interplay of compositional, crystallographic, and electronic features. To address this challenge, we propose a streamlined hybrid machine learning framework that combines the sequence modeling capability of Long Short-Term Memory (LSTM) networks with the robustness of Random Forest regressors. A genetic algorithm-based feature selection strategy is incorporated to identify the most relevant descriptors and reduce noise, thereby enhancing both predictive accuracy and interpretability. Comprehensive evaluations on a curated ABO3 dataset demonstrate strong performance, achieving an R2 of 0.98 on training data and 0.83 on independent test data, with a Mean Absolute Error (MAE) of 8.78 for training and 21.23 for testing, and Root Mean Squared Error (RMSE) values that further confirm predictive reliability. These results validate the effectiveness of the proposed approach in capturing the multifactorial nature of perovskite stability while ensuring robust generalization. This study highlights a practical and reliable pathway for accelerating the discovery and optimization of stable perovskite materials, contributing to the development of more durable next-generation solar technologies. Full article
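
To illustrate the genetic-algorithm feature-selection idea described above, here is a compact toy GA over binary feature masks whose fitness is the cross-validated R² of a random forest on the selected descriptors. It uses synthetic regression data and simplistic GA operators, so it is only a sketch of the concept, not the authors' pipeline or hyperparameters:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=300, n_features=20, n_informative=6, noise=5.0, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return -np.inf
    rf = RandomForestRegressor(n_estimators=50, random_state=0)
    return cross_val_score(rf, X[:, mask.astype(bool)], y, cv=3, scoring="r2").mean()

# Minimal GA: truncation selection, one-point crossover, bit-flip mutation
pop = rng.integers(0, 2, size=(20, X.shape[1]))
for gen in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                 # keep the best half
    children = []
    while len(children) < len(pop) - len(parents):
        p1, p2 = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([p1[:cut], p2[cut:]])        # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05                # mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected feature indices:", np.flatnonzero(best))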

26 pages, 3973 KB  
Article
ViT-DCNN: Vision Transformer with Deformable CNN Model for Lung and Colon Cancer Detection
by Aditya Pal, Hari Mohan Rai, Joon Yoo, Sang-Ryong Lee and Yooheon Park
Cancers 2025, 17(18), 3005; https://doi.org/10.3390/cancers17183005 - 15 Sep 2025
Cited by 1 | Viewed by 987
Abstract
Background/Objectives: Lung and colon cancers remain among the most prevalent and fatal diseases worldwide, and their early detection is a serious challenge. The data used in this study was obtained from the Lung and Colon Cancer Histopathological Images Dataset, which comprises five different classes of image data, namely colon adenocarcinoma, colon normal, lung adenocarcinoma, lung normal, and lung squamous cell carcinoma, split into training (80%), validation (10%), and test (10%) subsets. In this study, we propose the ViT-DCNN (Vision Transformer with Deformable CNN) model, with the aim of improving cancer detection and classification using medical images. Methods: The combination of the ViT’s self-attention capabilities with deformable convolutions allows for improved feature extraction, while also enabling the model to learn both holistic contextual information as well as fine-grained localized spatial details. Results: On the test set, the model performed remarkably well, with an accuracy of 94.24%, an F1 score of 94.23%, recall of 94.24%, and precision of 94.37%, confirming its robustness in detecting cancerous tissues. Furthermore, our proposed ViT-DCNN model outperforms several state-of-the-art models, including ResNet-152, EfficientNet-B7, SwinTransformer, DenseNet-201, ConvNext, TransUNet, CNN-LSTM, MobileNetV3, and NASNet-A, across all major performance metrics. Conclusions: By using deep learning and advanced image analysis, this model enhances the efficiency of cancer detection, thus representing a valuable tool for radiologists and clinicians. This study demonstrates that the proposed ViT-DCNN model can reduce diagnostic inaccuracies and improve detection efficiency. Future work will focus on dataset enrichment and enhancing the model’s interpretability to evaluate its clinical applicability. This paper demonstrates the promise of artificial-intelligence-driven diagnostic models in transforming lung and colon cancer detection and improving patient diagnosis. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers: 2nd Edition)
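
The deformable-convolution ingredient named in the title is available off the shelf in torchvision: a plain convolution predicts per-location sampling offsets, and DeformConv2d samples the input at those shifted positions. The block below is a minimal hypothetical illustration of that mechanism, not the ViT-DCNN architecture itself; channel counts and kernel size are assumptions:

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """Toy deformable-convolution block: a plain conv predicts per-location sampling
    offsets, which DeformConv2d then uses to sample the input at shifted positions."""
    def __init__(self, in_ch=64, out_ch=64, k=3):
        super().__init__()
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        offsets = self.offset_conv(x)          # (batch, 2*k*k, H, W): (dy, dx) per kernel tap
        return self.deform_conv(x, offsets)

x = torch.randn(1, 64, 32, 32)
print(DeformableBlock()(x).shape)              # torch.Size([1, 64, 32, 32])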

26 pages, 8589 KB  
Article
Remaining Useful Life Prediction of PEMFC Based on 2-Layer Bidirectional LSTM Network
by Wenxu Niu, Xiaokang Li, Haobin Tian and Caiping Liang
World Electr. Veh. J. 2025, 16(9), 511; https://doi.org/10.3390/wevj16090511 - 11 Sep 2025
Viewed by 874
Abstract
Proton exchange membrane fuel cells (PEMFCs) are considered promising solutions to address global energy and environmental challenges. This is largely due to their high efficiency in energy transformation, low emission of pollutants, quick responsiveness, and suitable operating conditions. However, their widespread application is constrained by high cost, limited durability, and system complexity. To maintain system reliability and optimize cost-effectiveness, it is essential to predict the remaining operational lifespan of PEMFC systems with precision. This study introduces a prediction framework integrating a dual-layer bidirectional LSTM architecture enhanced by an attention mechanism for accurately predicting the remaining useful life (RUL) of PEMFCs. Raw data are preprocessed, and important features are selected using a smoothing technique and a random forest method to reduce manual intervention. To enhance model adaptability and predictive accuracy, the Optuna optimization framework is employed to automatically fine-tune hyperparameters. The proposed prediction model is benchmarked against several existing approaches using aging datasets from two separate PEMFC stacks. Experimental findings indicate that the proposed two-layer BiLSTM with attention mechanism surpasses other baseline models in performance. Notably, the designed prediction model demonstrates strong performance on both benchmark datasets and real-world data acquired through a custom-built experimental fuel cell platform. This research offers meaningful guidance for prolonging the service life of PEMFCs and enhancing the efficiency of maintenance planning. Full article
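
The Optuna step mentioned above can be sketched as an objective function that builds and briefly trains a two-layer BiLSTM for each trial and returns the validation loss. The example below uses random tensors in place of the PEMFC aging data, omits the attention mechanism for brevity, and uses an assumed search space, so it only illustrates the tuning loop, not the paper's setup:

import optuna
import torch
import torch.nn as nn

# Hypothetical stand-in for preprocessed aging data: windows of stack features -> RUL target
X = torch.randn(256, 30, 4)            # (samples, time steps, features)
y = torch.rand(256, 1) * 1000.0        # remaining useful life, e.g. in hours
X_train, y_train, X_val, y_val = X[:200], y[:200], X[200:], y[200:]

def objective(trial):
    hidden = trial.suggest_int("hidden_size", 16, 128)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)

    lstm = nn.LSTM(4, hidden, num_layers=2, batch_first=True,
                   bidirectional=True, dropout=dropout)
    head = nn.Linear(2 * hidden, 1)
    opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=lr)
    loss_fn = nn.MSELoss()

    for _ in range(20):                                  # short training budget per trial
        opt.zero_grad()
        out, _ = lstm(X_train)
        loss = loss_fn(head(out[:, -1]), y_train)
        loss.backward()
        opt.step()

    with torch.no_grad():
        out, _ = lstm(X_val)
        return loss_fn(head(out[:, -1]), y_val).item()   # validation MSE to minimise

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=10)
print(study.best_params)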
