Search Results (1,071)

Search Parameters:
Keywords = bi-directional long short-term memory (Bi-LSTM)

29 pages, 3139 KB  
Article
Temporal-Spatial Waveform Fault Attention Design for PEMFC Fault Diagnosis via Permutation Feature Importance in Smart Terminal
by Jian Liu, Wenqiang Xie, Xiaolong Xiao, Ziran Guo and Xiaoxing Lu
Processes 2026, 14(1), 18; https://doi.org/10.3390/pr14010018 (registering DOI) - 19 Dec 2025
Abstract
Accurate and rapid fault diagnosis is paramount to stabilizing proton exchange membrane fuel cells (PEMFC). To achieve this, this study proposes a novel fault diagnosis method that integrates a convolutional neural network (CNN), a bi-directional long short-term memory network (BiLSTM), and a waveform fault attention (WFA) mechanism. In the proposed framework, data are classified into five distinct categories utilizing a hierarchical clustering algorithm. Additionally, data augmentation techniques are implemented to bolster model performance. The introduction of amplitude attention and temporal difference attention, in conjunction with the construction of WFA, enables the accurate extraction of temporal-spatial features, significantly improving the distinguishability of fault diagnosis. Furthermore, feature contribution is evaluated using permutation feature importance (PFI) to identify key features, enhancing the interpretability of the model. Experimental findings verify that the proposed method enables high-precision fault identification, with precision values spanning 97–100% and an average stability of 98.3%, demonstrating robust performance even when the volume of original sample data is limited. This performance markedly surpasses that of extant methodologies. The comprehensive approach augments the accuracy, reliability, and interpretability of PEMFC fault diagnosis, and introduces a novel research paradigm for feature extraction, thereby possessing significant theoretical and practical application value. Full article
(This article belongs to the Topic Advances in Power Science and Technology, 2nd Edition)
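
The permutation feature importance (PFI) step described above lends itself to a short illustration. Below is a minimal, generic sketch, assuming a fitted classifier with a scikit-learn-style .score(X, y) accuracy method; it is not the paper's implementation, and all names are placeholders.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean accuracy drop when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = model.score(X, y)              # accuracy on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):               # permute one feature at a time
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature-target link
            drops.append(baseline - model.score(X_perm, y))
        importances[j] = np.mean(drops)       # larger drop = more important feature
    return importances
```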

22 pages, 957 KB  
Article
A Hybrid Deep Learning Model Based on Local and Global Features for Amazon Product Reviews: An Optimal ALBERT-Cascade CNN Approach
by Israa Mustafa Abbas, İsmail Atacak, Sinan Toklu, Necaattin Barışçı and İbrahim Alper Doğru
Appl. Sci. 2026, 16(1), 25; https://doi.org/10.3390/app16010025 - 19 Dec 2025
Abstract
Natural Language Processing (NLP) is a valuable technology and business topic, as it helps turn data into useful information amid the spread of digital information. Nevertheless, there are some difficulties in its use, including language complexity and data quality. To address these challenges, in this study, the researchers first performed a series of ablation experiments on 14 models built from different combinations of Deep Learning (DL) methods, including A Lite BERT (ALBERT) together with Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), max pooling layers, and attention mechanisms. Subsequently, by evaluating the performance results obtained from these models, they proposed an ALBERT-cascaded CNN hybrid model as an effective method to overcome the related challenges. In the proposed model, a transformer architecture with parallel processing capability for both word and subword tokenization is used in addition to creating contextualized word embeddings. Local and global feature extraction was also performed using two 1-D CNN blocks before classification to improve model performance. The model was optimized using the hyperparameter optimization tool OPTUNA. The findings of the experiment conducted with the proposed model were obtained on Amazon Fashion 2023 data under 5-fold cross-validation conditions. The experimental results demonstrate that the proposed hybrid model exhibits good performance with average scores of 0.9308 (accuracy), 0.9296 (F1 score), 0.9412 (precision), 0.9182 (recall), and 0.9797 (AUC) on the validation dataset, and scores of 0.9313, 0.9305, 0.9414, 0.9199, and 0.9800 on the test dataset. In addition, comparisons with models from studies using similar datasets support the experimental results and show that the proposed model is a competitive approach for solving problems encountered in the NLP field. Full article
(This article belongs to the Special Issue Applied Artificial Intelligence and Data Science)
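
The abstract names OPTUNA as the hyperparameter optimization tool. A minimal sketch of how such a search is typically wired up follows; the search space and the build_and_eval helper (which would train the ALBERT-CNN hybrid and return a validation F1 score) are hypothetical.

```python
import optuna

def objective(trial):
    params = {
        "filters": trial.suggest_categorical("filters", [64, 128, 256]),
        "kernel_size": trial.suggest_int("kernel_size", 3, 7),
        "dropout": trial.suggest_float("dropout", 0.1, 0.5),
        "lr": trial.suggest_float("lr", 1e-5, 1e-3, log=True),
    }
    return build_and_eval(**params)     # hypothetical: trains model, returns val F1

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```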

40 pages, 5487 KB  
Communication
Physics-Informed Temperature Prediction of Lithium-Ion Batteries Using Decomposition-Enhanced LSTM and BiLSTM Models
by Seyed Saeed Madani, Yasmin Shabeer, Michael Fowler, Satyam Panchal, Carlos Ziebert, Hicham Chaoui and François Allard
World Electr. Veh. J. 2026, 17(1), 2; https://doi.org/10.3390/wevj17010002 - 19 Dec 2025
Abstract
Accurately forecasting the operating temperature of lithium-ion batteries (LIBs) is essential for preventing thermal runaway, extending service life, and ensuring the safe operation of electric vehicles and stationary energy-storage systems. This work introduces a unified, physics-informed, and data-driven temperature-prediction framework that integrates mathematically governed preprocessing, electrothermal decomposition, and sequential deep learning architectures. The methodology systematically applies the governing relations to convert raw temperature measurements into trend, seasonal, and residual components, thereby isolating long-term thermal accumulation, reversible entropy-driven oscillations, and irreversible resistive heating. These physically interpretable signatures serve as structured inputs to machine learning and deep learning models trained on temporally segmented temperature sequences. Among all evaluated predictors, the Bidirectional Long Short-Term Memory (BiLSTM) network achieved the highest prediction fidelity, yielding an RMSE of 0.018 °C, a 35.7% improvement over the conventional Long Short-Term Memory (LSTM) (RMSE = 0.028 °C) due to its ability to simultaneously encode forward and backward temporal dependencies inherent in cyclic electrochemical operation. While CatBoost exhibited the strongest performance among classical regressors (RMSE = 0.022 °C), outperforming Random Forest, Gradient Boosting, Support Vector Regression, XGBoost, and LightGBM, it remained inferior to BiLSTM because it lacks the capacity to represent bidirectional electrothermal dynamics. This performance hierarchy confirms that LIB thermal evolution is not dictated solely by historical load sequences; it also depends on forthcoming cycling patterns and entropic interactions, which unidirectional and memoryless models cannot capture. The resulting hybrid physics-data-driven framework provides a reliable surrogate for real-time LIB thermal estimation and can be directly embedded within BMS to enable proactive intervention strategies such as predictive cooling activation, current derating, and early detection of hazardous thermal conditions. By coupling physics-based decomposition with deep sequential learning, this study establishes a validated foundation for next-generation LIB thermal-management platforms and identifies a clear trajectory for future work extending the methodology to module- and pack-level systems suitable for industrial deployment. Full article
(This article belongs to the Section Vehicle Management)
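
The trend/seasonal/residual split described above can be approximated with a classical decomposition as a stand-in, since the abstract does not spell out the governing relations. The synthetic temperature trace and the cycle period below are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

t = pd.date_range("2024-01-01", periods=2000, freq="min")
temp = pd.Series(25 + 0.001 * np.arange(2000)                      # slow thermal accumulation
                 + 0.5 * np.sin(np.arange(2000) * 2 * np.pi / 120) # cyclic entropic term
                 + 0.05 * np.random.randn(2000), index=t)

parts = seasonal_decompose(temp, model="additive", period=120)
features = np.column_stack([parts.trend, parts.seasonal, parts.resid])
# drop NaN rows from the moving-average edges before windowing for the BiLSTM
features = features[~np.isnan(features).any(axis=1)]
```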

12 pages, 1420 KB  
Article
A Dual-Head Mixer-BiLSTM Architecture for Battery State of Charge Prediction
by Fatih Kara and İbrahim Yücedağ
Appl. Sci. 2025, 15(24), 13255; https://doi.org/10.3390/app152413255 - 18 Dec 2025
Abstract
State of charge (SOC) estimation is a key research topic for electric vehicles, with accurate SOC estimation being important for both range and safety. In this study, we present the Dual-Head Depth Directional Mixer (DH-DW-M) model for SOC estimation. The model is tested using the BMW i3 dataset and its performance is evaluated using standard error measures from multiple perspectives. Furthermore, the results are compared with those of previous studies; specifically, DH-DW-M is compared with the Trend Flow-Mixer model, which has achieved the best results on this dataset in the literature to date. Notably, the proposed DH-DW-M model achieves the lowest overall estimation error value of 0.21%. Compared with the Trend Flow-Mixer model, DH-DW-M showed an 82% lower Root Mean Square Error (RMSE) when using the same input features. The model is also compared with well-known methods, with RMSE approximately 97%, 96%, and 95% lower when compared to those of Long Short-Term Memory (LSTM), Convolutional Neural Network–LSTM (CNN-LSTM), and Bidirectional LSTM with Attention (BiLSTM-AT) models, respectively. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
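
The DH-DW-M internals are not detailed in the abstract, so the sketch below shows only the kind of BiLSTM baseline it is benchmarked against: a bidirectional LSTM over a window of battery signals regressing SOC. Input and hidden sizes are assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMSOC(nn.Module):
    def __init__(self, n_features=3, hidden=64):   # e.g. voltage, current, temperature
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)       # 2*hidden: forward + backward states

    def forward(self, x):                          # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])               # SOC estimate from last time step

soc = BiLSTMSOC()(torch.randn(8, 100, 3))          # -> (8, 1)
```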

24 pages, 2210 KB  
Article
Deep Transfer Learning for UAV-Based Cross-Crop Yield Prediction in Root Crops
by Suraj A. Yadav, Yanbo Huang, Kenny Q. Zhu, Rayyan Haque, Wyatt Young, Lorin Harvey, Mark Hall, Xin Zhang, Nuwan K. Wijewardane, Ruijun Qin, Max Feldman, Haibo Yao and John P. Brooks
Remote Sens. 2025, 17(24), 4054; https://doi.org/10.3390/rs17244054 - 17 Dec 2025
Abstract
Limited annotated data often constrain accurate yield prediction in underrepresented crops. To address this challenge, we developed a cross-crop deep transfer learning (TL) framework that leverages potato (Solanum tuberosum L.) as the source domain to predict sweet potato (Ipomoea batatas L.) yield using multi-temporal uncrewed aerial vehicle (UAV)-based multispectral imagery. A hybrid convolutional–recurrent neural network (CNN–RNN–Attention) architecture was implemented with a robust parameter-based transfer strategy to ensure temporal alignment and feature-space consistency across crops. Cross-crop feature migration analysis showed that predictors capturing canopy vigor, structure, and soil–vegetation contrast exhibited the highest distributional similarity between potato and sweet potato. In comparison, pigment-sensitive and agronomic predictors were less transferable. These robustness patterns were reflected in model performance, as all architectures showed substantial improvement when moving from the minimal 3-predictor subset to the 5–7-predictor subsets, where the most transferable indices were introduced. The hybrid CNN–RNN–Attention model achieved peak accuracy (R² ≈ 0.64 and RMSE ≈ 18%) using time-series data up to the tuberization stage with only 7 predictors. In contrast, convolutional neural network (CNN), bidirectional gated recurrent unit (BiGRU), and bidirectional long short-term memory (BiLSTM) baseline models required 11–13 predictors to achieve comparable performance and often showed reduced or unstable accuracy at higher dimensionality due to redundancy and domain-shift amplification. Two-way ANOVA further revealed that cover crop type significantly influenced yield, whereas nitrogen rate and the interaction term were not significant. Overall, this study demonstrates that combining robustness-aware feature design with a hybrid deep TL model enables accurate, data-efficient, and physiologically interpretable yield prediction in sweet potato, offering a scalable pathway for applying TL in other underrepresented root and tuber crops. Full article
(This article belongs to the Special Issue Application of UAV Images in Precision Agriculture)
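
A parameter-based transfer strategy of the kind described above usually amounts to initializing the target-crop model from source-crop weights and fine-tuning selectively. A hedged PyTorch sketch, with a hypothetical build_yield_model constructor, checkpoint path, and "head" naming convention:

```python
import torch

model = build_yield_model()                     # hypothetical constructor
state = torch.load("potato_pretrained.pt")      # hypothetical source-domain weights
model.load_state_dict(state, strict=False)      # tolerate head-shape mismatches

for name, p in model.named_parameters():
    if not name.startswith("head"):             # keep the CNN-RNN-Attention trunk
        p.requires_grad = False                 # frozen during fine-tuning

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```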

26 pages, 3486 KB  
Article
Optimal Operation Strategy of Virtual Power Plant Using Electric Vehicle Agent-Based Model Considering Operational Profitability
by Hwanmin Jeong and Jinho Kim
Sustainability 2025, 17(24), 11291; https://doi.org/10.3390/su172411291 - 16 Dec 2025
Viewed by 113
Abstract
Growing EV adoption is reshaping how Distributed Energy Resources (DERs) interact with the grid, playing a pivotal role in global decarbonization efforts and the transition towards a sustainable energy future. This study built a Virtual Power Plant (VPP) operation framework centered on EV behavioral dynamics, connecting individual driving and charging behaviors with the physical and economic layers of energy management. The EV behavioral dynamic model quantifies the stochastic travel, parking, and charging behaviors of individual EVs through an Agent-Based Trip and Charging Chain (AB-TCC) simulation, producing a Behavioral Flexibility Trace (BFT) that represents time-resolved EV availability and flexibility. The Forecasting Model employs a Bi-directional Long Short-Term Memory (Bi-LSTM) network trained on historical meteorological data to predict short-term renewable generation and represent physical variability. The two-stage optimization model integrates behavioral and physical information with market price signals to coordinate day-ahead scheduling and real-time operation, minimizing procurement costs and mitigating imbalance penalties. Simulation results indicate that the proposed framework yielded an approximately 15% increase in revenue over 7 days through EV-based flexibility utilization. These findings demonstrate that the proposed approach effectively leverages EV flexibility to manage renewable generation variability, thereby enhancing both the profitability and operational reliability of VPPs in local distribution systems. This facilitates greater penetration of intermittent renewable energy sources, accelerating the transition to a low-carbon energy system. Full article
(This article belongs to the Special Issue Sustainable Innovations in Electric Vehicle Technology)
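
Before a Bi-LSTM forecaster like the one above is trained, historical series are typically cut into supervised window/target pairs. A minimal sketch follows; the 24-step window and the placeholder trace are assumptions, not from the paper.

```python
import numpy as np

def make_windows(series, window=24):
    """Turn a 1-D series into (window -> next step) training pairs."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]                   # next-step target
    return X[..., None], y                # add feature axis: (N, window, 1)

gen = np.random.rand(1000)                # placeholder renewable-generation trace
X, y = make_windows(gen)                  # X: (976, 24, 1), y: (976,)
```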

17 pages, 875 KB  
Article
Predicting the Risk of Death for Cryptocurrencies Using Deep Learning
by Doğa Elif Konuk and Halil Altay Güvenir
J. Risk Financial Manag. 2025, 18(12), 716; https://doi.org/10.3390/jrfm18120716 - 15 Dec 2025
Viewed by 293
Abstract
The rapid rise in the popularity of cryptocurrencies has drawn increasing attention from investors, entrepreneurs, and the public in recent years. However, this rapid growth comes with risk: many coins fail early and become what are known as “dead coins”, defined by a lack of recorded activity for more than a year. This study applies deep learning techniques to estimate the short-term risk of a cryptocurrency’s death. Specifically, three Recurrent Neural Network architectures, Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), and Gated Recurrent Unit (GRU), were trained on 18-month time series of daily closing prices and trading volumes using a stratified five-fold cross-validation framework. The models’ predictive performances were compared across input windows ranging from 10 to 180 days. Using the previous 180 days of data as input, GRU achieved the highest point accuracy of 0.7134, whereas BiLSTM exhibited the best performance when evaluated across input sequence lengths varying from 10 to 180 days, reaching an average accuracy of 0.676. These findings show the ability of recurrent architectures to anticipate short-term failure risks in cryptocurrency markets. Theoretically, the study contributes to financial risk modeling by extending time series classification methods to cryptocurrency failure prediction. Practically, it provides investors and analysts with a data-driven early-warning tool to manage portfolio risk and reduce potential losses. Full article
(This article belongs to the Special Issue The Road towards the Future: Fintech, AI, and Cryptocurrencies)
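
The stratified five-fold protocol described above can be sketched as follows; build_gru is a hypothetical model factory with a scikit-learn-style fit/score interface, and the data shapes are placeholders for (coins, days, price/volume channels).

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.random.rand(500, 180, 2)           # placeholder: 500 coins, 180 days, 2 channels
y = np.random.randint(0, 2, 500)          # 1 = coin died within the horizon

accs = []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    model = build_gru()                   # hypothetical factory (GRU/LSTM/BiLSTM)
    model.fit(X[tr], y[tr])
    accs.append(model.score(X[te], y[te]))
print(np.mean(accs))                      # cross-validated accuracy
```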

22 pages, 3276 KB  
Article
Deep Neural Network-Based Inverse Identification of the Mechanical Behavior of Anisotropic Tubes
by Zied Ktari, Pedro Prates and Ali Khalfallah
J. Manuf. Mater. Process. 2025, 9(12), 410; https://doi.org/10.3390/jmmp9120410 - 14 Dec 2025
Viewed by 173
Abstract
Tube hydroforming is a versatile forming process widely used in lightweight structural applications, where accurate characterization of the hoop mechanical behavior is crucial for reliable design and simulation. The ring hoop tensile test (RHTT) provides valuable experimental data for evaluating the elastoplastic response of anisotropic tubes in the hoop direction, but frictional effects often distort the measured force–displacement response. This study proposes a deep learning-based inverse identification framework to accurately recover the true hoop stress–strain behavior from RHTT data. Convolutional and recurrent neural network architectures, including CNN, long short-term memory (LSTM), gated recurrent unit (GRU), bidirectional GRU (BiGRU), bidirectional LSTM (BiLSTM), and ConvLSTM, were trained using numerically generated datasets from finite element simulations. Data augmentation and hyperparameter tuning were applied to improve generalization. The hybrid ConvLSTM model achieved superior performance, with a minimum mean absolute error (MAE) of 0.08 and a coefficient of determination (R²) value of approximately 0.97, providing a close match to the Hill48 yield criterion. The proposed approach demonstrates the potential of deep neural networks as an efficient and accurate alternative to traditional inverse methods for characterizing anisotropic tubular materials. Full article
(This article belongs to the Special Issue Innovative Approaches in Metal Forming and Joining Technologies)
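
ConvLSTM has no built-in PyTorch module, so the sketch below uses a simplified convolutional-recurrent stack in the same spirit: Conv1d layers read the force-displacement curve, an LSTM models it as a sequence, and a linear head emits material parameters. All sizes are illustrative.

```python
import torch
import torch.nn as nn

class ConvRecurrentInverse(nn.Module):
    def __init__(self, n_params=4):                 # e.g. hardening-law coefficients
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU())
        self.rnn = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, n_params)

    def forward(self, curve):                       # curve: (batch, steps)
        h = self.conv(curve.unsqueeze(1))           # -> (batch, 32, steps)
        out, _ = self.rnn(h.transpose(1, 2))        # LSTM over the step axis
        return self.head(out[:, -1])

params = ConvRecurrentInverse()(torch.randn(8, 200))   # -> (8, 4)
```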

22 pages, 1402 KB  
Article
Spacecraft Health Status Monitoring Method Based on Multidimensional Data Fusion
by Hanyu Liang, Chengrui Liu, Wenjing Liu, Wenbo Li and Yan Zhang
Machines 2025, 13(12), 1136; https://doi.org/10.3390/machines13121136 - 12 Dec 2025
Viewed by 111
Abstract
On-orbit spacecraft faults under complex operating conditions are difficult to detect in time, so rational monitoring and assessment of spacecraft health status are essential for ensuring safe, stable, and reliable operation. Considering the complexity, coupling, and multidimensionality of telemetry data, this paper proposes a method for monitoring the health status of spacecraft based on multidimensional data fusion, applied to a key electromechanical component of a spacecraft control system. The method first extracts the explicit and implicit features of the multidimensional coupled telemetry parameters via physical feature formulas and a stacked autoencoder. Then, the extracted features are fused and filtered to obtain the health factor, a performance-degradation trend that describes how the health status of the key component evolves over runtime. Moreover, the different degradation stages are identified via an unsupervised clustering algorithm. Finally, a Bidirectional Long Short-Term Memory (Bi-LSTM) network is used to construct a stagewise health status prediction model. With Control Moment Gyroscopes (CMGs) as the experimental verification subjects, the proposed method demonstrates significantly superior performance compared to other methods across prediction accuracy metrics including MSE, RMSE, and R². This study provides robust technical support for health status monitoring of key spacecraft electromechanical components under specific fault modes. Full article
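
The stacked-autoencoder feature extraction described above can be sketched as follows, assuming flattened telemetry windows as input; the 64/16-unit bottleneck is an illustrative choice, not the paper's architecture.

```python
import torch
import torch.nn as nn

class StackedAE(nn.Module):
    def __init__(self, n_in=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                                 nn.Linear(64, 16))     # implicit health features
        self.dec = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                                 nn.Linear(64, n_in))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z            # reconstruction + latent features

x = torch.randn(32, 128)                 # placeholder flattened telemetry windows
recon, z = StackedAE()(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction objective for training
```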

24 pages, 3346 KB  
Article
Smart Irrigation Scheduling for Crop Production Using a Crop Model and Improved Deep Reinforcement Learning
by Jiamei Liu, Fangle Chang, Xiujuan Wang, Mengzhen Kang, Caiyun Lu, Chao Wang, Shaopeng Hu, Yangyang Li, Longhua Ma and Hongye Su
Agriculture 2025, 15(24), 2569; https://doi.org/10.3390/agriculture15242569 - 11 Dec 2025
Viewed by 240
Abstract
In arid regions characterized by extreme water scarcity, it is important to synergistically optimize both crop yield and water use. Irrigation strategies based on empirical knowledge overlook crops’ dynamic water needs and may cause water waste and yield loss. To address this issue, this paper proposes an intelligent irrigation scheduling method based on a crop growth model and an improved deep reinforcement learning (DRL) agent. We construct a high-fidelity cotton growth environment using the Decision Support System for Agrotechnology Transfer (DSSAT) model. The model was calibrated with local data from the Shihezi region, Xinjiang, to provide a reliable simulation platform for DRL agent training. We developed a temporal state representation module based on a Bidirectional Long Short-Term Memory (BiLSTM) network and an attention mechanism. This module captures dynamic trends in historical environmental information to focus on critical decision factors. The Soft Actor–Critic (SAC) algorithm was improved by integrating a feature attention mechanism to enhance decision-making precision. A dynamic reward function was designed based on the critical growth stages of cotton to incorporate agronomic prior knowledge into the optimization objective. Simulation results demonstrate that our proposed method can improve water use efficiency (WUE) by 39.0% (with an 8.4% increase in yield and a 22.1% reduction in water consumption) compared to fixed-schedule irrigation strategies. An ablation study further confirms that each of our proposed modules—BiLSTM, the attention mechanism, and the dynamic reward—makes a significant contribution to the final performance. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
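
A hedged sketch of the temporal state-representation module described above: a BiLSTM encodes the recent environment history and additive attention pools it into a single state vector for the SAC agent. All dimensions are assumptions.

```python
import torch
import torch.nn as nn

class TemporalState(nn.Module):
    def __init__(self, n_obs=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_obs, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)        # attention scorer per step

    def forward(self, hist):                         # hist: (batch, days, n_obs)
        h, _ = self.lstm(hist)                       # (batch, days, 2*hidden)
        w = torch.softmax(self.score(h), dim=1)      # weights over time steps
        return (w * h).sum(dim=1)                    # pooled state for the agent

state = TemporalState()(torch.randn(4, 30, 8))       # -> (4, 64)
```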

19 pages, 1172 KB  
Article
Research on Bo-BiLSTM-Based Synchronous Load Transfer Control Technology for Distribution Networks
by Cheng Long, Hua Zhang, Xueneng Su, Yiwen Gao and Wei Luo
Processes 2025, 13(12), 3999; https://doi.org/10.3390/pr13123999 - 11 Dec 2025
Viewed by 151
Abstract
The operational modes and fault characteristics of distribution networks incorporating distributed generation are becoming increasingly complex. This complexity increases the difficulty of predicting switch control action times and leads to scattered samples with data scarcity. Consequently, it imposes higher demands on rapid fault isolation and load transfer control following system failures. To address this issue, this paper proposes a switch action time prediction and synchronous load transfer control method based on Bayesian optimization of bidirectional long short-term memory (Bo-BiLSTM) networks. A distribution network simulation model incorporating distributed generation was constructed using MATLAB/Simulink (R2023a). Three-phase voltage and current at the Point of Common Coupling (PCC) were extracted as feature parameters to establish a switch operation timing database. Bayesian optimization was employed to tune the BiLSTM hyperparameters, constructing the Bo-BiLSTM prediction model to achieve high-precision forecasting of switch operation times under fault conditions. Subsequently, a load-synchronized transfer control strategy was proposed based on the prediction results. A dynamic delay mechanism was designed to achieve “open first and then close” sequential coordinated control. Physical experiments verified that the time difference between opening and closing was controlled within 2–12 milliseconds (ms), meeting the engineering requirement of less than 20 ms. The results demonstrate that the proposed control method enhances switch operation time prediction accuracy while effectively supporting rapid fault isolation and seamless load transfer in distribution networks, thereby improving system reliability and control precision. Full article
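
The abstract does not name a Bayesian-optimization library, so the sketch below uses scikit-optimize's Gaussian-process minimizer as a stand-in; train_bilstm is a hypothetical helper returning the validation RMSE of the switch-time predictor.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

space = [Integer(16, 128, name="hidden_units"),
         Integer(1, 3, name="num_layers"),
         Real(1e-4, 1e-2, prior="log-uniform", name="lr")]

def objective(params):
    hidden, layers, lr = params
    return train_bilstm(hidden, layers, lr)   # hypothetical: validation RMSE

result = gp_minimize(objective, space, n_calls=30, random_state=0)
print(result.x, result.fun)                   # best hyperparameters, best RMSE
```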

29 pages, 4957 KB  
Article
Wind Power Prediction Method Based on Physics-Guided Fusion and Distribution Constraints
by Wenbin Zheng, Jiaojiao Yin, Zhiwei Wang, Huijie Sun and Letian Bai
Energies 2025, 18(24), 6479; https://doi.org/10.3390/en18246479 - 10 Dec 2025
Viewed by 227
Abstract
Accurate wind power prediction is of great significance for grid stability and renewable energy integration. Addressing the challenge of effectively integrating physical mechanisms with data-driven methods in wind power prediction, this paper innovatively proposes a two-stage deep learning prediction framework incorporating physics-guided fusion and distribution constraints, aiming to improve the prediction accuracy and physical authenticity of individual wind turbines. In the first stage, we construct a baseline model based on multi-branch multilayer perceptrons (MLP) that eschews traditional attempts to accurately reconstruct complex three-dimensional spatiotemporal wind fields, instead directly learning the power conversion characteristics of wind turbines under specific meteorological conditions from historical operational data, namely the power coefficient (Cp). This data-driven Cp fitting method provides a physically interpretable and robust benchmark for power prediction. In the second stage, targeting the prediction residuals from the baseline model, we design a bidirectional long short-term memory network (BiLSTM) for refined correction. The core innovation of this stage lies in introducing Maximum Mean Discrepancy (MMD) as a regularization term to constrain the predicted wind speed-power joint probability distribution. This constraint enforces the model-generated power predictions to remain statistically consistent with historical real data distributions, effectively preventing the model from producing predictions that deviate from physical reality, significantly enhancing the model’s generalization capability and reliability. Experimental results demonstrate that compared to traditional methods, the proposed method achieves significant improvements in Mean Absolute Error, Root Mean Square Error, and other metrics, validating the effectiveness of physical constraints in improving prediction accuracy. Full article
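
A minimal Gaussian-kernel MMD² term of the kind described above, usable as a regularizer between predicted and historical (wind speed, power) samples; the kernel bandwidth and the weighting against the main loss are assumptions.

```python
import torch

def mmd2(x, y, sigma=1.0):
    """Biased MMD^2 estimate between sample sets x, y of shape (n, d)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)      # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

pred = torch.randn(256, 2)                 # model's (wind speed, power) samples
hist = torch.randn(256, 2)                 # historical joint samples
penalty = mmd2(pred, hist)                 # added, weighted, to the residual loss
```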

23 pages, 2768 KB  
Article
PSO–BiLSTM–Attention: An Interpretable Deep Learning Model Optimized by Particle Swarm Optimization for Accurate Ischemic Heart Disease Incidence Forecasting
by Ruihang Zhang, Shiyao Wang, Wei Sun and Yanming Huo
Bioengineering 2025, 12(12), 1343; https://doi.org/10.3390/bioengineering12121343 - 9 Dec 2025
Viewed by 238
Abstract
Ischemic heart disease (IHD) remains the predominant cause of global mortality, necessitating accurate incidence forecasting for effective prevention strategies. Existing statistical models inadequately capture nonlinear epidemiological patterns, while deep learning approaches lack clinical interpretability. We constructed an interpretable predictive framework combining particle swarm optimization (PSO), bidirectional long short-term memory (BiLSTM) networks, and a novel multi-scale attention mechanism. Age-standardized incidence rates (ASIRs) from the Global Burden of Disease (GBD) 2021 database (1990–2021) were stratified across 24 sex-age subgroups and processed through 10-year sliding windows with advanced feature engineering. SHapley Additive exPlanations (SHAP) provided a three-level interpretability analysis (global, local, and component). The framework achieved superior performance metrics: mean absolute error (MAE) of 0.0164, root mean squared error (RMSE) of 0.0206, and R² of 0.97, demonstrating a 93.96% MAE reduction compared to ARIMA models and a 75.99% improvement over CNN–BiLSTM architectures. SHAP analysis identified females aged 60–64 years and males aged 85–89 years as primary predictive contributors. Architectural analysis revealed the residual connection captured 71.0% of the predictive contribution (main trends), while the BiLSTM–Attention pathway captured 29.0% (complex nonlinear patterns). This interpretable framework transforms opaque algorithms into transparent systems, providing precise epidemiological evidence for public health policy, resource allocation, and targeted intervention strategies for high-risk populations. Full article
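
The SHAP attribution step can be sketched with the model-agnostic KernelExplainer (the paper's exact explainer is not stated); model_forward, X_train, and X_test are hypothetical stand-ins for the fitted network and its flattened window features.

```python
import numpy as np
import shap

def predict(X):                    # hypothetical wrapper around the fitted network
    return model_forward(X)        # returns a predicted ASIR per row

background = X_train[np.random.choice(len(X_train), 50, replace=False)]
explainer = shap.KernelExplainer(predict, background)
shap_values = explainer.shap_values(X_test[:20])   # per-feature contributions
shap.summary_plot(shap_values, X_test[:20])        # global importance view
```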

16 pages, 2030 KB  
Article
Chinese Text Readability Assessment Based on the Integration of Visualized Part-of-Speech Information with Linguistic Features
by Chi-Yi Hsieh, Jing-Yan Lin, Chi-Wen Hsieh, Bo-Yuan Huang, Yi-Chi Huang and Yu-Xiang Chen
Algorithms 2025, 18(12), 777; https://doi.org/10.3390/a18120777 - 9 Dec 2025
Viewed by 248
Abstract
The assessment of Chinese text readability plays a significant role in Chinese language education. Due to the intrinsic differences between alphabetic languages and Chinese character representations, readability assessment is especially challenging given the language’s inherent complexity in vocabulary, syntax, and semantics. This article proposes a conceptual analogy between Chinese readability assessment and musical rhythm and tempo patterns, under which the syntactic structure of a Chinese sentence can be transformed into an image. The Chinese Knowledge and Information Processing Tagger (CkipTagger) tool, developed by Academia Sinica, Taiwan, is used to decompose the Chinese text into a set of tokens. These tokens are then refined through a user-defined token pool to retain meaningful units. An image carrying part-of-speech (POS) information is generated from the token-to-syntax alignment. A discrete cosine transform (DCT) is then applied to extract the temporal characteristics of the text. Moreover, the study integrates four categories of linguistic features for the readability assessment: type–token ratio, average sentence length, total word count, and vocabulary difficulty level. Finally, these features are fed into a Support Vector Machine (SVM) classifier. Furthermore, a bidirectional long short-term memory (Bi-LSTM) network is adopted for quantitative comparison. In simulation, a total of 774 Chinese texts aligned with the Taiwan Benchmarks for the Chinese Language were selected and graded by Chinese language experts, in equal proportions of basic, intermediate, and advanced levels. The findings indicate that the proposed POS representation combined with the linguistic features works well with the SVM classifier, and its performance matches that of more complex architectures such as the Bi-LSTM network in Chinese readability assessment. Full article
(This article belongs to the Topic Applications of NLP, AI, and ML in Software Engineering)
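
A hedged sketch of the DCT-plus-SVM pipeline described above: each text's POS sequence is integer-encoded, DCT-transformed, and the leading coefficients are concatenated with the four linguistic features. Encodings, padding length, and the random placeholder data are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct
from sklearn.svm import SVC

def dct_features(pos_ids, n_coef=32):
    sig = np.zeros(512)                        # pad/truncate the POS id sequence
    sig[:min(len(pos_ids), 512)] = pos_ids[:512]
    return dct(sig, norm="ortho")[:n_coef]     # low-frequency "rhythm" terms

pos_sequences = [np.random.randint(1, 20, size=200) for _ in range(774)]  # placeholder
linguistic_feats = np.random.rand(774, 4)      # TTR, sentence length, words, difficulty
labels = np.repeat([0, 1, 2], 258)             # basic / intermediate / advanced

X = np.stack([np.concatenate([dct_features(p), ling])
              for p, ling in zip(pos_sequences, linguistic_feats)])
clf = SVC(kernel="rbf").fit(X, labels)         # three-level readability classifier
```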

25 pages, 2805 KB  
Article
Multi-Channel Physical Feature Convolution and Tri-Branch Fusion Network for Automatic Modulation Recognition
by Changkai Zhang, Junyi Luo, Kaibo Shi, Tao Liu and Chenyu Ling
Electronics 2025, 14(24), 4847; https://doi.org/10.3390/electronics14244847 - 9 Dec 2025
Viewed by 181
Abstract
Automatic modulation recognition (AMR) plays a critical role in intelligent wireless communication systems, particularly under conditions with a low signal-to-noise ratio (SNR) and complex channel environments. To address these challenges, this paper proposes a three-branch fusion network that integrates complementary features from the time, frequency, and spatial domains to enhance classification performance. The model consists of three specialized branches: a multi-channel convolutional branch designed to extract discriminative local features from multiple signal representations; a bidirectional long short-term memory (BiLSTM) branch capable of capturing long-range temporal dependencies; and a vision transformer (ViT) branch that processes constellation diagrams to exploit global structural information. To effectively merge these heterogeneous features, a path attention module is introduced to dynamically adjust the contribution of each branch, thereby achieving optimal feature fusion and improved recognition accuracy. Extensive experiments on the two popular benchmarks, RML2016.10a and RML2018.01a, show that the proposed model consistently outperforms baseline approaches. These results confirm the effectiveness and robustness of the proposed approach and highlight its potential for deployment in next-generation intelligent modulation recognition systems operating in realistic wireless communication environments. Full article
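
A hedged sketch of the path-attention idea: learn input-dependent softmax weights over the three branch embeddings (multi-channel CNN, BiLSTM, ViT) before classification. The shared embedding size and the 11-class output (matching RML2016.10a) are assumptions.

```python
import torch
import torch.nn as nn

class PathAttentionFusion(nn.Module):
    def __init__(self, dim=128, n_classes=11):
        super().__init__()
        self.gate = nn.Linear(3 * dim, 3)      # one score per branch
        self.cls = nn.Linear(dim, n_classes)

    def forward(self, f_cnn, f_lstm, f_vit):   # each branch embedding: (batch, dim)
        stack = torch.stack([f_cnn, f_lstm, f_vit], dim=1)    # (batch, 3, dim)
        w = torch.softmax(self.gate(stack.flatten(1)), dim=-1)
        fused = (w.unsqueeze(-1) * stack).sum(dim=1)          # weighted branch sum
        return self.cls(fused)

logits = PathAttentionFusion()(torch.randn(4, 128), torch.randn(4, 128),
                               torch.randn(4, 128))           # -> (4, 11)
```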
