Search Results (37,784)

Search Parameters:
Keywords = neural network models

26 pages, 2724 KB  
Article
Prediction of Apple Canopy Leaf Area Index Based on Near-Infrared Spectroscopy and Machine Learning
by Junkai Zeng, Wei Cao, Yan Chen, Mingyang Yu, Jiyuan Jiang and Jianping Bao
Agronomy 2026, 16(9), 875; https://doi.org/10.3390/agronomy16090875 (registering DOI) - 25 Apr 2026
Abstract
Traditional leaf area index (LAI) measurement methods are destructive, time-consuming, and labor-intensive. In this study, 282 four-year-old central-leader apple trees were used as research subjects. Canopy reflectance spectra in the range of 4000–10,000 cm⁻¹ were collected, and the corresponding true LAI values were measured destructively by harvesting all leaves from a representative branch of each tree using a leaf area meter. The dataset was randomly divided into training (70%) and testing (30%) sets. Eight spectral pretreatment methods were compared. The Competitive Adaptive Reweighted Sampling (CARS) algorithm was employed to extract characteristic wavelengths. Subsequently, both a BP neural network and a Support Vector Machine (SVM) model for LAI prediction were constructed. The optimal model was selected based on evaluation metrics including the coefficient of determination (R²), mean absolute error (MAE), mean bias error (MBE), and mean absolute percentage error (MAPE). The combined preprocessing of MSC and SD yielded the optimal results, screening out 26 characteristic wavelengths. The SVM linear kernel model (c = 5, g = 0.3) constructed based on MSC + SD preprocessing performed best, achieving a validation set R² of 0.90, MAE of 0.2117, MBE of −0.1214, and MAPE of 16.09%. The performance on the training set and validation set was comparable, with no overfitting observed. The MSC + SD preprocessing combined with CARS feature screening and SVM linear kernel modeling enables rapid, non-destructive estimation of apple canopy LAI, providing an effective technical tool for precision orchard management. Full article
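The modeling chain this abstract describes — spectral preprocessing, wavelength screening, then a linear-kernel SVM regression — can be sketched with scikit-learn. Everything below is an illustrative stand-in under stated assumptions: the data are random, and the MSC + SD preprocessing and CARS wavelength selection are replaced by simple standardization of a fixed 26-feature input.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic stand-in for canopy reflectance features: 100 samples with
# 26 "characteristic wavelengths" (the paper selects 26 via CARS; the
# selection step itself is omitted here).
X = rng.normal(size=(100, 26))
true_w = rng.normal(size=26)
y = X @ true_w + rng.normal(scale=0.1, size=100)  # stand-in for measured LAI

X_scaled = StandardScaler().fit_transform(X)
# Linear kernel with C=5, mirroring the reported (c = 5) setting;
# the g parameter (gamma) has no effect on a linear kernel.
model = SVR(kernel="linear", C=5)
model.fit(X_scaled[:70], y[:70])          # 70% training split
r2 = model.score(X_scaled[70:], y[70:])   # R² on the held-out 30%
```

This reproduces only the shape of the workflow; real spectra would need the paper's preprocessing and wavelength-selection steps before the SVM stage.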
31 pages, 5682 KB  
Article
Developing Artificial Intelligence-Based Car-Following Models Using Improved Permutation Entropy Analysis Results
by Ali Muhssin Shahatha and İsmail Şahin
Appl. Sci. 2026, 16(9), 4224; https://doi.org/10.3390/app16094224 (registering DOI) - 25 Apr 2026
Abstract
Vehicle trajectories are time series, and entropy is a powerful tool for testing or quantifying the complexity of a given series. Entropy tools are often applied to variables such as velocity, acceleration, space headway, and time headway, but the local position data have not been addressed previously. The novelty of this study is that it uses the Improved Permutation Entropy (IPE) for the first time to analyze vehicle position data and convert those data into a limited range (0–0.3317), aiming to understand individual vehicle behavior during car-following and introduce a new prediction method for developing artificial intelligence-based car-following models. The study uses the IPE analysis results as a new input variable, in addition to existing input variables, to improve the prediction accuracy of these models. Three types of neural networks were adopted according to the development of artificial intelligence models: artificial neural networks (ANNs), long short-term memory networks (LSTMs), and Transformer models. The results indicate that all models using the proposed prediction method, which includes the IPE analysis result, outperformed those using the traditional prediction method. The Transformer & IPE model shows the best performance in prediction accuracy of the follower acceleration output; the statistically significant percentage improvements were 2.04%, 1.42%, 1.22%, and 2.62% for RMSE, MAE, MASE, and R², in that order. Furthermore, the results indicate that all models using the proposed prediction method outperformed the benchmarking Intelligent Driver Model (IDM) for the follower acceleration output. Full article
(This article belongs to the Section Transportation and Future Mobility)
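Ordinal-pattern entropy of the kind this abstract applies to position data can be illustrated with the classical Bandt–Pompe permutation entropy. Note the hedge: the paper uses an *Improved* Permutation Entropy (IPE) variant; the sketch below is the textbook version, not the authors' method.

```python
import math
from collections import Counter

def permutation_entropy(series, order=3, normalize=True):
    """Classical Bandt-Pompe permutation entropy of a 1-D series.

    Illustrative only: the surveyed paper's IPE refines this scheme,
    but the core idea (map windows to ordinal patterns, measure the
    spread of the pattern distribution) is the same.
    """
    counts = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # Ordinal pattern: the argsort of the window's values.
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    if normalize:
        h /= math.log2(math.factorial(order))  # scale into [0, 1]
    return h

# A strictly increasing series has a single ordinal pattern (entropy 0);
# an irregular series spreads probability over several patterns.
h_flat = permutation_entropy([1, 2, 3, 4, 5, 6])
h_mixed = permutation_entropy([4, 7, 9, 10, 6, 11, 3])
```

The normalized value lies in [0, 1], which is the sense in which an entropy-based feature can be confined to a bounded range before being fed to a downstream predictor.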
24 pages, 3894 KB  
Article
Turbidity Prediction in a Large, Shallow Lake Using Machine Learning
by Nicholas von Stackelberg and Michael Barber
Water 2026, 18(9), 1026; https://doi.org/10.3390/w18091026 (registering DOI) - 25 Apr 2026
Abstract
Large, shallow lakes lacking rooted aquatic vegetation are susceptible to wind-induced wave action that results in increased shear stress on the lake bottom, sediment resuspension and poor water clarity. The relationship between meteorological, hydrographical and sediment characteristics, and sediment dynamics has implications for internal phosphorus cycling and bioavailability, the frequency and duration of harmful cyanobacterial blooms, lake level management and restoration potential. In this study, a multi-parameter water quality sonde was deployed at various sites at the bottom of Utah Lake to measure water quality variables. Sediment cores were collected at each of the deployment sites and analyzed for common physical and chemical properties. Several machine learning regression techniques, including polynomial, decision tree, artificial neural network, and support vector machine, were applied to predict turbidity, a measure of water clarity and surrogate for sediment dynamics, using the observed explanatory variables wind speed and direction, fetch, water depth, sediment properties, algae, and cyanobacteria. The decision tree estimators, random forest and histogram-based gradient boosting had the best model performance, explaining 86–89% of the variability in turbidity when including all the explanatory variables. The artificial neural network estimator multi-layer perceptron and the polynomial regression models also performed well (81%), whereas the support vector machine estimator exhibited poor performance. Chlorophyll and phycocyanin, components of turbidity, were amongst the most important variables to the decision tree and artificial neural network models. Wind speed and water depth were also of high importance, which conforms with mechanistic explanations of sediment mobility caused by wave action and shear stress. Carbonate content was consistently a good predictor due to the calcareous nature of Utah Lake, whereas the importance of the other sediment properties was dependent on the machine learning technique applied. This case study demonstrated the potential for machine learning models to predict water clarity and has promise for more general applications to other shallow lakes and serves as a useful tool for lake management and restoration. Full article
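The random-forest regression plus variable-importance reading described above can be sketched in a few lines of scikit-learn. All inputs below are hypothetical stand-ins for the paper's explanatory variables, with a toy response loosely mimicking wave-driven resuspension; none of it is the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 300
# Hypothetical stand-ins for three of the paper's explanatory variables.
wind_speed = rng.uniform(0, 15, n)     # m/s
depth = rng.uniform(0.5, 4, n)         # m
chlorophyll = rng.uniform(0, 50, n)    # ug/L
noise = rng.normal(0, 1, n)
# Toy response: turbidity rises with wind and chlorophyll, falls with depth.
turbidity = 2.0 * wind_speed - 3.0 * depth + 0.5 * chlorophyll + noise

X = np.column_stack([wind_speed, depth, chlorophyll])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, turbidity)
# Impurity-based importances, the same kind of ranking the study reports.
importances = dict(zip(["wind_speed", "depth", "chlorophyll"],
                       rf.feature_importances_))
```

The `feature_importances_` vector sums to one, so each entry reads as a relative share of the model's explanatory attention.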
33 pages, 1307 KB  
Article
The Influence of AI Competency and Soft Skills on Innovative University Competency: An Integrated SEM–Artificial Neural Network (SEM–ANN) Model
by Kittipol Wisaeng and Thongchai Kaewkiriya
Data 2026, 11(5), 95; https://doi.org/10.3390/data11050095 (registering DOI) - 25 Apr 2026
Abstract
This study addresses the growing necessity to understand how artificial intelligence (AI) competency and soft skills jointly influence organizational innovation and performance in the era of digital transformation. Despite the rapid adoption of AI technologies across industries, organizations continue to face significant challenges in effectively integrating technical AI capabilities with essential human-centric soft skills such as communication, adaptability, and leadership. This gap often limits the realization of AI-driven value and sustainable competitive advantage. The primary challenge in this research area is the lack of comprehensive models that simultaneously examine AI competency and soft skills within a unified framework, particularly in emerging economies where digital maturity varies widely. Existing studies tend to focus either on technical competencies or behavioral factors in isolation, leading to fragmented insights. To address these challenges, this study proposes a novel integrated research model that examines the combined effects of AI competency and soft skills on innovation outcomes and organizational performance. The model is empirically validated using structural equation modeling (SEM), providing robust evidence of the interrelationships among key constructs. The findings reveal that both AI competency and soft skills significantly contribute to innovation capability, which in turn enhances organizational performance. The study offers important theoretical and practical implications by bridging the gap between technical and human dimensions of AI adoption, thereby providing a more holistic understanding of digital transformation success. Full article
52 pages, 2293 KB  
Review
From Model-Driven to AI-Native Physical Layer Design: Deep Learning Architectures and Optimization Paradigms for Wireless Communications
by Evelio Astaiza Hoyos, Héctor Fabio Bermúdez-Orozco and Nasly Cristina Rodriguez-Idrobo
Information 2026, 17(5), 410; https://doi.org/10.3390/info17050410 (registering DOI) - 25 Apr 2026
Abstract
The increasing complexity of next-generation wireless systems challenges the scalability and generalization capabilities of traditional model-driven physical layer (PHY) design, which relies on analytically derived channel models and optimization frameworks. This paper presents a comprehensive survey and critical review of deep learning (DL) architectures enabling the transition toward AI-native PHY design. A unified optimization perspective is developed in which all PHY tasks—including channel estimation, channel state information (CSI) feedback, massive MIMO processing, signal detection, channel coding, beamforming, resource allocation, and semantic-aware transmission—are formulated under a common empirical risk minimization (ERM) framework. Neural architectures such as autoencoders, convolutional and recurrent networks, transformers, and reinforcement learning models are examined through their underlying optimization formulations, loss functions, training methodologies, and representation learning mechanisms. The review compares model-driven and AI-native approaches in terms of performance metrics, computational complexity, robustness, generalization capability, and practical deployment constraints, including hardware limitations, energy efficiency, and real-time feasibility. The analysis highlights the conditions under which AI-native architectures provide adaptability and performance improvements while identifying trade-offs in complexity, latency, and interpretability. The study concludes by outlining prioritized research directions toward fully adaptive and self-optimizing wireless communication systems. Full article
(This article belongs to the Section Wireless Technologies)
17 pages, 6590 KB  
Article
Nanogroove-Induced Enhancement of Neural Spike Activity in Stem Cell-Derived Networks
by Rahman Sabahi-Kaviani, Marina A. Shiryaeva and Regina Luttge
Micromachines 2026, 17(5), 524; https://doi.org/10.3390/mi17050524 (registering DOI) - 25 Apr 2026
Abstract
Nanogrooves provide instructive cues to cells in culture. Several nanofabrication techniques have been developed to create biomimetic substrates, advancing our understanding of cell adhesion. Their integration into nervous system models highlights the critical role of the extracellular matrix (ECM) in developing functional tissue constructs for in vitro platforms such as Brain-on-Chip (BoC) and Nervous System-on-Chip (NoC). This study presents a nanofabrication approach that integrates photolithography and microtransfer molding (μTM) to pattern nanogrooves using photocurable polymer NOA81 onto microelectrode array (MEA) plates. The resulting nanogrooves exhibited a pattern periodicity of 976 nm and a ridge width of 232 nm, as confirmed by scanning electron microscopy and atomic force microscopy. We assessed the biocompatibility and functional impact of these modified substrates using human induced pluripotent stem cell (hiPSC)-derived neuronal cultures. Neurons cultured on nanogroove-modified MEAs exhibited aligned neural processes due to the anisotropic surface features and expressed vivid spiking behavior and higher burst frequency compared to randomly cultured neuronal networks. In conclusion, the proposed fabrication technique integrates nanogrooves with commercial MEAs using a combination of microtransfer molding and photolithography, resulting in modified culture substrates that enhance spike activity and network organization, aiding in the development of more in vivo-like neural models. Full article
(This article belongs to the Special Issue Microfluidics in Biomedical Research)
23 pages, 9214 KB  
Article
Research on Load Identification and Prediction of Ship Propulsion Shafting Based on Digital–Physical Hybrid Models
by Junhui He, Jinlin Liu, Zheng Gu and Yunhe Wang
J. Mar. Sci. Eng. 2026, 14(9), 787; https://doi.org/10.3390/jmse14090787 (registering DOI) - 25 Apr 2026
Abstract
Shafting load directly reflects shafting alignment quality and is critical to ship safety and reliability, yet remains difficult to measure directly in engineering practice. To address this, we propose a load identification and prediction method based on a Digital–Physical hybrid model. This approach integrates shafting load inversion with the time-series dependency characteristics of LSTM networks to construct an interpretable framework comprising physical, data, and decision layers. Modal testing calibrates the finite element model, while Tikhonov regularization addresses the ill-posed nature of frequency response function inversion. Additionally, a weight allocation strategy is designed during preprocessing to enhance training data quality. Validation experiments for load identification and prediction are conducted using an optimized dataset fused from measured and simulation data. Results show that, compared with purely physical or purely simulation-based models, the proposed hybrid model reduces prediction errors (RMSE, MAE, MSE) by 32–48.4% and increases the goodness of fit of prediction curves by 4%. This demonstrates superior predictive capability and interpretability, providing a new avenue for the monitoring of shafting conditions and load prediction in complex mechanical structures. Full article
(This article belongs to the Section Ocean Engineering)
32 pages, 6033 KB  
Article
Hierarchical Classification of Erosion Gullies and Interpretation of Influencing Factors Based on Random Forest and SHAP
by Miao Wang, Fukun Wang, Mingwei Hai, Yong Liu, Chunjiao Wang and Fuhui Xiong
Appl. Sci. 2026, 16(9), 4215; https://doi.org/10.3390/app16094215 (registering DOI) - 25 Apr 2026
Abstract
This study aimed to enhance the accuracy and interpretability of erosion gully classification within black soil regions by focusing on Changxing Township, Xinxing District, Qitaihe City, Heilongjiang Province as the research site. Utilizing RTK (Real-Time Kinematic) surveying technology, three-dimensional topographic data were collected for 139 actively developing erosion gullies. Key morphological parameters—including gully length, depth, gradient, average top width, average bottom width, and slope gradients on both sides—were extracted to construct interactive features. The variable set was refined through correlation analysis and variance inflation factor (VIF) diagnostics to mitigate multicollinearity. A random forest model was employed as the primary classification approach and benchmarked against logistic regression, support vector machines (SVM), decision trees, and backpropagation neural networks. To address class imbalance, a combination of class weighting, Synthetic Minority Over-sampling Technique (SMOTE), and undersampling methods was implemented. Model tuning and interpretability assessments were performed using cross-validation, grid search optimization, and SHapley Additive exPlanations (SHAP) analysis. The findings demonstrate that the random forest model achieved superior overall performance, with test set accuracy, macro-averaged F1 score, and balanced accuracy values of 0.9143, 0.8087, and 0.8427, respectively. Among imbalance handling techniques, class weighting yielded better results compared to oversampling and undersampling. Feature importance and SHAP analyses identified gully length, average crest width, and their interaction with gully depth as the principal determinants influencing gully grade classification. These results elucidate the synergistic developmental dynamics of gully longitudinal extension, vertical deepening, and lateral widening. The proposed methodology offers valuable technical support for the rapid surveying, classification, and management decision-making processes related to black soil erosion gullies. Full article
(This article belongs to the Special Issue Recent Research in Frozen Soil Mechanics and Cold Regions Engineering)
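The class-weighting remedy this abstract found most effective can be sketched with scikit-learn's `class_weight="balanced"`, which reweights errors inversely to class frequency. The two-feature toy data below are an assumption for illustration, not the study's gully morphology variables, and the SHAP step is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
# Imbalanced toy data: 90% "minor gully" vs 10% "severe gully",
# separable by two hypothetical morphology features.
n_major, n_minor = 270, 30
X_major = rng.normal(loc=0.0, scale=1.0, size=(n_major, 2))
X_minor = rng.normal(loc=3.0, scale=1.0, size=(n_minor, 2))
X = np.vstack([X_major, X_minor])
y = np.array([0] * n_major + [1] * n_minor)

# class_weight="balanced" makes minority-class mistakes costlier,
# the alternative to SMOTE oversampling or undersampling.
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=0).fit(X, y)
bal_acc = balanced_accuracy_score(y, clf.predict(X))
```

Balanced accuracy (the mean of per-class recalls) is the right headline metric here, since plain accuracy rewards ignoring the rare class.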
32 pages, 2995 KB  
Article
Self-Explaining Neural Networks for Transparent Parkinson’s Disease Screening
by Mahmoud E. Farfoura, Ahmad A. A. Alkhatib and Tee Connie
Sensors 2026, 26(9), 2671; https://doi.org/10.3390/s26092671 (registering DOI) - 25 Apr 2026
Abstract
Transparent clinical decision-making remains a critical barrier to deploying deep learning in medical diagnosis. Post hoc explanation methods approximate model behaviour after training but cannot guarantee that explanations faithfully reflect the underlying reasoning. This study proposes a Self-Explaining Neural Network (SENN) for Parkinson’s Disease (PD) screening via Ground Reaction Force (GRF) gait analysis, enforcing intrinsic interpretability through learnable basis concepts and input-dependent relevance scores computed jointly with the prediction. The architecture combines a four-block residual CNN backbone with stochastic depth regularisation, a 16-concept encoder with diversity and stability constraints, and temperature-scaled probability calibration for reliable clinical operating points. Evaluated on the PhysioNet Gait in Parkinson’s Disease dataset (306 subjects, 16 GRF sensors per foot), SENN achieves a subject-level ROC-AUC of 0.916 [95% CI: 0.867–0.964], sensitivity of 0.913 [0.862–0.963], specificity of 0.671 [0.485–0.858], and Average Precision of 0.942 [0.918–0.967], reported across five independent random seeds. Comparative evaluation against four deep learning baselines—CNN-Residual, BiLSTM, CNN-LSTM, and CNN-Attention—confirms that the interpretability constraints impose no statistically significant reduction in discriminative performance, with all pairwise ROC-AUC confidence intervals overlapping. Concept-level analysis reveals that the three most discriminative concepts correspond to disrupted midfoot loading patterns, increased step-length variability, and bilateral cadence asymmetry—all established biomechanical hallmarks of parkinsonian gait—providing clinically grounded, patient-specific explanations without post hoc approximation. These findings demonstrate that rigorous intrinsic interpretability and competitive predictive accuracy are simultaneously achievable in deep gait analysis, supporting the clinical adoption of transparent diagnostic AI. Full article
(This article belongs to the Section Electronic Sensors)
24 pages, 1994 KB  
Article
Complex-Time Neural Networks: Geometric Temporal Access for Long-Range Reasoning
by Gerardo Iovane, Giovanni Iovane and Antonio De Rosa
Algorithms 2026, 19(5), 334; https://doi.org/10.3390/a19050334 (registering DOI) - 25 Apr 2026
Abstract
Most neural architectures model time as a one-dimensional real-valued variable, constraining temporal reasoning to sequential propagation along a single axis. We introduce Complex-Time Neural Networks (CTNN), a new class of architectures in which temporal coordinates are elements of the complex plane T = t + iτ ∈ ℂ, where Re(T) preserves chronological ordering and Im(T) encodes an orthogonal experiential dimension. Within this geometry, Im(T) < 0 defines a memory domain enabling retrospective retrieval, Im(T) = 0 corresponds to present-moment computation, and Im(T) > 0 defines an imagination domain for prospective projection. We prove the Expressive Separation Theorem (Theorem 1), establishing that, within the temporally coupled function class GTCP and under explicit Assumptions A1–A4 (in particular the bounded projection Assumption A3), CTNN accesses temporally coupled functions at O(1) cost with respect to temporal distances Δ1, Δ2, while real-time architectures incur Ω(Δ1 + Δ2) sequential steps. For layered compositions, this yields an exponential composition gap within GTCP under A1–A4. These advantages hold under the stated assumptions and may not directly generalize to broader function classes or large-scale settings where A3 cannot be maintained. Therefore, Theorem 1 provides a formal separation result for GTCP, while CTNN more broadly defines a geometric framework for temporal computation. As the first concrete instantiation of this framework, we develop Complex-Time Convolutional Neural Networks (CTCNN). CTCNN achieves state-of-the-art performance on Something-Something V2 (70.2 ± 0.4%, +1.1% over VideoMAE v2, p < 0.01), strong performance on Kinetics-400 (78.4 ± 0.3%), and substantial gains on Long Range Arena Path-X (87.3% vs. 79.6%, +7.7%), using 3.4× fewer parameters than VideoMAE v2. Learnable angular parameters α and β provide computationally interpretable parameters related to memory-access span and prospection breadth, with values varying systematically across task families. Full article
(This article belongs to the Special Issue Deep Neural Networks and Optimization Algorithms (2nd Edition))
26 pages, 3163 KB  
Article
Neuro-Fuzzy Control of a Bidirectional DC-DC Converter Applied in the Powertrain of Electric Vehicles
by Erik Martínez-Vera, Pedro Bañuelos-Sánchez, Alfredo Rosado-Muñoz, Juan Manuel Ramirez-Cortes and Pilar Gomez-Gil
Algorithms 2026, 19(5), 335; https://doi.org/10.3390/a19050335 (registering DOI) - 25 Apr 2026
Abstract
Power converters are fundamental components in vehicle electrification systems. However, their inherently nonlinear and time-varying behavior requires complex design procedures when conventional control strategies based on linear small-signal models are employed. This work proposes a simplified and hardware-oriented DC-DC converter control methodology that combines fuzzy logic and Neural Networks in a sequential manner. A fuzzy logic controller is first used to generate a dataset of control actions under closed-loop operation. A lightweight neural network is then trained using the obtained data to approximate this mapping and subsequently replace the fuzzy controller in real-time operation. To validate the approach, a bidirectional buck–boost DC-DC converter is designed for applications in the powertrain of electric vehicles with 500 kHz switching frequency and 13 kW power rating. The control algorithm is embedded in an FPGA to demonstrate its suitability for hardware deployment. The experimental results show a reduction in RMSE of 33.7% and a decrease in the settling time of at least 51.7% when compared with a benchmark PID control. Full article
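The sequential fuzzy-then-neural methodology — log a controller's input/output pairs, then train a small network to imitate them — can be sketched offline with scikit-learn. The controller below is a hypothetical saturating map standing in for a fuzzy rule base, not the paper's converter controller, and nothing here models the FPGA deployment.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical stand-in for a fuzzy controller: a saturating nonlinear
# map from (voltage error, error derivative) to a duty-cycle correction.
def fuzzy_like_controller(err, derr):
    return np.tanh(2.0 * err + 0.5 * derr)

rng = np.random.default_rng(0)
err = rng.uniform(-1, 1, 2000)
derr = rng.uniform(-1, 1, 2000)
X = np.column_stack([err, derr])
y = fuzzy_like_controller(err, derr)  # "logged" closed-loop actions

# Lightweight network trained to imitate the logged dataset, mirroring
# the fuzzy-then-NN sequence; sizes here are illustrative choices.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)
```

Once the imitation fits well, the (cheaper, fixed-topology) network can stand in for the rule-evaluation machinery at runtime, which is the design motivation the abstract describes.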
19 pages, 4540 KB  
Article
The Development of a Data-Driven Surrogate Model for Enhancing Electric Vehicle Cabin Airflow Analysis
by Mirza Popovac, Thomas Bäuml, Dominik Dvorak and Dragan Šimić
Fluids 2026, 11(5), 107; https://doi.org/10.3390/fluids11050107 (registering DOI) - 25 Apr 2026
Abstract
This paper presents a data-driven surrogate model for predicting cabin airflow and its integration into system-level electric vehicle simulations for energy management analysis. The model employs a graph-based neural network with a mirror-symmetric predictor–corrector architecture and is trained on a dataset generated using computational fluid dynamics (CFD) covering a defined range of inlet velocities and temperatures. The surrogate appropriately reconstructs temperature fields and captures the dominant airflow structures at significantly lower computational cost than CFD. Quantitative evaluation shows high accuracy in passenger-relevant regions, while localized discrepancies remain confined mainly to shear-layer zones. The model enables near-real-time inference and is coupled with a system-level modeling framework for control-oriented simulations that are impractical with CFD. The study is tailored to a specific geometry and operating range, showing that targeted training strategies and physics-based extensions improve robustness, particularly under limited data conditions. Full article
25 pages, 4382 KB  
Article
Spatio-Temporal Joint Network for Coupler Anomaly Detection Under Complex Working Conditions Utilizing Multi-Source Sensors
by Zhirong Zhao, Zhentian Jiang, Qian Xiao, Long Zhang and Jinbo Wang
Sensors 2026, 26(9), 2661; https://doi.org/10.3390/s26092661 (registering DOI) - 24 Apr 2026
Abstract
Owing to the intricate mechanical coupling characteristics and the considerable difficulty in extracting synergistic spatio-temporal features from high-dimensional sensor data under fluctuating alternating loads, this study proposes a robust anomaly detection framework that combines Normalized Mutual Information (NMI) and Spatio-Temporal Graph Neural Networks (STGNN). First, NMI is utilized to quantify the nonlinear physical coupling intensity among multi-source sensors, thereby filtering out weakly correlated noise and reconstructing the spatial topological structure of the coupler system. Subsequently, a deep learning architecture incorporating Graph Convolutional Networks (GCN), Gated Recurrent Units (GRU), and one-dimensional convolutional residual connections is developed to capture the dynamic evolutionary characteristics of equipment states across both spatial interactions and temporal sequences. Finally, based on the model’s health-state predictions, a moving average algorithm is introduced to smooth the residual sequences, and an anomaly early-warning baseline is established in conjunction with the 3σ criterion. Experimental validation conducted using field service data from heavy-haul trains demonstrates that, compared to conventional serial CNN and Long Short-Term Memory (LSTM) models, the proposed method exhibits superior fitting performance and robustness against noise, effectively reducing the false alarm rate within normal working intervals. In a real-world case study, the method successfully identified variations in spatial linkage features induced by local damage and triggered timely alerts. Notably, the spatial alarm nodes were highly consistent with the fatigue crack initiation sites identified through on-site magnetic particle inspection. This study provides a viable data-driven analytical framework for the condition monitoring and anomaly identification of critical load-bearing components in heavy-haul trains. Full article
(This article belongs to the Special Issue Deep Learning Based Intelligent Fault Diagnosis)
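The final warning step this abstract describes (smooth the prediction residuals with a moving average, then set a 3σ baseline) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window size and the simple global 3σ band are assumptions for demonstration.

```python
from statistics import mean, stdev

def moving_average(seq, window):
    """Centered moving average; windows are clipped at the sequence edges."""
    half = window // 2
    out = []
    for i in range(len(seq)):
        lo, hi = max(0, i - half), min(len(seq), i + half + 1)
        out.append(sum(seq[lo:hi]) / (hi - lo))
    return out

def detect_anomalies(residuals, window=5):
    """Flag indices where the smoothed residual leaves the 3-sigma band.

    `window` is an illustrative choice, not a value from the paper.
    """
    smoothed = moving_average(residuals, window)
    mu, sigma = mean(smoothed), stdev(smoothed)
    upper, lower = mu + 3 * sigma, mu - 3 * sigma
    return [i for i, v in enumerate(smoothed) if v > upper or v < lower]
```

Smoothing before thresholding suppresses isolated noise spikes, which is consistent with the abstract's claim of a reduced false alarm rate within normal working intervals.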
20 pages, 1256 KB  
Article
Semantic Classification of Railway Bridge Drawings Based on OCR and BP Neural Networks
by Wanqi Wang, Ze Guo, Liu Bao, Xing Yang, Yalong Xie, Ruichang Shi and Shuoyang Zhao
Appl. Sci. 2026, 16(9), 4206; https://doi.org/10.3390/app16094206 (registering DOI) - 24 Apr 2026
Abstract
Digital management of modern railway bridges, a substantial part of high-speed railway networks, is often hindered by manual interpretation of construction drawings for Building Information Modeling (BIM). While individual technologies like optical character recognition (OCR) and neural networks are well-established, their generic application often fails on complex engineering documents. To address this, a domain-adaptive automatic recognition and semantic interpretation framework is proposed for railway bridge construction drawings. The novelty of this work lies in a specialized hybrid data fusion strategy that intelligently merges vector CAD file parsing with morphology-denoised OCR, resolving spatial and semantic conflicts. Furthermore, a back-propagation (BP) neural network is explicitly adapted to classify the extracted text into specific engineering categories, overcoming the challenges of dense layouts and overlapping symbols. Finally, the framework achieves end-to-end integration by transforming these semantic entities directly into structured, IFC-compatible BIM parameters. Evaluated on 250 real-world drawings, the framework achieved an average F1-score of 91.0% in semantic classification and improved processing efficiency by 6.5 times compared to manual methods. Moreover, 93.8% of the extracted entities achieved strict BIM parameter correctness, defined as seamless mapping to Revit IFC attributes without manual intervention.
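The classification stage this abstract describes, a back-propagation network mapping extracted text features to engineering categories, can be sketched with a one-hidden-layer BP classifier. Everything here is illustrative: the layer sizes, learning rate, sigmoid activations, and the binary keyword-presence features are assumptions, not the paper's configuration.

```python
import math
import random

class TinyBPClassifier:
    """One-hidden-layer back-propagation classifier (pure Python sketch)."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.5, seed=0):
        rng = random.Random(seed)
        self.lr = lr
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [[rng.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
        self.b2 = [0.0] * n_out

    @staticmethod
    def _sig(x):
        return 1.0 / (1.0 + math.exp(-x))

    def _forward(self, x):
        h = [self._sig(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.w1, self.b1)]
        o = [self._sig(sum(w * hi for w, hi in zip(row, h)) + b)
             for row, b in zip(self.w2, self.b2)]
        return h, o

    def train(self, samples, epochs=1000):
        """samples: list of (feature_vector, one_hot_target) pairs."""
        for _ in range(epochs):
            for x, t in samples:
                h, o = self._forward(x)
                # Output-layer deltas: squared-error gradient through sigmoid.
                do = [(oi - ti) * oi * (1 - oi) for oi, ti in zip(o, t)]
                # Hidden-layer deltas back-propagated through w2.
                dh = [hi * (1 - hi) * sum(do[k] * self.w2[k][j] for k in range(len(do)))
                      for j, hi in enumerate(h)]
                for k, row in enumerate(self.w2):
                    for j in range(len(row)):
                        row[j] -= self.lr * do[k] * h[j]
                    self.b2[k] -= self.lr * do[k]
                for j, row in enumerate(self.w1):
                    for i in range(len(row)):
                        row[i] -= self.lr * dh[j] * x[i]
                    self.b1[j] -= self.lr * dh[j]

    def predict(self, x):
        _, o = self._forward(x)
        return max(range(len(o)), key=o.__getitem__)
```

In practice the input would be a richer feature encoding of the OCR/CAD text (the abstract does not specify one), but the training loop illustrates the BP mechanism the paper adapts.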
38 pages, 6938 KB  
Article
DeepSense: An Adaptive Scalable Ensemble Framework for Industrial IoT Anomaly Detection
by Amir Firouzi and Ali A. Ghorbani
Sensors 2026, 26(9), 2662; https://doi.org/10.3390/s26092662 (registering DOI) - 24 Apr 2026
Abstract
The Industrial Internet of Things (IIoT) has become a cornerstone of modern industrial automation, enabling real-time monitoring, intelligent decision-making, and large-scale connectivity across cyber–physical systems. However, the growing scale, heterogeneity, and dynamic behavior of IIoT environments significantly expand the attack surface and challenge the effectiveness of conventional security mechanisms. In this paper, we propose DeepSense, a hybrid and adaptive anomaly and intrusion detection framework specifically designed for resource-constrained and heterogeneous IIoT deployments. DeepSense integrates three complementary components: DataSense, a realistic data pipeline and experimental testbed supporting synchronized sensor and network data processing; RuleSense, a lightweight rule-based detection layer that provides fast, deterministic, and interpretable anomaly screening at the edge; and NeuroSense, a learning-driven detection module comprising an adaptive ensemble of 22 machine learning and deep learning models spanning classical, neural, hybrid, and Transformer-based architectures. NeuroSense operates as a second detection stage that validates suspicious events flagged by RuleSense and enables both coarse-grained and fine-grained attack classification. To support rigorous and practical assessment, this work further introduces a comprehensive performance evaluation framework that extends beyond accuracy-centric metrics by jointly considering detection quality, latency, resource efficiency, and detection coverage, alongside an optimization-based process for selecting Pareto-optimal model ensembles under realistic IIoT constraints. Extensive experiments across diverse detection scenarios demonstrate that DeepSense exhibits strong generalization, lower false positive rates, and robust performance under evolving attack behaviors. The proposed framework provides a scalable and efficient IIoT security solution that meets the operational requirements of Industry 4.0 and the resilience-oriented objectives of Industry 5.0.
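The two-stage control flow this abstract describes (a cheap rule layer screens every event, and only flagged events reach the model ensemble for validation) can be sketched as below. The rule thresholds, event fields, and majority-vote confirmation are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class TwoStageDetector:
    """Sketch of a rule-screening + ensemble-validation pipeline.

    `rules` and `models` are callables returning True when an event
    looks anomalous; real deployments would use trained classifiers.
    """
    rules: List[Callable[[Dict], bool]]
    models: List[Callable[[Dict], bool]]

    def classify(self, event: Dict) -> str:
        # Stage 1: fast, deterministic screening at the edge.
        if not any(rule(event) for rule in self.rules):
            return "normal"
        # Stage 2: the ensemble validates the flagged event; here a
        # simple majority vote confirms or dismisses the alarm.
        votes = sum(1 for model in self.models if model(event))
        return "anomalous" if votes > len(self.models) / 2 else "normal"
```

The design keeps the expensive ensemble off the hot path for the bulk of benign traffic, which matches the abstract's emphasis on resource-constrained deployments and low false positive rates.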