Search Results (10,930)

Search Parameters:
Keywords = deep-learning neural network model

20 pages, 1156 KB  
Article
Enhancing Graph Summarization Using Node Importance and Graph Attention Networks
by Krista Rizman Žalik, Domen Mongus and Mitja Žalik
Mathematics 2026, 14(8), 1283; https://doi.org/10.3390/math14081283 (registering DOI) - 12 Apr 2026
Abstract
As the scale of graph-structured data continues to grow, graph summarization has become an important technique for storage efficiency and high-level visualization. This study investigates a Node Importance (NI) approach to graph summarization that prioritizes structural integrity over simple size reduction. The NI approach selects super nodes by ranking vertices through centrality and propagation metrics. Experimental results demonstrate that the proposed NI method achieves compression rates comparable to or slightly lower than traditional Minimum Description Length (MDL) methods across various datasets while maintaining structural integrity. However, the high dimensionality and complexity of modern graph data have made deep learning techniques increasingly popular, and much of the recent progress in learning-based summarization has come from Graph Neural Networks (GNNs). This study investigates the structure and suitability of different GNN architectures for graph summarization using the NI approach. Graph Attention Networks (GATs) and their variants are discussed as a flexible, learned notion of node importance via attention. We present an examination of GATs, covering both diverse approaches and improvements. This study also discusses extensions that enhance the concept of node importance established by the GAT model, GAT variants for node importance estimation, and application-specific GAT research. Full article
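The attention-as-importance idea this survey discusses can be sketched in a few lines: a single-head GAT layer scores each neighbor with learned attention vectors, applies a LeakyReLU, and softmax-normalizes the scores into attention coefficients. The function below is an illustrative sketch with fixed, made-up attention vectors, not the paper's implementation:

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def gat_attention(h_i, neighbor_feats, a_src, a_dst):
    """Single-head GAT attention coefficients for one target node.

    e_ij = LeakyReLU(a_src . h_i + a_dst . h_j), then softmax over j.
    (Features are assumed already projected by the layer's weight matrix.)
    """
    scores = []
    for h_j in neighbor_feats:
        e = sum(a * x for a, x in zip(a_src, h_i))
        e += sum(a * x for a, x in zip(a_dst, h_j))
        scores.append(leaky_relu(e))
    m = max(scores)                       # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [v / z for v in exps]          # coefficients sum to 1

# Toy node with three neighbors (self-loop included), 2-D features.
alphas = gat_attention([1.0, 0.0],
                       [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
                       a_src=[0.3, -0.1], a_dst=[0.2, 0.4])
```

Under an NI reading, the coefficient a node receives from its neighbors can serve as a learned importance signal when selecting super nodes.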

25 pages, 41710 KB  
Article
A Machine Learning-Enhanced Tri-Objective Stowage Optimization Framework for Low-Carbon Finished Steel Maritime Supply Chains
by Bin Xu, Luyang Wang, Tingting Xiang and Rui Gu
Processes 2026, 14(8), 1233; https://doi.org/10.3390/pr14081233 (registering DOI) - 12 Apr 2026
Abstract
Decarbonizing downstream steel logistics remains underexplored in sustainable supply chain management. This study proposes a machine learning-enhanced tri-objective optimization framework for the ship stowage planning problem (SSPP). The framework handles heterogeneous finished steel products, including coils, plates, ingots, tubes, and sections. The model simultaneously maximizes deadweight utilization and minimizes a novel Adaptive Weighted Moment Balance (AWMB) index. It also minimizes voyage carbon emissions through a trim-and-heel resistance penalty. A spatial-to-sequential discretization strategy transforms the NP-hard placement problem into a tractable permutation optimization. A deep neural network (DNN) surrogate achieves a 3.57-fold speedup with only 1.52% hypervolume degradation. An improved NSGA-III algorithm with adaptive operators ensures Pareto front exploration. Embedded step-wise moment verification guarantees dynamic stability throughout loading and unloading. Validated on real data from a Chinese steel enterprise, the framework achieves 99.88% deadweight utilization, reduces transverse and longitudinal imbalance by 48.27% and 90.54%, and cuts CO₂ emissions by 95.5% per voyage. SOLAS constraints, load line limits, and CII/FuelEU targets are addressed through embedded stability and capacity constraints. Multi-route and weather-dependent validation remains necessary before fleet-scale deployment. Full article
17 pages, 1688 KB  
Article
A Hybrid Deep Learning Model for Crop Yield Prediction Taking Weather Data Associated with Production Management Phases as Input
by Shu-Chu Liu, Yan-Jing Lin, Chih-Hung Chung and Hsien-Yin Wen
Sustainability 2026, 18(8), 3806; https://doi.org/10.3390/su18083806 (registering DOI) - 11 Apr 2026
Abstract
Accurate crop yield prediction is fundamental to sustainable agricultural management, enabling optimized resource allocation and informed decision-making. However, a critical gap exists in current prediction models: existing approaches overlook the temporal alignment between meteorological conditions and production management phases—defined as the intervals between consecutive agronomic operations (e.g., sowing, fertilization, thinning). This oversight results in suboptimal predictive performance, as conventional whole-season weather aggregation fails to capture phase-sensitive crop–weather interactions. While machine learning (e.g., XGBoost) and deep learning approaches (e.g., CNN, LSTM) have been applied to yield prediction, these models typically treat weather variables as temporally homogeneous inputs, inadequately modeling the correlation between historical yields and phase-specific meteorological patterns. To address this gap, this study proposes CNN-LSTM-AM, an innovative hybrid deep learning model that integrates convolutional neural networks (CNNs), long short-term memory (LSTM), and attention mechanisms (AMs), utilizing weather data explicitly aligned with production management phases as input. The CNN component extracts cross-phase weather patterns, the LSTM captures sequential dependencies across growth stages, and the attention mechanism dynamically weights phase importance based on meteorological conditions. The proposed model is validated using a real-world case study of Bok choy production from an agricultural cooperative in Yunlin County, Taiwan, encompassing 1714 production cycles over eight years (2011–2019). Experimental results demonstrate that CNN-LSTM-AM achieves an RMSE of 1448.24 kg/ha, MAPE of 3.60%, and R² of 0.98, outperforming five baseline models—CNN (RMSE = 2919.18), LSTM (RMSE = 2529.74), CNN-LSTM (RMSE = 1516.44), LSTM-AM (RMSE = 2284.64), and XGBoost (RMSE = 3452.47)—representing a notable reduction in prediction error (58% lower RMSE) compared to XGBoost. Furthermore, prediction accuracy improves progressively as harvest time approaches, and phase-specific weather encoding enhances accuracy by 16.5% compared to whole-season averaging. These findings underscore the critical importance of integrating agronomic domain knowledge into data-driven prediction frameworks. Full article
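The attention mechanism's dynamic phase weighting can be illustrated with a toy softmax attention over per-phase weather feature vectors. The function name, query vector, and feature values below are hypothetical stand-ins, not the CNN-LSTM-AM model itself:

```python
import math

def phase_attention(phase_feats, query):
    """Toy attention over production-management phases.

    Each phase contributes a feature vector; its weight is the softmax of
    its dot product with a query vector, mimicking dynamic phase weighting.
    Returns (weights, attention-weighted context vector).
    """
    scores = [sum(q * x for q, x in zip(query, f)) for f in phase_feats]
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    w = [v / z for v in w]
    dim = len(phase_feats[0])
    context = [sum(w[i] * phase_feats[i][d] for i in range(len(w)))
               for d in range(dim)]
    return w, context

# Three phases (e.g. sowing, fertilization, thinning) with 2-D weather
# features such as (mean temperature, rainfall) -- values are made up.
weights, ctx = phase_attention([[20.0, 3.1], [24.0, 0.2], [18.0, 7.5]],
                               query=[0.05, 0.1])
```

The context vector would then feed the downstream predictor in place of a whole-season average.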
(This article belongs to the Special Issue AI for Sustainable Supply Chain-Driven Business Transformation)
21 pages, 8142 KB  
Article
Robust Deep Learning for Multiclass Power System Fault Diagnosis Using Edge Deployment
by Rakesh Sahu, Pratap Kumar Panigrahi, Deepak Kumar Lal, Rudranarayan Pradhan and Chandrakanta Mahanty
Algorithms 2026, 19(4), 299; https://doi.org/10.3390/a19040299 (registering DOI) - 11 Apr 2026
Abstract
This article introduces an intelligent deep learning framework for the real-time detection and classification of multiple faults in power distribution systems. A collection of data representing normal operating conditions, alongside various fault scenarios including line-to-ground (LG), line-to-line (LL), double line-to-ground (LLG), and three-phase line (LLL) faults, was created using three-phase current signals obtained from the Real-Time Digital Simulator (RTDS) microgrid test system. To properly model the system dynamics, a feature extraction method that integrates phase currents, differential currents, summation currents, and magnitude results was developed. The temporal features of the fault signals were captured by fitting the data with a sliding window approach. A one-dimensional convolutional neural network (CNN) was developed to identify different types of faults. The model performed well, achieving 96.15% accuracy on the test set. To evaluate the feasibility of the approach, the trained model was deployed on Raspberry Pi 5, NodeMCU, ESP32, and existing sensing devices, where fault classification was performed in real time within time-sensitive constraints. The proposed intelligent framework is applicable to small-scale smart grid fault monitoring and protection and offers an economically viable solution. Full article
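The feature set described here (phase, differential, and summation currents over sliding windows) might look like the following sketch. Window size, step, and feature names are assumptions for illustration, not the authors' exact pipeline:

```python
def window_features(ia, ib, ic, win=4, step=2):
    """Sliding-window features from three-phase current samples.

    Per window: mean phase currents, differential currents (pairwise
    differences), and the summation current (ia + ib + ic).
    """
    feats = []
    n = len(ia)
    for s in range(0, n - win + 1, step):
        w = slice(s, s + win)
        ma = sum(ia[w]) / win
        mb = sum(ib[w]) / win
        mc = sum(ic[w]) / win
        feats.append({
            "mean": (ma, mb, mc),
            "diff": (ma - mb, mb - mc, mc - ma),  # differential currents
            "sum": ma + mb + mc,                  # summation current
        })
    return feats

# Toy 6-sample signals; a real system would window RTDS current traces.
f = window_features([1, 1, 2, 2, 3, 3],
                    [1, 1, 1, 1, 1, 1],
                    [0, 0, 0, 0, 0, 0])
```

Each window's feature dictionary would be flattened into the 1D-CNN's input vector.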

18 pages, 606 KB  
Article
Information-Preserving Spiking for Accurate Time-Series Forecasting in Spiking Neural Networks
by Jiwoo Lee and Eun-Kyu Lee
Electronics 2026, 15(8), 1597; https://doi.org/10.3390/electronics15081597 - 10 Apr 2026
Abstract
Deep learning models have achieved high accuracy in forecasting problems, but at the cost of large computational energy demand. Brain-inspired spiking neural networks (SNNs) offer a promising, low-power alternative, yet their adoption for time-series forecasting has been limited by information loss from binary spikes and degraded performance in deeper networks. This paper proposes a fully spiking framework that bridges this gap by improving both the encoding and propagation of information in SNNs. The framework introduces a hybrid Delta-Rate encoding mechanism that captures both abrupt changes and gradual trends in time-series data, and a Mem-Spike mechanism that transmits analog membrane potential values to preserve fine-grained information between spiking layers. We further employ residual membrane connections to maintain signal flow in deep spiking networks. Using two public energy load datasets, our enhanced SNNs consistently outperform conventional spiking models, improving prediction accuracy by up to 61.6% and mitigating degradation in multi-layer networks. Notably, it narrows the gap to the selected deep learning baseline (LSTM), achieving comparable accuracy in some settings while requiring only about 10% of the estimated inference energy of that baseline under a common operation-level model. These results show that, within the empirical scope considered here, enhanced conventional SNNs can improve time-series forecasting accuracy while retaining favorable estimated efficiency. Full article
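A minimal sketch of hybrid delta/rate spike encoding, under assumed thresholds and level counts (the paper's actual Delta-Rate mechanism may differ in detail):

```python
def delta_rate_encode(signal, delta_thresh=0.5, rate_levels=4, vmax=None):
    """Hybrid Delta-Rate encoding of a time series into spike trains.

    Delta channel: emit a +1/-1 spike when the change between consecutive
    samples exceeds a threshold (captures abrupt changes).
    Rate channel: spike count proportional to amplitude (gradual trends).
    Thresholds and normalization here are illustrative assumptions.
    """
    vmax = vmax if vmax is not None else max(signal)
    delta_spikes = [0]  # no predecessor for the first sample
    for prev, cur in zip(signal, signal[1:]):
        d = cur - prev
        delta_spikes.append(1 if d > delta_thresh
                            else (-1 if d < -delta_thresh else 0))
    rate_spikes = [round(rate_levels * v / vmax) for v in signal]
    return delta_spikes, rate_spikes

# Toy load trace with one abrupt rise and one abrupt drop.
d, r = delta_rate_encode([0.0, 0.2, 1.0, 1.1, 0.2])
```

The two spike channels would be concatenated as input to the spiking layers.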
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence)

30 pages, 939 KB  
Article
AI-Driven Financial Solutions for Climate Resilience and Geopolitical Risk Mitigation in Low- and Middle-Income Countries
by Abdelrahman Mohamed Mohamed Saeed and Muhammad Ali
Economies 2026, 14(4), 134; https://doi.org/10.3390/economies14040134 - 10 Apr 2026
Abstract
Climate change disproportionately threatens low- and middle-income countries, yet integrated assessments combining socio-economic fragility with physical hazards remain limited. This study quantifies multi-dimensional climate vulnerability and derives optimized adaptation policies for six representative nations (Bangladesh, Colombia, Kenya, Morocco, Pakistan, Vietnam) by fusing socio-economic indicators with climate risk data (2000–2024). A computational framework integrating unsupervised learning, dimensionality reduction, and predictive modeling was employed. Principal Component Analysis synthesized eight indicators into a Compound Vulnerability Score (CVS), while K-Means and DBSCAN identified distinct vulnerability regimes. XGBoost quantified driver importance, and Graph Neural Networks captured systemic interconnections. XGBoost identified projected drought risk (31.2%), precipitation change (18.1%), and poverty headcount (14.3%) as primary drivers. Graph networks demonstrated significant risk amplification in African nations (Morocco SRS: 0.728–0.874; Kenya SRS: 0.504–0.641) versus damping in Asian countries. A Reinforcement Learning (RL) agent was trained using Deep Q-Networks with experience replay to optimize intervention portfolios under budget constraints. The RL policy achieved a 23% reduction in systemic risk compared to uniform allocation baselines, generating context-specific priorities: drought management for Morocco (score 50) and Pakistan (40); poverty alleviation for Kenya (40); coastal protection for Bangladesh (40); agricultural resilience for Vietnam (35); and institutional capacity building for Colombia (50). In conclusion, socio-economic fragility non-linearly amplifies climate hazards, with poverty and drought risk constituting critical vulnerability multipliers. The AI-driven framework demonstrates that targeted interventions in high-sensitivity systems maximize systemic risk reduction. This integrated approach provides a replicable, evidence-based foundation for strategic adaptation finance allocation in an increasingly uncertain climate future. Full article
(This article belongs to the Special Issue Energy Consumption, Financial Development and Economic Growth)

15 pages, 2413 KB  
Article
A Motion Intention Recognition Method for Lower-Limb Exoskeleton Assistance in Ultra-High-Voltage Transmission Tower Climbing
by Haoyuan Chen, Yalun Liu, Ming Li, Zhan Yang, Hongwei Hu, Xingqi Wu, Xingchao Wang, Hanhong Shi and Zhao Guo
Sensors 2026, 26(8), 2346; https://doi.org/10.3390/s26082346 - 10 Apr 2026
Abstract
Transmission tower climbing is a critical specialized operation in ultra-high-voltage power maintenance and communication infrastructure servicing. However, existing lower-limb exoskeletons used for tower climbing still suffer from insufficient motion intention recognition accuracy under complex operational environments. To address this issue, this study proposes an inertial measurement unit (IMU)-based bidirectional temporal deep learning method for motion intention recognition. First, a one-dimensional convolutional neural network (1D-CNN) is employed to extract local temporal features from multi-channel IMU signals. Subsequently, a bidirectional long short-term memory network (Bi-LSTM) is introduced to model the forward and backward temporal dependencies of motion sequences. Furthermore, a temporal attention mechanism is incorporated to emphasize discriminative features at critical movement phases, enabling the precise recognition of short-duration and transitional motions. Experimental results demonstrate that the proposed method outperforms traditional machine learning approaches and unidirectional temporal models in terms of accuracy, F1-score, and other evaluation metrics. In particular, this method demonstrates significant advantages in identifying the flexion/extension phases and transitional states. This study provides an offline method for analyzing movement intentions in lower-limb exoskeleton control for power transmission tower climbing scenarios and offers a reference for developing assistive control strategies for assisted climbing tasks in this specific context. Full article
(This article belongs to the Section Electronic Sensors)

22 pages, 4120 KB  
Article
Hybrid Deep Learning Method for Vibration-Based Gear Fault Diagnosis in Shearer Rocker Arm
by Joshua Fenuku, Hua Ding, Gertrude Selase Gosu, Xiaochun Sun and Ning Li
Electronics 2026, 15(8), 1587; https://doi.org/10.3390/electronics15081587 - 10 Apr 2026
Abstract
In underground coal mining, the gear of a shearer’s rocker arm endures extreme stress and environmental fluctuations. Failures in this vital component can pose serious safety hazards, cause prolonged operational downtime, and result in significant financial losses. Therefore, accurate gear fault diagnosis is crucial. However, conventional diagnostic methods often struggle with limited feature extraction and poor performance when dealing with non-stationary, noisy signals typical of this environment. To address these challenges, a hybrid model consisting of a Convolutional Neural Network (CNN), a Long Short-Term Memory (LSTM) network, and a Markov Transition Model (MTM) is proposed. In this framework, the CNN is used to extract both global and local features related to gear faults. A time-distributed feature extractor is then integrated with the LSTM to capture the temporal progression of these features, aiding in effective modeling of fault evolution over time. Finally, the MTM further refines classification by incorporating probabilistic state transitions between fault conditions, thereby improving diagnostic stability and robustness under noise. Experimental validation was performed using vibration data from the Taizhong Coal Machinery rocker arm test platform and gear data from Southeast University, achieving up to 99.79% accuracy. These results show that the proposed method outperforms other advanced diagnostic methods, offering dependable fault diagnosis and strong noise resistance even under extreme noise conditions of −5 dB SNR. Full article
(This article belongs to the Section Computer Science & Engineering)
26 pages, 1385 KB  
Article
Probabilistic Short-Term Sky Image Forecasting Using VQ-VAE and Transformer Models on Sky Camera Data
by Chingiz Seyidbayli, Soheil Nezakat and Andreas Reinhardt
J. Imaging 2026, 12(4), 165; https://doi.org/10.3390/jimaging12040165 - 10 Apr 2026
Abstract
Cloud cover significantly reduces the electrical power output of photovoltaic systems, making accurate short-term cloud movement predictions essential for reliable solar energy production planning. This article presents a deep learning framework that directly estimates cloud movement from ground-based all-sky camera images, rather than predicting future production from past power data. The system is based on a three-step process: First, a lightweight Convolutional Neural Network segments cloud regions and produces probabilistic masks that represent the spatial distribution of clouds in a compact and computationally efficient manner. This allows subsequent models to focus on the geometry of clouds rather than irrelevant visual features such as illumination changes. Second, a Vector Quantized Variational Autoencoder compresses these masks into discrete latent token sequences, reducing dimensionality while preserving fundamental cloud structure patterns. Third, a GPT-style autoregressive transformer learns temporal dependencies in this token space and predicts future sequences based on past observations, enabling iterative multi-step predictions, where each prediction serves as the input for subsequent time steps. Our evaluations show an average intersection-over-union ratio of 0.92 and a pixel accuracy of 0.96 for single-step (5 s ahead) predictions, while performance smoothly decreases to an intersection-over-union ratio of 0.65 and an accuracy of 0.80 in 10 min autoregressive propagation. The framework also provides prediction uncertainty estimates through token-level entropy measurement, which shows positive correlation with prediction error and serves as a confidence indicator for downstream decision-making in solar energy forecasting applications. Full article
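The vector-quantization step that turns latent cloud-mask encodings into discrete tokens can be sketched as a nearest-codebook lookup (L2 distance). The toy codebook below is an illustrative assumption, not the trained VQ-VAE:

```python
def quantize(vectors, codebook):
    """VQ-VAE quantization step: map each latent vector to the index of
    its nearest codebook entry under squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: dist2(v, codebook[k]))
            for v in vectors]

# Toy 2-D codebook with three entries; real models use many more,
# in a higher-dimensional latent space.
codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
tokens = quantize([[0.1, -0.1], [0.9, 0.2], [0.2, 0.8]], codebook)
```

The resulting token sequence is what the GPT-style transformer models autoregressively; token-level entropy over the predicted distribution gives the uncertainty estimate the abstract mentions.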
(This article belongs to the Special Issue AI-Driven Image and Video Understanding)

23 pages, 13020 KB  
Article
Identification of Key Osteoarthritis-Associated Genes Based on DNA Methylation
by Jian Zhao, Changwu Wu, Zhejun Kuang, Han Wang and Lijuan Shi
Int. J. Mol. Sci. 2026, 27(8), 3388; https://doi.org/10.3390/ijms27083388 - 9 Apr 2026
Abstract
Osteoarthritis (OA) is a complex degenerative joint disease for which early diagnosis and clear molecular characterization remain limited. DNA methylation has been increasingly recognized as an important regulatory factor in OA pathogenesis. In this study, we proposed an integrative computational framework combining statistical analysis, machine learning, deep learning, and functional genomics to identify and validate OA-associated genes and methylation biomarkers for diagnostic and biological interpretation. Candidate CpG sites were obtained using two complementary strategies: differential methylation analysis and selection of loci located near transcription start sites of previously reported OA-related genes. Key features were further refined using support vector machine recursive feature elimination and random forest algorithms. Based on the selected loci, we developed a feature-fusion diagnostic model that combines Transformer and convolutional neural networks with adaptive weighting to capture both global dependency structures and local methylation patterns. A panel of 220 methylation sites demonstrated stable and reproducible diagnostic performance in an independent cohort. Functional annotation and pathway analysis highlighted several established OA-associated genes, including TGFBR2, SMAD3, PPARG, and MAPK3, and suggested INHBB as a potential novel effector gene, with additional support for AMH and INHBE involvement. Overall, this study presents a robust methylation-based framework for identifying key OA-associated genes and provides new insights into the epigenetic mechanisms underlying OA. Full article
(This article belongs to the Section Molecular Genetics and Genomics)

33 pages, 3526 KB  
Review
A Comprehensive Survey of AI/ML-Driven Optimization, Predictive Control, and Innovative Solar Technologies
by Ali Alhazmi
Energies 2026, 19(8), 1847; https://doi.org/10.3390/en19081847 - 9 Apr 2026
Abstract
By 2024, global photovoltaic (PV) capacity exceeded 2000 GW, corresponding with a decline in levelized costs of approximately 90% since 2010. Artificial intelligence (AI) and machine learning (ML) are enabling novel approaches to solar energy system design and implementation. This survey offers a detailed evaluation of AI/ML methodologies utilized across the solar energy value chain, with a focus on solar irradiance forecasting, maximum power point tracking (MPPT), fault identification, and the expeditious discovery of system materials. The distinction between AI as the broader paradigm and ML as its data-driven subset is drawn and maintained throughout. Reported results include forecasting improvements of 10–40% over traditional methods with deep learning architectures (LSTM, CNN, Transformer), while hybrid numerical weather prediction and deep learning models achieve mean absolute error reductions of 15–25%. Reinforcement learning-based MPPT achieves tracking efficiencies in excess of 99% under partial shading, CNN-based fault classification reaches accuracies above 95%, and ML-based screening of materials accelerates perovskite optimization by a factor of 5–10. Promising paradigms such as explainable AI, federated learning, digital twins, and physics-informed neural networks are evaluated alongside technical, economic, and regulatory constraints. This survey provides a consolidated reference and practical roadmap for the advancement of AI-driven solar energy technologies. Full article

31 pages, 3398 KB  
Article
Multimodal Smart-Skin for Real-Time Sitting Posture Recognition with Cross-Session Validation
by Giva Andriana Mutiara, Muhammad Rizqy Alfarisi, Paramita Mayadewi, Lisda Meisaroh and Periyadi
Multimodal Technol. Interact. 2026, 10(4), 39; https://doi.org/10.3390/mti10040039 - 9 Apr 2026
Abstract
Prolonged sitting with poor posture is associated with musculoskeletal disorders, reduced productivity, and long-term health risks. Many existing posture monitoring systems predominantly rely on single-modality sensing, such as pressure or vision-based approaches, limiting their ability to capture both static alignment and dynamic micro-movements. This study proposes a multimodal smart-skin system integrating pressure, temperature, and vibration sensors for sitting posture recognition. A total of 42 sensors distributed across 14 anatomical locations were deployed, generating 15,037 samples collected over three independent sessions to evaluate cross-session temporal generalization across nine posture classes under controlled experimental conditions. Two deep learning architectures—Temporal Convolutional Networks with Attention (TCN + Attn) and Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM)—were compared under Leave-One-Session-Out (LOSO) cross-validation. TCN + Attn achieved 85.23% LOSO accuracy, outperforming CNN-LSTM by 2.56 percentage points while reducing training time by 36.7% and inference latency by 33.9%. Ablation analysis revealed that temperature sensing was the most discriminative unimodal modality (71.5% accuracy), and full multimodal fusion improved LOSO accuracy by 22.93% compared to pressure-only configurations. These results demonstrate the feasibility of multimodal smart-skin sensing combined with temporal convolutional modeling for cross-session posture recognition and indicate potential for efficient real-time, privacy-preserving ergonomic monitoring. This study should be interpreted as a controlled, single-subject proof-of-concept, and further validation in multi-subject and real-world environments is required to establish broader generalizability. Full article
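Leave-One-Session-Out splitting, as used in this evaluation, can be sketched generically: each fold trains on all sessions but one and tests on the held-out session, so accuracy reflects cross-session generalization. The sample tuples below are toy assumptions:

```python
def loso_splits(samples):
    """Leave-One-Session-Out cross-validation splits.

    samples: list of (session_id, features, label) tuples.
    Returns one (held_out_session, train, test) fold per session.
    """
    sessions = sorted({s for s, _, _ in samples})
    folds = []
    for held_out in sessions:
        train = [t for t in samples if t[0] != held_out]
        test = [t for t in samples if t[0] == held_out]
        folds.append((held_out, train, test))
    return folds

# Toy data from three recording sessions with made-up posture labels.
data = [(1, [0.1], "upright"), (1, [0.2], "slouch"),
        (2, [0.3], "upright"), (3, [0.4], "lean")]
folds = loso_splits(data)
```

With three sessions this yields three folds, matching the paper's three independent recording sessions.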

27 pages, 4791 KB  
Article
Combining Fast Orthogonal Search with Deep Learning to Improve Low-Cost IMU Signal Accuracy
by Jialin Guan, Eslam Mounier, Umar Iqbal and Michael J. Korenberg
Sensors 2026, 26(8), 2300; https://doi.org/10.3390/s26082300 - 8 Apr 2026
Abstract
Inertial measurement units (IMUs) in low-cost navigation systems suffer from significant drift and noise errors due to sensor biases, scale factor instability, and nonlinear stochastic noise. This paper proposes a hybrid error compensation approach that combines Fast Orthogonal Search (FOS), a nonlinear system identification technique, with deep Long Short-Term Memory (LSTM) neural networks to improve IMU signal accuracy in GNSS-denied navigation. The FOS algorithm efficiently models deterministic error patterns (such as bias drift and scale factor errors) using a small training dataset, while the LSTM learns the IMU’s complex time-dependent error dynamics from much longer training data. In the proposed method, FOS is first used to predict the output of a high-end IMU based on that of a low-end IMU, and the trained FOS model is then used to extend the training data for an LSTM-based predictor. We demonstrate the efficacy of this FOS–LSTM hybrid on real vehicular IMU data by training with a limited segment of high-precision reference measurements and testing on extended operation periods. The hybrid model achieves high accuracy in predicting the high-end signal from the low-end signal, with a mean squared error below 0.1%, and yields more stable velocity estimates than models using FOS or LSTM alone. Although long-term position drift is not fully eliminated, the proposed method significantly reduces short-term uncertainty in the inertial solution. These results highlight a promising synergy between model-based system identification and data-driven learning for sensor error calibration in navigation systems. Key contributions include FOS-based pseudo-label bootstrapping for data-efficient LSTM training and a navigation-level evaluation illustrating how signal correction impacts dead reckoning drift. Full article
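The pseudo-label bootstrapping idea (fit a model on a short paired segment, then label a longer low-end-only run to extend the training set) can be illustrated with a least-squares line standing in for the FOS model. All numbers are toy assumptions; the real method fits a sum of orthogonalized candidate terms, not a single line:

```python
def fit_linear(x, y):
    """Least-squares fit y = a*x + b, a stand-in for the FOS model."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Short paired segment: low-end IMU readings vs. high-end reference.
low_ref = [1.0, 2.0, 3.0]
high_ref = [2.1, 4.0, 6.1]
a, b = fit_linear(low_ref, high_ref)

# Bootstrap pseudo-labels over a longer low-end-only run, extending the
# training data for the downstream sequence model (the LSTM in the paper).
low_long = [1.5, 2.5, 4.0, 5.0]
pseudo_labels = [a * x + b for x in low_long]
```

The LSTM would then be trained on the union of the true paired segment and the pseudo-labeled extension.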

36 pages, 7325 KB  
Article
Intelligent Scheduling of Rail-Guided Shuttle Cars via Deep Reinforcement Learning Integrating Dynamic Graph Neural Networks and Transformer Model
by Fang Zhu and Shanshan Peng
Algorithms 2026, 19(4), 289; https://doi.org/10.3390/a19040289 - 8 Apr 2026
Viewed by 119
Abstract
With the rapid development of e-commerce and smart manufacturing, automated warehouse systems have become critical infrastructure for modern logistics. In China’s vast market, the dynamic scheduling of Rail-Guided Vehicles (RGVs) faces significant challenges due to complex task uncertainties, hierarchical supply chain structures, and real-time collision avoidance requirements. Traditional rule-based methods and static optimization models often fail to adapt to such dynamic environments. To address these issues, this paper proposes a novel hybrid deep reinforcement learning framework integrating a Dynamic Graph Neural Network (DGNN) and a Transformer model. The DGNN captures the spatiotemporal dependencies of the warehouse network topology, while the Transformer mechanism enhances long-range feature extraction for task prioritization. Furthermore, we design a centralized Deep Q-Network (DQN) framework with parameterized action spaces to coordinate multiple RGVs collaboratively. While the system manages multiple physical vehicles, the learning architecture employs a single-agent global scheduler to avoid the non-stationarity issues inherent in multi-agent reinforcement learning. Experimental results based on real-world data from a large-scale electronics manufacturing warehouse demonstrate that our method reduces average task completion time by 18.5% and improves system throughput by 22.3% compared to state-of-the-art baselines. The proposed approach demonstrates potential for intelligent warehouse management in dynamic industrial scenarios.
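The Q-learning core of a DQN scheduler like the one described can be illustrated with a tabular toy. Everything here is an assumption for illustration: the real system uses a deep network over DGNN/Transformer features, whereas this sketch uses 4 coarse "load level" states, 3 RGVs as actions, and a made-up reward that favors matching RGV to load level.

```python
import random

random.seed(1)

# Tabular Q-learning stand-in for a DQN task scheduler (illustrative only).
# States = coarse warehouse-load levels; actions = which of 3 RGVs gets
# the next task; reward = 1 when the chosen RGV "matches" the load level.
n_states, n_actions = 4, 3
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.5, 0.2

def argmax(row):
    return max(range(len(row)), key=row.__getitem__)

state = 0
for _ in range(20000):
    # epsilon-greedy action selection, as in standard (deep) Q-learning
    if random.random() < eps:
        action = random.randrange(n_actions)
    else:
        action = argmax(Q[state])
    reward = 1.0 if action == state % n_actions else 0.0
    next_state = random.randrange(n_states)   # toy dynamics: random next load
    # temporal-difference update; a DQN minimizes the same target via SGD
    td_target = reward + gamma * max(Q[next_state])
    Q[state][action] += alpha * (td_target - Q[state][action])
    state = next_state

greedy = [argmax(Q[s]) for s in range(n_states)]
print("greedy RGV assignment per load level:", greedy)
```

After training, the greedy policy assigns each load level its matching RGV; a DQN replaces the table `Q` with a neural network and the in-place update with gradient descent on the same temporal-difference target.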

19 pages, 1991 KB  
Article
Multimodal Deep Learning for Prediction of Progression-Free Survival in Patients with Neuroendocrine Tumors Undergoing 177Lu-Based Peptide Receptor Radionuclide Therapy
by Simon Baur, Tristan Ruhwedel, Ekin Böke, Zuzanna Kobus, Gergana Lishkova, Christoph Wetz, Holger Amthauer, Christoph Roderburg, Frank Tacke, Julian M. Rogasch, Wojciech Samek, Henning Jann, Jackie Ma and Johannes Eschrich
Cancers 2026, 18(8), 1194; https://doi.org/10.3390/cancers18081194 - 8 Apr 2026
Viewed by 218
Abstract
Background/Objectives: Peptide receptor radionuclide therapy (PRRT) is an established treatment for metastatic neuroendocrine tumors (NETs), yet long-term disease control occurs only in a subset of patients. Predicting progression-free survival (PFS) could support individualized treatment planning. This study evaluates laboratory, imaging, and multimodal deep learning models for PFS prediction in PRRT-treated patients. Methods: In this retrospective, single-center study, 116 patients with metastatic NETs undergoing [177Lu]Lu-DOTATOC therapy were included. Clinical characteristics, laboratory values, and pretherapeutic somatostatin receptor positron emission tomography/computed tomography (SR-PET/CT) scans were collected. Seven models were trained to classify low- vs. high-PFS groups, including unimodal (laboratory, SR-PET, or CT) and multimodal fusion approaches. Performance was assessed via repeated 3-fold cross-validation with the area under the receiver operating characteristic curve (AUROC) and the area under the precision–recall curve (AUPRC). Explainability was evaluated by feature importance analysis and gradient-based saliency maps. Results: Forty-two patients (36%) displayed short PFS (≤1 year) and 74 patients displayed long PFS (>1 year). Groups were similar in most characteristics, except for higher baseline chromogranin A (p = 0.003), elevated γ-GT (p = 0.002), and fewer PRRT cycles (p < 0.001) in short-PFS patients. The Random Forest model trained only on laboratory biomarkers reached an AUROC of 0.59 ± 0.02. Unimodal three-dimensional convolutional neural networks using SR-PET or CT performed worse (AUROC 0.42 ± 0.03 and 0.54 ± 0.01, respectively). A multimodal fusion model integrating laboratory values, SR-PET, and CT, augmented with a pretrained CT branch, achieved the best results (AUROC 0.72 ± 0.01, AUPRC 0.80 ± 0.01). Explainability analyses provided insights into model predictions, with explainability patterns in the fusion model appearing physiologically plausible and predominantly tumor-focused. Conclusions: Multimodal deep learning combining SR-PET, CT, and laboratory biomarkers outperformed unimodal approaches for PFS prediction after PRRT. Upon external validation, such models may support risk-adapted follow-up strategies.
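AUROC, the headline metric in the abstract above, equals the Mann-Whitney rank statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative one, with ties counting half. A minimal self-contained implementation (illustrative, not the authors' evaluation code) makes that definition concrete:

```python
def auroc(y_true, scores):
    """AUROC via the Mann-Whitney statistic: P(score_pos > score_neg),
    with ties contributing 0.5, matching the trapezoidal ROC convention."""
    pos = [s for s, t in zip(scores, y_true) if t]
    neg = [s for s, t in zip(scores, y_true) if not t]
    # Compare every positive score against every negative score.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Sanity checks: perfect separation, reversed ranking, and all-ties.
y = [0, 0, 1, 1]
print(auroc(y, [0.1, 0.2, 0.8, 0.9]))  # 1.0
print(auroc(y, [0.9, 0.8, 0.2, 0.1]))  # 0.0
print(auroc(y, [0.5, 0.5, 0.5, 0.5]))  # 0.5
```

This rank-based view also explains why the unimodal SR-PET model's AUROC of 0.42 is worse than chance (0.5): its ranking of short- vs. long-PFS patients is slightly inverted.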
